Keynote presentation: Emily Lancsar
Valuing health
10:30 | Simulation-based method for the identification of non-trading behaviour in stated choice studies PRESENTER: Petr Mariel ABSTRACT. The aim of this paper is to propose an alternative procedure for testing non-trading, lexicographic or inconsistent behaviour, based on a simulation of hypothetical choices generated from existing data. The procedure fundamentally differs from alternative procedures proposed in the literature, as it neither focuses on the number of times a specific level of an attribute has been chosen nor chooses a flexible parameter distribution to accommodate this behaviour. It is based instead on the expected frequency of appearance of choice patterns in a given data set. The common thread of lexicographic behaviour, non-trading behaviour and inconsistent choices is respondents’ choice patterns in repeated choices. These choice patterns can be indicators of any of these issues and are directly related to the parameters of a Random Utility Maximisation (RUM) model. This relationship between choice patterns and parameters in a RUM model is used by Sælensminde (1998) to define a choice consistency test and is further developed by Rouwendal et al. (2010), who propose a half-space method to assess the distribution of the RUM model parameters. In a RUM model setting, the probability of observing a specific choice pattern depends on specific values of the attributes and parameters and could easily be computed if the parameters were known. If all underlying assumptions of the model are met and these parameters are consistently estimated, the observed proportions of the choice patterns should be consistent estimates of the population probabilities. If the observed proportion of a specific pattern deviates significantly from the population probability, then the choices have not been made according to the RUM principles (e.g., lexicographic or non-trading behaviour). 
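The pattern probability described above can be sketched in a few lines. The snippet below is a minimal illustration assuming an MNL kernel and hypothetical attribute values; it is not the authors' code.

```python
import numpy as np

def mnl_probs(V):
    """Multinomial logit choice probabilities for one choice task."""
    e = np.exp(V - V.max())  # subtract max for numerical stability
    return e / e.sum()

def pattern_probability(X, beta, pattern):
    """Probability of observing a full choice pattern across repeated tasks,
    assuming independent MNL choices given known parameters.

    X       : (T, J, K) array - T tasks, J alternatives, K attributes
    beta    : (K,) taste parameters of the assumed RUM model
    pattern : length-T sequence of chosen-alternative indices
    """
    p = 1.0
    for t, j in enumerate(pattern):
        V = X[t] @ beta  # deterministic utilities in task t
        p *= mnl_probs(V)[j]
    return p

# toy design: 5 tasks, 3 alternatives, 2 attributes (e.g. quality and cost)
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3, 2))
beta = np.array([1.0, -0.5])
p_always_first = pattern_probability(X, beta, [0] * 5)  # "always alternative 1"
```

The observed proportion of a pattern in the sample can then be compared against this model-implied probability.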
Using this idea, we propose a testing procedure that aims to identify the difference between the expected and the observed probability of a specific choice pattern. The procedure does not distinguish between cases in which the specific choice pattern has been produced by lexicographic, non-trading or inconsistent behaviour; that is, it does not identify the reason behind an abnormally high incidence of a specific choice pattern. We performed two sets of simulation exercises, the first devoted to the analysis of the empirical size of the proposed test and the second to its power. The empirical size of the test in all simulated settings was always lower than the assumed significance level, showing the expected behaviour of the test under the null hypothesis. The analysis of the power of the test led to the following result: the higher the expected frequency of individuals with the tested pattern, the lower the percentage of additional individuals with this pattern needed to reject the incorrect null hypothesis. As a rule of thumb, if the number of individuals with a tested choice pattern is double the expected number, the power of the test is close to 100%. The proposed test was applied to environmental valuation case study data collected by means of a discrete choice experiment in South Delhi, India, focusing on the severe air pollution in this region. The aim of that study was to measure individuals’ preferences for air quality improvement in an urban metropolitan district context. The valuation study was a typical DCE focused on four attributes related to air pollution: infant mortality, morbidity, reduced visibility and cost. The data contain the responses of 485 anonymous adults collected in a face-to-face survey at different sites in South Delhi between July and September 2019, representing 2,425 observations, as each respondent faced five choices. 
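A simple way to operationalise such a comparison, shown purely as an illustration (the paper's exact statistic may differ), is a one-sided test based on the normal approximation to the binomial. With the pattern count at double its expectation, the test rejects decisively, consistent with the rule of thumb about power.

```python
import math

def pattern_z_test(n_observed, n_respondents, p_expected):
    """One-sided test of whether a choice pattern occurs significantly more
    often than expected under the estimated RUM model.
    Normal approximation to the binomial - an illustrative assumption."""
    mean = n_respondents * p_expected
    sd = math.sqrt(n_respondents * p_expected * (1.0 - p_expected))
    z = (n_observed - mean) / sd
    p_value = 0.5 * math.erfc(z / math.sqrt(2.0))  # upper-tail P(Z >= z)
    return z, p_value

# expected 20 respondents with the pattern; 40 observed (double the expectation)
z, p = pattern_z_test(n_observed=40, n_respondents=1000, p_expected=0.02)
```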
The demographic structure of India is highly specific, as the population is divided into clearly differentiated social classes. Individuals from these highly differentiated social classes are expected to react to the proposed cost vector in completely different ways. We identified eight testable choice patterns related to the applied experimental design of ten rows (two blocks of five rows), corresponding to two possible reasons for their presumably high frequency. The first was “always the same alternative chosen”. This behaviour was expected because the alternatives were labelled with a specific policy for the reduction of air pollution: respondents who wish to decrease the cognitive burden of the choice tasks may focus only on a certain policy and not trade between attribute levels. The second was “always the lowest non-zero cost chosen”, which was expected from extremely low-income respondents who do not choose the no-action alternative. The results of the test indicated that the choice patterns related to the first reason appeared an unexpectedly high number of times in both blocks, while the choice patterns related to the second reason appeared an unexpectedly high number of times only in the second block. To at least partially support our results, we applied the flexible procedure for representing the distribution of random parameters in mixed logit models proposed by Train (2016). In this approach, the probability of each parameter value is given by a logit function with terms that are defined by the researcher to describe the shape of the distribution. Train (2016) discusses different ways to capture the shape of the probability mass function of the parameters based on higher-order polynomials, step functions or splines. We used the latter to obtain the distribution of the utility parameters. 
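The flavour of Train's (2016) approach can be sketched as follows: probability mass on a grid of parameter values is given by a logit over researcher-defined shape terms. The polynomial basis and the parameter values below are illustrative assumptions, not those of the study (which used splines).

```python
import numpy as np

def flexible_mixing_pmf(grid, theta):
    """Train (2016)-style mixing distribution: the probability mass at each
    grid point is a logit of researcher-defined shape terms. A polynomial
    basis is used here for illustration; step functions or splines work the
    same way."""
    basis = np.column_stack([grid ** k for k in range(1, len(theta) + 1)])
    score = basis @ theta
    e = np.exp(score - score.max())  # stabilised softmax over the grid
    return e / e.sum()

grid = np.linspace(-3.0, 0.0, 61)     # support for a (negative) cost coefficient
theta = np.array([-2.0, -1.5, -0.3])  # shape parameters (hypothetical values)
pmf = flexible_mixing_pmf(grid, theta)
```

Estimating theta by maximum simulated likelihood then lets the data reveal multi-modal shapes such as the three-peaked cost distribution reported here.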
The estimated distribution of the cost coefficient appears to exhibit three distinct peaks, which could support the results of the proposed test for the identification of non-trading and lexicographic behaviour. References Rouwendal, J., de Blaeij, A., Rietveld, P., & Verhoef, E. (2010). The information content of a stated choice experiment: A new method and its application to the value of a statistical life. Transportation Research Part B: Methodological, 44(1), 136–151. https://doi.org/10.1016/j.trb.2009.04.006 Sælensminde, K. (1998, June 25). The impact of choice inconsistencies on the valuation of travel time in stated choice studies. World Congress of Environmental and Resource Economists, Venice. Train, K. (2016). Mixed logit with a flexible mixing distribution. Journal of Choice Modelling, 19, 40–53. https://doi.org/10.1016/j.jocm.2016.07.004 |
11:00 | Disentangling choice behavior using eye-tracking and self-report questionnaires PRESENTER: Stephanie Fernandez Pernett ABSTRACT. Choice modelers usually assume that all people behave similarly and use the same decision rule. However, different studies suggest that these assumptions are not actually fulfilled: preferences, how information is collected and processed, and the decision rules used by individuals all differ. Although taste and preference heterogeneity have been widely studied, less attention has been paid to the different decision rules, which may vary among individuals facing the same choice situation. What is often defined as taste heterogeneity could be related to the use of different decision rules (Hess, Stathopoulos, & Daly, 2012; Campbell, Hensher, & Scarpa, 2014). When specifying choice models, the most common hypothesis is that people process information under the random utility maximization (RUM) paradigm (McFadden, 1981), considering all the information, comparing all the options, and choosing the highest-utility alternative, following rational and compensatory behavior. Despite this, it has been shown that individuals use different decision rules, depending on their characteristics, the choice scenarios and other aspects that deserve investigation (Payne, Bettman, & Johnson, 1992; Ortúzar & Willumsen, 2011). If these assumptions are not fulfilled, the predictions of choice models based exclusively on RUM could be biased (Ortúzar & Williams, 1982). Therefore, more research is needed to identify when an individual uses a particular choice mechanism and which factors influence this decision. Some authors have identified that characteristics of the context and the choice scenario can influence the decision rule used (Johnson & Meyer, 1984; Mano, 1992). 
When people face complex choices, for example those involving too many alternatives, their strategies may not be compensatory (Johnson, Meyer, & Ghose, 1989; Onken, Hastie, & Revelle, 1985), since people can use elimination by aspects (Payne, 1976; Lussier & Olshavsky, 1979) or apply other, simpler heuristics. People may also change their choice strategy to deal with time pressure (Zakay, 1985). Although these factors have been discussed, it is crucial to incorporate them in the modeling and the choice experiment design. An obstacle that usually prevents the inclusion of these factors is that the decision processes behind the choices are hardly observable. Eye-tracking technology has opened the door to understanding and characterizing these decision processes through direct observation of the order in which information is acquired, the time spent processing each element, and the attributes that are not attended to. Thanks to eye-tracking technology, many studies have examined the relationship between individual stated preferences and visual attention measures in recent years (Balcombe, Fraser, & McSorley, 2015; Meißner, Musalem, & Huber, 2016; Spinks & Mortimer, 2016; Krucien, Ryan, & Hermens, 2017; Balcombe, Fraser, Williams, & McSorley, 2017). This method can provide an exogenous and observable variable indicative of, for example, the information acquisition process and the certainty of the choice. This information can improve the handling of uncertainty and the performance of discrete choice models (Uggeldahl et al., 2016). The consideration of attribute non-attendance will also contribute significantly to the literature when researchers try to explain why it happens, and this requires an understanding of the cognitive processes behind the decision (Balcombe et al., 2017). Some authors suggest that it would be possible to infer the heuristic if the distribution of attention is known (Riedl et al., 2008). 
Although eye movements do not directly reveal the strategy used, they reveal processes that can be organized as steps that together describe a choice heuristic (Orquin & Loose, 2013). This paper aims to identify the heuristics used by individuals in different choice experiments, and to examine to what extent specific characteristics of the choice scenario induce the use of certain heuristics. To do so, we used a fractional factorial design to construct a cellphone purchase choice experiment. The surveys were performed using eye-tracking technology (Tobii Pro Glasses 2) and a self-report questionnaire in which individuals had to describe their thought process in detail while facing the choice scenario. To analyze their effect on the choice mechanism, we varied the number of attributes, the number of alternatives, the labeling of alternatives, and time constraints. The alternatives available changed depending on the cellphone type that the respondent owned at the moment. The indicators used to characterize the thought process and the choice heuristic were attribute non-attendance, order of information processing, time to complete the task, hesitation between alternatives/attributes, and fixation duration. A total of 100 respondents were surveyed. Results of the eye-tracking analysis suggested that the number of alternatives and attributes does influence respondents’ thought processes. As the number of alternatives and attributes increased, fixations were focused on less information. As successive choice situations were presented, respondents looked for the same subset of information, as if they had learned which information to attend to. Also, in the situations where the alternatives were labeled, some individuals looked at fewer attributes, suggesting that knowing the brand made the choice simpler. Under time constraints, respondents focused on the most relevant attributes, such as price and memory. 
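Indicators of this kind can be derived from the raw fixation log in a straightforward way. The sketch below, with a hypothetical fixation sequence and an assumed 5% dwell-share cut-off for non-attendance, is illustrative only.

```python
from collections import defaultdict

def attendance_indicators(fixations, threshold=0.05):
    """Derive simple eye-tracking indicators from a fixation log.

    fixations : list of (attribute, duration_ms) in viewing order
    Returns dwell-time share per attribute, a list of attributes flagged as
    non-attended (share below `threshold`, an assumed cut-off, not a value
    from the paper), and the first-visit order of attributes."""
    dwell = defaultdict(float)
    for attr, dur in fixations:
        dwell[attr] += dur
    total = sum(dwell.values())
    shares = {a: d / total for a, d in dwell.items()}
    ignored = [a for a, s in shares.items() if s < threshold]
    order = list(dict.fromkeys(a for a, _ in fixations))  # dedup, keep order
    return shares, ignored, order

# hypothetical fixation sequence for one cellphone choice task
fixations = [("price", 420), ("memory", 380), ("price", 250),
             ("brand", 90), ("camera", 20)]
shares, ignored, order = attendance_indicators(fixations)
```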
A comparison between the eye-tracking and self-report methods showed that, in many cases, respondents failed to completely and correctly describe their information acquisition. However, further research should be conducted to determine whether self-report methods can be considered reliable in this type of experiment. This is the first step of broader research that aims to formulate and estimate discrete choice models incorporating heterogeneity and multiple heuristics. More experiments are being carried out to elaborate on the presented conclusions. |
11:30 | In-depth, Breadth-first or Both? Toward the Development of a RUM-DFT Discrete Choice Model PRESENTER: Gabriel Nova ABSTRACT. . |
12:00 | Satisficing and a new interpretation of alternative specific constants PRESENTER: Erlend Dancke Sandorf ABSTRACT. Economic theory is built on the assumption that people are omniscient utility maximizers: that they have complete information about all available options, knowledge of their preferences and the ability to calculate the expected utility of choosing any one option. While these assumptions are useful for welfare analysis, they may not fully describe how people make choices in real life. Indeed, people routinely make decisions that cannot readily be described by the standard model of rationality (Chorus et al., 2020, 2008; Lapersonne et al., 1995; Sandorf and Campbell, 2019; Simon, 1956; Tversky, 1972). In this paper we set out to develop a simple satisficing choice model that is equally applicable to revealed and stated preference data. A satisficing individual will choose the first alternative (option) with a utility higher than some threshold level of utility. The usefulness of the model proposed in the current paper lies in its ability to explain choices. The model has the desirable property that it nests a no-deliberation, or choose-the-first, strategy on the one hand and a secondary decision strategy on the other; the secondary decision strategy can be any rule the analyst deems appropriate. We develop and empirically test our model using real data gathered through a novel stated preference survey. The novel experimental design procedure allows us to control the search path. Specifically, in one treatment participants received alternatives sequentially: at each point in time, respondents decided whether to choose among the currently revealed alternatives or keep searching. The sequential search process is contrasted with the standard way of displaying all alternatives at once. Respondents were randomly allocated to see 4, 7 or 10 alternatives including a “buy none” option, or to a treatment where they could reveal up to 10 alternatives. 
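The satisficing rule described above can be sketched in simulation. The Gumbel error term and the threshold value below are illustrative assumptions, and utility maximisation is used as the secondary strategy (one of the options the abstract says the analyst may choose). With identical alternatives, the rule concentrates choices on early positions, mimicking a left-to-right ordering effect.

```python
import numpy as np

def satisficing_choice(V, tau, rng):
    """Satisficing rule: scan alternatives in presentation order and take the
    first whose random utility exceeds the threshold tau; if none qualifies,
    fall back to a secondary rule - here, utility maximisation."""
    U = V + rng.gumbel(size=len(V))  # RUM-style random utility (assumption)
    for j, u in enumerate(U):
        if u > tau:
            return j                  # first acceptable alternative
    return int(np.argmax(U))          # secondary decision strategy

rng = np.random.default_rng(1)
V = np.array([0.2, 0.2, 0.2, 0.2])   # four identical alternatives
choices = [satisficing_choice(V, tau=0.5, rng=rng) for _ in range(5000)]
share_first = choices.count(0) / len(choices)  # inflated by position, not taste
```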
An important finding from our work is the implication for how to interpret and think about alternative specific constants (ASCs). In standard choice models that do not account for satisficing, the ASCs capture the general downward trend in choice proportions from the leftmost to the rightmost alternative. In the satisficing model, this trend is captured by the threshold parameter, leaving the alternative-specific constants to capture the average influence of factors that are not explained by the attributes or the left-right processing of alternatives. Furthermore, excluding all ASCs bar the one for the opt-out alternative and estimating the utility threshold suggests that the utility threshold can be viewed as a generative constant, in that it captures and explains the part of the ASCs associated with ordering effects. Depending on the data generation process, the gain in explanatory power from estimating the threshold can be quite substantial. Furthermore, from a practical decision-making standpoint, the satisficing model is better equipped to identify the optimal order in which to present alternatives to a decision maker so as to maximize the likelihood of an alternative being chosen. For example, from a store owner’s perspective, what is the optimal order in which to place bottles on a shelf to maximize revenue? We show using simulation that a store owner who assumes that her customers are satisficers can expect somewhat higher revenues than one who assumes they are utility maximizers. |
10:30 | Accounting for distance-based correlations among alternatives in the context of spatial choice modelling using high resolution mobility data PRESENTER: Panagiotis Tsoleridis ABSTRACT. Accounting for similarity among alternatives is of paramount importance for obtaining unbiased estimates and for capturing behaviourally accurate substitution patterns that lead to accurate demand forecasts. Similarity among alternatives is highly dependent on the choice context itself. In a mode choice context, similarity among alternatives such as car, public transport and walking can depend on the level of comfort, privacy and flexibility that each mode can provide to the decision maker. In a destination choice context, however, similarities can be considerably more complex, since they can depend on parking spots or other specific amenities, the existence of other competing neighbouring locations, and a range of characteristics that the analyst might not be in a position to measure explicitly. Furthermore, there is no clear consensus in the literature on whether similar nearby locations increase the utility of a destination, due to agglomeration effects, or decrease its utility, due to spatial competition [1]. Capturing unobserved correlation among alternatives necessitates moving away from the commonly used Multinomial Logit (MNL) model, due to the IIA property, and towards more advanced modelling specifications. Nonetheless, current correlation structures of Nested Logit models applied in the spatial context usually discretise space into a number of disjoint nests containing alternatives of the same geographical area, while ignoring the influence of alternatives belonging to other areas/nests. We argue that such an approach will lead to uncaptured correlations in a spatial context, since, according to Tobler's first law of geography, "everything is related to everything else". 
Previous studies aiming to address that limitation have proposed specifications based on the Paired Combinatorial Logit (PCL) [2,3] and the Error Component (EC) Logit models [4], which partly address the issue but have limitations of their own. The PCL model requires the specification of every possible pair of alternatives as its own nest, which quickly leads to a significant increase in the number of nests that need to be specified. On the other hand, the EC model requires simulation during the estimation process and can also result in identification issues [5]. Addressing the computational issues of the PCL and EC models would require limiting the analysis to a subset of the choice set, either by relying on adjacency or by random sampling of alternatives. This motivates the current research, in which a novel, efficient and operational Cross-Nested Logit (CNL) modelling framework with a flexible correlation structure is proposed, where space is treated as continuous. The proposed CNL structure is applied in the context of a destination and a joint mode and destination choice model for shopping trips. In those specifications, each destination has its own nest, and each destination can belong to any other nest in the choice set with a non-zero probability. The allocation parameters are parameterised based on the distance among the destinations; therefore, each destination will still belong with a higher probability to its own nest. To the best of the authors' knowledge, this is the first time that a CNL specification is applied in the context of a destination choice model and the first time that the influence of neighbouring destinations is captured in a spatial CNL model in general. 
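One way to sketch such a structure: allocation parameters decay with the distance between destinations, and the standard CNL formula combines nest-level and within-nest logits. The exponential-decay parameterisation below is an assumption in the spirit of the abstract, not the authors' exact specification.

```python
import numpy as np

def distance_alphas(D, theta):
    """Allocation parameters: destination j belongs to nest m with a weight
    decaying in the distance d_jm (theta > 0), normalised so each row sums
    to one. Each destination keeps the highest weight in its own nest."""
    w = np.exp(-theta * D)
    return w / w.sum(axis=1, keepdims=True)

def cnl_probs(V, alpha, mu):
    """Standard cross-nested logit choice probabilities.
    V: (J,) utilities; alpha: (J, M) allocations; mu: (M,) nest parameters."""
    eV = np.exp(V)
    A = (alpha * eV[:, None]) ** (1.0 / mu)  # (alpha_jm * e^Vj)^(1/mu_m)
    S = A.sum(axis=0)                        # per-nest sums
    nest_prob = S ** mu / (S ** mu).sum()    # P(m)
    within = A / S                           # P(j | m)
    return within @ nest_prob                # P(j) = sum_m P(m) P(j|m)

# toy example: 3 destinations, each heading its own nest; distances in km
D = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 3.0],
              [4.0, 3.0, 0.0]])
alpha = distance_alphas(D, theta=1.0)
p = cnl_probs(np.zeros(3), alpha, mu=np.array([0.5, 0.5, 0.5]))
```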
The purpose of the destination model developed is to analyse individual behaviour in choosing an intermediate shopping destination S between a previous origin O and a next destination D, while the joint model aims to capture both the location of that intermediate shopping destination and the modes used to travel to it and to the following location. The proposed specifications are empirically tested on trips captured through smartphone GPS tracking, performed across the region of Yorkshire, UK, and collected as part of the research project "DECISIONS" [6]. The trip diary captured the participants’ mobility choices for a period of two weeks and is coupled with a household survey capturing their socio-demographic attributes. Estimation results indicate that the proposed CNL structure is able to capture significant unobserved correlations among the destination alternatives and provides RUM-consistent structural nesting estimates. The results show that, in general, there is a higher correlation between the error terms of alternatives located closer together than between those of more distant ones. For the joint mode and destination model, the results show that mode also has an impact on the allocation parameters. Walking leads to higher allocation parameters for the nest of the target destination, while mechanised modes, i.e. car and PT, result in more balanced allocation parameters between the target and the neighbouring clusters, potentially due to the flexibility those modes provide to the decision-maker compared to walking. List of References 1. Schüssler, N. and Axhausen, K.W. 2007. Recent developments regarding similarities in transport modelling. In: 7th Swiss Transport Research Conference. Ascona, September 2007. 2. Bhat, C.R. and Guo, J. 2004. A Mixed Spatially Correlated Logit Model: Formulation and Application to Residential Choice Modeling. Transportation Research Part B: Methodological. 38(2), pp.147–168. 3. Sener, I.N., Pendyala, R.M. 
and Bhat, C.R. 2011. Accommodating spatial correlation across choice alternatives in discrete choice models: An application to modeling residential location choice behavior. Journal of Transport Geography. 19, pp.294–303. 4. Weiss, A. and Habib, K.N. 2017. Examining the difference between park and ride and kiss and ride station choices using a spatially weighted error correlation (SWEC) discrete choice model. Journal of Transport Geography. 59, pp.111–119. 5. Walker, J.L., Ben-Akiva, M. and Bolduc, D. 2007. Identification of parameters in Normal Error Component Logit-Mixture (NECLM) models. Journal of Applied Econometrics. 22, pp.1095–1125. 6. Calastri, C., Crastes dit Sourd, R. and Hess, S. 2020. We want it all: Experiences from a survey seeking to capture social network structures, lifetime events and short-term travel and activity planning. Transportation. 47, pp.175–201. |
11:00 | Route choice set generation using variational autoencoders PRESENTER: Rui Yao ABSTRACT. Choice set generation is a challenging task, since the consideration set is generally unknown to the modeler and the full choice set can be too large to enumerate. The proposed variational autoencoder (VAE) approach is motivated by the idea that the chosen alternatives must belong to the consideration set. The proposed VAE method explicitly maximizes the likelihood of including the chosen alternatives in the choice set and infers the underlying generation process. The VAE approach to route choice set generation is exemplified using a real dataset. The VAE-CNL model has the best goodness-of-fit and prediction performance compared to models estimated with conventional link-penalty choice sets. |
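While the abstract does not give details of the formulation, the generic VAE objective it builds on can be sketched as below, assuming for illustration that routes are encoded as binary link-incidence vectors (an assumption, not the authors' encoding): a Bernoulli reconstruction term that rewards including the chosen route's links, minus a KL regulariser on the latent posterior.

```python
import numpy as np

def elbo(x, x_recon, mu, log_var):
    """Evidence lower bound for a VAE with a Bernoulli decoder and a diagonal
    Gaussian encoder - the generic VAE objective, shown for illustration.

    x       : (L,) observed route as 0/1 link indicators
    x_recon : (L,) decoder probabilities for each link
    mu, log_var : (Z,) parameters of the encoder posterior q(z|x)"""
    # Bernoulli log-likelihood: high when the chosen route's links are included
    rec = np.sum(x * np.log(x_recon) + (1 - x) * np.log(1 - x_recon))
    # closed-form KL( q(z|x) || N(0, I) )
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return rec - kl

x = np.array([1, 0, 1, 1, 0], dtype=float)  # a route over 5 links (toy example)
good = elbo(x, np.array([0.9, 0.1, 0.9, 0.9, 0.1]), np.zeros(2), np.zeros(2))
bad = elbo(x, np.full(5, 0.5), np.zeros(2), np.zeros(2))
```

Maximising this bound pushes the generator to assign high probability to the links of chosen routes, which is the "include the chosen alternative" idea in the abstract.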
11:30 | Endogenous choice set formation model: Implications on willingness-to-pay indicators PRESENTER: Basil Schmid ABSTRACT. The abstract is attached as a pdf! |
12:00 | Representing mode and location choice within activity-based models PRESENTER: Nicolas Salvadé ABSTRACT. The abstract is uploaded as a pdf file |
10:30 | Choice of vehicle technology and its usage - Joint analysis of the choice of plug-in electric vehicles and miles traveled PRESENTER: Debapriya Chakraborty ABSTRACT. A variety of public policies are being used to increase the adoption of plug-in electric vehicles (PEVs) with the goal of reducing greenhouse gas (GHG) emissions in the transportation sector. These include fuel efficiency standards, mandates, and purchase subsidies. However, emissions depend on how the vehicles are driven, so policies that only focus on sales could have unintended consequences. A well-known example of unintended policy outcomes is the so-called “rebound effect,” where fuel efficiency improvements that reduce the cost of driving lead to an increase in vehicle miles traveled (VMT). While the relationship between household vehicle choice and usage has been frequently studied for gasoline vehicles, much less is known for PEVs (Allcott and Wozny, 2014; Brownstone, 2008; Busse et al., 2013; Fang, 2008; Sallee et al., 2016). While PEV technologies do reduce driving costs, they also differ from ICE vehicles in a variety of ways, so the impact on VMT is far from clear. The few existing studies on PEV usage yield highly variable and contradictory findings. Using data from the 2017 National Household Travel Survey (NHTS), Davis (2019) analyzes annual VMT of PEV owners and finds that it is 30% lower than that of other fuel types. In contrast, a recent analysis of data from a cohort of California PEV owners suggests that PEVs are driven at least as much as gasoline vehicles, and more so among Tesla owners (Chakraborty et al., 2022). Both studies rely on simple statistical analyses that treat VMT decisions as independent of PEV choice decisions, offering little to inform policy analysis. Our study develops behavioral models of the joint vehicle and usage choices of households that are more suitable for policy analysis. 
This will allow us to consider key questions, such as: • What are the factors that drive PEV technology choices at the household level, and how do they interact with VMT decisions? • How will households respond to changes in these factors due to policy interventions, e.g., subsidized workplace charging or special electricity rate structures for PEV owners? This work is just getting underway and will apply a Bayesian Multivariate Ordered Probit & Tobit (BMOPT) model system (Fang, 2008) to estimate a joint model of vehicle choice and use for household vehicle fleets. The estimated models will then be used to evaluate the impact of alternative policy scenarios, for example: • Increase in the gasoline tax and EV rates for all - analyze the impact of an increase in the operating cost of gasoline cars on PEV choice and use in household vehicle portfolios; similarly, analyze the impact of special EV electricity rate plans. • Stringent fuel economy standards - analyze the impact of more fuel-efficient gasoline cars on PEV choice and use in household vehicle portfolios. This study uses data combined from two different sources: (1) the residential subsample of the 2019 California Vehicle Survey (CVS) administered by the California Energy Commission (CEC), and (2) a survey of PEV owners in California administered by the PH&EV Research Center at UC Davis. The two surveys have different characteristics that complement each other when combined. The first (“CVS data”) was a cross-sectional survey collected between March and July 2019. It includes both a random sample (3,637 households) and a choice-based sample of 611 households that own (or have owned) zero-emission vehicles (ZEVs). The UC Davis data come from resurveying cohorts of PEV owners originally surveyed between 2015 and 2018. These data (“PEV cohort”, ~4,000 households), collected during November-December 2019, can be viewed as a choice-based sample. 
Both surveys contain the usual variables on household demographics and vehicle holdings required by household vehicle choice and usage models. A well-known challenge in this type of work is the accuracy of VMT measurements. The CVS data include odometer readings and self-reported estimates of annual VMT for all vehicles, and the surveyors also recruited about 33% of the sample to provide second odometer readings after two months. The PEV cohort, as a repeat survey, collected second odometer readings for vehicles still owned by households in 2019. An important feature of our study is the use of multiple imputation methods to correct self-reported annual VMT, yielding more valid inferences (Steimetz and Brownstone, 2005). The two data sets will be combined and treated as a cross-section to estimate the BMOPT model, combining a multivariate ordered probit model describing the choice of vehicle with a multivariate Tobit model describing vehicle usage, both at a disaggregate level. The CVS data provide a random sample of the population, representing the overall market, while the PEV cohort contributes a similar-sized (choice-based) sample of PEV owners, enriching the data with critical information to support our research objectives. |
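As a sketch of one building block of such a system, the log-likelihood of a univariate Tobit equation with left-censoring at zero (a vehicle that is owned but not driven) looks as follows; the paper's Bayesian multivariate version additionally links equations through correlated errors.

```python
import math

def tobit_loglik(y, xb, sigma):
    """Log-likelihood contribution of one observation in a Tobit model with
    left-censoring at zero, e.g. annual VMT of a household vehicle.
    xb is the linear predictor, sigma the error standard deviation."""
    z = (y - xb) / sigma
    if y > 0:  # uncensored: normal density of the observed VMT
        return -math.log(sigma) - 0.5 * math.log(2 * math.pi) - 0.5 * z * z
    # censored at zero: P(latent VMT <= 0) = Phi(-xb / sigma)
    return math.log(0.5 * math.erfc((xb / sigma) / math.sqrt(2.0)))

# illustrative values, not estimates from the study
ll_driven = tobit_loglik(y=12000.0, xb=10000.0, sigma=4000.0)
ll_parked = tobit_loglik(y=0.0, xb=10000.0, sigma=4000.0)
```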
11:00 | Activity duration dependent utility in a dynamic scheduling model PRESENTER: Stephen McCarthy ABSTRACT. See PDF file |
11:30 | Modelling pedestrian route and exit choice in a multi-story building PRESENTER: Yan Feng ABSTRACT. Introduction Every day, pedestrians choose between a number of routes to reach their destination in complex, multi-story buildings, a process referred to as route choice behaviour (Schadschneider et al., 2011). On their way out, pedestrians are furthermore required to choose an exit. This exit choice behaviour involves choosing one exit from a set of alternative exits to leave a place (Prato, 2009). The complexity of finding one’s route and exit in multi-story buildings is increased by multiple floor layouts, complex spatial structures, many indoor objects, and movement along vertical distances (Andree et al., 2015; Kruminaite and Zlatanova, 2014; Kuliga et al., 2019). For many disciplines, such as architecture, fire safety, and civil engineering, a thorough understanding of pedestrian route and exit choices in multi-story buildings is vital in order to ensure pedestrian safety and create safe building designs (Feng et al., 2021). Discrete choice models and revealed preference (RP) data have been widely used to understand pedestrian route and exit choice. Yet only a few studies have proposed ways to model pedestrian route and exit choice in a multi-story building; most studies have focused on modelling pedestrian route and exit choice on a single level (i.e., a horizontal level) or in simplified environments. One of the major issues is the difficulty of collecting pedestrian route and exit choices in multi-story buildings due to the financial, ethical and privacy constraints of traditional data collection methods. Compared with these traditional data collection methods, Virtual Reality (VR) makes it possible to obtain complete experimental control and to collect accurate behavioural data related to route and exit choice behaviour. 
VR can also capture brief actions, such as small steps and hesitations, that are difficult to observe in the real world. Meanwhile, previous studies have shown that VR is capable of collecting valid behavioural data, meaning that people behave similarly in VR and in the real world. In combination with questionnaire data, VR provides opportunities to acquire complementary information to further our understanding of pedestrian route and exit choices. This study aims to estimate the determinants of pedestrians’ route and exit choice in multi-level buildings under both normal and emergency situations. In particular, this study specifies pedestrian route choice in four different settings, namely (1) pedestrian route choice only on a horizontal level, (2) pedestrian route choice across the horizontal and vertical levels, (3) pedestrian route choice across the horizontal and vertical levels with higher complexity, and (4) pedestrian route and exit choice during an evacuation. Data collected from 141 participants through a VR study are used to identify the determinants that influenced pedestrian route and exit choice in a multi-story building. Data set Pedestrian movement trajectory data in a multi-story building were collected in a VR study conducted from 27th November 2019 to 18th December 2019. The experiment collected a sample of 141 participants with a total of 725,000 trajectory points, and aimed to investigate pedestrian wayfinding behaviour in a virtual multi-story building. Four wayfinding assignments with increasing complexity were deliberately designed, namely (1) a within-floor wayfinding assignment, (2) a between-floor wayfinding assignment (i.e., across the horizontal and vertical levels), (3) a more complex between-floor wayfinding assignment, and (4) an evacuation assignment. The virtual building consists of three intermediate floors and one exit floor with eight emergency exits. 
Each floor comprises two main corridors, five staircases, and five elevators. During the experiments, two types of data were collected: participants’ movement trajectories (i.e., x, y, z coordinates) and personal characteristics (i.e., age, gender, familiarity with the building, highest education level, previous experience with VR, familiarity with any computer gaming). Individual movement trajectories were collected with a time interval of 10 milliseconds. It is necessary to include both observed choices and matching sets of non-chosen alternatives in choice modelling (Menghini et al., 2010). Therefore, it is essential to construct the network model of the building and generate the set of the alternative route and exit choices. Compared to using choice set generation algorithms (e.g., k-shortest paths or labelling) that might cause false negative and false positive errors, this study employs a data-driven approach to identify the alternative set of choices. The above-mentioned data will be used in the estimation and validation of a pedestrian’s route and exit choice model in a multi-level building. Route choice modelling and outlook At the conference, this paper will present the findings of a pedestrian’s route and exit choice model estimated for a multi-level building to identify the determinants that influenced pedestrian route and exit choice under normal and emergency conditions. More specifically, this paper will identify the determinants that influenced (1) pedestrian route choice only on a horizontal level, (2) pedestrian route choice across the horizontal and vertical level, (3) pedestrian route choice across the horizontal and vertical level with higher complexity, and (4) pedestrian route and exit choice during an evacuation. Discrete choice models were estimated on the basis of detailed behavioural data collected through VR, which included 141 participants and more than 725,000 trajectory points. |
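The data-driven choice-set construction described in the abstract might be sketched as follows; this is a minimal illustration only, with invented node labels and routes (the study's actual network model and trajectory-to-route mapping are not described in detail here). The idea is that the set of distinct routes actually observed for an origin-destination pair serves as the choice set for that pair, avoiding the false positives/negatives of algorithmic generation.

```python
from collections import defaultdict

def build_choice_sets(observed_routes):
    """Group observed routes (node sequences) by origin-destination pair.
    The distinct routes observed for an OD pair form its choice set."""
    choice_sets = defaultdict(set)
    for route in observed_routes:
        od = (route[0], route[-1])          # origin and destination nodes
        choice_sets[od].add(tuple(route))   # tuples are hashable, so duplicates collapse
    return {od: sorted(routes) for od, routes in choice_sets.items()}

# Hypothetical trajectories mapped to network node sequences
routes = [
    ["A", "B", "C"],   # participant 1
    ["A", "D", "C"],   # participant 2: same OD pair, different route
    ["A", "B", "C"],   # participant 3 repeats participant 1's route
    ["C", "B", "A"],   # a different OD pair
]
sets_ = build_choice_sets(routes)
# two distinct alternatives observed for the A -> C pair
n_alternatives = len(sets_[("A", "C")])
```

In a model estimation step, each observed choice would then be paired with the non-chosen alternatives from its OD pair's set.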
12:00 | Using Choice Modelling to Develop Interpretable and Actionable Vehicular Greenhouse Gas Emission Prediction at Link-Level PRESENTER: Roderick Zhang ABSTRACT. As a means to help systematically lower anthropogenic Greenhouse gas (GHG) emissions, accurate and precise GHG emission prediction models have become a key focus of many researchers. The appeal is that the predictive models will inform policymakers, and hopefully, in turn, they will bring about systematic changes. Since the transportation sector is constantly among the top GHG emission contributors, substantial effort in the field has been going into building more accurate and informative GHG prediction models. Extensive amounts of research based on state-of-the-art Neural Network (NN) methods have populated the field. Other methods commonly used in the transportation sector for GHG emission prediction use inference-based estimation from causally related variables, which mainly center around vehicle fuel consumption predictions. Most of those GHG emission prediction studies state that better emission estimation leads to better-informed decision (often policy) makers, so that these decision-makers can, in turn, curb emissions through an appropriate series of actions. However, if the eventual goal of predicting emissions is to inform decision-makers, then researchers have to consider the interpretability and actionability of their models. The mainstream advanced GHG emission prediction work in the field is focused on adopting ever more sophisticated structures to improve prediction precision to the decimals; yet the continuous-value results are not immediately interpretable, and the complex model structures only hinder the interpretable-information-to-immediate-actions transition further. Additionally, in most cases, the precision of the exact emission value predicted is not going to change the decision maker's perception of what actions to take. 
As far as high-level decision-makers are concerned, in a real-time live scenario, they only need to know, to good accuracy, whether the GHG emission is at a high level that urgently needs to be addressed, at a tolerable medium level that can be addressed now if necessary, or low enough that no changes are required from the decision-makers. Furthermore, in a preemptive planning stage, the decision-makers would also want supporting information, such as which key traffic, built-space, and geometric variables contribute the most to high GHG emission levels, a dimension of information that those more sophisticated models lack. Hence complex and expensive prediction methods that pursue the decimals are not best suited when the true purpose is to inform decision-makers and a parsimonious model could fulfil all the practical functions. In this work, we seek to establish a predictive framework of GHG emissions at the road segment, or link, level of road networks. The key theme of the framework centers around model interpretability and actionability for high-level decision-makers. We show that, for the first time, Discrete Choice Models (DCM) are capable of predicting link-level GHG emission levels on road networks in a parsimonious and effective manner. We also illustrate that the DCM-based framework provides easy gateways to sensitivity analysis, which is crucial for preventative planning against higher road emissions. We argue that since the goal of most GHG emission prediction models is to involve high-level decision-makers in making changes and curbing emissions, the DCM-based GHG emission prediction framework is the most suitable framework for high-level decision-makers. The key contributions of this paper can be summarized as: 1) First attempt at using Discrete Choice Models to make GHG emission level predictions at the link (road segment) level in road networks. 
2) Established an analysis framework to produce interpretable predictions that induce direct actions from decision-makers. To illustrate the capability of DCM when applied to predicting GHG emission in a clear manner, we adopted a Dynamic Multinomial Logit (MNL) model as our main model, since it is the most iconic and fundamental of all the DCMs. We note that there are other DCM models useful in addressing specific issues to improve performance; in particular, the Ordered Logit (OL) model could lead to better performance when there is an inherent ordinal relation among the categories of the response variable. Thus, we also implemented a Dynamic Ordered Logit model to provide further supporting evidence of DCM's prediction capability in the GHG emission area. Finally, to further support the prediction accuracy of the DCM models, we compared our results to a prior study that also predicted GHG emissions at link level using the same dataset as us. In that prior work, Alfaseeh et al. (2020) applied a state-of-the-art Auxiliary Long Short-Term Memory (LSTM) model, a type of Recurrent Neural Network that is extremely powerful in predicting time-series data. All the variables are generated by a traffic simulator named End-to-End-CAV-based simulation (Farooq and Djavadian 2019) and the Motor Vehicle Emission Simulator (MOVES). The traffic conditions are assumed to be those of an average working day’s morning rush hour in the Downtown network of Toronto. The raw dataset features contain information regarding speed, density, flow, and road geometry. The most important variable in the dataset is the vehicular GHG emission rate on links, a continuous variable that aggregates multiple different types of emissions. 
In order to make full use of the MNL and OL models' prediction capability in the GHG emission prediction scenario, we established two key operations: 1) we discretize the continuous GHG emission rate (on links) variable into Low-Medium-High emission level categories, and 2) we make the links (road segments) the choice-makers, so that the alternative (emission level category) with the highest likelihood of being chosen is the predicted emission level on the link. We compared our predictive accuracy to the findings of Alfaseeh et al. (2020)'s LSTM model and found our models to have around 5% higher predictive accuracy. Combining the promising prediction accuracy with the parsimonious nature of the models and the ease of interpretation, we conclude that the DCM-based model framework can provide decision-makers with the ease and clarity to address road network GHG emission levels with immediate and impactful strategies. |
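The two operations described in the abstract (discretising the continuous emission rate into Low/Medium/High and letting each link "choose" a level via a multinomial logit) can be sketched as follows. This is an illustration only: the tertile cut-offs, emission rates and coefficients below are invented, not the paper's actual estimates.

```python
import numpy as np

def discretise(rates):
    """Discretise continuous link-level emission rates into categories
    0=Low, 1=Medium, 2=High using tertile cut-offs (assumed scheme)."""
    lo, hi = np.quantile(rates, [1 / 3, 2 / 3])
    return np.digitize(rates, [lo, hi])

def mnl_predict(X, betas):
    """MNL prediction: each link 'chooses' the emission level with the
    highest choice probability given its features."""
    utilities = X @ betas.T                                   # (links, 3)
    expu = np.exp(utilities - utilities.max(axis=1, keepdims=True))
    probs = expu / expu.sum(axis=1, keepdims=True)            # softmax
    return probs.argmax(axis=1)                               # predicted level

# Hypothetical emission rates on six links
rates = np.array([10.0, 50.0, 90.0, 15.0, 70.0, 40.0])
levels = discretise(rates)

# Features: intercept + scaled emission-related variable (illustrative)
X = np.column_stack([np.ones(6), rates / 100])
# Illustrative coefficients for Low / Medium / High (not estimated values)
betas = np.array([[1.0, -5.0], [0.0, 0.0], [-3.0, 5.0]])
pred = mnl_predict(X, betas)
```

In the paper itself the features are traffic variables (speed, density, flow, geometry) rather than the rate itself; this sketch only shows the mechanics of the two operations.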
10:30 | Welfare, Redistributive and Revenue Effects of Policies Promoting Fuel Efficient and Electric Vehicles PRESENTER: Doina Radulescu ABSTRACT. Overview The private transport sector still accounts for a large share of worldwide CO2 emissions. Despite generous subsidy programs and ambitious policy goals, the adoption rate of electric (EV) and hybrid (HV) vehicles remains low. In order to achieve global emission reduction goals and to make road transport more energy efficient and environmentally friendly, a significant increase in EV and HV adoption is necessary. We employ an extensive dataset of individual socio-economic and car-specific characteristics to assess which factors drive household preferences for an EV or HV in particular and for more fuel efficient cars in general. We model households’ decisions for a car type based on revealed preference data for newly purchased cars in 2017-2019 in the Swiss Canton of Bern and match it with consumer-specific information based on administrative data from tax filings. We estimate mixed logit models applying a control function approach following Petrin and Train (2010) to control for potential car price endogeneity. In addition to unobserved heterogeneity through random coefficients, we can also control for observed heterogeneity in the valuation of certain car-specific characteristics based on our household-specific information. We find a significant negative preference for higher car prices, with quite substantial heterogeneity based on income. Agents tend to undervalue future variable costs in contrast to upfront car prices. We find that homeownership and solar panel ownership substantially increase the likelihood that an agent purchases an EV. An increased public charging infrastructure density in close vicinity of the home also has a positive effect on EV adoption probabilities. 
Based on the estimated preference parameters we conduct two counterfactual policy scenarios that could be employed to promote more fuel efficient vehicles. We simulate the introduction of a CHF 0.12 fossil fuel levy on each liter of gasoline and diesel purchased. Furthermore, we also simulate the effects of EV purchase subsidies of CHF 4000. We find that the subsidy is more efficient in reducing carbon emissions, but from an overall welfare perspective the fossil fuel levy performs better, due to increased public revenues. However, both policies feature regressive effects. The share of income spent on carbon taxes is substantially higher for lower income households, and a larger share of subsidies is paid out to higher income households. Hence, both policies promoting fuel efficient cars would have adverse distributional impacts. We furthermore simulate potential combinations and levels of the two policies to describe their welfare and distributional impacts and simulate an optimal policy mix based on a constrained welfare-maximising social planner. The first constraint is an environmental target, and the second implies non-decreasing income generated by fuel taxation to secure the financing of the road infrastructure. We contribute to several strands of the literature. First, the general estimation of demand on the car market and specifically agents’ preferences for EVs and HVs. Second, the valuation of variable costs in contrast to the valuation of upfront costs. Third, the impact of government policies such as subsidies, tax credits, fuel taxes and emission standards on the car market and emission abatement. Fourth, the specific outcomes of policies promoting fuel efficient vehicles and, fifth, the distributional impact of fossil fuel taxes and subsidies. Methods We use a discrete choice model to analyse the factors that drive consumers’ preferences towards the different cars and fuel types. 
The probability of household i purchasing vehicle type j is specified as: P_{ij} = \int \frac{\exp(V_{ij})}{\sum_{k=1}^{J} \exp(V_{ik})} f(\beta \mid \theta)\, d\beta, with V_{ij} being a deterministic utility function for each household-car combination. The utility function is a function of car characteristics (i.e. price, variable costs, weight) and a number of interaction terms of car characteristics and household attributes. Furthermore, households are allowed to randomly deviate from the estimated mean preference parameters. This allows for flexible substitution patterns between car models and relaxes the IIA assumption, which has been found to be too restrictive for the car market (e.g. Berry, Levinsohn and Pakes, 1995). To control for potential endogeneity problems in case there are preference patterns for certain vehicle types that both consumers and sellers know of, but are unobserved by the econometricians, we employ a control function approach based on a marginal cost shifter. Conclusions Increasing concerns about greenhouse gas emissions from the road transport sector coupled with a very low uptake of environmentally friendly technologies such as electric vehicles call for a deeper analysis and understanding of consumers’ choices in the car market. We use revealed preference micro data on around 23,000 households and new vehicle purchases to analyse car choice behaviour. Based on our estimated preference parameters we propose two counterfactual exercises: an increase in the fossil fuel levy and a car price subsidy for EVs. We simulate several policy levels and combinations and examine welfare, public revenue and emission outcomes. These counterfactual exercises illustrate two challenges and an important trade-off that policy makers face. On the one hand, increasing adoption of EVs can be supported through pricing carbon or by subsidies as well as tax breaks. Increasing EV uptake leads to stronger carbon emission reductions and subsidies appear to be more powerful in supporting the uptake. 
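The mixed logit choice probability above has no closed form; in practice it is approximated by averaging conditional logit probabilities over simulated draws of the random coefficients. A minimal sketch, with invented attribute values and independent normal coefficients (the paper's actual specification, attributes and estimates are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_logit_prob(X, beta_mean, beta_sd, n_draws=5000):
    """Approximate P_ij = ∫ exp(V_ij)/Σ_k exp(V_ik) f(β|θ) dβ by
    averaging logit probabilities over draws β ~ N(beta_mean, beta_sd)."""
    betas = rng.normal(beta_mean, beta_sd, size=(n_draws, len(beta_mean)))
    V = betas @ X.T                                   # (draws, alternatives)
    expV = np.exp(V - V.max(axis=1, keepdims=True))   # stable softmax
    probs = expV / expV.sum(axis=1, keepdims=True)
    return probs.mean(axis=0)                         # simulated P_ij

# Hypothetical choice between three cars; columns = price, weight
X = np.array([[3.0, 1.2],
              [2.0, 1.5],
              [4.0, 1.0]])
p = mixed_logit_prob(X,
                     beta_mean=np.array([-1.0, 0.5]),   # illustrative means
                     beta_sd=np.array([0.5, 0.2]))      # illustrative spreads
```

In estimation these simulated probabilities enter a simulated log-likelihood; quasi-random draws (e.g. Halton or MLHS) are typically preferred over the plain Monte Carlo draws used here.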
On the other hand, increased fuel efficiency and adoption of EVs erode the revenue needed to finance road infrastructure and lead to a potential erosion of public road infrastructure funds. Furthermore, all tax policy instruments usually exhibit regressive features, involving a higher burden on lower income households. At the same time, it is more likely that higher income households are the main beneficiaries of subsidies paid out, which exacerbates the redistributive concerns of environmental policies. Hence, policy makers face important trade-offs between public finance, emission reductions and distributional concerns, and potential EV support mechanisms should be coupled with progressive financing schemes to counteract redistributive tendencies. |
11:00 | Long-distance charging behaviour and range anxiety: An adaptive choice design approach PRESENTER: Mikkel Thorhauge ABSTRACT. Introduction & methodology The paper presents a novel adaptive stated choice design for revealing EV users’ charging behaviour while accounting for range anxiety in the choice situation. The design is formulated as a rolling forward-looking choice design where users are asked to consider charging alternatives on long-distance trips in a sequence of stated choice situations. For each choice situation, it is assumed that respondents look ahead and either select from a set of charging alternatives along the route or postpone charging to a later stage. By presenting users with a consistent choice set where new charging alternatives are dynamically introduced as the respondent moves forward, while other alternatives are removed as they are ‘passed’, it is possible to reveal charging behaviour and range anxiety in a joint setting. The specific choice experiment introduced here resembles a charging situation and is adaptive in the sense that the choice situation is rolled forward in time through a number of charging stages. At time t=1 the respondent is presented with three charging alternatives i∈{1,2,3} and an opt-out alternative i=0 that represents the choice of postponing the charging decision to a later time. At time t=2 the driver is moved forward on his/her route. In doing so, we remove alternative i=1, as this opportunity has now passed, while adding a new alternative i=4. In this way, we allow for strategic forward-looking behaviour in the choice model. Moreover, by monitoring their state-of-charge (SoC) while progressing through the different stages, it is possible to reveal the trade-offs between charging attributes and range anxiety. The adaptive nature of the choice set for the different stages is illustrated below in Table 1. 
In the present design, where charging options are unlabelled, it is assumed that there is only one random effect. The combined dynamic choice experiment is designed so that charging is always required at some point. Hence, the initial SoC level is defined such that the respondent cannot reach the final destination without charging. As a consequence, the final choice set will not include an alternative of not charging. The choice probability of respondent n choosing a charging option at time t=1 is simply the probability of choosing that option at t=1, while the choice probability at a later stage t' involves the probability of not having chosen to charge before arriving at stage t'. Considering all sequences, the probability that a person charges at time t can be expressed as: \tilde{P}_{n,t=1}(i_1) = P_{n,t=1}(i_1); \tilde{P}_{n,t=2}(i_2) = P_{n,t=2}(i_2) P_{n,t=1}(i_1=0); \tilde{P}_{n,t=3}(i_3) = P_{n,t=3}(i_3) P_{n,t=2}(i_2=0) P_{n,t=1}(i_1=0); …; \tilde{P}_{n,t=k}(i_k) = P_{n,t=k}(i_k) \prod_{k'=1}^{k-1} P_{n,t=k'}(i_{k'}=0). Hence, if a person chooses to charge at t=k, \tilde{P}_{n,t=k} is the joint probability of not charging at any stage up to that point in time multiplied by the probability of charging at t=k. We model these probabilities using panel mixed logit models, where estimation is based on maximising the simulated likelihood. Stated Choice experiment For the generation of the stated choice design, we have applied the software package Ngene. Each choice task consists of 7 alternatives (6 charging locations labelled A-F and the opt-out alternative whereby charging is postponed). The experiment is designed to mimic the dynamic choice behaviour for longer trips, with particular focus on the interplay between SoC levels and charging attributes. The stated choice experiment was constructed as an efficient design and contains 40 choice tasks divided into 20 blocks. 
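The staged probabilities above can be computed recursively: each stage's unconditional probability is that stage's charging probability multiplied by the accumulated probability of having opted out at every earlier stage. A small sketch with invented per-stage charging probabilities (the final stage is forced, since the design guarantees charging is always required at some point):

```python
import numpy as np

def staged_charge_probs(p_charge_per_stage):
    """Given per-stage probabilities of choosing to charge (vs. the
    opt-out i=0), return the unconditional probability of first
    charging at each stage t:
        P~_t = P_t(charge) * prod_{t' < t} P_{t'}(opt-out)."""
    p_not_yet = 1.0               # probability of having opted out so far
    out = []
    for p_t in p_charge_per_stage:
        out.append(p_not_yet * p_t)
        p_not_yet *= (1.0 - p_t)  # respondent postponed again
    return np.array(out)

# Illustrative per-stage charging probabilities for one respondent;
# the last stage has probability 1 because not charging is not offered there.
p = staged_charge_probs([0.2, 0.5, 1.0])
# p = [0.2, 0.8*0.5, 0.8*0.5*1.0] = [0.2, 0.4, 0.4], which sums to 1
```

Because the final stage forces charging, the staged probabilities form a proper distribution over the stage at which charging occurs, which is what enters the simulated likelihood.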
In other words, each respondent is presented with 2 choice tasks from a simple design as well as 2 choice tasks from a complex design. With respect to charging speed, it should be noted that while charging speed is in general a non-linear function that depends on a variety of factors (such as the battery level and type), it is here assumed to be linear. All attributes are defined with five levels, except for cost, which has four levels, and facilities, which has three. Level values are presented in Table 2 below. Figure 1 shows how a choice stage is presented to the users. The specific choice stage is taken from the complex design, where cost, charging speed and facilities are included (Appendix A includes the subsequent pages in the rolling design for a complete overview). The data was collected by sending invitations to the approximately 2000-3000 members of the Danish electric car association, United Danish Electric Motorists (Forenede Danske Elbilister, FDEL). To avoid GDPR issues, a third-party consultant was hired to handle the email distribution. The final sample consisted of 354 respondents. Each respondent was presented with 2 simple experiments and 2 complex experiments. Furthermore, to increase robustness in parameter estimation, we perform a joint model estimation with a dataset from a similar study. That dataset contains 558 respondents who were presented with 1,674 stated choice tasks in total. Table 3 presents key statistics of the data. Results This section presents the results of the final model. The Mixed Logit (ML) model is estimated with PandasBiogeme v3.2.8 using 10,000 standard normal MLHS draws. The model accounts for panel effects, and thus the draws are sampled per individual (not per observation). Parameters and model summary statistics are presented in Table 4. Overall, all parameter estimates have the expected sign, and the main effects are significant at the 99% confidence level. 
We find that young individuals and Tesla owners are more inclined to postpone charging, and thus can be considered less risk averse. |
11:30 | Modelling Behavior of Consumers Preferences for Alternative Fuel Vehicles and its Energy Demand Implication at the National Level PRESENTER: Ayelet Davidovitch ABSTRACT. Introduction: The research analyzes the future energy transportation market based on a passenger transport representation embedded in a national techno-economic model. This representation is derived from a unique behavior model of passengers' preferences. Future mass deployment of Alternative Fuel Vehicles (AFV) requires investment in infrastructure and technology. The motivation of the study is to better understand the penetration and adoption of the different technologies. We first focus on modelling consumer behavior, demand patterns, and the factors affecting the transition to using such vehicles. Further, the study examines the diffusion consequences for various AFVs and their implications for national energy systems and the environment. We obtain a novel set of behavioral parameters using a research design that connects bottom-up (micro) data through choice models to a (macro) model of the entire energy-economic system. At the micro level, we applied a multi-segment choice modeling methodology for identifying consumer preferences to better estimate demand. At the macro level, we modified the integrative MESSAGEix energy model application to include a detailed transportation energy representation with a logit-based choice model, used for economic and environmental scenario analysis. The study is based on country-level modelling and data of Israel as a test case. The data includes measures of both travel activity and attitudes towards new transport technologies, modes, and behaviors. The study can be adapted to other regions and countries based on the same integrated model, with adjustments for their specific behavior and segment characteristics. 
Israel has significant advantages for the adoption of electric transportation due to its unique conditions, including low electricity prices, short travel distances, mixed national energy resources and a young, innovative population. Methods and Data: The behavior modelling is based on an advanced survey tool. We describe a two-stage method of pilot and main surveys. In the first phase, we launched three pilot surveys, used to provide an initial set of parameters most relevant to consumers' behavior when buying a car, to identify clusters of preferred parameters, and to decide whether the choice is nested, labelled, unlabelled, etc. The pilots informed the definition of the leading parameters for the survey. Following the pilot surveys, the final survey consists of two parts, in which the second part's choice menus are designed based on the results of the first part's cluster analysis. The mixed question types and the two-stage design of the survey allowed the examination of revealed vs stated preferences. The macro model is an energy model called MESSAGEix. The model is a dynamic bottom-up technology-based optimization model designed for medium- to long-term energy planning and policy analysis that provides a framework to represent energy systems with all their inter-dependencies and correlations. The model describes the entire energy system, including resource extraction, trade, conversion, transmission and distribution, and the provision of energy end-use services such as light, space conditioning, industrial process heating, and transportation. The optimization model is solved to find the least-cost solution for satisfying energy demand under various technical, economic and ecological constraints. MESSAGEix is updated to represent the Israeli energy market, incorporating the consumers' choice modelling based on a detailed Israeli consumer survey. The vehicle choices depend on more than just techno-economic considerations. 
Technology adoption decisions (vehicle choices) are influenced by both financial and non-financial considerations. The financial attributes include upfront (capital) costs and expectations about future operating and fuel costs, affected by fuel efficiency. The non-financial attributes include parameters such as the available models and brands, perceived risks, comfort, vehicle range and refueling/recharging station availability. The consumer preferences for these financial and non-financial attributes are very heterogeneous, within and across segments. In the macro-level model, we monetize the non-financial vehicle purchase considerations by analyzing “disutility costs” from the transport model. These “disutility costs” are added as extra cost terms to the vehicle capital costs already assumed, and they vary by technology, by consumer group, by country/region, and over time. The main model is then optimized, and the results are analyzed for economic and environmental variables. This work allows an understanding of the significance of private transportation demand for the energy and technological supply side. The research scenarios are based on two main aspects: various energy sources (Natural Gas, Renewable Energy) and various propulsion mixes. Results and Expected Contributions: The research results include the consumers' behavior based on the survey and economic aspects of the energy market based on the MESSAGEix model. Using the collected survey data, we show that the key factors influencing consumers' vehicle choice are, in order of importance, car type (characterized by manufacturer, category, and safety), horsepower, cargo size, and user interface, alongside the second-hand market value of the car. However, the preferences differ across different segments of the population. Based on the survey results, the population is divided into five main segments. The outcome of the survey is integrated into the MESSAGEix model. 
The anticipated result of the energy model with consumers' behavior is a significant and steady increase, over the coming decades, in the choice of electric vehicles compared to other propulsion technologies. Nevertheless, we expect differences between the groups in two main aspects: the mix of adopted technologies according to the consumers' behavioral preferences, and the mix of energy resources. In summary, the results will support the formulation of policies to encourage the use of AFVs with optimal diffusion levels appropriate to the planned future infrastructure. The outcome of the research can affect industry and government policies towards AFVs. |
12:00 | User preferences for EV charging, pricing schemes, and charging infrastructure PRESENTER: Anant Atul Visaria ABSTRACT. The adoption of electric vehicles (EVs) is rapidly growing. EV prices are expected to fall over time, and many countries intend to set up policies to support the purchase and use of EVs. This makes it reasonable to expect that EV adoption will continue to increase. However, it also means that there is a need for better planning, especially of the charging infrastructure, to improve and support the transition to e-mobility. This motivates research concerning the overall user experience to facilitate both ownership and usage of EVs. While the literature has mainly focused on EV ownership, in particular the influence of purchase price and driving range, there is much less evidence about how everyday facilities can also support the adoption of EVs. In particular, there is a need to identify what factors to consider while designing the supporting charging infrastructure. In this study, we analyse user preferences related to electric vehicle (EV) charging decisions. The analysis includes both a qualitative as well as a quantitative assessment. The qualitative assessment consists of a literature review of existing studies about EV charging behaviour and an analysis of semi-structured interviews with Danish EV users. This assessment identifies the most relevant factors for charging. In addition, it highlights that the time horizon of the charging decision is an important factor. Based on the overall qualitative assessment, two main decision-making situations were identified: Long-term decisions, i.e. what kind of pricing plan or charging network membership users prefer in order to meet their regular charging demands, and occasional decisions where users decide where to charge, especially on longer trips. 
Thus, two stated-choice (SC) experiments were designed, each explicitly focusing on one of these two decision-making situations. Choice experiment 1 In the interviews of the current study, it was indeed found that a common consideration that went into the decision of buying an EV was the operation and maintenance costs. While EVs in Denmark are still relatively more expensive than internal combustion engine vehicles (ICVs), users state that the comparatively low operation and maintenance costs of an EV (roughly one third of those for ICVs) are what made EVs more attractive for them. The alternatives, decided upon based on the existing pricing structures and hypothetical future possibilities, were No Contract (NC), Flat Fee (FF), and Monthly Subscription 1 (MS1) and 2 (MS2). The scenarios are described by subscription costs, home charging costs, public charging costs and whether the consumer, as part of the contract, has limited access to a specific charging network or not. Choice experiment 2 With the increased battery size and driving range of EVs, along with the availability of fast chargers, it is expected that more EV users have started taking their EVs on longer trips. This was also observed in the interviews conducted in the qualitative assessment. While most people still prefer using an ICV for longer trips, the tendency is shifting as EV adoption increases and charging infrastructure improves. Various studies have shown that public charging infrastructure is an important safety net for users, especially at intercity locations, to facilitate longer trips. In the second SC experiment, the idea is to ask users to consider that they are on a long trip and, due to a low battery level (20%), need to charge at a public fast charging option nearby. The choice scenario was created with the possibility to choose among three charging locations. This was an unlabelled experiment, so the alternatives were all similar in structure. 
The scenarios are described by detour, number of available chargers, charging speed, charging cost and additional facilities at the charging location. Both choice experiments were based on an orthogonal design generated using the Ngene software. For both SC1 and SC2, 27 choice scenarios in 9 blocks were created. This means that each respondent would answer 3 scenarios of SC1 and 3 scenarios of SC2. Data collection and results The survey was launched in March 2020. It was distributed by the charging provider E.ON to its customer base in Denmark and posted on various Danish EV forums. A total of 686 complete responses were collected. Of these responses, 558 were from BEV users; 488 of these BEV users were E.ON Denmark customers, while 70 were non-E.ON customers. The response collection ended in April 2020. To analyse the SC data, we apply discrete choice models based on random utility maximisation (RUM). Our final models are ML models. Based on SC1, we derive WTP measures with respect to subscription cost, i.e. we find the subscription cost that respondents are willing to trade off for any of the four other attributes. These are found using a parametric bootstrap, i.e. we draw a realisation for each parameter and calculate the ratio between the relevant parameters. This is then repeated 100 times before we compute the average and relevant percentiles. The WTP measures show that on average respondents are willing to trade off 110.2 DKK/month if they can lower the home charging cost by 1 DKK/kWh, and 33.4 DKK/month when it comes to public charging cost. Concerning the WTP for network access, we see that there is an average WTP of 57.1 DKK/month for network access in all of Denmark and that the WTP for network access within all of the EU is similar. Based on SC2, we derive WTP measures with respect to detour, i.e. we find the detour time that respondents are willing to trade off for any of the other attributes. 
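The parametric bootstrap for WTP described above can be sketched as follows. The coefficient estimates and standard errors below are invented for illustration; the procedure simply draws each parameter from its estimated sampling distribution, forms the WTP ratio per draw, and summarises the draws with the mean and percentiles.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_wtp(beta_attr, se_attr, beta_cost, se_cost, n_boot=100):
    """Parametric bootstrap of a WTP ratio: draw each parameter from a
    normal approximation of its sampling distribution, compute
    -beta_attr / beta_cost per draw, then report mean and percentiles."""
    draws_attr = rng.normal(beta_attr, se_attr, n_boot)
    draws_cost = rng.normal(beta_cost, se_cost, n_boot)
    wtp = -draws_attr / draws_cost
    return wtp.mean(), np.percentile(wtp, [2.5, 97.5])

# Hypothetical estimates: attribute coefficient 0.8 (s.e. 0.1),
# cost coefficient -0.02 (s.e. 0.002); WTP is then around 40 cost units
mean_wtp, (lo, hi) = bootstrap_wtp(0.8, 0.1, -0.02, 0.002)
```

A practical caveat of this ratio-of-draws approach is that it can misbehave if cost draws fall near zero; here the cost coefficient is many standard errors away from zero, so the ratio is well behaved.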
The WTP measures show that on average respondents are willing to trade off 5.54 minutes of detour against a 1 DKK/kWh reduction in charging cost, while charging speed is valued at 0.27 minutes of detour per extra km/min of charging. For chargers, the results show that an available charger is valued at 7.82 minutes of detour, while an occupied charger is valued much less, at 0.56 minutes of detour. Finally, we see that restrooms are valued at 1.2 minutes of detour, which can be compared to the availability of all facilities, valued at 9.5 minutes of detour. |
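The parametric bootstrap used in the abstract above (draw a realisation for each parameter, take the ratio, repeat, then summarise) can be sketched as follows. The coefficient values and standard errors here are hypothetical, not the study's estimates; draws are taken independently, which ignores any estimated covariance between parameters.

```python
import numpy as np

def bootstrap_wtp(beta_attr, se_attr, beta_cost, se_cost, n_draws=100, seed=0):
    """Parametric bootstrap of a WTP ratio: draw each parameter from its
    estimated (here assumed normal, independent) sampling distribution,
    take the ratio, and summarise with the mean and percentiles."""
    rng = np.random.default_rng(seed)
    attr_draws = rng.normal(beta_attr, se_attr, n_draws)
    cost_draws = rng.normal(beta_cost, se_cost, n_draws)
    # Ratio of marginal utilities: WTP in cost units per unit of attribute.
    wtp = attr_draws / cost_draws
    return wtp.mean(), np.percentile(wtp, [2.5, 97.5])

# Hypothetical estimates: disutility of home charging cost (per DKK/kWh)
# and disutility of subscription cost (per DKK/month).
mean_wtp, ci = bootstrap_wtp(beta_attr=-1.10, se_attr=0.15,
                             beta_cost=-0.01, se_cost=0.002)
print(round(mean_wtp, 1), ci.round(1))
```

With these illustrative inputs the mean lands near 110 DKK/month, mirroring the order of magnitude reported above; the percentiles give the bootstrap confidence band.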
10:30 | Comparing Water Quality Valuation Across Probability and Non-Probability Samples PRESENTER: Frank Lupi ABSTRACT. Choosing a sample source is a critical step in any survey design. Probability-based sampling is traditionally preferred for representing the general population. However, in the internet era, non-probability online samples have seen rising use due to their speed and cost-effectiveness. While non-probability online samples offer advantages such as rapid collection times and lower costs, they are criticized for potential biases that may not be mitigated by balancing samples to “represent” population demographics (Baker et al. 2013). Understanding trade-offs related to non-probability samples is particularly pertinent to stated preference (SP) choice modelling, which is often complex and costly. Although some survey literature has found that non-probability samples are less accurate and have larger variance than probability samples (Yeager et al. 2011), a recent review of best practices for SP notes that data collection mode may not considerably affect results, citing several SP studies with mixed findings (Johnston et al. 2017). Since that review, some popular non-probability sample sources have undergone a “crisis” in response quality (Chmielewski and Kucker 2020). As sources and methods evolve, continuing research is important, since bias between non-probability online samples and traditional methods may change. This paper adds to this literature by comparing SP choice modelling results between an address-based probability sample and two non-probability online samples. The specific research compares values for freshwater ecosystem services in Michigan using referendum choice questions. We find that while the samples differ in some demographics, such as the relative youth of the MTurk respondents, across 95% of attitudinal variables the samples were substantively very similar.
Most importantly, though, the samples consistently differed in terms of key economic outcomes like total and marginal WTP for attribute changes. Our SP survey was developed according to the suggestions of Johnston et al. (2017) for survey development. The probability sample is an address-based sample (ABS) of the general population from the USPS postal delivery file that was implemented in a push-to-web mail-invitation design. The survey received a 23% response rate yielding ~2,500 observations. The two non-probability online opt-in samples include 1,237 respondents from MTurk and 3,095 from Qualtrics. Amazon’s MTurk web service consists of workers who complete tasks, such as surveys, for payment. Qualtrics recruits its opt-in panel using proprietary methods and incentives. Except for necessary differences in login procedures, the surveys were identical across sample sources. In each sample, invitees were told the survey was about public policy issues rather than water quality, to minimize any self-selection related to the good. Each used a single binary referendum contingent valuation question, with respondents voting on a water quality change at a cost to their household. Water quality was described using four water-quality attributes: an ecological condition score, a recreational fish score, a clarity score, and an E. coli score. Across respondents, we used the same 30 experimentally designed scenarios for water quality changes and costs created in NGene, and the order of information on the four water quality attributes, and the way they were summarized before the choice question, was randomly varied across respondents. Demographically, MTurk respondents skewed younger, less retired, and more educated, while Qualtrics skewed older and more retired compared to ABS, consistent with Zack et al. (2019). All samples had medians for income and gender very close to U.S. Census medians.
We also compared the sample sources across the many attitudinal question scales in the survey. With the large sample sizes, 86% of the attitudinal means were significantly different, even though the means were all remarkably similar. However, only 5% of attitude measures were substantively different from the ABS sample (i.e., by more than 10%). Turning to key economic results, we first examine the shares of yes votes at the various price levels in the experimental design; we then summarize the econometric model results. Across the various price levels, only the ABS sample showed a theoretically consistent, strictly monotonic decline in votes as prices rose, although the MTurk and Qualtrics samples did decline on average. At each price level, the ABS sample always had the lowest share of respondents voting yes, and MTurk had the highest share voting yes. When we parametrically model WTP, we find that almost all parameters were significantly different across samples. Many of the marginal willingness-to-pay (MWTP) estimates for individual water quality attributes were significantly different across the three sources, sometimes differing by more than a factor of two. For total WTP across policy-relevant changes in water quality, the MTurk values were always significantly greater than the address-based sample at the 1% level, with a difference that grows with the size of the water quality changes. The Qualtrics total WTP values were significantly greater than ABS for changes up to a 13-percentage-point improvement but became lower than ABS for changes over 21 percentage points, because Qualtrics had the lowest scope sensitivity (i.e., Qualtrics was least responsive to quality changes). In sum, the finding that almost all our attitude metrics were not substantively different across samples did not prevent the valuation results from differing in many important ways.
These results should be of broad interest across disciplines since all have the potential to use these different approaches to obtain respondents. References Baker, R., et al. 2013. Summary Report of the AAPOR task force on non-probability sampling, J. of Survey Statistics and Methodology. 1(2):90-143. Chmielewski, M., and S. Kucker. 2020. An MTurk crisis? Shifts in data quality and the impact on study results. Social Psychological and Personality Science 11(4): 464-473. Johnston, R., et al. 2017. Contemporary guidance for stated preference studies. J. of the Assoc. of Environmental and Resource Economists 4(2): 319-405. Yeager, D., et al. 2011. Comparing the accuracy of RDD telephone surveys and internet surveys conducted with probability and non-probability samples. Public Opinion Quarterly 75(4): 709-747. Zack, E., et al. 2019. Can nonprobability samples be used for social science research? A cautionary tale. Survey Research Methods. 13(2):215-227. |
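The single binary referendum format described in the abstract above can be sketched as a logit on the vote outcome, where a "yes" becomes less likely as the bid rises and mean WTP follows from the coefficient ratio. All coefficient values and bid levels below are hypothetical illustrations, not the study's estimates.

```python
import numpy as np

# Utility difference for voting "yes": v = alpha + beta_q * dq - beta_c * cost,
# so P(yes) is logistic in v, and mean WTP for a quality change dq is
# (alpha + beta_q * dq) / beta_c.  All parameter values are assumed.

def p_yes(cost, dq, alpha=0.5, beta_q=0.06, beta_c=0.02):
    v = alpha + beta_q * dq - beta_c * cost
    return 1.0 / (1.0 + np.exp(-v))

def mean_wtp(dq, alpha=0.5, beta_q=0.06, beta_c=0.02):
    return (alpha + beta_q * dq) / beta_c

costs = np.array([10, 50, 100, 200, 400])   # hypothetical bid levels
shares = p_yes(costs, dq=10)
# Theoretical consistency check: yes-shares decline monotonically in cost.
assert np.all(np.diff(shares) < 0)
print(shares.round(3), mean_wtp(dq=10))
```

The same machinery makes the paper's scope-sensitivity comparison concrete: a sample whose `beta_q` is smaller shows total WTP that grows less with the size of the quality change.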
11:00 | Are green flood management strategies preferred by residents of the Northern Territories in Hong Kong: a Stated Preference Approach PRESENTER: Duncan Knowler ABSTRACT. Historically, Hong Kong has been susceptible to heavy rainfall and typhoons due to its geographical location. Concern has been rising over the increasing intensity and frequency of coastal flooding, enhanced by climate change and rising sea levels (Lin et al., 2012). Traditionally, the government has adopted a hard engineering approach, constructing underground receptors and drainage to temporarily store water and divert flooding to suitable discharge points. However, wetlands and other green infrastructure can help mitigate flood risk and regulate local climate (Ramsar, 2013). For example, the Mai Po wetlands in Hong Kong’s Northern Territories could help mitigate flood risk by containing flood waters and slowing the advance of flooding (Hossain et al., 2005). However, support from flood management authorities and the local population for such green infrastructure has not been studied. We address several research questions: (a) How do households located near coastal wetlands in the Northern Territories value the flood protection service of the Mai Po wetlands, and does the threat of rising sea level alter the perceived value of this ecosystem service or affect preferences for flood management? (b) Are residents willing to pay for alternatives to supplement conventional flood mitigation measures, such as improving the wetlands to control flooding, and which alternative flood mitigation measures are most preferred by residents? To address these questions, we used a discrete choice experiment (CE) that incorporated novel spatial/geographic information and explicitly accounted for sea level rise.
We explored preferences for flood management in the Northern Territories using a set of four attributes to describe elements of flood management planning: (i) the area of mangroves in Mai Po; (ii) the number of green infrastructure projects combining natural and engineered structures in Yuen Long District; (iii) the number of adaptation strategies to mitigate potential flooding and storm damage; and (iv) a one-time payment for the different programs. In the latter case, respondents were asked if their household would be willing to make a one-off payment to a flood prevention fund established jointly by a non-government agency and the government. Finally, an opt-out option was included in each choice set, referred to as the “Existing Situation”. We also included several additional considerations in our research. Sea Level Rise (SLR) was incorporated as a special context variable but only shown on the last three choice cards presented to each respondent. In addition, the GPS coordinates and elevation of each respondent’s residence were collected using Google Maps to add a spatial dimension. Finally, to test for heterogeneity in preferences, we used Latent Class Modeling (LCM; Lazarsfeld and Henry, 1968). LCM assigns each respondent to one of a discrete number of latent classes, each with a unique parameter vector, based on the respondent’s choice observations and characteristics. The field survey was conducted from June to August 2019 and produced 511 completed responses, but after removing respondents who chose to opt out of all choice tasks, our sample was reduced to 463 responses. Considering model simplicity and interpretability of the results, a 3-class model was selected from the LCM analysis. Class membership was significant in accounting for diverse preferences for the CE attributes. In addition, the coefficients estimated for all non-spatial covariates were significant (p < 0.05) and could be used to “explain” membership in the three classes.
However, elevation and distance as covariates were not statistically significant. WTP results from our estimations reveal useful information about residents’ preferences for different flood management options. For example, residents of the Northern Territories prefer mangrove management as a flood mitigation alternative. Although they also support using green infrastructure to manage flood risk, the WTP is less than half that for mangrove management. The residents also showed positive WTP for improvement of adaptation strategies: close to 75% of respondents (at the 95% confidence level) supported at least some improvement in adaptation strategies, although their WTP was significantly less than that for green infrastructure and mangrove management. On a latent class basis, class 2 respondents are more willing to pay for various flood risk management strategies than other classes, despite having members with lower incomes and education. Nevertheless, members of class 1, who are from the higher income and education group, also demonstrate willingness to pay for various flood risk management strategies. Overall, respondents expressed positive views of mangrove wetlands and green infrastructure, yet their WTP to adopt a green approach towards flood management is not very high, particularly among those with higher incomes and education levels. This might be expected in Hong Kong, as most people live in high-rise residential buildings, which are considered safe in a flood situation. Furthermore, over 74% of class 1 and class 2 (at the 95% confidence level) would be willing to pay for flood management strategies, including mangrove management, green infrastructure provision and adaptation strategies. This is equivalent to over $140,000 HKD ($17,949 USD) per hectare of mangroves, a finding that policy makers should consider when studying alternatives to conventional infrastructure.
As many Asian coastal cities are predicted to be vulnerable to SLR due to climate change (Hanson et al., 2011), demonstrating a relationship between flood risk management and perceptions about SLR is an important finding. When an SLR condition was presented to respondents, they were more likely to select one of the offered flood control programs than the opt-out option. Additionally, people who had more previous experience with flooding (especially class 1), also expressed more concern about SLR and flooding in general. As a result, they tended to opt for any additional mitigation program, rather than the status quo. Although respondents expressed concern about SLR, the Hong Kong Government has not carried out much research on this issue. Reports recently published by the Environmental Protection Department in Hong Kong do not address or mention coastal flooding issues. To respond to people’s concerns, the government needs to pay more attention to SLR, including commissioning more research in response to SLR scenarios. |
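The latent class model used in the abstract above combines a class-membership model driven by respondent covariates with class-specific choice models. A minimal numerical sketch, with a made-up 3-class structure, illustrative covariates (income, education) and entirely hypothetical parameter values:

```python
import numpy as np

def class_shares(z, gamma):
    """Class-membership probabilities: a multinomial logit over classes,
    scored on respondent covariates z (one row of gamma per class)."""
    scores = gamma @ z
    e = np.exp(scores - scores.max())
    return e / e.sum()

def choice_probs(X, beta):
    """Conditional logit choice probabilities for one choice set.
    X: (n_alternatives, n_attributes); beta: class-specific tastes."""
    v = X @ beta
    e = np.exp(v - v.max())
    return e / e.sum()

def mixture_prob(X, betas, shares):
    """Unconditional choice probabilities: membership-weighted mixture."""
    return sum(s * choice_probs(X, b) for s, b in zip(shares, betas))

z = np.array([1.0, 0.4, 1.0])          # constant, income, education (scaled)
gamma = np.array([[0.0, 0.0, 0.0],     # class 1 normalised to zero
                  [0.3, -0.5, -0.2],
                  [0.1, 0.2, 0.4]])
betas = [np.array([0.8, -0.02]),       # (mangrove-area taste, cost taste)
         np.array([0.2, -0.05]),
         np.array([1.5, -0.01])]
X = np.array([[1.0, 100.0],            # a programme alternative
              [0.0, 0.0]])             # opt-out "Existing Situation"
s = class_shares(z, gamma)
p = mixture_prob(X, betas, s)
print(s.round(3), p.round(3))
```

Class-specific WTP then follows as the ratio of each class's attribute taste to its cost taste, which is how class-level differences like those reported above arise.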
11:30 | What is the value of EU habitat and species maintenance policy? From model results to policy uses PRESENTER: Tomas Badura ABSTRACT. In May 2022, we will celebrate 30 years since the first draft of the Convention on Biological Diversity. Nevertheless, biodiversity has continued to decline at alarming rates. Biodiversity is an intangible asset essential for ecosystems’ functioning and human wellbeing. It provides multiple welfare and wellbeing gains, but its management is very complex. Biodiversity provides direct and indirect benefits supporting the production of many essential goods (e.g. timber, fruits) and services (e.g. recreation), but it also offers spiritual or intrinsic benefits, with the latter component being rarely recognized and accounted for in government and industry decision-making. This paper provides European-level, spatially explicit estimates of biodiversity non-use value applicable in decision-making processes, making visible the hidden contribution of habitat and species maintenance to human wellbeing. A choice experiment was conducted in four European countries selected to represent a range of diverse environmental and social contexts. Two policy uses of the data are proposed. First, simulated exchange values, as suggested by the UN SEEA EA guidelines for ecosystem accounting (UN et al., 2021), are estimated to populate natural capital accounting use and supply tables. Second, a European map of biodiversity values is produced via value transfer techniques. We evaluated citizens’ preferences for biodiversity, their reasons for protection, and willingness to pay by collecting a representative sample of 1,500 responses through an online survey administered in the Czech Republic (CZ), Germany (DE), Ireland (IE) and Italy (IT). The improvement in biodiversity was embodied in land use management options as the main driver of biodiversity change.
The choice tasks involved trade-offs between improved, maintained, or deteriorated agricultural practices (from agroforestry to monoculture), along with farm-level size interventions, chemical use intensity, biodiversity levels and annual costs. Spatial differentiation of the choice cards was achieved ex ante, with the current map of land uses embedded in the choice cards as in Holland and Johnston (2017), and ex post, with a spatial profiling of individuals’ local agricultural conditions using Corine Land Cover data, the EU mapping of ecosystems and their conditions (Vallecillo et al., 2016), and individuals’ postcodes. A set of hypothetical-bias questions (e.g. the possibility to revise answers due to overestimates) was included in the questionnaire, and more than 80% of respondents passed the test. Panel mixed logit model results reveal heterogeneity in preferences across space and social groups, but biodiversity is persistently a key attribute of land management affecting respondents’ choices. The average willingness to pay for biodiversity varies from Euro 28 to 276 per family per year and reflects the currently uneven condition of the European natural environment as well as attitudes and policies in support of biodiversity protection. Overall, our results suggest that strengthening habitat and species maintenance policy is considered a necessity by the public. In fact, considering the aggregate amount Europeans are prepared to pay annually (Euro 30 billion) for biodiversity, we can anticipate that the Post-2020 Biodiversity policy, committed to an annual budget of Euro 20 billion, would likely find public support. However, regional diversity needs to be fully reflected in effective and fair policy interventions. Innovative financial mechanisms and funding streams could be promoted to stimulate the creation of a European green funding market for biodiversity. To portray policy uses of the results, we first estimated simulated exchange values (SEV) as discussed in Caparros et al.
(2017). The SEV approach uses model parameters and simulated changes in biodiversity levels to derive demand and supply curves and determine the exchange price that biodiversity would have in a standard traded market. The SEV measure is consistent with accounting standards and can be used in natural capital accounting in the four member states selected in our case study. Our results are, however, very sensitive to multiple post-estimation issues, and doubts remain about the applicability of SEV to biodiversity and other intangible ecosystem services. Subsequently, to investigate spatial differences across member states, we conducted a cluster analysis including all European member states, to compare the environmental and socio-economic conditions of the studied countries with those of the other countries and to upscale results using benefit transfer techniques. References Holland, B. M., & Johnston, R. J. (2017). Optimized quantity-within-distance models of spatial welfare heterogeneity. Journal of Environmental Economics and Management, 85, 110-129. Vallecillo, S., Maes, J., Polce, C., & Lavalle, C. (2016). A habitat quality indicator for common birds in Europe based on species distribution models. Ecological Indicators, 69, 488-499. United Nations et al. (2021). System of Environmental-Economic Accounting—Ecosystem Accounting (SEEA EA). White cover publication, pre-edited text subject to official editing. Available at: https://seea.un.org/ecosystem-accounting. Caparrós, A., Oviedo, J. L., Álvarez, A., & Campos, P. (2017). Simulated exchange values and ecosystem accounting: Theory and application to free access recreation. Ecological Economics, 139, 140–149. https://doi.org/10.1016/J.ECOLECON.2017.04.011 |
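One stylised reading of the simulated exchange value idea discussed in the abstract above: simulate a demand curve from model-based WTP and read off the price at which it meets a fixed simulated supply. This sketch substitutes arbitrary lognormal draws for the paper's model-based simulation and assumes unit demands, so it illustrates the mechanics only, not the Caparrós et al. (2017) procedure in full.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical individual WTP draws for a marginal unit of biodiversity
# improvement (standing in for simulation from estimated model parameters).
wtp = rng.lognormal(mean=3.0, sigma=0.8, size=10_000)

def demand(price):
    """Quantity demanded at a price: people whose WTP is at least the price."""
    return int(np.sum(wtp >= price))

def simulated_exchange_price(supply):
    """Price at which demand equals a fixed simulated supply: with unit
    demands, this is the supply-th largest WTP draw."""
    return np.sort(wtp)[::-1][supply - 1]

p_star = simulated_exchange_price(supply=2_500)
print(round(float(p_star), 2), demand(p_star))
```

The exchange price times the supplied quantity then gives an exchange-value entry of the kind used to populate accounting supply and use tables.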
12:00 | Where are pollution reductions most valued? A transboundary choice experiment study for the UK and US PRESENTER: Keila Meginnis ABSTRACT. Transboundary pollution occurs when pollutants which are released into the environment affect not only the emitting country but also cause damage to other countries' environments. Environmental factors like rivers, wind, and ocean currents carry pollution from the emitting country and deposit it in other countries' territories. Greenhouse gases (GHGs) are an example of transboundary pollution, since an emitting country releases GHGs into the environment that cause environmental damage in many countries worldwide (see Du et al., 2020). Acid deposition from fossil fuel emissions is a second example. Transboundary pollution creates a challenge for regulating agencies, as a country is affected not only by its own abatement and polluting actions, but also by the abatement actions and polluting behaviours of other countries. In such circumstances, incentives exist for some countries to free ride on the abatement actions of other countries. Some time ago, Barrett (1994) argued that the only way to solve these common property dilemmas is through self-enforcing international agreements, which depend on multiple countries believing it to be in their self-interest to cooperate over emission reductions. What countries see as the benefits of reducing pollution domestically is key to this decision; others have argued that citizens might also care about pollution damage reductions internationally, making cooperation more likely (Kolstad, 2014). We use a highly topical case study of a transboundary pollution control programme, marine plastic pollution, to examine preferences towards, and willingness to pay (WTP) for, pollution reductions at home, and contrast these with preferences for pollution reductions both in other countries and in international waters.
Marine plastic pollution has the feature that emissions (plastic wastes originating from a given country) have the potential to cause harm to multiple countries joined by the same ocean, as well as to cause damage to international waters outside national jurisdictions. This paper aims to understand individuals' preferences for the management of a global public good; specifically, for interventions which reduce a transboundary pollutant, marine plastics, both at home and abroad. Cost-effective measures to curb marine plastic pollution need to be coordinated internationally, if not globally, due to variations in marginal abatement costs and marginal damage costs between affected nations (Maler, 1989). Theoretical contributions on coalition formation show the narrow set of conditions which make such voluntary coalitions likely to emerge (Barrett, 2003). Among these conditions, the value of domestic reductions in emissions and the associated welfare benefits are key. Yet very little is known about the costs of marine plastic pollution, or the benefits of its reduction, and how these damage costs and WTP for reductions in damages vary across potential partners in an International Environmental Agreement (IEA) over marine plastics reduction. Moreover, we know almost nothing about how domestic voters would value reductions in damages, caused by marine plastics emitted by their nation, in other countries with whom they might form a coalition, which Kolstad (2014) has argued to be of importance in predicting whether transboundary pollution reduction coalitions might emerge, and to the level of emissions that a country would choose outside such a coalition. To start to better understand these important issues, we administered a multi-country discrete choice experiment (DCE) which focused on one global public good: the North Atlantic Ocean.
The experimental design helps to understand what kinds of pollution reduction programmes domestic citizens might be willing to support, in order to help design international cooperation programmes for pollution mitigation. Due to the nature of this transboundary pollution problem, our proposed abatement programmes would reduce pollution not only at home (on beaches and in coastal waters) but would also affect pollution levels abroad and in international waters. We believe this to be the first study which provides insights into preferences for pollution abatement with these unique characteristics. As transboundary pollution control can involve a large number of countries, we narrow our focus to potential cooperation between two countries. Specifically, we focus on two countries which emit plastic wastes into the North Atlantic Ocean: one within Europe (the United Kingdom, UK), the other in North America (the United States, US). We administered an identical DCE in both countries to understand preferences in each country for pollution reduction at home and abroad. We add to the literature in several ways. First, we asked respondents directly about preferences for pollution reduction from current levels, rather than focusing on their preferences towards different abatement measures to achieve these targets. Second, we include attributes that relate to the different locations where this pollution reduction can occur, i.e. at home, internationally, or in a foreign country's environment, taking into account the transboundary nature of the pollution and the spill-over effects of any abatement measures. Third, we make use of two coordinated, and hence largely identical, DCE surveys to conduct a cross-country comparison. Finally, we take into consideration that international cooperation is necessary for transboundary marine plastic pollution and include aspects of multi-country cooperation in the DCE itself (a cost-sharing attribute).
Using latent class modelling, results show that distinct classes of individuals exist across the UK and US respondents, with the majority of respondents preferring to reduce pollution both at home and abroad. However, some respondents dislike any programme initiative, while others would like a programme but are indifferent to where, and by how much, the pollution is reduced. We additionally find that some respondents in the US actually exhibit negative marginal utility from reductions which occur in the UK. Barrett, S. (1994). Self-Enforcing International Environmental Agreements. Oxford Economic Papers, 46, 878–894. Du, X., Jin, X., Zucker, N., Kennedy, R., & Urpelainen, J. (2020). Transboundary air pollution from coal-fired power generation. Journal of Environmental Management, 270(June). Kolstad, C. D. (2014). International Environmental Agreements among Heterogeneous Countries with Social Preferences. NBER Working Paper Series. |
13:30 | A model of recreational demand with non-parametric representations of consumers’ heterogeneity: A case study of forest recreation sites in Italy PRESENTER: Andrea Pellegrini ABSTRACT. Within the environmental economics literature, models of recreational demand have long been employed to quantify the economic value associated with recreation sites and nature reserves. Some scholars have posited that the economic value related to a natural resource can be decomposed into its non-use value and use value (see Hausman, Leonard and McFadden, 1995). Whilst the former, if it exists, cannot be computed via revealed preference methods (it is complicated, if not impossible, for the analyst to infer from the data the value that individuals place on a natural resource attributable uniquely to non-use), the latter can stem from the services that recreation sites offer to visitors. In the latter case, researchers can infer the preferences that excursionists hold for a natural resource from data on leisure activities (e.g., fishing, hiking, or walking) that motivate visits to specific outdoor destinations. As such, the use value of a recreation site can be derived from consumer surplus estimates, as is common practice in standard economic analyses (see Hausman, 1981; Small and Rosen, 1981). Two-stage budgeting theory is the theoretical backbone on which models of recreational demand are grounded (see Gorman, 1971). Specifically, visitors first choose the number of trips to undertake (i.e., first stage), after which they decide how to allot the selected number of trips to different recreational sites (i.e., second stage). These two phases of the individuals’ decision-making process are econometrically linked through an inclusive value. Such a value is obtained from the second stage by solving a trip allocation problem, and the result is used in the first stage as a determinant of trip demand.
The main empirical challenge here resides in jointly modelling the discrete choice (i.e., a single recreation site is observed to be chosen by travellers) and count data (i.e., the number of trips to the destination). To date, most applications have circumvented this methodological complexity by resorting to a two-stage estimation approach. First, the analyst solves the trip allocation problem, in which excursionists are assumed to select only one alternative from a finite set of mutually exclusive options. Next, the resulting model parameter estimates are used to calculate the inclusive value, which serves as an explanatory variable for the number-of-trips model. Either a Poisson or a Negative Binomial process is typically assumed to derive estimates of recreational trip demand, as trip frequency data are both asymmetric and integer in nature. Whilst a two-stage technique substantially simplifies the econometric analysis, it yields estimated model coefficients which are consistent but inefficient. As a result, the inferences that the analyst makes post-estimation are likely to misrepresent the underlying consumers’ decision-making process. This study seeks to contribute to the existing literature on recreational demand choices in two ways. First, it evaluates simultaneously the decisions made by individuals regarding the recreation site and the number of trips, as opposed to sequentially optimizing the objective function. Second, it allows for non-parametric representations of individuals’ heterogeneity by means of a Logit-Mixed Logit (L-MNL) model (Train, 2016). Traditional frameworks such as those proposed by Hausman, Leonard and McFadden (1995), Rouwendal and Boter (2009), and Forrest, Grime and Woods (2000) impose that model parameters be homogeneously distributed over the sample.
However, the assumption of homogeneous preferences is quite unrealistic insofar as it implicitly assumes all individuals have the same preferences for the recreational features (i.e., attributes) and services that the recreation site provides. The appeal of the L-MNL resides in the fact that it requires no a priori assumptions as to what functional forms the mixing distributions should take. This results in a great degree of flexibility in capturing random taste variation, as it lets the data reveal the true distributional shapes underneath. The proposed modelling approach extends that of Train in that it allows for both random and fixed parameters in a willingness to pay (WTP) space, a specification particularly useful for interpreting the empirical findings in a straightforward manner. Further, conditional estimates of means and standard deviations are also computed so as to investigate possible differences within the sample in terms of age, income and level of education. The empirical analysis is based on an online survey approximately 18 minutes in length, administered to 1,500 respondents. After providing their consent to partake in the survey, respondents were asked how often they visit forest sites in a year for the purpose of outdoor recreation (we asked respondents to envision a choice situation wherein no restrictions were in place due to the ongoing pandemic), the typical composition of the visiting party, the length of the trip, and the mode of transport to the destination. In the third section, respondents were asked to engage in a discrete choice experiment (DCE), preceded by an example of a choice task. In introducing the DCE, care was taken to carefully explain the choice alternatives, attributes, and attribute levels.
Each choice task included three mutually exclusive alternatives from which respondents had to choose, two of which were unlabeled forest sites (i.e., Forest A and Forest B). Respondents were also given the option of visiting a recreational site without access to either of the two forest sites described (i.e., a status quo). After selecting one of the two unlabeled options (either Forest A or Forest B), respondents were asked how often they would visit the two recreation sites if given the possibility to do so. Respondents were endowed with a travel budget (the overall number of visits to recreation sites over a year) which they had to distribute over the two recreation site alternatives. Rather than assigning the same travel budget to the entire sample, we randomly assigned it to each choice task. Further, the travel budget was calibrated using the initial question concerning the number of times the respondent visits a forest recreation site over a year. |
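The classic two-stage linkage this abstract contrasts itself with (a site-choice logit whose logsum inclusive value enters a Poisson model of trip frequency) can be sketched as follows. The site utilities and the Poisson coefficients are hypothetical, for illustration only.

```python
import numpy as np

def site_logsum(V):
    """Second stage: inclusive value (logsum) over site-level systematic
    utilities V_j from a conditional logit, computed stably."""
    m = V.max()
    return m + np.log(np.sum(np.exp(V - m)))

def expected_trips(iv, delta0=0.5, delta1=0.4):
    """First stage: Poisson trip demand with the logsum as a regressor,
    E[trips] = exp(delta0 + delta1 * IV).  Coefficients are assumed."""
    return np.exp(delta0 + delta1 * iv)

# Hypothetical utilities, e.g. Forest A, Forest B, and the status quo option.
V = np.array([-0.2, 0.5, 0.1])
iv = site_logsum(V)
print(round(float(iv), 3), round(float(expected_trips(iv)), 2))
```

The paper's point is that estimating these two pieces sequentially is consistent but inefficient; the joint approach estimates site choice and trip counts in one likelihood instead of feeding `iv` in as a fixed regressor.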
14:00 | Individual posterior evaluations of tastes, mathematical form of disutility, substitution pattern and distribution of random terms with latent class structures PRESENTER: Fiore Tinessa ABSTRACT. Posterior analysis downstream of the estimation of a random utility model (RUM) is a powerful tool to improve individual choice prediction based on prior information, namely past or stated individual choices. We consider a RUM as a model that computes the probability that an individual, who faces a discrete choice set of alternatives in a certain choice scenario, chooses the alternative that minimizes a latent variable called perceived disutility. As commonly done in the literature for utility maximization models, such disutility can be treated as a random variable that is a function of two main components: a subutility term, which embeds the effects of all the observable quantities (attributes) and the associated tastes (coefficients), and a random term, for which some distributional assumption is made. Generally, posterior analysis is applied to retrieve individual posterior distributions of tastes (Revelt and Train, 2000; Train, 2009; Hess and Hensher, 2010). Indeed, as is well known, the estimation of random coefficient or latent class models, assuming respectively parametric or nonparametric distributions for tastes, allows the analyst to estimate only a sample-level distribution, without any information about the most likely point (or at least range) of that distribution for a single individual. This paper widens the application of the posterior analysis framework by addressing other issues of choice modelling at an individual level, namely: the mathematical form of disutilities (additive, multiplicative or in-between), the difference in variance of random disutilities across alternatives (heteroskedasticity), the concavity of subutility with respect to explanatory variables, the distribution of random terms and the correlation pattern across alternatives.
This kind of posterior analysis is made possible by assuming: a) a mathematical q-product form for random disutilities, as introduced by Chikaraishi and Nakayama (2016), namely a functional form that links the subutility and the random term (assumed to be dependent) through an operator called the q-product (Nivanen et al., 2003), which generalizes the algebraic sum and simple product as a function of a parameter q; b) a finite mixture distribution for random disutilities (Papola, 2016; Tinessa, 2021). Therefore, the posterior analysis at the individual level is made possible by assuming a particular latent class structure, where each class is characterized by a different vector of tastes, a different value of the parameter q and a different correlation pattern. Summarizing the analytical and behavioural implications, the framework allows: 1) assessing whether the best mathematical form of disutilities for the analysed individual is closer to the additive or the multiplicative form (Fosgerau and Bierlaire, 2009); this reveals whether a certain individual is more sensitive to absolute differences (e.g., 10 minutes extra travel time) or relative differences (e.g., 10 per cent extra travel time) in subutility terms, respectively; 2) introducing the impact of the different levels of information (i.e., different variances of disutilities) characterizing the different alternatives of the choice set, individuals and choice scenarios, as going beyond additive specifications (q > 0) yields RUMs that are heteroskedastic in disutilities; 3) indicating which random term distribution is best suited for the analysed individual, within the Extreme Value distributions domain.
This is possible by exploiting the q-product Logit model by Chikaraishi and Nakayama (2016), which generalizes other models, such as the simple Multinomial Logit, which assumes reversed Gumbel (or Gumbel of the minimum) distributed disutilities, and the Multinomial Weibit (Castillo et al., 2008), which assumes Weibull distributed disutilities, thus also incorporating the Rayleigh and Exponential distributions as special cases of its shape parameter; 4) accounting for the individual’s relative risk aversion, as Chikaraishi and Nakayama (2016) have shown that the parameter q is equivalent to the relative risk aversion measure by Pratt (1964); 5) accounting for the individual’s own pattern of correlation, that is, how individuals perceive different similarities amongst the alternatives. This is possible by exploiting a mixture distribution for the random disutilities (Tinessa, 2021). The rationale of the latent class model in q was proposed by Tinessa (2021). However, unlike the latter study, the model investigated in this paper also takes into account the panel nature of the dataset and the taste and correlation heterogeneity of respondents, allowing for a significant improvement in the goodness of fit. A real-world application to intercity mode choice between Milan and Naples (Italy) shows that the latent class q-product Logit outperforms other existing models, such as the latent class Logit and the latent class Weibit, and that the posterior analysis further improves the goodness of fit significantly relative to the unconditional (i.e., sample-level) model. References Castillo, E., Menéndez, J.M., Jiménez, P., Rivas, A., 2008. Closed form expressions for choice probabilities in the Weibull case. Transp. Res. Part B Methodol. 42, 373–380. https://doi.org/10.1016/j.trb.2007.08.002 Chikaraishi, M., Nakayama, S., 2016. Discrete choice models with q-product random utilities. Transp. Res. Part B Methodol. 93, 576–595.
https://doi.org/10.1016/j.trb.2016.08.013 Fosgerau, M., Bierlaire, M., 2009. Discrete choice models with multiplicative error terms. Transp. Res. Part B Methodol. 43, 494–505. https://doi.org/10.1016/j.trb.2008.10.004 Hess, S., Hensher, D.A., 2010. Using conditioning on observed choices to retrieve individual-specific attribute processing strategies. Transp. Res. Part B Methodol. https://doi.org/10.1016/j.trb.2009.12.001 Nivanen, L., Le Méhauté, A., Wang, Q.A., 2003. Generalized algebra within a nonextensive statistics. Reports Math. Phys. 52, 437–444. https://doi.org/10.1016/S0034-4877(03)80040-X Papola, A., 2016. A new random utility model with flexible correlation pattern and closed-form covariance expression: The CoRUM. Transp. Res. Part B Methodol. https://doi.org/10.1016/j.trb.2016.09.008 Pratt, J.W., 1964. Risk Aversion in the Small and in the Large. Econometrica. https://doi.org/10.2307/1913738 Revelt, D., Train, K., 2000. Customer-Specific Taste Parameters and Mixed Logit: Households' Choice of Electricity Supplier. Working Paper, Department of Economics, University of California, Berkeley. Tinessa, F., 2021. Closed-form random utility models with mixture distributions of random utilities: Exploring finite mixtures of qGEV models. Transp. Res. Part B Methodol. https://doi.org/10.1016/j.trb.2021.02.004 Train, K.E., 2009. Discrete Choice Methods with Simulation, Second Edition. Cambridge University Press. https://doi.org/10.1017/CBO9780511805271 |
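The q-product operator underpinning assumption a) above can be sketched numerically; the implementation below is our illustration of the Nivanen et al. (2003) definition, not the authors' code:

```python
import numpy as np

def q_product(x, y, q):
    """q-product of positive x and y (Nivanen et al., 2003):
        x (x)_q y = [x**(1-q) + y**(1-q) - 1]**(1/(1-q)),
    clipped at zero.  It recovers the ordinary product x*y as q -> 1
    and the additive form x + y - 1 at q = 0, which is how it
    interpolates between multiplicative and additive disutilities."""
    if abs(q - 1.0) < 1e-12:
        return x * y
    base = x ** (1.0 - q) + y ** (1.0 - q) - 1.0
    return np.maximum(base, 0.0) ** (1.0 / (1.0 - q))
```

For instance, `q_product(2.0, 3.0, 0.0)` gives the additive form 2 + 3 - 1 = 4, while values of q close to 1 approach the simple product 6, illustrating the in-between forms the latent classes can pick up.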
14:30 | On the power of a simple multivariate test for the distribution of random coefficients in logit models PRESENTER: Álvaro A. Gutiérrez-Vargas ABSTRACT. Mixed multinomial logit models are estimated assuming that one or more parameters follow a parametric distribution. Unfortunately, in most situations, the econometrician has little idea about what the distribution is. This is problematic, since research has demonstrated that using an incorrect distribution generates bias in the estimated parameters (Fosgerau, 2006; Hess and Axhausen, 2005). Consequently, using an appropriate distribution for the random coefficient is crucial. However, there has been limited attention to statistical testing of distributional assumptions in the discrete choice literature. One remarkable exception is the work of Fosgerau and Bierlaire (2007), which proposes a statistical test based on semi-nonparametric techniques that rely on a series approximation of the unknown parametric distribution. The authors construct an approximation of the unknown distribution using Legendre polynomials and use this more general model as the alternative hypothesis in a likelihood ratio test. They show that the proposed likelihood ratio test has the correct nominal size while having high power for detecting an inappropriate distribution for the random coefficient. The technique, however, is only suitable for testing one random parameter at a time. Hence, it is of limited use in applied work, where it is common practice to include several random parameters simultaneously in the models. In such situations, performing a univariate test means that, under the null hypothesis, we are forced to assume that all other random parameters included in the model are correctly specified. Therefore, a multivariate test appears more appealing because it can check all random parameters' distributional assumptions at once.
Hence, such a test has statistical power against situations where “at least one distribution” is misspecified. In this article, in the same spirit as Fosgerau and Bierlaire (2007), we propose using another flexible distribution as the alternative hypothesis of a likelihood ratio test for mixed multinomial logit models. In particular, we construct our alternative hypothesis utilizing a method for creating flexible mixing distributions proposed by Fosgerau and Mabit (2013), referred to as the FM method hereafter. The authors, however, introduced the FM method with the sole purpose of obtaining flexible distributions when estimating models with random coefficients, not to devise a statistical test as we do. One advantage of the FM method is that it is easy to apply and does not require extra programming when using standard choice model packages such as BIOGEME (Bierlaire, 2003), Apollo (Hess and Palma, 2019), or mixl (Molloy, Becker, Schmid & Axhausen, 2021). Furthermore, as mentioned, this method can generate more flexible distributions by including additional terms in the power series used to approximate the random parameters’ distribution. Doing so generalizes the conventional assumption of normally distributed parameters. Hence, it is suitable for the unconstrained model when performing a likelihood ratio test. The null hypothesis is that the parameter(s) are normally distributed, implying that all the additional parameters of the power series are zero. Conversely, the alternative hypothesis is that the parameters are not normally distributed, so the additional parameters are different from zero. From an applied perspective, rejecting the null hypothesis of normality leaves the modeler with two options: the econometrician can either select another parametric distribution, if there is a plausible alternative in mind, or use the more flexible model estimated under the alternative hypothesis.
To investigate the power and size of the proposed test, we carried out a simulation study. Some of the parameters in the model are assumed fixed and others random. Additionally, each random parameter is assumed to be independent and can follow one of four possible distributions: a normal distribution, to analyze the size of the test, or a mixture of two normals, a lognormal, or a uniform distribution, to compute the power of the test. Lastly, for the model under the alternative hypothesis, we added one, two, or three additional polynomial terms in the power series of the mixing distribution. Using this setup, we also varied the number of individuals and the number of choice sets answered per individual. Finally, we tested whether we could correctly identify the misspecification of the model and investigated the size and power of the test under the different scenarios. The results show that the test has the correct nominal size while having high power to detect an incorrect distribution for one or more random coefficients. In particular, the simulations show that the test has high power in rejecting lognormal or two-normal mixture distributions, even when we add only one additional polynomial term to the FM method. However, it has reduced power against a falsely assumed uniform distribution, even when including three additional polynomial terms. This situation most likely arises because a single normal distribution can cover a uniform distribution well, so the additional terms of the FM method only slightly improve the overall model. References Bierlaire, M., 2003. BIOGEME: a free package for the estimation of discrete choice models. Proceedings of the 3rd Swiss Transport Research Conference, Monte Verità, Ascona. Fosgerau, M. (2006). Investigating the distribution of the value of travel time savings. Transportation Research Part B: Methodological, 40(8), 688-707. Fosgerau, M., & Bierlaire, M. (2007).
A practical test for the choice of mixing distribution in discrete choice models. Transportation Research Part B: Methodological, 41(7), 784-794. Fosgerau, M., & Mabit, S. L. (2013). Easy and flexible mixture distributions. Economics Letters, 120(2), 206-210. Hess, S., & Axhausen, K. W. (2005). Distributional assumptions in the representation of random taste heterogeneity. In STRC 2005 conference proceedings: 5th Swiss transport research conference: Monte Verità/Ascona, March 9-11, 2005. ETH, Institute for Transport Planning and Systems (IVT). Hess, S., & Palma, D. (2019). Apollo: a flexible, powerful and customisable freeware package for choice model estimation and application. Journal of choice modelling, 32, 100170. Molloy, J., Becker, F., Schmid, B., & Axhausen, K. W. (2021). mixl: An open-source R package for estimating complex choice models on large datasets. Journal of choice modelling, 39, 100284. |
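A minimal numeric sketch of the proposed likelihood ratio test follows; this is our reading of the setup described above, with hypothetical log-likelihood values, and `fm_transform` is an illustrative stand-in for the FM power series, not the authors' code:

```python
import math

def fm_transform(z, gammas):
    """FM-style power series: turn a base draw z into a random-coefficient
    draw beta = gammas[0] + gammas[1]*z + gammas[2]*z**2 + ...
    Under the null of normality every term beyond the first two is zero."""
    return sum(g * z ** k for k, g in enumerate(gammas))

def chi2_sf(x, df):
    """Chi-square survival function for integer df, via the standard closed
    forms: a Poisson-type series for even df, erfc-based recursion for odd."""
    if df % 2 == 0:
        term, total = 1.0, 1.0
        for k in range(1, df // 2):
            term *= (x / 2.0) / k
            total += term
        return math.exp(-x / 2.0) * total
    sf = math.erfc(math.sqrt(x / 2.0))
    dens = math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0)
    for k in range(1, (df - 1) // 2 + 1):
        sf += dens
        dens *= x / (2 * k + 1)
    return sf

def lr_test(ll_null, ll_alt, n_extra_terms):
    """LR statistic and asymptotic p-value; degrees of freedom equal the
    number of extra power-series terms, all zero under the null."""
    stat = 2.0 * (ll_alt - ll_null)
    return stat, chi2_sf(stat, n_extra_terms)

# Hypothetical log-likelihoods of a fitted null (normal) and FM-flexible model
stat, p = lr_test(ll_null=-1250.4, ll_alt=-1243.1, n_extra_terms=2)
```

With two extra polynomial terms the statistic is compared to a chi-square with 2 degrees of freedom; a small p-value rejects normality of the random coefficients, after which the modeler chooses between another parametric distribution and the flexible alternative model.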
15:00 | A new shifted log-normal distribution for mitigating 'exploding' implicit prices in mixed multinomial logit models ABSTRACT. Please see pdf attached. |
13:30 | Deep hybrid model with urban imagery: How to combine demand modeling and autoencoder to analyze travel behavior? PRESENTER: Qingyi Wang ABSTRACT. Please see attached. |
14:00 | Port choice analysis in Brazil: a comparison between discrete choice models and machine learning algorithms PRESENTER: Felipe Souza ABSTRACT. Previous researchers have revealed the most prominent factors influencing companies in port selection. The main factors found in the literature are: port tariff; transport costs; frequency of ships; port quality; port efficiency; information services (Tiwari et al., 2003; Ugboma et al., 2006; De Langen 2007; Steven and Corsi, 2012; Nugroho et al., 2016; Vega et al., 2019). Techniques such as the Analytic Hierarchy Process and discrete choice models (DCM) are the methods most used on this topic (Martínez Moya and Feo Valero, 2017). The main objective of this paper is to conduct a parallel analysis of the behavior of two groups of decision-makers on port choice: i) exporters/freight forwarders and ii) importers/freight forwarders. The study uses Stated Preference (SP) data collected from firms located in two states of Brazil: Rio de Janeiro (RJ) and Minas Gerais (MG). The main contribution of this paper lies in comparing the performance of DCM and Machine Learning (ML) algorithms (CART, Random Forest - RF, Support Vector Machine - SVM and Naïve Bayes - NB) in port choice models. Several studies have applied ML algorithms in travel demand modelling, but their adoption is more common in the passenger transport context (Zhu et al., 2018; Cheng et al., 2019; Zhao et al., 2020). In freight transport modelling, usually only truck and rail modes are considered (Uddin et al., 2021). In the transport literature, most studies that apply ML techniques use the Multinomial Logit as benchmark (van Cranenburgh et al., 2021). To the best of our knowledge, the application of ML algorithms to assess port selection has not yet been explored in the literature. The paper also identifies the most important factors influencing port choice in Brazil according to the different models.
In addition, this paper includes specific attributes for the Brazilian context less explored in the literature: cargo theft risk (exporters) and taxation (importers). In the export survey, the attributes were defined as follows: ship calls; port tariff; road freight tariff; cargo release time; cargo theft risk; congestion in access to the port. In the import survey, the attributes were: ship calls; taxation; port tariff; cargo release time; sea freight tariff; road freight tariff. The study was first conducted in the State of RJ, for which an efficient design (Rose and Bliemer, 2009) without priors was developed using the Ngene software. The questionnaire for the State of MG was generated through the Bayesian efficient design technique, using the estimated parameters from the survey carried out in the State of RJ as priors. The SP experiment had 10 choice tasks and four port alternatives. In total, 110 companies fully completed the survey (60 exporters and 50 importers) in RJ and MG. Having collected the data, we first applied discrete choice analysis. The models tested for each group were: Multinomial Logit, Nested Logit and Mixed Logit with Error Components (MLEC). The results are based on the data collected in RJ and MG. All parameters were significant. Exporters and importers appear to have different perspectives. In addition, the value of time for the cargo release time in ports was calculated for each group. The MLEC model brings improvements based on the likelihood ratio test for each group analyzed. This model was used for the comparison with the ML algorithms (CART, RF, SVM and NB). We evaluate the performance of the MLEC model and the ML algorithms by comparing the hit rate and kappa coefficient obtained on the validation set for each model. The training was carried out on 80% of the original databases, and the validation on the remaining 20%. The ML algorithms achieved higher performance than the MLEC model in each group analyzed.
We interpret the MLEC model and two ML algorithms (RF and CART) to extract behavioral insights. Following the procedure adopted by Zhao et al. (2020), a ranking of factors was elaborated according to the ML algorithms. For the MLEC model, we obtained standardized coefficients to analyze the strength of the effect of each attribute (Menard, 2004). The results were compared. As regards the behavior of firms, we verified that the road transport tariff is an important factor in all models tested in the export analysis. In addition, companies' great concern with cargo theft risk during transport to the port should be noted. This factor is seldom included in other port choice analyses. In 2020, Brazil was the country with the highest number of cases of cargo theft in the world (TT, 2021). In the import analysis, we consistently found that the road transport tariff and taxation were the most important factors. The taxation on the value of the imported cargo has a significant effect on the company's decisions. Taxation is seldom included in port choice studies. This attribute may reduce the importance of the road transport tariff, since firms seek to import not necessarily through the nearest port, but through the port located in a place that offers a lower tax on the value of the imported cargo. This result highlights the impact of taxation on port competition in Brazil (Souza et al., 2021). This study contributes to the port choice literature by comparing ML algorithms and DCM; additionally, it fills a knowledge gap in the freight literature on Brazil. The ML algorithms obtained a higher performance than our best DCM. The application of DCM and ML algorithms each has advantages and disadvantages. Indeed, logit models typically focus on parameter estimation. Many ML algorithms, on the other hand, have more flexible structures (Xie et al., 2003) and are built for predictive purposes, but are often considered difficult to interpret (Mullainathan and Spiess, 2017).
It is intended that these initial results serve as a guide for future investigations in the sense of jointly analyzing DCM and ML techniques in the context of port selection. For further studies, the authors would like to incorporate specific characteristics about the companies (e.g., company size, product type, export/import frequency) to assess the impact on performance of the models tested. References pdf file (submission) |
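The two validation metrics used in the comparison above, hit rate and the kappa coefficient, can be sketched as follows (our illustration with made-up predictions, not the authors' code or data):

```python
import numpy as np

# Hit rate is the share of correctly predicted choices on the hold-out set;
# Cohen's kappa corrects that agreement for what would be expected by chance.

def hit_rate(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return (y_true == y_pred).mean()

def cohens_kappa(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    labels = np.union1d(y_true, y_pred)
    po = (y_true == y_pred).mean()                      # observed agreement
    pe = sum((y_true == c).mean() * (y_pred == c).mean() for c in labels)
    return (po - pe) / (1.0 - pe)                       # chance-corrected

# Hypothetical chosen vs predicted ports (4 alternatives) on a 20% split
chosen    = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
predicted = [0, 1, 2, 0, 0, 1, 2, 3, 1, 1]
hr = hit_rate(chosen, predicted)
kap = cohens_kappa(chosen, predicted)
```

A model that always predicted the single most popular port could still score a decent hit rate, which is why kappa is reported alongside it.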
14:30 | Using discrete choice models and machine learning approaches to compute the value of travel time: a comparative analysis PRESENTER: Giovanni Tuveri ABSTRACT. In our work we propose an alternative method for calculating the value of travel time from the results of machine learning algorithms. We confirm the validity of our results by using different databases containing real-life data obtained from travel surveys in Italy. Recently, an increasing number of applications of machine learning (ML) algorithms to choice modelling in transportation have been tested as an alternative to the traditional discrete choice models based on econometric theory. The higher flexibility of ML methods, which generally require no pre-assumptions regarding the mathematical formulation of the underlying relations between the variables explaining a certain phenomenon, is one of the strongest factors driving this recent interest in their possible use in this field of study. However, ML methods, at least in their purest form, can be considered black-box methods, which is not ideal for an analyst wishing to obtain meaningful information from their outputs. To overcome this inherent limit, some researchers have tried to find alternative ways to use such methods, by either elaborating the outputs of machine learning algorithms, or by building modified versions of them in such a manner that their results can be interpreted in the same way analysts are used to doing with classic discrete choice models. While obtaining coefficients with their associated statistical significance is practically unfeasible with machine learning methods, it is instead possible to extrapolate values for elasticities and marginal effects of the considered variables, and to compare them to those obtained by specifying and estimating econometric models. Some studies have already proved the validity of this method (1) (2).
One element to analyse is the value of travel time (VTT): to date, obtaining VTT through ML methods has been studied by few researchers (1) (2), so there is still much to be found out and discussed. The method we propose is independent of the shape and formulation of the mode-choice functions, so it can be applied to both econometric models and ML methods, provided the output allows the estimation of individual-level mode-choice probabilities. Such a method was necessary because ML methods do not have a mathematical formulation of the utility, hence it is not possible to calculate the derivatives needed to compute elasticities and marginal effects from their exact definitions. Instead, we chose to approximate the infinitesimal formulation (which uses the derivatives) with a ratio of finite differences, using a given difference in the value of a variable as the denominator, while the numerator is the difference in the probabilities of choosing a given travel mode induced by the change in that variable. This method of calculating marginal effects for ML methods, proposed by Zhao et al. (3), has, as far as we know, never been applied to compute the value of travel time. We applied this method to both econometric models and machine learning methods and then compared the results. In particular, we specified and estimated a multinomial logit and a mixed logit as discrete choice models, while for the machine learning part we built a fully connected neural network (FNN) and a particular specification of a neural network which considers “alternative specific utility functions” (ASU-DNN), developed by Wang et al. (4). To test the validity of our methodology we employed two different datasets.
The first dataset contains 1487 observations, collected through a revealed preference survey conducted in 2019 in the metropolitan area of Cagliari (Italy) among a sample of workers who commuted daily, and considers as the dependent variable the choice to commute by car or by public transport. The second contains 2128 observations and was collected in 2016 among a sample of public employees who lived in the metropolitan areas of Cagliari and Sassari. In this case the dependent variable was the choice to commute using one of the following means of transport: car, public transport, bicycle or walking. Both datasets also contain personal and socio-economic characteristics of each individual. For all the models we split the datasets into a training and a testing set; we then used five-fold cross validation to estimate the parameters and computed the final results using the testing set only. Since neural networks are highly sensitive to the values of their hyper-parameters, we estimated a series of models which use different sets of hyper-parameters randomly selected from given lists. Regarding the main outputs, we focused on two dimensions: first, we compared direct and cross-elasticities for several key level-of-service variables of all the travel alternatives available, by averaging the elasticity values obtained from all the models in each category; second, we obtained a distribution of values of travel time for both car and public transport considering all data points, outliers excluded, again comparing the results from the different models. Our results are very promising: elasticities showed coherent values between all models, mostly when observing the signs but also when comparing absolute values; value of travel time distributions present similar shapes across all methods, barring some differences due to limits inherent to some models.
In conclusion, we demonstrate that our method of calculating the value of travel time works and confirm that it is possible to obtain economic information from machine learning models; finally, we successfully apply the ASU-DNN to two different datasets, confirming its transferability. 1. Wang, S., Wang, Q., & Zhao, J. (2020). Deep neural networks for choice analysis: Extracting complete economic information for interpretation. Transportation Research Part C: Emerging Technologies, 118, 102701. 2. van Cranenburgh, S., & Kouwenhoven, M. (2021). An artificial neural network based method to uncover the value-of-travel-time distribution. Transportation, 48(5), 2545-2583. 3. Zhao, X., Yan, X., Yu, A., & Van Hentenryck, P. (2020). Prediction and behavioral analysis of travel mode choice: A comparison of machine learning and logit models. Travel behaviour and society, 20, 22-35. 4. Wang, S., Mo, B., & Zhao, J. (2020a). Deep neural networks for choice analysis: Architecture design with alternative-specific utility functions. Transportation Research Part C: Emerging Technologies, 112, 234-251. |
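The finite-difference computation of marginal effects and VTT described above can be sketched as follows; this is our illustration in which a toy binary logit stands in for any model exposing choice probabilities, and all parameter values are assumed:

```python
import numpy as np

# VTT from finite differences: approximate dP/dtime and dP/dcost by ratios
# of finite differences, then take VTT = (dP/dtime) / (dP/dcost).  The
# approach needs only a probability function, so it works for ML models too.

def marginal_effect(prob_fn, x, idx, delta):
    """Finite-difference marginal effect of variable `idx` on P(mode)."""
    x_up = x.copy()
    x_up[idx] += delta
    return (prob_fn(x_up) - prob_fn(x)) / delta

def vtt(prob_fn, x, time_idx, cost_idx, d_time=1.0, d_cost=0.1):
    me_time = marginal_effect(prob_fn, x, time_idx, d_time)
    me_cost = marginal_effect(prob_fn, x, cost_idx, d_cost)
    return me_time / me_cost

# Toy stand-in for an ML model: binary logit in time (minutes) and cost (EUR)
beta = np.array([-0.05, -0.60])              # assumed coefficients
def p_car(x):
    return 1.0 / (1.0 + np.exp(-(beta @ x)))

x0 = np.array([20.0, 3.0])
v = vtt(p_car, x0, time_idx=0, cost_idx=1)   # EUR per minute
```

For this logit the exact VTT is beta_time / beta_cost = 1/12 EUR per minute, so the finite-difference value should land close to it; repeating the computation over all individuals yields the VTT distributions compared in the paper.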
15:00 | Moral profiles in Discrete Choice Models: a Natural Language Processing approach PRESENTER: Teodóra Szép ABSTRACT. Introduction Choice modelers use choices to learn about people’s underlying preferences. Recently, choice modelers have extended their scope and have started using choices to analyse additional latent factors underlying decision making, such as attitudes, perceptions, and (moral) motivations. Recent work in hybrid choice modeling shows that the joint identification of preferences and other latent determinants of decision-making is a very challenging task (e.g. Vij and Walker, 2016). Although progress is being made to advance the identification of such models, one obvious potential solution has not received the attention it deserves: the use of additional data (i.e., additional to choices and answers to Likert scale survey questions) to help identify latent behavioral constructs. This paper contributes to the literature by showing how verbal expressions by decision-makers can help the choice modeler to identify subtle and latent aspects of decision-making. We illustrate our approach by identifying moral motivations behind choices made by politicians (i.e. their voting behavior). Moral motivation refers to the inherent reasons people behave morally, and is connected to the individual’s normative judgements about what is right and wrong. Theories on moral motivations often include concepts like care, fairness, or loyalty. Research finds that moral motivations can be instrumental for understanding human (choice) behaviour (Chorus et al., 2021). Latent morality often has a substantial impact on the outcome of a choice task; therefore, although its identification is difficult, it is crucial to understand and identify it. In this paper, we show how, together with choice data, another potential source of information on moral motivations can be used by choice modelers to identify latent moral motivations: verbal utterances.
By doing so, we build on the intuitive notion that verbal utterances often contain expressions of underlying moral motivations. For instance, saying “the large wage gap between low-skilled and high-skilled workers can no longer be justified” signals a moral motivation for fairness. We show how, using Natural Language Processing techniques, such data can then be used, jointly with observed choices, to learn about latent moral motivations. In sum: in this paper, we combine choices and verbal utterances to infer moral motivations in a decision-making context. We show how this novel approach can lead to new, subtle insights regarding latent antecedents of moral choice, which would be very difficult – if not impossible – to obtain using traditional choice models based on observed choices only. To test and illustrate our proposed approach, we investigate the voting behaviour of Members of the European Parliament (MEPs). Methodology and case study Our proposed method relies on Moral Foundation Theory (MFT; Graham et al., 2009), a well-established theory from moral psychology that postulates that morality has five main domains (namely: care, fairness, loyalty, authority and sanctity), to which people adhere to different extents. MFT has a well-developed dictionary (words which can be categorized into one of 5 x 2 moral domains: the five main domains, each as either virtue or vice), which allows us to connect any text to these foundations. To create our so-called moral profiles, first, we use SBERT (Sentence Bidirectional Encoder Representations from Transformers; Reimers and Gurevych, 2019) to make feature vector representations for the 10 moral dimensions based on an extended version of the Moral Foundations Dictionary (MFD 2.0; Frimer et al., 2019). Then we do the same for any piece of text at hand (for instance, a tweet). Then, using cosine similarity scores, we create a moral profile for each text, which shows us how aligned the text is with each moral dimension.
We collected data on 30 votes in the European Parliament. In each vote, MEPs vote "in favour", "against", or "abstain" on a proposed policy or resolution. Furthermore, we collected 100 tweets from each of 328 MEPs. This data includes ~32,800 short text pieces in 26 different languages. To apply our proposed methodology to this specific context, we take the following steps: 1. we create moral profiles for each tweet and each document under vote; 2. to create a moral profile for each MEP, we average the moral profiles of all their tweets; 3. to create moral profiles for parties, we average the moral profiles of all their members; 4. we then calculate moral profile distances between individual MEPs, national parties, EP party groups and documents under vote. Figure 1 illustrates the process of creating a moral profile for a tweet and for a MEP (steps 1 and 2). We estimate binary logit models with MEPs' voting behaviour as the dependent variable, using EP party groups and moral profile distances (between individual and EP party group, and between national party and EP party group) as explanatory variables. We also estimate multinomial logits with MEPs' vote as the dependent variable and moral profile distances (between individual and document under vote, and between national party and document under vote) as explanatory variables. Preliminary results and conclusion Our preliminary results indicate that EP party groups have very distinct moral profiles and that the moral profiles of national parties have higher explanatory power than those of individual MEPs when modelling voting behaviour. We find several intuitive yet subtle relations between moral motivations and political decision-making, and interpret them in the light of the political science literature. Our results suggest that moral profiles extracted from natural language have potential in discrete choice analysis; individuals' social media posts, public speeches, interviews or discussions can help reveal the latent moral motivations behind many kinds of decisions. |
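Steps 1 and 2 of the pipeline above can be sketched as follows; in the paper the embeddings come from SBERT and the MFD 2.0 dictionary, while here random vectors stand in for both, so this is purely an illustration of the cosine-similarity profiling:

```python
import numpy as np

# Moral profile = cosine similarity of a text embedding to each of the
# 10 moral dimension embeddings (5 foundations x virtue/vice).

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def moral_profile(text_vec, foundation_vecs):
    """Similarity of one text embedding to each moral dimension embedding."""
    return np.array([cosine(text_vec, f) for f in foundation_vecs])

rng = np.random.default_rng(1)
foundations = rng.normal(size=(10, 8))   # stand-in MFD 2.0 embeddings
tweets = rng.normal(size=(100, 8))       # stand-in embeddings, one MEP's tweets

# Step 1: a profile per tweet; step 2: the MEP's profile is their average
tweet_profiles = np.array([moral_profile(t, foundations) for t in tweets])
mep_profile = tweet_profiles.mean(axis=0)
```

Party profiles (step 3) would average the member profiles in the same way, and the distances used as explanatory variables (step 4) are then simple vector distances between these 10-dimensional profiles.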
13:30 | Utility maximisation vs regret minimisation in stated choice experiments: does the design matter? PRESENTER: Jürgen Meyerhoff ABSTRACT. Motivation Research into experimental design theory for stated choice (SC) experiments has almost exclusively assumed that decision-makers make choices using (linear-additive) Random Utility Maximisation (RUM) rules. However, there is compelling evidence that decision-makers employ a wide range of decision rules (Hess et al., 2012). Thus, new design theory has been developed for one non-RUM decision rule, namely Random Regret Minimisation (RRM) (van Cranenburgh et al., 2018). RRM models postulate that decision-makers use a regret minimisation rule, in which regret is experienced when a competing alternative j outperforms the considered alternative i on attribute m. Furthermore, van Cranenburgh et al. (2018) report that the RUM model obtained the best model fit when estimated on the design that was optimised for the recovery of RUM parameters, while the RRM model obtained the best model fit when the design was optimised for the recovery of RRM parameters. They conjecture that the decision rule used to generate the design might induce corresponding response behaviour. Not being able to reject this conjecture would have severe consequences, as the experimental design and the recovered preferences would then not be fully exogenous. To scrutinise this, we conducted two SC experiments in which respondents were randomly allocated to either an experimental design optimised for the recovery of RUM model parameters only, or to one optimised for the recovery of both RUM and RRM model parameters. For each of the four resulting data sets we estimate a series of RUM and RRM models, in MNL and latent class forms, and inspect several aspects of our estimation results. Firstly, we look at whether RUM (RRM) models fit comparatively better on the design optimised for RUM than on the mixed design optimised for RUM and RRM.
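For reference, the regret rule underlying the RRM models compared in this abstract can be sketched as follows (a numpy illustration of the classical RRM specification with toy attribute values; not the study's actual design or estimates):

```python
import numpy as np

def rrm_regret(X, beta):
    """Total regret of each alternative under the classical RRM rule:
    R_i = sum over competitors j != i and attributes m of
    ln(1 + exp(beta_m * (x_jm - x_im)))."""
    J = X.shape[0]
    R = np.zeros(J)
    for i in range(J):
        for j in range(J):
            if j != i:
                R[i] += np.sum(np.log1p(np.exp(beta * (X[j] - X[i]))))
    return R

def rrm_probabilities(X, beta):
    """Choice probabilities: a logit over negative regrets."""
    v = -rrm_regret(X, beta)
    e = np.exp(v - v.max())
    return e / e.sum()

# Three alternatives described by two 'more is better' attributes:
# an extreme on each attribute and a compromise alternative in between.
X = np.array([[1.0, 3.0],
              [2.0, 2.0],
              [3.0, 1.0]])
beta = np.array([0.5, 0.5])
p = rrm_probabilities(X, beta)
```

With this symmetric setup, the compromise alternative accumulates the least regret and so receives the highest choice probability, which is the kind of behaviour RRM-oriented designs are built to detect.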
Secondly, we inspect the μ parameter of the μRRM model that governs the degree of regret minimisation. We assess whether this parameter is larger for the design optimised for RUM only than for the design optimised for both RUM and RRM. Thirdly, we assess whether the size of the RRM latent class is larger for the design optimised for RRM than for the designs optimised for both RUM and RRM. Stated choice data collection The SC surveys concerned climate change adaptation measures at the German North Sea and Baltic Sea coasts. The choice tasks offered three alternatives: two hypothetically designed alternatives and a status quo alternative. The RUM and the mixed RUM-RRM experimental designs were the same for the North Sea and the Baltic Sea. One attribute, however, differed, as only the German Baltic Sea coast has cliffs. All levels were continuous; the status quo alternative, too, was described by continuous levels reflecting the current practice of coastal protection – which is important when estimating RRM models (Hess et al., 2014). Each design comprised 48 choice tasks, which were assigned to four blocks; each respondent thus faced 12 choice tasks. To generate the experimental designs, we used fairly small priors. All designs were created using Ngene (see van Cranenburgh & Collins 2019). Preliminary results Several observations can be made (Table 1). Firstly, for the single-class models the results partially support the conjecture. In the two RUM-only designs (T1 and T3), the RUM model performs best. However, in the two treatments with mixed designs, an RRM model performs best only in T2, not in T4. Secondly, regarding the μ parameter of model 3, the results for T2 support the conjecture, but the results for T4 do not. For the latent class models the results are even more mixed. We do find that the models combining RUM and RRM perform better on the choices stated in the treatments with mixed experimental designs.
While in T4 this is the μRRM model, in T2 even pure RRM (PRRM) as the second decision rule performs best. And even in T1, which has a RUM-only design, the RUM-μRRM two-class model outperforms the two-class RUM-only model. However, the sizes of the RRM classes show no clear pattern. Table 1: Estimation results (see attached file) The results suggest that it is important to account for the heterogeneity of decision rules when constructing the experimental design. RRM decision rules are found to account for a significant proportion of respondents (although the results may be confounded by taste heterogeneity). In line with earlier studies (e.g., Chorus et al., 2014), it seems that accounting for the decision rule is especially important in valuation contexts such as adaptation to climate change, where potential consequences are highly uncertain and performance on certain attributes, such as dyke heightening, cannot readily be compensated. However, it remains to be seen whether the type of experimental design induces any particular choice behaviour among respondents. Next steps include modelling taste heterogeneity using discrete mixtures of mixed logit models or model averaging approaches (Hancock and Hess, 2021). Furthermore, we will analyse statistical efficiency across the treatments. References van Cranenburgh, S., Rose, J. M., & Chorus, C. G. (2018). On the robustness of efficient experimental designs towards the underlying decision rule. Transportation Research Part A: Policy and Practice, 109, 50-64. van Cranenburgh, S., & Collins, A. T. (2019). New software tools for creating stated choice experimental designs efficient for regret minimisation and utility maximisation decision rules. Journal of Choice Modelling, 31, 104-123. Hancock, T. O., Hess, S., & Choudhury, C. F. (2018). Decision field theory: Improvements to current methodology and comparisons with standard choice modelling techniques. Transportation Research Part B: Methodological, 107, 18-40.
Hess, S., Stathopoulos, A., & Daly, A. (2012). Allowing for heterogeneous decision rules in discrete choice models: an approach and four case studies. Transportation, 39(3), 565-591. Hess, S., Beck, M. J., & Chorus, C. G. (2014). Contrasts between utility maximisation and regret minimisation in the presence of opt out alternatives. Transportation Research Part A: Policy and Practice, 66, 1-12. |
14:00 | Bayesian D- and I-optimal designs for choice experiments involving mixtures and process variables PRESENTER: Mario Becerra ABSTRACT. Many products and services can be described as mixtures of ingredients. In mixture experiments, these products and services are expressed as combinations of proportions of ingredients. For example, media used in advertising campaigns; components of a mobility budget such as car with fuel card and public transport card; cement, water, and sand to make concrete; the wheat varieties used to bake bread; and the ingredients used to make a drink such as mango juice, lime juice, and blackcurrant syrup. Usually, the researchers' interest is in one or several characteristics of the mixture. In our work, the characteristic of interest is the preference of consumers. Consumer preferences can be quantified by using discrete choice experiments in which respondents are asked to choose between sets of alternatives, called choice sets. The respondents perform this task several times with distinct choice sets. Hence, choice experiments are well-suited to collect data for quantifying preferences for mixtures of ingredients. In addition to the proportions of ingredients, the preference for a mixture may depend on characteristics other than its composition alone. For example, the ideal cocktail composition may also depend on the temperature at which it is served, or the most preferred bread might not only depend on the proportions of the various ingredients, but also on the baking time and the baking temperature. One practical example of such a scenario can be found in \textcite{zijlstra2019mixture}, who observed that the preferred mobility budget mixture depends on the budget size. To cope with this kind of complication, the choice model for mixtures must be extended to deal with the additional characteristics, typically called \textit{process variables}. Choice experiments involving mixtures have been largely overlooked in the literature.
The first known example of a discrete choice experiment concerning mixtures was published by \textcite{courcoux1997methode}, in which the authors intended to model the preferences for cocktails involving different proportions of mango juice, lime juice, and blackcurrant syrup. \textcite{goos_hamidouche_2019_choice} described how to combine Scheffé models for data from mixture experiments with the logit-type models typically used for choice experiments. To the authors' knowledge, the work by \textcite{zijlstra2019mixture} is the only work combining discrete choice models and mixture experiments with process variables. As discrete choice experiments in general are expensive, cumbersome and time-consuming, especially when they involve tasting cocktails or breads, efficient experimental designs are required. This way, the experiments provide reliable information for statistical modeling, precise estimation of model parameters, and precise predictions. Optimal design of experiments is the branch of statistics dealing with the construction of efficient experimental designs. In the scarce literature on optimal design of choice experiments with mixtures, two optimality metrics have been studied: D-optimality and I-optimality. The D-optimal design approach can be viewed as an estimation-based approach, because it is intended to minimize the generalized variance of the estimators of the model parameters. However, in experiments with mixtures, the goal generally is to optimize the composition of the mixture to maximize consumer preference. Therefore, in mixture experiments, it is crucial to obtain models that yield precise predictions for any combination of ingredient proportions. As a result, I-optimal experimental designs are more suitable for choice experiments with mixtures than D-optimal ones. I-optimal designs minimize the average prediction variance and therefore allow a better identification of the mixture that maximizes the consumer preference.
Assuming a multinomial logit model, \textcite{ruseckaite_bayesian_2017} compared two algorithms to find D-optimal designs for choice experiments with mixtures. \textcite{becerra2021bayesian} extended the work by \textcite{ruseckaite_bayesian_2017} and embedded the I-optimality criterion in an optimization algorithm, called the \textit{coordinate-exchange algorithm}, for constructing Bayesian D- and I-optimal designs. A common feature of the work of \textcite{courcoux1997methode}, \textcite{ruseckaite_bayesian_2017} and \textcite{becerra2021bayesian} on choice experiments with mixtures is that it does not involve any process variables. In our work, we study Bayesian D- and I-optimal designs for discrete choice experiments involving both mixtures of ingredients and process variables. We extend the work by \textcite{becerra2021bayesian} and include process variables in the coordinate-exchange algorithm, assuming a multinomial logit model. The multinomial logit model assumes that a respondent in a choice experiment faces $S$ choice sets involving $J$ alternatives and that, within each choice set $s \in \{1, \ldots, S\}$, each respondent chooses the alternative that has the highest perceived utility. Therefore, the probability that a respondent chooses alternative $j \in \{1, \ldots, J\}$ in choice set $s$, denoted by $p_{js}$, is the probability that the perceived utility of alternative $j$ in choice set $s$, denoted by $U_{js}$, is larger than that of the other alternatives in the choice set.
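The information matrix of the multinomial logit model, which underlies the Bayesian D- and I-optimality criteria used in this abstract, can be evaluated numerically as sketched here (a Monte Carlo approximation with a toy design and a hypothetical prior; the variable names, sizes and the identity stand-in for the moments matrix are our assumptions, not the authors' implementation):

```python
import numpy as np

def mnl_info_matrix(choice_sets, beta):
    """Fisher information of an MNL design: the sum over choice sets of
    X_s' (diag(p_s) - p_s p_s') X_s, with p_s the MNL choice probabilities."""
    k = len(beta)
    info = np.zeros((k, k))
    for X in choice_sets:                 # X: J x p model matrix of one set
        u = X @ beta
        p = np.exp(u - u.max())
        p /= p.sum()
        info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
    return info

def bayesian_d_criterion(choice_sets, prior_draws):
    """Monte Carlo version of the Bayesian D-criterion:
    the prior mean of det(I(X, beta))^(-1/p); lower is better."""
    k = len(prior_draws[0])
    return float(np.mean(
        [np.linalg.det(mnl_info_matrix(choice_sets, b)) ** (-1.0 / k)
         for b in prior_draws]))

def bayesian_i_criterion(choice_sets, prior_draws, W):
    """Monte Carlo version of the Bayesian I-criterion:
    the prior mean of tr(W I^{-1}(X, beta)); lower is better."""
    return float(np.mean(
        [np.trace(W @ np.linalg.inv(mnl_info_matrix(choice_sets, b)))
         for b in prior_draws]))

rng = np.random.default_rng(0)
design = [rng.uniform(size=(3, 2)) for _ in range(6)]  # 6 sets, J=3, p=2
draws = rng.normal(scale=0.5, size=(50, 2))            # draws from a toy prior
W = np.eye(2)                                          # stand-in moments matrix
d_value = bayesian_d_criterion(design, draws)
i_value = bayesian_i_criterion(design, draws, W)
```

A coordinate-exchange algorithm then repeatedly perturbs one design coordinate at a time and keeps the change whenever the chosen criterion value decreases.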
Assuming a model combining a response surface in the $r$ process variables with a second-order Scheffé model for the $q$ ingredients in the mixture, as in Chapter 6 of \textcite{goos_jones_optimal_2011}, we model the utility of alternative $j$ in choice set $s$ as a linear predictor $u_{js}$ plus an error term $\varepsilon_{js}$, that is, $U_{js} = u_{js} + \varepsilon_{js}$, where each $x_{ijs}$ denotes the proportion of the $i$-th mixture ingredient in alternative $j$ of choice set $s$, and each $z_{kjs}$ denotes the $k$-th process variable in alternative $j$ of choice set $s$, for $i \in \{1, \ldots, q\}$, $k \in \{1, \ldots, r\}$, $j \in \{1, \ldots, J\}$, and $s \in \{1, \ldots, S\}$. The error terms $\varepsilon_{js}$ are assumed to be independent and identically Gumbel distributed. As a result of this distributional assumption, the probability that a respondent chooses alternative $j$ in choice set $s$ is
$$p_{js} = \frac{\exp(u_{js})}{\sum_{t=1}^{J} \exp(u_{ts})}.$$
The Bayesian D-optimality criterion is defined as
$$D_B = \int_{\mathbb{R}^p} \left\{ \det \boldsymbol{I}(\boldsymbol{X}, \boldsymbol{\beta}) \right\}^{-1/p} \pi(\boldsymbol{\beta}) \, \mathrm{d}\boldsymbol{\beta},$$
and the Bayesian I-optimality criterion is defined as
$$I_B = \int_{\mathbb{R}^p} \mathrm{tr}\!\left( \boldsymbol{W}_U \, \boldsymbol{I}^{-1}(\boldsymbol{X}, \boldsymbol{\beta}) \right) \pi(\boldsymbol{\beta}) \, \mathrm{d}\boldsymbol{\beta},$$
where $p$ is the number of parameters, $\boldsymbol{I}(\boldsymbol{X}, \boldsymbol{\beta})$ denotes the information matrix for model matrix $\boldsymbol{X}$ and parameter vector $\boldsymbol{\beta}$, and $\pi(\boldsymbol{\beta})$ is the prior distribution of the parameter vector $\boldsymbol{\beta}$. The moments matrix $\boldsymbol{W}_U$ is defined as
$$\boldsymbol{W}_U = \int_{\chi} \boldsymbol{f}(\boldsymbol{x}_{js}) \boldsymbol{f}'(\boldsymbol{x}_{js}) \, \mathrm{d}\boldsymbol{x}_{js},$$
with $\boldsymbol{f}(\boldsymbol{x}_{js})$ denoting the model expansion of attribute vector $\boldsymbol{x}_{js}$ and $\chi$ denoting the experimental region. This integral has a closed-form solution. We will show and compare the properties of Bayesian D- and I-optimal designs. |
14:30 | Sample size calculations for discrete choice experiments using design features PRESENTER: Samson Assele ABSTRACT. In Discrete Choice Experiment (DCE) studies, as in any quantitative research, the precision of parameter estimates and the power to reject invalid null hypotheses are influenced by the size of the sample. When large samples are available, statistical power is not of great concern. However, in some disciplines the number of potential respondents is quite limited, and/or the cost to reach them is high. Whatever the reason for the limited sample size, it is crucial to know whether the planned sample size is large enough to retrieve statistically significant results. Well-established methods exist to calculate the minimum sample size required for a given precision of the preference parameters (Rose and Bliemer, 2013; de Bekker-Grob et al., 2015). These methods require specification of the asymptotic variance-covariance (AVC) matrix, which depends on the statistical model, the initial beliefs about the parameter values, and the DCE design. To apply these methods, therefore, the design has to be fixed already, and sophisticated software is required. We propose a quick and easy method to compute an approximate sample size. Our method is not based on the design itself but on general characteristics of the design, such as the number of choice sets, the number of alternatives in each choice set, and the number of parameters to be estimated. Several studies have shown that the experimental design chosen by the researcher also influences the power of the study significantly. Therefore we obtain results both under the assumption that the researcher will use a local D-optimal design and for the case where the researcher uses a (quasi-)orthogonal design (computed as a local D-optimal design with zero priors).
Our approach is useful to get a first rough idea of the required sample size, given some characteristics of the DCE that are either known from the start or still to be decided at that moment. Characteristics that are known very early are, for instance, the numbers of continuous and nominal attributes that one wants to include, together with their numbers of levels. Characteristics still to be decided are then the number of choice sets and the number of options in a choice set. We derive the formula for the approximate sample size using a two-step approach. First, we investigate how the DCE design characteristics – the number of attributes, the number of levels for each attribute, the number of alternatives, and the number of choice tasks per respondent – are related to the standard error of the parameter estimates. These standard errors are then used to compute the sample size required to obtain a prespecified power. To take into account the variation due to the design, we use quantile regression to estimate a high quantile of the standard error of the parameter, which in turn is used to compute the sample size, given the required significance level and power. In particular, we created either a local D-optimal or a utility-neutral design using the R package idefix (Traets et al., 2020) for a large number of settings, varying the number of attributes (4-8), the number of attribute levels (2-5), the number of alternatives (2-6), and the number of choice tasks (8-16), for randomly selected prior parameter values. For each generated design, the AVC matrix is computed assuming a multinomial logit model with main effects and using the prior parameter values used in creating the design. The asymptotic standard error of the parameter that is most difficult to estimate (i.e., the coefficient with the smallest effect size) is used as the response in a quantile regression, such that a high quantile of this value can be predicted based on the design characteristics.
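The final step, from a (predicted) standard error to a sample size, can be sketched as follows (a standard power calculation in the spirit of Rose and Bliemer (2013) and de Bekker-Grob et al. (2015); the function name and numbers are illustrative, and the quantile-regression prediction of the single-respondent standard error is not shown):

```python
from math import ceil
from statistics import NormalDist

def required_sample_size(se_one, effect, alpha=0.05, power=0.8):
    """Smallest N such that a coefficient of size `effect`, whose asymptotic
    standard error with a single respondent is `se_one` (so that
    se(N) = se_one / sqrt(N)), is detected with the given power in a
    two-sided test at significance level alpha."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value
    z_power = NormalDist().inv_cdf(power)           # power quantile
    return ceil(((z_alpha + z_power) * se_one / effect) ** 2)

n = required_sample_size(se_one=2.0, effect=0.5)    # -> 126 respondents
```

Because the quantile regression deliberately predicts a high quantile of the standard error, the `se_one` fed into this formula is likely an overestimate, so the resulting N errs on the safe side.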
As this approach makes it very likely that the standard error is overestimated, the computed sample size will be (slightly) overestimated. We compare our results with the true required sample size (assuming the prior is not highly mis-specified), as illustrated in de Bekker-Grob et al. (2015), and with the very crude approximation given by Orme (1998), whose easy-to-use rule of thumb is also based on the number of choice sets, the number of alternatives in each choice set and the maximum number of levels of an attribute. This rule of thumb, however, does not consider statistical power when determining the sample size needed. We therefore also investigated the power obtained by this rule of thumb. The results show that, using only the DCE features and without the need to specify the entire DCE design, we approximate the true sample size better than the rule of thumb of Orme (1998). This study's results can be widely applied by researchers starting a DCE study, providing an easy tool to assess the effect of various choices of design characteristics on sample size requirements. It thus enables researchers to make an informed decision on the design characteristics in light of the sample size needed. References B. Orme. Sample size issues for conjoint analysis studies. Sawtooth Software Research Paper Series, 98382, 1998. B. Orme. Sample size issues for conjoint analysis studies. Sawtooth Software Research Paper Series, 2010. D. L. McFadden. Conditional logit analysis of qualitative choice behavior, 1974. E. Lancsar and J. Louviere. Conducting discrete choice experiments to inform health care decision making: A user's guide. PharmacoEconomics, 26, 2008. doi:10.2165/00019053-200826080-00004. E. W. de Bekker-Grob, B. Donkers, M. F. Jonker, and E. A. Stolk. Sample size requirements for discrete-choice experiments in healthcare: a practical guide. Patient, 8, 2015. doi:10.1007/s40271-015-0118-z. F. Traets, D. G. Sanchez, and M.
Vandebroek. Generating optimal designs for discrete choice experiments in R: The idefix package. Journal of Statistical Software, 96(3):1–41, 2020. doi:10.18637/jss.v096.i03. URL https://www.jstatsoft.org/index.php/jss/article/view/v096i03. J. C. Yang, F. R. Johnson, V. Kilambi, and A. F. Mohamed. Sample size and utility-difference precision in discrete-choice experiments: A meta-simulation approach. Journal of Choice Modelling, 16, 2015. doi:10.1016/j.jocm.2015.09.001. J. M. Rose and M. C. Bliemer. Sample size requirements for stated choice experiments. Transportation, 40, 2013. doi:10.1007/s11116-013-9451-z. |
15:00 | Statistical efficiency versus plausibility in stated choice designs PRESENTER: Danny Campbell ABSTRACT. Stated choice experiments typically require the construction of an experimental design of specific combinations of attribute levels. Often this is based on statistical efficiency criteria alone, with insufficient thought given to ensuring that the choice tasks resemble those respondents might encounter in reality. While imposing restrictions to avoid designs with infeasible choice situations is commonplace, some of the remaining combinations, albeit feasible, may not be probable in the real world. Thus, standard experimental designs may actually exacerbate the hypothetical nature of stated choice experiments. In this paper, we promote an experimental design procedure that explicitly considers the likelihood of each alternative profile and, consequently, the realism of each design candidate. Using Monte Carlo simulations, we show that this leads to choice scenarios more likely to be encountered in the real world. We conjecture that this may go some way towards reducing hypothetical bias in stated choice experiments. The combination of attributes and levels in stated choice experiments is typically the result of some experimental design procedure. A natural starting point is the full factorial design. It has the desirable property that all main and higher-order interaction effects can be identified. However, as the numbers of attributes and levels increase, using the full factorial becomes infeasible, and it is rarely seen in practice. Instead, researchers tend to rely on different subsets of the full factorial: orthogonal, fractional factorial or efficient designs. Common to all of them is the assumption that certain higher-order interaction effects are insignificant.
Orthogonal designs are still popular and ensure that the main effects in the design can be identified; however, it is not always possible to find an orthogonal design for a given set of attributes and levels (Street and Burgess 2007). Over the past two decades, efficient designs have gained traction within the fields of transportation, marketing and environmental economics. Efficient designs seek to combine attributes and levels such that the amount of information gained from a given design is maximized. Often this is achieved by forcing more extreme trade-offs within the design. However, efficiency is a statistical concept and does not necessarily translate into behavioural efficiency (Louviere et al. 2008; Yao et al. 2015; Olsen and Meyerhoff 2017), nor into realistic choices. In theory, as the number of respondents increases, the sample parameters should asymptotically converge to the population parameters regardless of the chosen design (McFadden 1973). The appeal of more efficient designs is that they can require smaller numbers of respondents whilst allowing researchers to obtain equally precise parameters. However, some designs may have unintended consequences in terms of behavioural inefficiencies. In addition, since they are based on statistical criteria alone, there is no guarantee that the chosen design does not include alternative profiles, and combinations of them, unlikely to be found in the real world. In this paper, we promote an experimental design procedure that explicitly considers the likelihood of each alternative profile and consequently the realism of each design candidate. It would be unfair to say that other types of designs do not consider the realism of the alternatives and choice tasks. It is always prudent to exclude all infeasible attribute combinations from the candidate set and to inspect any design for possible issues with respect to, for example, alternative dominance (Rose et al. 2018).
However, no effort is usually made to ensure that the choice tasks resemble what people are likely to encounter in the real world. As such, standard experimental designs may exacerbate the hypothetical nature of stated choice experiments. In this paper, we propose an experimental design approach that maintains efficiency conditional on the pursuit of realism, arguing that this may contribute to reducing hypothetical bias. Specifically, we suggest developing a set of alternative profile weights that reflect how likely we are to observe any given profile in reality, and sampling alternative profiles from the candidate set using these weights. Effectively, this means that more likely alternative candidates have a higher probability of making it into the design, and consequently the design should be perceived as more realistic. The intuition behind this approach is clear: more common attribute levels are more likely to feature in the experimental design, and more common combinations of levels are more likely to make up an alternative profile, thus yielding more plausible choice tasks. As such, this approach helps ensure that people are faced with choice tasks that they are more likely to encounter in the real world. |
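The weighted sampling scheme proposed in the abstract above can be sketched as follows (a minimal numpy illustration; the toy candidate set and plausibility weights are hypothetical, not from the paper):

```python
import numpy as np

def sample_profiles(candidate_set, weights, n_profiles, rng):
    """Draw alternative profiles from the candidate set with probability
    proportional to their real-world plausibility weights, so that more
    plausible profiles are more likely to enter the design."""
    w = np.asarray(weights, dtype=float)
    idx = rng.choice(len(candidate_set), size=n_profiles,
                     replace=False, p=w / w.sum())
    return [candidate_set[i] for i in idx]

# Toy candidate set: (price, quality) profiles with weights reflecting
# how often each combination might be observed in the market.
candidates = [(10, "low"), (10, "high"), (20, "low"), (20, "high")]
weights = [0.40, 0.05, 0.15, 0.40]   # cheap yet high-quality is implausible
rng = np.random.default_rng(42)
design = sample_profiles(candidates, weights, n_profiles=3, rng=rng)
```

Efficiency criteria can then be applied to designs assembled from such draws, making the resulting design efficient conditional on plausibility.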
13:30 | A link-based bicycle perturbed utility route choice model for Copenhagen PRESENTER: Mads Paulsen ABSTRACT. See attached document |
14:00 | Route choice modelling of cyclists on large-scale networks. PRESENTER: Adrian Meister ABSTRACT. 1) Motivation Cycling is becoming an increasingly popular mode of transport in many regions of the world. Apart from their positive effects on health and low land-use requirements, private (e-)bikes have very low life-cycle emissions, making them ideal for quickly decarbonizing a substantial share of urban transport (Cazzola and Crist, 2020). The urgency of the climate crisis justifies additional research into cycling to generate up-to-date insights for policy makers. A central element is the design of urban cycling networks, which requires sophisticated route choice models and, typically, corresponding choiceset generation methods. This paper presents the results of state-of-the-art route choice models for cyclists in the city of Zurich. The work is part of greater efforts to incorporate a corresponding model into the agent-based simulation framework MATSim (Horni et al., 2016) to perform detailed network evaluation. We compare two fundamentally different modelling approaches, i.e. path-based and link-based choice models. For our use case, preliminary results regarding choiceset generation indicate that properly parameterizing the corresponding algorithms is anything but trivial. The results vary substantially depending on the algorithm used, the trajectory and the network characteristics. The generated choicesets heavily influence the resulting estimates of model parameters (Bovy, 2009; Frejinger et al., 2009). The consistency problems with conventional path-based sampling and modelling of routes, especially in large-scale networks (Duncan et al., 2021, 2020), are the core motivation to compare the latest models of both types. 2) Data and Network The data used for this study originate from the MOBIS-COVID project (Molloy et al., 2021). It includes 8,342 raw GPS trajectories from 2020 within the city boundaries of Zurich.
These were map-matched as described in Meister et al. (2021), resulting in approx. 5,000 matched trajectories, depending on the network complexity. The trajectories come from approx. 350 participants for whom detailed socio-demographic indicators are available. Furthermore, machine-learning-based classifiers have been developed to impute the trip purpose of each trajectory and whether it belongs to a regular or an e-bike. The network data are sourced from OSM. The work in Meister et al. (2021) demonstrated the problems arising from increasing the complexity of the network used for map-matching. Two networks were considered for Zurich: one with car and bike infrastructure, comprising approx. 90,000 bidirectional links, and one with the additional pedestrian infrastructure, comprising approx. 150,000 bidirectional links. It was shown that the matching rate increases substantially with a denser network. However, the spatial accuracy of the GPS data is not sufficient to differentiate between car, bike and pedestrian infrastructure along the same route, ultimately skewing the resulting trajectory enrichment. The final network used in this study will be based on these previous findings as well as on ongoing work investigating which choiceset generation methods are suitable for highly detailed large-scale networks. 3) Modelling Framework The navigation of travelers through a given network can be interpreted from two conceptually different perspectives (Skov-Petersen et al., 2018). On the one hand, one can consider that a traveler has a-priori knowledge of the complete network and all relevant route alternatives. On the other hand, one can consider that a traveler does not have a-priori spatial knowledge, but rather chooses a route based on the immediate surroundings. Analogously, discrete route choice models reflect this conceptual difference in how they model the route choice process.
Path-based approaches estimate model parameters by comparing complete routes with each other. The fundamental assumption is that the traveler chooses the path with the highest utility. They rely on choiceset sampling techniques, for which various methods have been proposed (see Ton et al. (2018) for a review). On the model side, classical MNL models have been extended to handle the known deficiency of assuming independently distributed error terms. The most widely used are so-called correction-term models, which adjust the choice probabilities to account for correlation across routes, i.e. the Path-Size and C-Logit models (Ben-Akiva et al., 2004; Cascetta et al., 1996). The former can arguably be considered the basis for most published applications. Some of the latest relevant research (Duncan et al., 2021) investigates how the PS-Logit can be bounded to handle unrealistic routes. For large-scale networks, choicesets typically have to be generated large enough to capture all realistic routes, thereby generating many unrealistic ones, which introduces model bias and computational problems. Link-based approaches model the choice process sequentially at each link in a network with dynamically changing characteristics. They hence do not require the generation of a choiceset, as it is implicitly given through the network at each decision step. They can be interpreted as path-based models with infinite choicesets. A traveler is assumed to choose the route which maximizes the instantaneous utility at each link as well as the downstream utility up to the destination. The first published model of this kind is the Recursive Logit (RL) model presented by Fosgerau et al. (2013). Extensions that deal with the problem of correlated path utilities are the (cross-)nested and mixed RL (Mai et al., 2015, 2018).
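To illustrate the path-size correction mentioned above, here is a minimal sketch of the standard formulation, in which ln(PS_i) enters the utility of the Path-Size Logit (the toy network and all names are illustrative):

```python
def path_size_factors(paths, link_lengths):
    """Path-size factor of each path i in a choice set:
    PS_i = sum over links a in path i of (l_a / L_i) * (1 / N_a),
    where l_a is the link length, L_i the path length and N_a the
    number of paths in the set that use link a."""
    usage = {}                              # how many paths use each link
    for path in paths:
        for a in path:
            usage[a] = usage.get(a, 0) + 1
    factors = []
    for path in paths:
        L = sum(link_lengths[a] for a in path)
        factors.append(sum(link_lengths[a] / L / usage[a] for a in path))
    return factors

# Two fully overlapping routes and one distinct route of equal length.
link_lengths = {"a": 1.0, "b": 1.0, "c": 2.0}
paths = [["a", "b"], ["a", "b"], ["c"]]
ps = path_size_factors(paths, link_lengths)
# The distinct route gets PS = 1; each duplicated route gets PS = 0.5,
# so its utility is penalised by ln(0.5) in the PS-Logit.
```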
4) Expected Results The literature on cycling route choice is still in its infancy, and both model types have only been sparsely applied to cyclists. Our expected contributions are as follows: • characterization and comparison of the latest path- and link-based models, including PS and RL models, with specific evaluation of the implications arising from using large-scale networks as well as of the integration into agent-based simulation frameworks; • estimation of route choice models for cyclists based on highly enriched network and trajectory data; on the trajectory level these include traveler characteristics (socio-demographic, accessibility and ownership indicators) and trip purpose, and on the network level they include car and truck traffic, speed limits, current and future cycling infrastructure, parks and trees, on-street car parking, traffic signals, gradients, intersections and numbers of turns; • explicit modelling of e-bikes for a use-case location with distinct topological patterns. |
14:30 | Choice set formation in disaggregated spatial environments: An application to freshwater recreation in Germany PRESENTER: Oliver Becker ABSTRACT. 1. Introduction In revealed preference (RP) settings, the set of alternatives an individual actually considered on a particular choice occasion cannot be directly observed. As misspecification of choice sets yields biased and inefficient parameter estimates, this raises the question of how to define choice sets. Beyond that, in many RP settings choice sets can be very large. This is often the case in spatial decision contexts, such as travel route (Fosgerau et al., 2013) or destination site choices (Termansen et al., 2004). Considering the complete set of feasible alternatives for model estimation in such situations can be computationally burdensome and sometimes impossible. To reduce choice sets and ease computation, many researchers have relied on McFadden's (1977) simple random sampling (SRS). Another common practice is the use of deterministic criteria to exclude alternatives. Both approaches, however, have disadvantages: SRS likely omits relevant options from choice sets – causing inefficiency and potential bias of estimated parameters (Axhausen and Schüssler, 2009) – while deterministic criteria fail to accommodate trade-offs between different attributes when constructing choice sets. We test the implications of choice set formation for the multinomial logit (MNL) model in a spatial decision context with over 100,000 alternatives in individual choice sets. We compare SRS and a deterministic spatial boundary with two innovative importance sampling approaches that increase the probability of selecting important alternatives into reduced choice sets and allow trade-offs between different attributes to be accommodated. Our application is freshwater recreation in Germany. We present Monte Carlo evidence that the impact of choice set formation on efficiency, bias, and prediction is substantial. 2.
Choice set formation schemes Simple random sampling: SRS assigns equal selection probabilities to alternatives and consistently estimates parameters for models that meet the independence of irrelevant alternatives property (McFadden, 1977). No correction terms have to be included in the models. Strategic sampling: This iterative sampling protocol developed by Lemp and Kockelman (2012) uses SRS in the first iteration to generate a subset of alternatives and estimate a corresponding model. In every iteration thereafter, a new subset is generated by setting the inclusion probabilities (into the new subsets) equal to the choice probabilities derived from the previous iteration's parameter estimates. Trade-offs reflecting decision-makers' choice of alternatives to be considered (e.g., between travel time and attainable site quality) are modelled implicitly. Fuzzy logic sampling: This method proposed by Hassan et al. (2019) allows (and requires) the researcher to explicitly model the determinants of alternatives' attractiveness and corresponding trade-offs using a fuzzy inference system (FIS). The FIS output is then used to construct choice sets by setting the probabilities of alternatives being included in model estimation proportional to the inferred attractiveness measure. Spatial boundary approach: This deterministic approach discards all alternatives beyond a particular distance threshold from a person's starting point, irrespective of site quality. Behaviorally, it assumes that sufficiently distant sites may be discarded because they are unlikely to have been considered. Parsons et al. (2000) report stable parameter and welfare estimates for spatial bounding at a range that corresponds to the distance travelled by 95% of the sample. 3. Data The observational basis is a national web-based survey (09/2019) in which respondents indicated their residential homes and the most recently visited water body in a map tool.
To derive spatial choice sets, a geo-referenced database of German water bodies has been generated on the basis of open data sources such as OpenStreetMap and the EU's Copernicus project. Attributes include measurements of natural endowment, accessibility, recreation infrastructure, and scenic value. 4. Results and discussion Means and standard deviations of prediction, bias, and efficiency metrics against the size of individual choice sets are presented in figure 1. Particularly encouraging results were obtained for strategic sampling: compared to SRS, it brought efficiency gains of 91% to 80% for choice sets including between 25 and 10,000 alternatives, and an average reduction of bias of 40% to 65% compared to estimates derived from full choice sets. SRS can easily be substituted by strategic sampling in large-scale MNL applications, with comparably low modelling effort and substantial benefits. Fuzzy logic sampling turned out to be very sensitive to the parametrization of the FIS and, although substantially more demanding in modelling terms, was clearly dominated by the strategic sampling protocol. Going beyond Parsons et al. (2000), we compared estimates derived from spatially reduced choice sets with those obtained from complete sets, adding the trade-off between reduced computational effort and incurred efficiency losses and bias. Their boundary recommendations were largely confirmed. We intend to validate our results on eight waves of similar choice observation data collected in 2021. A second aim is to assess the impact of choice set formation for models incorporating preference heterogeneity in the presented spatial decision context. References Axhausen, Kay W. and Nadine Schüssler (2009). “Accounting for Route Overlap in Urban and Suburban Route Choice Decisions Derived from GPS Observations”. In: Arbeitsberichte Verkehrs- Und Raumplanung. Vol. 590. Fosgerau, Mogens, Emma Frejinger, and Anders Karlstrom (2013).
“A Link Based Network Route Choice Model with Unrestricted Choice Set”. In: Transportation Research Part B: Methodological 56, pp. 70–80. Hassan, Mohammad Nurul, Ali Najmi, and Taha H. Rashidi (2019). “A Two-Stage Recreational Destination Choice Study Incorporating Fuzzy Logic in Discrete Choice Modelling”. In: Transportation Research Part F: Traffic Psychology and Behaviour 67, pp. 123–141. Lemp, Jason D. and Kara M. Kockelman (2012). “Strategic Sampling for Large Choice Sets in Estimation and Application”. In: Transportation Research Part A: Policy and Practice 46.3, pp. 602–613. McFadden, Daniel (1977). Modelling the Choice of Residential Location. Cowles Foundation Discussion Paper 477. Cowles Foundation for Research in Economics, Yale University. Parsons, George R., Andrew J. Plantinga, and Kevin J. Boyle (2000). “Narrow Choice Sets in a Random Utility Model of Recreation Demand”. In: Land Economics 76.1, p. 86. Termansen, Mette, Colin J McClean, and Hans Skov-Petersen (2004). “Recreational Site Choice Modelling Using High-Resolution Spatial Data”. In: Environment and Planning A 36.6, pp. 1085–1099. |
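The strategic sampling loop summarised in section 2 can be sketched on simulated data as follows. Everything here is an illustrative assumption: the data-generating process, the single-attribute utility, the grid-search estimator, and the number of iterations; the McFadden-style correction term is only approximate under sampling without replacement, as the abstract's own protocol acknowledges more carefully.

```python
import numpy as np

# Strategic sampling sketch (after Lemp & Kockelman, 2012): SRS in iteration 0,
# then inclusion probabilities set to the choice probabilities implied by the
# previous iteration's estimate. Sizes and the true parameter are illustrative.
rng = np.random.default_rng(0)
N, J, K = 500, 200, 20                  # choosers, alternatives, reduced set size
beta_true = -1.0
x = rng.uniform(0, 5, size=(N, J))      # e.g. travel cost to each site
choice = (beta_true * x + rng.gumbel(size=(N, J))).argmax(axis=1)

def fit_mnl(xs, logp, y):
    """One-parameter MNL MLE by grid search, with a sampling correction term."""
    best, best_ll = None, -np.inf
    for b in np.linspace(-3.0, 1.0, 401):
        v = b * xs - logp               # V_j - ln(inclusion probability)
        ll = (v[np.arange(len(y)), y] - np.log(np.exp(v).sum(axis=1))).sum()
        if ll > best_ll:
            best, best_ll = b, ll
    return best

prob = np.full((N, J), 1.0 / J)         # iteration 0: simple random sampling
beta = None
for _ in range(3):
    sets = np.empty((N, K), dtype=int)
    for n in range(N):                  # the chosen alternative is always retained
        p = prob[n].copy(); p[choice[n]] = 0.0; p /= p.sum()
        sets[n] = np.append(rng.choice(J, size=K - 1, replace=False, p=p),
                            choice[n])
    rows = np.arange(N)[:, None]
    beta = fit_mnl(x[rows, sets], np.log(prob[rows, sets]), np.full(N, K - 1))
    v = beta * x                        # update inclusion probabilities
    prob = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)

print(round(beta, 2))                   # should lie near the true value of -1
```

Each iteration thus estimates on choice sets of 20 out of 200 alternatives while steering the sampling towards alternatives the current model deems attractive.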
15:00 | Semi-compensatory probabilistic model for residential location choices PRESENTER: Abhilash Chandra Singh ABSTRACT. Residential location choice models are often burdened with missing information for modelling the choice set formation process. The information is missing either because one or more alternatives are unavailable to decision makers or because they considered (sampled) the alternatives endogenously, through a consideration process that is inaccessible to the analyst. We define the consideration process as an intermediate step that filters a few alternatives from the universal choice set, also known as the choice set formation process. If this consideration process is explicitly accounted for, the resulting subset of alternatives is hereafter called the consideration set. The consideration process cannot be empirically identified given the lack of information in revealed preference data. To overcome this challenge, the analyst may assume a single-step, fully compensatory behavioural process (Bierlaire et al., 2010) to describe a majority (if not all) of choice decisions. Such single-step, fully compensatory behavioural models hypothesize that decision makers compensate by making trade-offs between attributes and across all alternatives. However, the many choice heuristics documented by economists suggest otherwise (Manski, 1977). Decision makers must in practice choose from a restricted set owing to their limited processing capacity coupled with the unavailability of some alternatives. We can reasonably conclude that the residential location choice process is non-compensatory to a certain degree (or semi-compensatory). Discrete choice methods have primarily employed conjunctive or disjunctive rules (and their variations) for non-compensatory modelling of decision processes.
A conjunctive rule dictates that an individual only considers alternatives that meet all of a given number of requirements, while a disjunctive rule allows alternatives that meet at least one of a given set of requirements. A disjunction-of-conjunctions protocol allows any alternative that meets at least one of a given set of conjunctive conditions, where each condition may differ in the number of requirements that compose the conjunction. The most notable two-step choice model is that of Manski (1977), which employs non-compensatory decision rules to derive a consideration set, followed by a compensatory choice model. Arentze and Timmermans (2004 and 2007) used generalizations of conjunctive rules to directly predict the probability of a given choice without explicitly determining the individual's consideration set. The Probabilistic Independent Availability Logit (PIAL) model (Swait, 1984, 2009) also incorporates non-compensatory rules in an individual's decision making, but without allowing the consideration of any alternative to depend on the others, hence the independence in its formulation. The present contribution proposes to address the residential location consideration problem by employing probabilistic decision trees (also known as soft or fuzzy decision trees) to predict an individual's consideration set. The decision trees resolve the ambiguity in deriving exact conditions for each observation by using disjunctions-of-conjunctions decision rules (Brathwaite et al., 2017). Our model framework combines probabilistic decision trees with a traditional multinomial (logit) choice model to account for non-compensatory consideration of choice alternatives followed by a compensatory choice decision. To the best of our knowledge, this is the first application of probabilistic decision trees to model the consideration process in residential location choice data.
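The three screening rules defined above can be illustrated with a toy residential example; the attributes, thresholds, and alternatives are hypothetical.

```python
# Screening rules for consideration-set formation; all values are illustrative.
homes = [
    {"id": 1, "rent": 900,  "commute_min": 25, "bedrooms": 2},
    {"id": 2, "rent": 1550, "commute_min": 10, "bedrooms": 1},
    {"id": 3, "rent": 1600, "commute_min": 50, "bedrooms": 3},
]

requirements = [
    lambda h: h["rent"] <= 1500,        # affordability requirement
    lambda h: h["commute_min"] <= 30,   # commute requirement
]

# Conjunctive rule: every requirement must hold.
conjunctive = [h["id"] for h in homes if all(r(h) for r in requirements)]

# Disjunctive rule: at least one requirement must hold.
disjunctive = [h["id"] for h in homes if any(r(h) for r in requirements)]

# Disjunction-of-conjunctions: an alternative passes if ANY conjunction holds;
# here, (affordable AND short commute) OR (3+ bedrooms).
doc_rules = [
    lambda h: h["rent"] <= 1500 and h["commute_min"] <= 30,
    lambda h: h["bedrooms"] >= 3,
]
doc = [h["id"] for h in homes if any(c(h) for c in doc_rules)]

print(conjunctive, disjunctive, doc)   # prints [1] [1, 2] [1, 3]
```

Note how the disjunction-of-conjunctions set need not nest within the disjunctive set built from the single requirements.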
The decision-tree method ensures that no observation is ever described by more than one conjunctive condition. We estimate the probability statements as a summation of products, the output of which is the consideration probability of an alternative. These consideration probabilities across all alternatives are then used to form consideration sets, conditional on which a Multinomial Logit (MNL) choice model is estimated. The data for this analysis are sampled from the 2018-19 London Travel Demand Survey (LTDS). The LTDS is a continuous household survey of the London area with an annual sample size of approximately 8,000 households. It captures information on households, people, trips, and vehicles and therefore allows for detailed analysis of household residential location choice and its relationship to socio-demographic factors (including travel behaviour). Residential location choice was sampled for five local-authority neighbourhoods: City of London, Kensington and Chelsea, Hackney, Islington, and Lambeth. Our initial results indicate that employing nodal output from the decision trees as exogenous variables in the MNL yields better model fit and inferential insight than an MNL model using only observed exogenous variables. These nodal outputs explain the non-linear component of the relationship between the endogenous and exogenous variables. Empirically, a higher number of vehicles owned by a household or a higher number of licence holders in a household results in families choosing to live further away from the city-centre neighbourhoods. We also infer from the results that families who prefer city-centre neighbourhoods belong to the class defined by an income higher than £125,000 per year and, on average, one or more household vehicles. Similarly, households with an income between £15,000 and £40,000 per year and zero vehicle ownership prefer Lambeth as a residential neighbourhood.
However, these are initial findings based on a sample dataset and require further inquiry using the full dataset and allowing for more complex decision trees. This research further aims to build on these initial findings by (1) making probabilistic predictions with higher accuracy, (2) representing heterogeneity in a population's non-compensatory rules, (3) accommodating large numbers of alternatives, and (4) relaxing the independence in consideration set formation. In addition to a deeper understanding of the factors that contribute to residential location choice, this analysis will also identify the factors that lead to consideration or rejection of neighbourhoods by classes defined by income level, vehicle ownership, and household structure, among other socio-economic and demographic factors. This information can inform policies aimed at equitable residential neighbourhoods and provide a deeper perspective on the causes of residential segregation in London. |
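As a rough illustration of the "summation of products" computation, the sketch below scores one alternative's consideration probability with a depth-2 soft decision tree and feeds it into an MNL-style choice probability. The tree structure, gate parameters, taste coefficients, and the simple multiplicative way consideration enters the choice probability are all assumptions for illustration, not the authors' exact likelihood.

```python
import math

# Soft (probabilistic) decision tree: every observation reaches EVERY leaf with
# some probability, so the consideration probability is a sum over "considering"
# leaves of the product of logistic gate probabilities along the path.

def gate(x, threshold, steepness=1.0):
    """P(go right) at a soft split on attribute value x."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - threshold)))

def consideration_prob(rent, distance_km):
    # Depth-2 tree: root splits on rent, children split on distance (illustrative).
    p_pricey = gate(rent, 1200, 0.01)
    p_far = gate(distance_km, 10, 0.5)
    # Leaf labels: 1 = considered, 0 = not; only "pricey AND far" is rejected.
    paths = [
        ((1 - p_pricey) * (1 - p_far), 1),
        ((1 - p_pricey) * p_far,       1),
        (p_pricey * (1 - p_far),       1),
        (p_pricey * p_far,             0),
    ]
    return sum(prob for prob, label in paths if label == 1)

# MNL choice probabilities with each alternative's exp(utility) weighted by its
# consideration probability (an independent-availability-style simplification).
alts = [(900, 3.0), (1550, 4.0), (1100, 25.0)]   # (rent, distance to centre)
beta_rent, beta_dist = -0.002, -0.05
q = [consideration_prob(r, d) for r, d in alts]
ev = [qi * math.exp(beta_rent * r + beta_dist * d) for qi, (r, d) in zip(q, alts)]
p = [e / sum(ev) for e in ev]
print([round(pi, 3) for pi in p])
```

Because the gates are smooth, the consideration probabilities are differentiable in the tree parameters, which is what allows tree and choice model to be estimated jointly.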
13:30 | Modelling the joint choice of car ownership and use on income and fuel price: A panel data approach PRESENTER: Carl Berry ABSTRACT. Introduction Analysing the impact of car ownership and car travel in response to changes in income and fuel price is important for demand modelling, forecasts, and car tax policy design. Many studies have modelled car ownership and car travel as isolated choices in such analyses, and not as the joint choices they are. Such specifications probably lead to biased estimates. In this paper we therefore apply a novel model approach estimating the joint choices of car ownership and car travel using a panel data approach, in order to uncover the combined elasticity of car ownership and car travel. We apply register panel data covering all registered Swedish adults and households during a 20-year period. Earlier studies modelling the joint decision of car ownership and car travel conditional on car ownership have all used cross-sectional survey data. Being based on cross-sectional data, they fail to account for unobservable time-invariant preferences and spatial sorting; being based on surveys, they have relied on relatively small sample sizes, possibly subject to response bias. Our novel approach and data address these weaknesses. Moreover, since previous studies using aggregated data have shown that the income and fuel price elasticities change over time, we also examine intertemporal differences in the elasticities of car use by interacting our preferred models with time period in our panel micro-data approach. Method and Data We utilise a register database covering the Swedish population from the years 1999 to 2018, containing 97 million observations of 9.6 million households. The database contains information about all registered Swedish adults and their car ownership, which we aggregate to the household level.
To model the combined choice of car ownership and car travel we apply a discrete-continuous modelling system, implying a joint estimation of a probit model derived from random utility theory and a continuous regression model. There are various such models. We apply the Two-Part model (Dow and Norton, 2003) rather than the more common Heckman model because we believe that households without a car do not have a positive demand for car travel; otherwise, they could obtain a car and demand a positive driving distance if they so wished. Therefore, we model the expected value of the error term in the continuous regression model as zero, instead of it being dependent on the coefficients from the probit model. Accounting for the panel structure of the data in the model specification is important, since there are signs of substantial unobserved time-invariant variation in preferences. A key problem is how to handle fixed effects in the model of the discrete choice of car ownership: the marginal effects are not identified in such a model. Previous studies have not encountered this problem since they have used cross-sectional data. We therefore control for the unobservable household effect in the discrete car ownership choice by using the Mundlak approach, which includes the household mean values of the explanatory variables in addition to the yearly values of the variables. This relies on the assumption that the unobservable effect and the explanatory variables are correlated through a level effect, so that deviations from the household mean value of a variable are uncorrelated with the unobservable household effect. It can be shown that the estimated elasticity of the joint model is the sum of the elasticity of car ownership and the elasticity of car travel conditional on car ownership. Results Our Two-Part (joint discrete-continuous) model yields a combined income elasticity of car travel of 0.159.
This is the sum of the income elasticity of car ownership, 0.095, and the income elasticity of car travel conditional on car ownership, 0.064. When we do not account for the panel structure of the data, i.e., when the household mean values are not included among the explanatory variables, we find a much larger income elasticity. This indicates substantial unobservable household preferences for cars and spatial sorting on unobserved preferences for driving and car ownership. Failing to account for this overstates the income elasticity of car travel and ownership, which is likely why we find a lower income elasticity of car travel than previous studies based on cross-sectional survey data. As a sensitivity analysis, we also apply a fixed-effects linear probability model that finds similar elasticities, albeit with very low explanatory power. The marginal effects from an ordered probit show that income increases not only increase the probability of owning one car but also the probability of owning multiple cars. We find a fuel price elasticity of car travel of -0.471. This is the sum of the fuel price elasticity of car ownership, -0.092, and the fuel price elasticity of car travel conditional on car ownership, -0.379. The fuel price elasticity of car travel conditional on car ownership is more than four times larger than the elasticity of car ownership. This suggests that households respond to changes in the fuel price primarily by adjusting their mileage, and only to a minor extent by adjusting their car ownership. Our fuel price elasticity of car travel is largely consistent with what the previous literature finds. We find that the income elasticity of car travel has decreased over time. A possible explanation is that households are approaching a saturation point in car travel, where they receive less utility from additional car use. Moreover, the absolute value of the fuel price elasticity has increased over time.
These findings are consistent with those of Bastian et al. (2016). We find that the absolute value of the fuel price elasticity of car travel conditional on car ownership has increased more than that of the fuel price elasticity of car ownership, indicating that mileage can be adjusted relatively more easily than ownership in response to changes in the fuel price. |
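The decomposition used above (combined elasticity = ownership elasticity + conditional-use elasticity) follows from E[km] = P(own) × E[km | own], since the elasticity of a product is the sum of the elasticities. The toy functional forms below are purely illustrative and only serve to verify the identity numerically.

```python
import math

# Verify: elasticity of E[km] = elasticity of P(own) + elasticity of E[km|own].
# The logistic ownership model and power-law mileage model are illustrative.

def p_own(inc):                      # probability of owning a car
    return 1.0 / (1.0 + math.exp(-(0.00005 * inc - 1.0)))

def km_given_own(inc):               # expected mileage conditional on ownership
    return 2000.0 * inc ** 0.064

def elasticity(f, inc, h=1e-4):
    """Arc elasticity via log differences."""
    return (math.log(f(inc * (1 + h))) - math.log(f(inc))) / math.log(1 + h)

inc = 40000.0
e_own = elasticity(p_own, inc)
e_cond = elasticity(km_given_own, inc)
e_total = elasticity(lambda i: p_own(i) * km_given_own(i), inc)
print(round(e_own + e_cond, 4), round(e_total, 4))  # the two agree
```

The identity is exact because the log of a product decomposes additively, so it holds regardless of the functional forms chosen.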
14:00 | Accounting for the global heterogeneity in attitudes and perceptions towards new alternatives in mode choice models PRESENTER: Arash Kalatian ABSTRACT. 1. Introduction: The mobility landscape is currently undergoing rapid change with the advent of shared mobility. It is expected to change even more radically with the advent of new modes like automated vehicles (AVs), Air Taxis, and Hyperloops. Understanding the perception of travellers towards these new modes and services, and the underlying heterogeneity, is crucial for predicting future transport demand. In particular, estimating adoption rates and willingness-to-pay, and understanding the impact of new attributes associated with these modes on users' preferences, are important topics that need to be investigated in detail for the successful integration of these modes into the existing network. While there have been several studies looking at the heterogeneity in attitudes and perceptions towards new modes (e.g., Choudhury et al. (2018); Haboucha et al. (2017)), they have primarily concentrated on the effects of sociodemographic factors. However, there is evidence in the literature that people from different countries have significant differences in attitudes and perceptions that are likely to lead to differences in willingness-to-pay and different adoption rates of new transport modes in different parts of the world. For example, Hudson et al. (2019) used Eurobarometer data of 2014, with around 1000 participants from each EU country, to analyze the attitudes of Europeans towards AVs and found, among other things, that people from different countries show different levels of interest in AVs. On a scale of 0 to 10 (10 = totally comfortable, 0 = totally uncomfortable), respondents from Poland, the Netherlands, and Sweden responded an average of around 5, while in Cyprus, Malta, and Greece the average response was less than 3.
In a more recent study comparing willingness-to-pay for AVs across countries, Potoglou et al. (2020) interviewed around 6000 participants from 6 countries: the US, UK, Germany, Sweden, Japan, and India. The diverse set of countries selected, covering cultural orientations from highly individualistic (UK and US) and moderately individualistic (Germany and Sweden) to collectivist (India and Japan), helps the authors provide insights into the heterogeneity in willingness-to-pay for AVs. The results of this study showed a high willingness-to-pay for AVs among respondents from Japan, while in Germany only conditional automation turned out to be acceptable. While these studies explore the attitudes of respondents from different countries towards AVs, other new alternative modes of transportation, e.g., Air Taxis, are less explored in the literature with regard to heterogeneity in the attitudes of people from different countries. In one of the few studies on willingness-to-pay for Air Taxis, Ahmed et al. (2021) surveyed around 700 individuals, predominantly from the US. The authors found that age and cost-related factors have negative effects on willingness-to-pay for Air Taxis. On the other hand, lower and more reliable travel times, fewer and less severe crashes, more in-vehicle non-driving activities, and lower CO2 emissions had a positive effect on willingness-to-pay for Air Taxis. To the best of our knowledge, no study has compared the willingness-to-pay for different new modes (e.g., AVs, Air Taxis, Hyperloops) alongside the current ones across different parts of the world to uncover the underlying sources of global heterogeneity. 2- Objective This study aims to address this research gap by exploring the global heterogeneity in the perceptions and choices of potential users of new modes of transportation in a diverse set of countries, including Australia, Bangladesh, Canada, China, India, the UK, and the USA.
We attempt to understand the heterogeneity in the perception of autonomous vehicles, Air Taxis, and Hyperloops among people living in these countries by collecting stated preference (SP) data about mode choices in the presence of these new modes alongside the current ones. Detailed questions about attitudes (technology adoption, risk-taking, social-network effects), personality, and information-processing patterns are also asked to disentangle the different underlying sources of heterogeneity in choices among respondents from different countries. 3- Data In the SP survey, the modes are presented in three groups: 1) regular car, ride-sharing, and public transport; 2) regular car and AV; and 3) regular train, Hyperloop, and Air Taxi. The choice contexts (e.g., going to an interview, the airport, or a recreational trip) are varied across the SP scenarios along with the attribute levels (e.g., in-vehicle and out-of-vehicle travel times, costs, cleanliness, and crowding levels in shared options). The participants are asked to state their choice certainties alongside their choices. In addition to traditional attitudinal questions (e.g., environmental awareness, trust in technology), detailed questions are asked about features such as information processing, personality, and social norms, which are expected to reflect the sources of global heterogeneity. 4- Modelling framework and expected results The data will be used to develop a hybrid choice model with latent classes based on geographical location, ethnicity, and responses to the attitudinal questions. The willingness-to-pay values will be compared with those derived from simpler models. The results are expected to uncover the sources of global heterogeneity and will be useful for choice modellers beyond the transport domain. References: Ahmed, S.S., Fountas, G., Eker, U., Still, S.E., Anastasopoulos, P.C., 2021. An exploratory empirical analysis of willingness to hire and pay for flying taxis and shared flying car services.
Journal of Air Transport Management 90, 101963. Choudhury, C.F., Yang, L., De Abreu e Silva, J., Ben-Akiva, M., 2018. Modelling preferences for smart modes and services: A case study in Lisbon. Transportation Research Part A: Policy and Practice 115, 15–31. Continental, 2018. Where are we heading? Paths to mobility of tomorrow: The 2018 Continental mobility study. https://cdn.continental.com/fileadmin/__imported/sites/corporate/_international/english/hubpages/10_20press/03_initiatives_surveys/mobility_20studys/2018/mobistud2018_20studie_20pdf_20_28en_29.pdf. Accessed: 2021-11-20. Haboucha, C.J., Ishaq, R., Shiftan, Y., 2017. User preferences regarding autonomous vehicles. Transportation Research Part C: Emerging Technologies 78, 37–49. Hudson, J., Orviska, M., Hunady, J., 2019. People's attitudes to autonomous vehicles. Transportation Research Part A: Policy and Practice 121, 164–176. Potoglou, D., Whittle, C., Tsouros, I., Whitmarsh, L., 2020. Consumer intentions for alternative fuelled and autonomous vehicles: A segmentation analysis across six countries. Transportation Research Part D: Transport and Environment 79, 102243. |
14:30 | Forecasting home-based telecommuting in 2050 PRESENTER: Antonin Danalet ABSTRACT. Telecommuting is more easily accepted by users and takes a shorter time to implement than other congestion and pollution mitigation strategies, such as switching to alternative fuel vehicles, promoting public transport, or implementing congestion pricing. In particular, it directly reduces work trips and consequently part of peak-period traffic. However, working from home might induce other trips, e.g., for leisure, increase the overall distance travelled, and encourage households to move further away from their place of work. In this work, we focus on home-based telecommuting. Working from home is not the only way of telecommuting: one might work from other people's homes, from cafés and libraries, or from trains. Our goal is to forecast who will work from home among the Swiss resident working population in 2050. For this, we estimate a binary logit choice model. The modelled alternatives are working from home (at least partially) or not working from home (at all). The data for the estimation come from the Swiss national travel survey, the Mobility and Transport Microcensus (MTMC), 2015 & 2020. It includes revealed preference data at a national level. The MTMC normally takes place every five years. In 2015, 57'090 households were interviewed. The 2020 survey started in January and was interrupted in March 2020 due to the coronavirus pandemic; hence the sample contains 6903 households. Determinants of working at least partially from home are taken from the literature and tested on our sample. Work-related (business sector, function in the company, rate of part-time work), socio-economic (language, income, sex, education, age), and spatial factors (home-work distance, public transport quality, urban/rural typology) are significant attributes of employees' choice to work from home.
We validate our model internally (cross-validation) and test the temporal stability of the model between 2015 and 2020. The choice model is also externally validated by applying it to a synthetic population of 2017 based on multiple, statistically combined datasets. The alternative-specific constant has been calibrated on the 2017 synthetic population in order to reproduce the percentage of people working from home observed in the MTMC. Finally, we apply the model to forecasted synthetic populations of 2030, 2040, and 2050 in order to predict the proportion of employees doing home-based telecommuting. The projected synthetic population considers the demographic outlook provided by the Swiss Federal Statistical Office (FSO). All explanatory variables mentioned above are predicted for the Swiss population up to 2050. We predict that 39% of employees living in Switzerland will work at least partially from home in 2050, compared to 28% in 2015. The model has obvious limitations: neither the effect of the COVID-19 pandemic nor possible technological developments are included. The model also only explains the binary possibility of working from home, not the exact percentage of work done from home, even though the data are available in the survey (MTMC). Future research should estimate the share of work done from home, e.g., with a fractional regression, together with the number of trips to work or, even more generally, the number of trips for non-work purposes. New data are currently being collected. The Mobility and Transport Microcensus 2021 started in January 2021 (since the 2020 edition was interrupted and postponed to 2021) and lasts a full year. It will include a dataset of 50'000-60'000 respondents, including their telecommuting behaviour. The 2021 data have been collected during the pandemic and include far more people working from home.
We present possible approaches to incorporating the effects of the pandemic in our model and discuss the implications for long-term forecasting. Such results are important for predicting the number of trips to work. More generally, a synthetic population for 2050 containing each individual's probability of working from home, and its impact on the number of trips to work and for other purposes, allows us to produce more accurate transport demand forecasts. Hence, the results make it possible to test the future impact of teleworking on the Swiss transport system, which in turn helps guide transport policy and focus infrastructure investments. |
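Calibrating an alternative-specific constant so that a model reproduces an observed aggregate share, as done above on the 2017 synthetic population, can be sketched with the classic fixed-point update ASC ← ASC + ln(observed/predicted). The utilities and the target share below are made up for illustration; only the update rule is standard.

```python
import math

# Calibrate the ASC of a binary logit so the predicted share of home-workers
# in a (toy) synthetic population matches an observed aggregate share.

def predicted_share(asc, v):
    """Average P(work from home) over the synthetic population."""
    return sum(1.0 / (1.0 + math.exp(-(asc + vi))) for vi in v) / len(v)

# Systematic utilities (without the constant) for a toy synthetic population.
v = [0.8, -0.3, 0.1, -1.2, 0.5, -0.7, 0.2, -0.1]
target = 0.28                       # observed share working from home

asc = 0.0
for _ in range(50):                 # fixed-point update until convergence:
    share = predicted_share(asc, v) # ASC += ln(observed / predicted)
    asc += math.log(target / share)

print(round(predicted_share(asc, v), 4))  # matches the 0.28 target
```

The same loop generalises to multinomial models, with one such update per alternative-specific constant.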
15:00 | Fuel consumption elasticities and feebate effectiveness in India and China PRESENTER: Prateek Bansal ABSTRACT. Please see the attached file. |
16:00 | Modelling the demand side response (DSR) to energy price signals using the MDCEV approach PRESENTER: Jacek Pawlak ABSTRACT. The pressure to decarbonise the energy sector calls for multiple approaches to be employed to achieve the desired governmental targets, e.g., net zero by 2050 in the UK or the EU countries. One of the most substantial means of achieving this objective has been an increased reliance on renewable sources of energy, such as wind or solar power. Whilst meeting the decarbonisation objectives, such a shift creates the need to align times of consumption with periods of generation, which themselves depend on the prevailing though inherently uncertain and volatile weather conditions: wind speed and sunshine. Whilst alternative carbon-free sources exist that are less volatile, such as geothermal, tidal, hydro or nuclear power, they tend to be strongly restricted by the requirement of specific topographical conditions, costs or public attitudes. A recent example of the resulting strain was observed in February 2021 in Sweden, when high consumption coupled with insufficient regional generation and transmission capacity led to the need to import electricity from coal-fired power stations in Germany and Poland (https://www.bloomberg.com/news/articles/2021-02-11/sweden-s-power-problems-are-down-to-grid-manager-analysts-sa). At the same time, battery technology has not progressed sufficiently for large-scale industrial applications, whilst infrastructure for storing large amounts of energy (pumped-storage hydroelectricity, hydrogen manufacturing plants) involves multi-year projects and is not always well-suited to the need to respond rapidly to changing circumstances of the network. As a result, the energy sector has been exploring means of managing the demand side.
Specifically, great hopes are placed in the so-called demand-side response (DSR), which relies on the notion of nudging consumers, using monetary and non-monetary incentives, to decrease consumption peaks and manage consumption with respect to periods of high or low generation, whilst also accounting for transmission capacity. Accurate modelling of how people choose to adjust their consumption is paramount, as it drives both operational decisions (launching auxiliary generation or drawing from storage) and strategic decisions, such as expansion of generation, storage or transmission capacity. Existing approaches have largely focused on load (demand profile) forecasting using machine learning and artificial intelligence (Antonopoulos et al., 2020; Johannesen et al., 2019). In the present effort, we offer a novel approach to DSR modelling by using the multiple discrete-continuous extreme value (MDCEV) framework to describe how people adjust their energy consumption during periods of respectively low or high prices. To that end, we first propose a conceptual model in the form of an adapted microeconomic model of a decision-maker who chooses whether or not to engage in a DSR event, drawing upon our earlier developments in the field (Pawlak et al., 2020), themselves based on the goods-leisure trade-off paradigm. The model provides us with a theoretical justification for formulating DSR engagement as a discrete-continuous decision-making problem. To the best of our knowledge, this is the first attempt to model the DSR using the MDCEV framework, despite the framework's earlier applications in energy contexts, e.g., Pinjari & Bhat (2010). Thus, the current research is novel in two ways: in terms of how DSR participation is modelled, and as an extension of the MDCEV framework's application, including the accompanying microeconomic underpinning.
Despite the intuitively appealing nature of the MDCEV formulation in the current context, the actual formulation requires operational definitions of the key components of the model: discrete alternatives with non-negative consumptions, as well as the budget. We propose that the alternatives represent absolute deviations from the expected consumption (i.e. the consumption that would have happened without a DSR policy invocation). Typically, DSR events can be split into pre-, during-, and post-event periods, where the price signals are set to offset the 'during' consumption change in the preceding and following periods. Thus, for a period characterised by a higher price intended to reduce consumption, the periods before and after would be characterised by lower prices, and vice versa. This approach ensures that the DSR policy takes advantage of a shift in consumption rather than an overall reduction. Thus, in the present context, an individual could choose to amend their consumption in the pre-, during-, and post-DSR-event episodes and accordingly choose the amounts. As for the budget, we observe that residential electricity consumption is technically constrained by the main fuse (typically between 80 and 100 amps), which limits overall consumption. For a model based on a single DSR event, this is a more viable budget than one related to the household's budgetary (income) constraints. To assess the feasibility of the proposed approach, we begin by assessing the feasibility of estimation on a synthetic dataset, using the MDCEV implementation in the R package 'Apollo' (Hess & Palma, 2019). Our exploration of the synthetic results confirms that the model formulated as above is amenable to estimation. In our ongoing work, we seek to apply the proposed methodology to an experimental dataset collected as part of the Low Carbon London trials (Schoefield, 2014) between January 2013 and January 2014. The dataset comprises a representative sample of ca.
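As a hedged illustration of the fuse-based budget idea, the physical consumption envelope over a single DSR event can be computed directly; all numbers below are assumptions for exposition (a UK-style 230 V supply and an 80 A fuse), not values from the paper:

```python
# Hypothetical illustration of the fuse-based budget: the main fuse caps
# instantaneous draw, which bounds total consumption over a DSR event.
# All values are assumed, not taken from the paper.
volts = 230            # assumed supply voltage
amps = 80              # assumed main-fuse rating (the abstract cites 80-100 A)
event_hours = 2.0      # assumed duration of a single DSR event

max_kw = volts * amps / 1000          # ceiling on instantaneous draw, in kW
budget_kwh = max_kw * event_hours     # upper bound on consumption over the event
```

This kind of technical ceiling is what makes the fuse a natural, observable budget for a single-event MDCEV model, in contrast to an income constraint that is hard to measure at event level.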
5000 households in London, UK that were recruited to participate in the project, which consisted of one year of passive consumption monitoring followed by a year during which part of the sample was exposed to DSR experiments, i.e. periods of higher or lower energy prices. The participants' "default" rate of £0.1176/kWh could switch to a "high" rate of £0.6720/kWh or a "low" rate of £0.6720/kWh during the trial period. Participants received notifications a day before each DSR experiment day. The durations of price events were randomly chosen, and the events were distributed across the trial year. We expect the empirical results to confirm those obtained using synthetic data. In addition, we expect to showcase how DSR participation is shaped by variables related to energy policy (including pricing), appliance use, seasonality and household sociodemographic attributes. We also demonstrate how the model can be used to predict the DSR amounts at the household level, following the MDCEV forecasting algorithm (Pinjari & Bhat, 2010). We conclude the study by demonstrating how the model can be incorporated into a broader framework for residential energy demand forecasting. |
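To make the forecasting step above concrete, here is a hedged sketch, in Python rather than the Apollo/R setting used by the authors, of the budget-allocation logic behind MDCEV forecasting in the spirit of Pinjari & Bhat (2010), written for the simple "gamma-profile" utility U = Σ_k ψ_k γ_k ln(x_k/γ_k + 1); the utility form and all parameter values are assumptions for illustration, not the paper's specification:

```python
def mdcev_forecast(psi, gamma, price, budget):
    """Allocate a continuous budget across alternatives under the
    'gamma-profile' MDCEV utility U = sum_k psi_k*gamma_k*ln(x_k/gamma_k + 1),
    in the spirit of the Pinjari & Bhat (2010) forecasting algorithm."""
    # consider alternatives in decreasing order of price-normalised baseline utility
    order = sorted(range(len(psi)), key=lambda k: psi[k] / price[k], reverse=True)
    chosen, lam = [], None
    for k in order:
        trial = chosen + [k]
        lam_trial = (sum(gamma[j] * psi[j] for j in trial)
                     / (budget + sum(price[j] * gamma[j] for j in trial)))
        # alternative k is consumed only if its baseline utility beats the
        # budget's shadow price (lambda) implied by including it
        if psi[k] / price[k] > lam_trial:
            chosen, lam = trial, lam_trial
        else:
            break
    x = [0.0] * len(psi)
    for k in chosen:  # KKT conditions give the continuous amounts
        x[k] = gamma[k] * (psi[k] / (lam * price[k]) - 1.0)
    return x

# Illustrative call: three DSR-episode alternatives, unit prices, budget of 10
x = mdcev_forecast(psi=[2.0, 1.0, 0.2], gamma=[1.0, 1.0, 1.0],
                   price=[1.0, 1.0, 1.0], budget=10.0)
# the weakest alternative receives zero: a corner solution, which is the
# defining feature of multiple discrete-continuous models
```

The allocation always exhausts the budget and never goes negative, which is what the KKT structure guarantees.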
16:30 | Does accounting for discrete-continuous choices matter? A case study of farmers’ preferences for practice- vs. result-based agri-environmental-climate measures PRESENTER: Katarzyna Zagórska ABSTRACT. We investigate farmers’ preferences for new agri-environmental-climate measures (AECM) aimed at the conservation of biodiversity on arable land, with a particular focus on the distinction between result-based and practice-based contracts. The empirical context is important in light of the changes that the European Union plans to introduce to the Common Agricultural Policy. Learning about farmers’ preferences can help create more efficient agri-environmental programs. Participation in the programs is voluntary, and therefore dependent on how the contracts appeal to farmers. In this particular study, farmers’ adoption of AECM can be improved by advancing the available evidence on new design features of the payment schemes. Most of the current EU AECM are practice-based, which means that farmers receive a payment per ha of land enrolled if they implement specific practices. Result-based contracts allow farmers to choose the actions they take on the enrolled land, and their payment depends on actual improvements in environmental conditions. We chose the topic of biodiversity protection on arable land because of the biodiversity crisis in agriculture, as well as the ongoing scientific discussion about how to introduce result-based payments for biodiversity on arable land. There are many examples of result-based approaches being used across Europe to improve biodiversity, but relatively few on arable (cultivated) land. Our results highlight the benefits and challenges of using a result-based approach on arable land, and improve the understanding of which proxy measures of biodiversity quality are acceptable to farmers.
We use a stated-preference-based Discrete Choice Experiment to observe farmers’ willingness to enroll their arable land into two different types of contracts: result-based and practice-based. In each choice set there are three alternatives (practice-based, result-based and no contract), and farmers are asked to divide all of their arable land between them. The practice-based biodiversity-conservation contract included an ambitious combination of four land-management requirements: (1) winter cover crops and stubble intercrops (catch crops), (2) five different main crop types, including the cultivation of legumes, with a minimum share of 10% each, (3) 10% covered by flowering field margins and winter bird use, and (4) 10% set-aside. The result-based contract was presented as one in which payments depend on an expert-measured multi-level biodiversity index. The measurement considers various characteristics, such as soil life, flowering and native plants, and ecological corridors, and combines them into a single biodiversity index. It was assumed that if, under the result-based contract, farmers implemented the practices proposed in the practice-based one, their remuneration would be approximately the same. The advantage of result-based contracts is that achieving higher levels of biodiversity, or applying more effective practices, increases remuneration. On the other hand, they are associated with higher risk, as deteriorating biodiversity levels or conservation indicators result in lower payments. Respondents also had an opt-out option, that is, the possibility of declaring that some part of their arable land would not be subject to any agri-environmental contract. Inspired by the ongoing discussion, we incorporated aspects related to collective implementation and land tenure into the study. From the perspective of practitioners, there is interest in combining result-based and collective contracts.
We include a bonus payment dependent on the biodiversity level of the area surrounding one’s farm, measured at the landscape level. In other words, whether or not neighboring farmers also adopt conservation measures to increase the biodiversity of their farms influences the bonus one gets. The amount of the bonus payment takes a range of values for both practice-based and result-based alternatives, so the effects of its mean and variation can be identified. We track how preferences differ for owned and leased/rented land. On top of that, the study involves a number of attitudinal questions about the prospective novel contract design features. The study was conducted in January 2022 in four countries: the Netherlands, Germany, Poland, and Czechia, using representative samples of 500+ farmers in each country. The results offer an excellent opportunity to apply the multiple discrete-continuous extreme value (MDCEV) model, which allows us to account for both the discrete (the alternative chosen) and the continuous (the area of arable land enrolled in a specific contract) decisions. We implement the model in MatLab and use this opportunity to investigate whether ignoring the fact that some decisions are discrete-continuous in nature, and modelling them using state-of-practice discrete choice models (such as the random parameters mixed logit model), causes bias and leads to erroneous policy conclusions. There are very few studies concerning the hot policy issue of using result- vs. practice-based agri-environmental-climate measures, and our results make a clear contribution to this literature. In addition, we provide a novel investigation of state-of-the-art econometric techniques, with likely the first such application in environmental/agricultural economics.
Our results bear directly on the formulation of the EU Common Agricultural Policy, as creating appropriate, properly balanced contracts can satisfy both farmers and society, ensuring the sustainability of biodiverse agriculture and the efficiency of the economic instruments used to support it. |
17:00 | Estimating customer, product, and brand expected value using multiple discrete-continuous extreme value (MDCEV) models PRESENTER: Rodrigo Tapia ABSTRACT. [Please see attached pdf] Introduction Customer-centric companies put the customer at the top of their priorities. They focus on marketing strategies to obtain the maximum value from customer relationships. The performance of such strategies may be assessed using a measure called customer lifetime value (Kumar & Reinartz, 2016). However, this measure considers expected cash flows per customer only, disregarding information about brands and product categories. Marketing managers also make relevant decisions at the product-category and brand levels. They need to assess the performance of products (Ma, Fildes, & Huang, 2016) and brands (Lehmann & Srinivasan, 2014). Consequently, decision makers need to manage resources spent on customers, products, and brands simultaneously to generate value both to and from customers (Kumar & Reinartz, 2016). In spite of this, the expected value of customers, product categories, and brands is generally assessed separately, leading to the use of disconnected performance metrics to drive marketing efforts. Thus, there is a need to unify these three perspectives into a single metric that helps managers deal with them together. Our objective is to propose and empirically apply a model that unifies the estimation of customer, product-category, and brand expected cash flows. This will allow assessing the value of every combination of these three perspectives. To accomplish this, we sought to combine the strengths of time series models, which rely mostly on past purchasing behaviour, and choice models, which can predict changes when the context (for example, the price of the product) changes.
The proposed modelling approach is based on the multiple discrete-continuous extreme value (MDCEV) (Bhat, 2008) and eMDC (Palma & Hess, 2021) models to predict the individual purchases of each customer, and a time-series model to predict their overall expenditure (or budget, in MDC parlance) throughout the period under analysis. Different model alternatives (such as budget assumptions and complementarity or substitution among products) will be tested and compared not by model fit, but by their ability to predict the expected cash flow per customer. Data and modelling We have applied the proposed model to data from a large consumer-packaged goods distributor of products from one of the world’s largest manufacturers in the chocolates & confectionaries category operating in South America. In this business-to-business context, the retailer is the distributor’s customer. We obtained transaction data for a 60-month period from January 2013 to December 2017. It contains every product purchase by a cohort of 5,974 retailers. There are 4 product categories, and there may be multiple brands within each category: (i) Drops (1 brand); (ii) Gums (5 brands); (iii) Chocolates (7 brands); and (iv) Truffles (2 brands). The monthly purchases of each customer were predicted using Multiple Discrete Continuous (MDC) models. These models, originally proposed by Hanemann (1984), are derived from the classical consumer utility optimisation problem described in eq. (1) and (2) (see pdf), where xnk represents the consumed amount of product k by customer n, pnk the product’s price, and Bn the customer’s budget. In the above equations, xn0 is the numeraire (outside) alternative, representing the expenditure on all products other than the ones of interest to the analyst. Different models assume different functional forms for u0, uk and ukl. In particular, Bhat’s (2008) MDCEV assumes ukl=0, while Palma & Hess’ (2021) eMDC does not.
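The referenced equations are in the attached pdf; based on the verbal description above, a plausible generic rendering of the consumer problem (the exact form here is an assumption for exposition, not a quotation from the paper) is:

```latex
\max_{x_{n0},\,x_{n1},\dots,x_{nK}} \;
  u_0(x_{n0}) \;+\; \sum_{k=1}^{K} u_k(x_{nk})
  \;+\; \sum_{k=1}^{K}\sum_{l \neq k} u_{kl}(x_{nk}, x_{nl})
\qquad \text{s.t.} \quad
  x_{n0} + \sum_{k=1}^{K} p_{nk}\, x_{nk} = B_n,
\quad x_{nk} \ge 0,
```

where, as in the text, Bhat's (2008) MDCEV corresponds to the restriction $u_{kl} = 0$, while Palma & Hess's (2021) eMDC leaves $u_{kl}$ unrestricted, which is what lets it capture complementarity and substitution.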
This implies that only the second model considers complementarity and substitution effects. The functions u0, uk and ukl contain taste parameters representing the preferences of customers, and can be made dependent on customers’ characteristics. Purchase decisions are assumed to be taken monthly by customers, and the amount of money they spend on each occasion (Bn) is predicted using exponential time series models. This ensemble combines the dynamism of time series with the structural flexibility of choice models. We compare different combinations of these models in terms of their predictive capacity. Results and future work So far, MDCEV models have been estimated with different combinations of features: i) budget forecasting: a time series for each customer disaggregated per product type, or aggregated; ii) budget expressed in monetary terms, or as the amount of product; iii) budget modelled as a percentage of total expenditure, or as the units consumed; iv) consumption expressed as packages of product or as SKU units. Table 1 (see pdf) shows the prediction fit metrics for the models tested so far. The fit measured as mean average error suggests that the best approach comes from using a disaggregate budget forecast and modelling consumption as the percentage of total SKU units (model 5). However, when consumption is expressed as monetary expenditure on product packages (models 7 and 8), the forecast showed a better correlation between the actual and estimated most valuable customers. This implies a trade-off between the metrics. Table 1: Model forecasting fit metrics The models estimated so far did not include any segmentation per customer or complementarity/substitution between products; these will be the next types of models to be tested and compared. References Bhat, C. R. (2008). The multiple discrete-continuous extreme value (MDCEV) model: role of utility function parameters, identification considerations, and model extensions. Transportation Research Part B, 42, 274-303.
Kumar, V., & Reinartz, W. (2016). Creating Enduring Customer Value. Journal of Marketing, 80(6), 36-68. Lehmann, D. R., & Srinivasan, S. (2014). Assessing Brand Equity Through Add-on Sales. Customer Needs and Solutions, 1(1), 68-76. Ma, S., Fildes, R., & Huang, T. (2016). Demand forecasting with high dimensional data: the case of SKU retail sales forecasting with intra- and inter-category promotional information. European Journal of Operational Research, 249(1), 245-257. Palma, D., & Hess, S. (under review). Extending the Multiple Discrete Continuous (MDC) modelling framework to consider complementarity, substitution, and an unobserved budget. Retrieved from https://www.dpalma.net/publications |
16:00 | Investigating Passenger Information Needs for Hybrid Public Transport Network Journey Planning PRESENTER: Bianca Ryseck ABSTRACT. Though unscheduled transport systems were previously seen as conflicting with equitable mobility goals, cities like Cape Town are increasingly seeking to integrate them with new scheduled public transport systems, acknowledging that both system types are needed to provide the flexibility required to respond to rapid growth and changes in urban structures and travel patterns (Ferro et al., 2015). Hybrid systems support transportation diversity through multimodality by providing potential users with a mix of modes with various service characteristics. In providing these diverse mobility options, hybrid systems, as multimodal systems, have the potential to increase equity and resilience to changing mobility needs (Litman, 2017). While, from a spatial planning perspective, the hybrid system’s modal mix may theoretically serve a large population and connect them with a wide catchment of opportunities, a discrepancy in users’ knowledge of the network may affect their ability to make the best use of the system to meet their needs. Information imbalances and limitations across modes create an information deficit, differentially hindering users’ ability to access the information needed to harness the opportunities that the complex hybrid network could provide. However, despite plans for an integrated public transport network and South African national mandates to provide information to assist passengers with navigation and decision-making across these services, Cape Town has made little headway in providing accessible, integrated public transport information.
The objective of this research was to investigate which information capital and level of quality would most enhance captive public transport users’ ability to expand their mobility opportunities through travel decisions that meet their needs and preferences within Cape Town’s hybrid network for non-routine trips. To this end, a stated preference discrete choice model was used to investigate which information types, and of what level of quality, are most influential in enhancing users’ ability to make travel decisions. Incorporating uncertainty into a study investigating the information quality desired for pre-trip public transport journey planning demanded sensitivity to perceptions of utility: in other words, choosing based on the desired quality of information with which to make a journey choice, rather than conflating information quality with a specific journey option. While the latter has been extensively studied in various situations (e.g., Zhongwei et al., 2012; Li et al., 2016; Wijayaratna and Dixit, 2016), the former has not been investigated in any context through a choice model. The survey was designed to elicit responses based on the information needed to plan a trip, as opposed to the effect of information on journey choice. Multiple choice set designs were considered to best reflect choice quality as a function of utility. In the final design, for each choice set, respondents were asked to consider which information package (alternative) would be most beneficial for planning a hybrid multimodal trip. A mix of trip purposes and origin-destination (O-D) pairs was used to identify information needs for journey planning across a range of scenarios and to inform a design-for-all approach in terms of targeting information needs across a diverse population (Lyons et al., 2019).
The choice sets used a hybrid labelled and unlabelled approach, in which the complete set of choice scenarios was divided between three hybrid modal combinations, while within a single choice set the alternatives were unlabelled. Attributes were informed by prior semi-structured interviews and narrowed down through a best-worst-scaling exercise, resulting in a list of seven attributes used in this study: (1) frequency, (2) fare cost, (3) departure time, (4) arrival time, (5) safety walking to/from a station/stop, (6) safety onboard, and (7) safety while waiting at a stop. Attribute labels were qualitative (e.g. ‘exact fare’) rather than descriptive (e.g. ’10 ZAR’) representations of information certainty, to avoid the risk that respondents would make choices based on the value of the attribute label rather than on the precision of the level. Survey respondents were intercepted at the main public transport interchanges in Cape Town CBD, Bellville, Mitchells Plain, and Khayelitsha. Respondents were pre-screened to include only those between the ages of 18 and 55 who did not report having access to a private motorised means of transport (i.e. are ‘captive’). A total of 501 of the 576 surveys collected were included in the mixed multinomial logit model analysis. The paper finds that estimated and exact information levels are more desirable than no information at all, with information related to the safety of walking to and from the station or stop being the most desired. Across the models, when accounting for socio-demographic variables and journey planning scenarios, the significant contributors to variation in needs for non-routine trips are whether the person is male or female and whether they are planning a trip for social or appointment-based purposes.
As there is a disjuncture between the information currently provided and the information needs found to be useful for planning hybrid journeys, these findings have implications for data collection and provision strategies regarding hybrid public transport passenger information. References Ferro, P. S., Muñoz, J. C., & Behrens, R. (2015). Trunk and feeder services regulation: Lessons from South American case studies. Case Studies on Transport Policy, 3(2), 264–270. https://doi.org/10.1016/j.cstp.2014.10.002 Li, H., Huizhao, T., & Hensher, D. (2016). Integrating the mean–variance and scheduling approaches to allow for schedule delay and trip time variability under uncertainty. Transportation Research Part A, 89, 151–163. https://doi.org/10.1016/j.tra.2016.05.014 Litman, L. (2017). Evaluating Transportation Diversity: Multimodal Planning for Efficient and Equitable Communities. Victoria Transport Policy Institute. http://www.vtpi.org/choice.pdf Lyons, G., Hammond, P., & Mackay, K. (2019). The importance of user perspective in the evolution of MaaS. Transportation Research Part A, 12, 122–36. https://doi.org/10.1016/j.tra.2018.12.010 Wijayaratna, K. P., & Dixit, V. V. (2016). Impact of information on risk attitudes: Implications on valuation of reliability and information. Journal of Choice Modelling, 20, 16–34. Zhongwei, S., Arentze, T., & Timmermans, H. (2012). A heterogeneous latent class model of activity rescheduling, route choice and information acquisition decisions under multiple uncertain events. Transportation Research Part C, 25, 46-60. https://doi.org/10.1016/j.trc.2012.04.003 |
16:30 | Evaluating the effects of social capital on travel behaviour: modelling the choice of a new cable car in Bogotá PRESENTER: Julián Arellana Ochoa ABSTRACT. A general view of social interactions, including social capital, might help to explain the motivations and attributes of travel behaviour, improving the analysis of modal choice, the planning and execution of activities, and travel-location-related decisions (Dugundji et al., 2011). One of the most widely used definitions of social capital is the one proposed by Putnam (2000): “Social capital refers to the features of social organization such as networks, norms and social trust that facilitate coordination and cooperation for mutual benefit”. In a given market context, consumers’ economic decisions, such as transport mode choice, are based not only on the individual’s self-interest but also on social relations and interactions, which can be measured in the form of social capital. Recognizing the relevance of social capital in understanding individual economic behaviour might help improve the resource allocation process by providing more robust tools for policy evaluation and decision making (Robison et al., 2012). Even though there is growing interest in incorporating perceptions, attitudes, and social interactions into travel behaviour analyses, there are still considerable challenges in representing subjective data and latent constructs such as social networks, well-being, or social capital (Ben-Akiva et al., 2012). In transport research, there is some consensus that transport improvements facilitate social interactions and promote social inclusion by providing better access to opportunities (Östh et al., 2018). However, few studies have evaluated the effects of social capital on travel behaviour.
This research aims to fill some of these gaps by assessing the effects of social capital on the willingness to use a new cable car line located in the southern urban periphery of Bogotá (Colombia), an area characterized by originally informal settlements with high poverty levels, unemployment, low education levels, poor accessibility, and social vulnerability. The analysis relies on a survey collected before the inauguration of the cable car, in which the target population was adults living in the catchment area of the cable car stations. The survey consisted of a general questionnaire applied to 1,031 respondents to collect sociodemographic, travel pattern, and social capital information. A subsample (n=340) then answered a stated preference (SP) experiment to assess the willingness to use the new cable car service. We estimated an Integrated Choice and Latent Variable (ICLV) model (Ortúzar & Willumsen, 2011), considering a linear combination of social capital, cost, travel time, walking access time, and waiting time to explain the willingness to use the new system (Figure 1). Following the literature review, we defined social capital as a second-order formative latent variable caused by a linear combination of six first-order domains: social groups, networks, interpersonal trust, institutional trust, cooperation, and empowerment (see Figure 2). Using the whole sample, we modelled the latent variables via a Multiple Indicator Multiple Causes (MIMIC) structure, in which the latent variable explains a set of attitudinal indicators and is itself described as a function of observed attributes (Bollen, 1989). Modelling results suggest that the heterogeneity of social capital mainly depends on individuals’ age, education level and time living in the neighbourhood. Also, as seen in Table 1, higher social capital stocks are associated with a greater willingness to use the new cable car service.
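In generic notation (the symbols here are assumptions for exposition, not taken from the paper), a MIMIC structure of the kind described above pairs one structural equation with a set of measurement equations:

```latex
\text{structural:}\quad
  \mathrm{SC}_n = \boldsymbol{\gamma}'\mathbf{z}_n + \eta_n,
\qquad
\text{measurement:}\quad
  I_{qn} = \zeta_q\,\mathrm{SC}_n + \varepsilon_{qn},
```

where $\mathrm{SC}_n$ is the latent social capital of individual $n$, $\mathbf{z}_n$ the observed attributes (e.g. age, education level, time in the neighbourhood), and $I_{qn}$ the attitudinal indicators; in the ICLV model, $\mathrm{SC}_n$ then enters the utility of the cable car alternative alongside cost and the time components.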
Hence, social capital is a factor to consider in decision-making on transport investments, since it shapes the potential demand for new services and might help identify risks and evaluate financial projections. |
17:00 | User willingness to pay for COVID-19 mitigation measures in public transport and paratransit in developing economies: Evidence from Uganda and Bangladesh PRESENTER: Zia Wadud ABSTRACT. The outbreak of COVID-19 has significantly impacted travel behaviour, transport operations and policies in many countries across the world. In most countries, measures were taken to make public transport and paratransit safer by incorporating various non-pharmaceutical interventions. These additional measures, which include restrictions on the number of passengers (to maintain distancing), provision of hand sanitisers or face coverings, and frequent cleaning, add significantly to the costs of operations (or reduce revenue, in the case of distancing measures). The resulting financial pressure on transport operators raises an important question of who pays for these additional measures. In most countries, this has been covered by one-time government bail-outs or by strategies to increase fares (government-approved or organically driven), which directly affect users. However, even without these interventions, there could be demand for, and as such a willingness to pay for, some of these intervention measures from consumers concerned about safety. Knowing this WTP will not only help operators set their fares, but can also help governments decide the appropriate bail-out needed. This paper addresses this by estimating users’ willingness to pay for selected COVID-19 mitigation measures in public transport and paratransit (motorcycle taxis or motorcycle ridehailing services), using survey data collected from two developing countries as case studies: Uganda and Bangladesh. The mitigation measures considered are those where the cost is initially incurred by the operator and recovered through user fares.
For public transport, these are: (1) social distancing (passenger loading at half capacity), and (2) mandatory hand sanitisation plus increased cleaning/disinfection frequency of surfaces; for paratransit, they are: (1) provision of a transparent shield between the rider and the passenger (which has been found to be an effective means of reducing passenger exposure), and (2) provision of cleaned/disinfected helmets at the start of each trip. The study analyses stated preference data using the utility maximisation framework and finds that the implementation or provision of COVID-19 mitigation measures improves the attractiveness of the associated public transport or paratransit alternatives; however, transport users make trade-offs between safety and cost when making travel decisions. Willingness-to-pay values have been estimated for each of the mentioned COVID-19 mitigation measures, both in general and disaggregated by user demographics. These findings are particularly useful for developing economies where the rates of COVID-19 vaccination are still very low, and especially in the context of the new Omicron variant. |
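As a hedged illustration of how such WTP values are typically derived in a utility-maximisation (e.g. multinomial logit) setting, the marginal WTP for a mitigation measure is the negative ratio of its coefficient to the fare coefficient; the coefficient values below are assumptions for exposition, not results from the paper:

```python
# Hypothetical MNL coefficients (assumed for illustration only), for a
# utility of the form V = ... + beta_measure * sanitisation + beta_cost * fare
beta_measure = 0.45   # assumed utility gain from e.g. mandatory hand sanitisation
beta_cost = -0.03     # assumed marginal disutility of fare, per currency unit

# marginal rate of substitution between the measure and cost
wtp = -beta_measure / beta_cost   # approximately 15 currency units
```

In other words, under these assumed coefficients a user would accept a fare roughly 15 currency units higher in exchange for the mitigation measure being provided.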
16:00 | Scaling Bayesian inference of mixed multinomial logit models to very large datasets ABSTRACT. Kindly see the PDF version for the correct display of mathematical equations and figures. Word count: 969 Figures: 2 Tables: 2 |
16:30 | Heterogeneity in inter-episode intervals for discretionary activities: covariate-dependent finite-mixture models PRESENTER: Pim Labee ABSTRACT. Please see the attached .pdf. Thank you.
17:00 | Is your model the best? Mitigating risk through averaging across different analysts’ competing models. PRESENTER: Thomas O. Hancock ABSTRACT. INTRODUCTION: Recent work has demonstrated that model averaging using a sequential latent class process can result in significant and consistent improvements in model fit, both in estimation and in forecasting with subsets of validation samples (Hancock, Hess, Daly and Fox, 2020). Previously, we illustrated that a key opportunity for the use of model averaging arises when a modeller struggles to select a ‘final’ model from a number of advanced models (for example, not knowing which nesting structure to use in a mode-destination choice model, or not knowing which mixing distributions to use in a mixed logit model). In the current work, however, we demonstrate that the true power of model averaging lies in the fact that a modeller does not necessarily need to use their own models, and can average across a large number of models designed/specified by different modellers. This, on top of having all of the benefits of averaging across one’s own models, has the key additional benefit of avoiding possible biases held by the analyst, thus mitigating risk. Taking three case studies (one stated preference, one revealed preference and one simulated dataset), we invite attendees of ICMC to take part in a joint effort and competition. DATA: Three datasets will be made available. The first two will be a revealed preference dataset and a stated preference dataset, each with around 10,000 choice observations. The final dataset will be simulated, meaning that there is a known underlying model specification. As such, there is a known ‘correct’ specification for the data generation process, meaning that particular care will be required to avoid overspecification. The datasets will be made available on 15 January 2022.
Analysts need to submit details of their model specification, parameter estimates, and likelihoods at the individual decision-maker level by 15 March 2022. EVALUATION: The CMC team will evaluate the models in three different ways: • Performance on a hold-out sample that was not available to analysts during model building. • Ease of obtaining behavioural insights from the models. • Performance/contribution of the model during the model averaging process. To apply model averaging, we use a sequential process to estimate a latent class model. Firstly, the individual classes (models) are estimated. Then, holding all parameters of the individual models constant, the class allocation parameters are estimated. As such, to apply model averaging (i.e. to estimate class shares), we only require model likelihoods for the set of choices made by each individual in the data. INVITATION/INCENTIVE: An open invitation will be sent to choice modellers in January. Modellers may apply their models to a single dataset, or to all three, but are encouraged to apply the same model structure to all three datasets. Analysts whose models receive a certain share of the model average will be invited to become co-authors ahead of submitting this work to a journal, while free registration to ICMC will be offered to one member of the team submitting the best performing model. REFERENCE: Hancock, T. O., Hess, S., Daly, A., & Fox, J. (2020). Using a sequential latent class approach for model averaging: Benefits in forecasting and behavioural insights. Transportation Research Part A: Policy and Practice, 139, 429-454. |
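A hedged sketch of the class-share estimation step described above: given only each analyst's model likelihood L[n][c] for individual n's full sequence of choices under model c, the shares maximising the latent-class log-likelihood can be found with a simple EM loop (the numbers in the example are a toy assumption, not competition data):

```python
def estimate_class_shares(L, iters=500):
    """EM for the class-allocation step of sequential latent-class model
    averaging: L[n][c] is the likelihood of individual n's choices under
    model c, with each model's own parameters already estimated and fixed."""
    n_ind, n_cls = len(L), len(L[0])
    pi = [1.0 / n_cls] * n_cls                 # start from equal shares
    for _ in range(iters):
        # E-step: posterior probability that individual n 'belongs' to model c
        post = []
        for row in L:
            denom = sum(p * l for p, l in zip(pi, row))
            post.append([p * l / denom for p, l in zip(pi, row)])
        # M-step: the share of each model is the average posterior allocation
        pi = [sum(w[c] for w in post) / n_ind for c in range(n_cls)]
    return pi

# Toy example with two individuals and two competing models, where model 0
# fits both individuals' choice sequences better: its share approaches 1
pi = estimate_class_shares([[0.9, 0.1], [0.8, 0.2]])
```

This is why the CMC team only needs per-individual likelihoods from each analyst: the averaging step never re-estimates the submitted models themselves.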
16:00 | The valuation of benefits from health risk reduction in three-generation households – the role of reciprocity PRESENTER: Anna Bartczak ABSTRACT. In this study, we investigate people's preferences for family resource allocation on health in three-generation families. Our main objective is to examine whether reciprocity attitudes influence preferences and willingness to pay (WTP) for lifetime health risk reduction. We conducted a choice experiment (CE) valuation survey of 500 respondents to elicit preferences for the lifetime risk reduction of coronary artery disease (CAD), and we used a specially designed scale to elicit respondents’ positive reciprocity attitudes. The sample consisted of middle-generation members of three-generation households from Poland. Our questionnaire was based on the contingent valuation survey by Adamowicz et al. (2017) concerning risk perception and parents’ marginal WTP for CAD risk reduction. We extended this study to investigate a parent’s preferences not only towards their child but also towards their elderly parent. We use three CEs to estimate the parent’s WTP for reducing risks of heart disease. In one CE we ask about their child’s risk of heart disease; in another, their own risk; and in the third CE we ask about their elderly parent’s risk. Respondents were randomly assigned to one of two treatments. In each treatment they faced two separate CEs. All respondents completed the CE for their own risk reductions, and then completed a CE either for their child or for their elderly parent. Hence the respondents did not make any direct trade-offs between their child and elderly parent. The design allowed for a direct comparison of our parent-child results with previous studies without the potential confound that could be introduced by the inclusion of the trade-off with the third generation.
To elicit the respondents’ reciprocity beliefs, we applied the scale developed by Eisenberger et al. (2004). This 10-item scale measures positive reciprocity norms and includes items such as “If someone does something for me, I feel required to do something for them”. A Likert scale was used to assess the level of agreement with the items. To carry out our analysis, we applied a theoretically robust econometric approach, the hybrid mixed logit model. The main advantage of the hybrid choice framework is that it recognizes that answers to attitudinal questions in the survey are likely not direct measures of the latent factor (for example, reciprocity beliefs), but rather a function of the latent variable, and can also be affected by other factors. This framework allows us to identify a latent variable by utilizing multiple indicator variables, which are all affected by the same latent construct. At the same time, the hybrid choice framework avoids the possible endogeneity caused by the measurement error that indicator variables are likely to contain. We found that in three-generation households, the WTP to reduce the lifetime risk of CAD for the child exceeds the WTP to reduce this risk for the parent (the respondent). The result that parents value the relative health risk reduction for their child substantially more than for themselves is consistent with some previous literature. Additionally, we found that the WTP to reduce risk for the elderly parent does not significantly differ from the WTP to reduce risk for the parent (respondent). Turning to the main objectives of our analysis, we found that attitudes regarding reciprocity have distinct effects on the middle generation’s preferences concerning the reductions in the lifetime risk of CAD for the family members.
The results indicate that the latent attitudes concerning reciprocity significantly impact the WTP for the health risk reduction for the child, the respondent, and the elderly parent. While the impact of reciprocity on the valuation of the CAD risk reduction for the respondent and the elderly parent is similar, in the case of the child it is significantly higher than for the parent's own health improvement. Parents' strategic considerations may explain the higher impact of reciprocity beliefs on the WTP for children's health risk reduction. This result may suggest that parents with a high inclination towards reciprocity norms may treat transfers made for the sake of their child's health improvement as a sort of assurance that they will receive support later in life. The increased WTP for health risk reduction for the elderly parent due to reciprocity can have a few explanations. Firstly, considering the design of our study, i.e., three-generation households, one can expect that this effect is partially driven by gratitude for parental transfers experienced earlier in life. Secondly, transfers made for the sake of the elderly parent may result from respondents' expectations of receiving something in return. Support from the elderly parent can take the form of emotional, financial, or practical help in taking care of the household in general or focusing primarily on grandchildren. Finally, supporting an elderly parent might serve a sandwich-generation member as an instrument for sharing norms with their children. Probably the most puzzling result of this study is the significant and positive impact of reciprocity attitudes on the parents' (the middle-aged respondents') WTP for the reduction of their own lifetime risk of CAD. One explanation for this finding might lie in the fact that the reduction of health risks increases the ability to reciprocate, which can be limited by frailty and illness, particularly at older ages.
Thus, individuals with stronger individual norms of reciprocity may be more willing to maintain their ability to reciprocate. Apart from that, the deteriorated health status of an elderly parent naturally makes caregiving more absorbing, both in terms of time and costs. Parents' health status at an older age has been found to deteriorate the quality of life of informal caregivers, such as their children. For those who strongly believe in reciprocity, it might be evident that their children will undertake the caregiver role at some point in time. Therefore, care for one's own health might result from a desire to minimize the future burden on children who would provide care for their elderly parents. |
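The hybrid choice structure used in this study, where Likert indicators are noisy functions of a latent variable that also enters the choice utilities, can be illustrated with a toy data-generating sketch. All variable names and numerical values below are illustrative assumptions, not the authors' specification:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 5000

# Structural equation: latent reciprocity depends on observables plus noise
age = rng.uniform(25, 55, N)
reciprocity = 0.02 * (age - 40) + rng.standard_normal(N)

# Measurement equations: Likert indicators are noisy functions of the
# latent variable, not direct measures of it
def likert_indicator(latent, loading, rng):
    response = loading * latent + rng.standard_normal(len(latent))
    return np.clip(np.round(response + 3), 1, 5)  # map onto a 1..5 scale

ind1 = likert_indicator(reciprocity, 1.0, rng)
ind2 = likert_indicator(reciprocity, 0.8, rng)

# Choice model: the latent variable shifts the marginal utility of the
# health-risk-reduction attribute
cost = rng.uniform(0, 100, (N, 2))
risk_reduction = rng.uniform(0, 10, (N, 2))
utility = (-0.03 * cost
           + (0.2 + 0.1 * reciprocity[:, None]) * risk_reduction
           + rng.gumbel(size=(N, 2)))
choice = utility.argmax(axis=1)
```

Estimating all three blocks jointly, rather than plugging the indicators into the utility function directly, is what lets the hybrid framework sidestep the measurement-error endogeneity noted in the abstract.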
16:30 | Patient Preferences for Diagnostic Imaging Services: Blueprint for Value-Based Incentives Incorporating Individual Preference Heterogeneity PRESENTER: Jamie Benson ABSTRACT. Background In recent years, health care reform advocates have focused on replacing the traditional, volume-based care delivery system with one that rewards the provision of value to patients. In this context, “value” is generally defined using measures of care coordination, cost-effectiveness and health outcomes, yet tends not to involve patient preferences regarding these or other measures of value and quality, despite the explicit desire to incorporate patient preferences. In theory, engaged patients are at the center of the health care experience in the new value-based delivery system, but in practice, provider and payer preferences dominate. The main obstacle to including patient preferences is the difficulty of operationalizing individual patient values and effectively translating those patient preferences into value-based incentives for health care providers. The purpose of this study is to develop a Discrete Choice Experiment (DCE) to model how patient choices are affected by measures of value. However, there are a number of major challenges that need to be addressed before patients can actually be part of the shared decision-making process that value-based care envisions. Primary and specialty care providers alike are working to identify practice changes to meet these incentive targets and provide services based on value. Common themes include: reducing inappropriate referrals, improving care coordination, and investing in preventive care and community health. Diagnostic services, and Radiology departments in particular, are a keystone of care-coordination regimens because of their multidimensional capacity for diagnostic services across medical branches.
As diagnostic imaging demands and costs escalate, there are significant negative impacts on provider productivity and patient satisfaction. To combat the rising demand, retail diagnostics has emerged across the State of Vermont, a phenomenon that is also observed elsewhere in the country. Retail diagnostics allows patients to receive diagnostic services and evaluation by specialists regardless of geographic location, thereby cutting wait times, increasing access and augmenting coordination capacities between treating physicians. Radiology is one of the specialties for which it is especially important to better define value from a patient perspective, especially because there is usually no direct patient-provider relationship and value is thus defined in other ways, incorporating the broader health system’s attributes. The objective of this study was to identify patient preferences for diagnostic imaging services and analyze how patients make trade-offs between attributes of services. The goal of this study is to advise hospital management on whether decentralizing some imaging services to an outside “kiosk” or clinic will provide value to patients while improving access and lowering costs. Methods Two focus groups were conducted in a semi-structured manner with patients in the Vermont health network, each with 12 participants and lasting between 90 and 120 minutes. Focus group interviews were transcribed and then analyzed using ATLAS.ti version 8 qualitative analytic software, which was used to search for potential themes and group all relevant data under each. Themes were then refined and specified before the final report was produced, in which themes were reported based on their frequency within and across groups, and the intensity at which they were discussed.
Themes included: overarching trust in the system and referring providers; preference for transparent and informative communication; personal interaction and compassionate bedside manner from staff; and accessibility of radiology services and the facility. This preliminary work helped define the key attributes and levels for a discrete choice experiment (DCE). Attributes and levels included interpreting physician specialty (non-radiologist, general radiologist, or sub-specialty radiologist), primary care recommendation (y/n), cost (USD, pivoted to participant insurance status), clinic wait time, travel time, appointment scheduling wait time, patient-rated service quality (5-star rating), government-rated quality score (5-star rating), and online scheduling availability (y/n). Two discrete choice experiments were designed to assess different aspects of the choice scenario: acute (injured ankle X-Ray), and general imaging (MRI for chronic back pain). Attributes and levels were chosen following extensive qualitative interviews, literature review, and pilot DCEs. A “library of designs” approach was taken to present subjects with cost levels closer to what they may actually pay, based on average costs for insured and uninsured individuals, with and without a deductible. Cost and wait/travel time levels were selected for each experiment based upon national and regional average values for each procedure, obtained from literature, government datasets, and our academic medical center. In total, six different D-efficient designs with twelve choice tasks each were generated using Ngene 1.2.1 (ChoiceMetrics, 2018). Results Recruitment began online on April 5th, 2021. Respondents were separated into two distinct representative samples: one for the (rural) state of Vermont to investigate our specific care landscape, and a representative sample of the remaining 49 US states.
Two hundred participants were needed in each sample (national and state), across two blocks (MRI and X-Ray), for a total sample of 800 individuals. Using a sampling company, we have filled our national quotas; however, we are still in the process of recruiting for our local state sample using a combination of traditional panel sampling and social media advertised sampling. Sample demographics are consistent with the sampled populations, with Vermont being older, far whiter, and more rural than the US cohort, with sparse pockets of urbanicity and diversity. Preliminary stated preference data have been analyzed via random parameter mixed logit models using the cmxtmixlogit package in Stata SE 17.0 (StataCorp, 2021). Models for each DCE (X-Ray and MRI) were estimated by sample (state vs national), as well as in aggregate. To better understand individual preference heterogeneity, interaction models including participant demographics have also been constructed. Participants were willing to pay significantly more for MRI services than for X-Ray; however, interaction models revealed far higher cost sensitivity for those in lower income brackets. Participants valued the specialty of the radiologist next most highly, followed by recommendation from their primary physician. At the conclusion of each DCE, participants were asked to rank the attributes in order from most important to their choices to least, enabling comparison of self-stated and model-estimated attribute prioritization, which were found to be strongly concordant. |
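With a cost attribute in the model, the attribute prioritisation discussed above can be put on a money scale as marginal WTP, the negative ratio of each attribute coefficient to the cost coefficient, and then ranked for comparison with the self-stated rankings. A sketch with purely hypothetical coefficient values (not this study's estimates):

```python
# Hypothetical mixed logit mean coefficients, for illustration only
coefs = {
    "cost_usd": -0.04,
    "subspecialty_radiologist": 0.90,
    "pcp_recommendation": 0.60,
    "clinic_wait_time_hr": -0.20,
}

# Marginal WTP: dollars a respondent would pay for a one-unit change
wtp = {name: -beta / coefs["cost_usd"]
       for name, beta in coefs.items() if name != "cost_usd"}

# Model-estimated importance ranking, comparable with self-stated rankings
ranking = sorted(wtp, key=lambda name: abs(wtp[name]), reverse=True)
```

A rank-correlation statistic such as Kendall's tau between this ranking and the self-stated one would then quantify the concordance the abstract reports.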
17:00 | Two methods one story? Using multidimensional thresholding and a best-worst choice experiment to elicit physicians’ preferences for the medical management of subarachnoid haemorrhage PRESENTER: Sebastian Heidenreich ABSTRACT. BACKGROUND Information about the treatment preferences of patients, physicians, and caregivers is becoming increasingly important in the development of pharmaceutical products, regulatory assessments, and care provision. While choice experiments with multiple attributes traded across alternatives are the workhorse of stated preference elicitation, their application to small-sample settings is challenging. Multi-dimensional thresholding (MDT) has been proposed as an alternative to conventional stated preference methods such as discrete choice experiments or best-worst scaling. MDT is a choice-based preference-elicitation method with roots in multi-criteria decision analysis that aims to elicit preferences at the individual level. While the applicability of MDT has been demonstrated previously, its use remains limited and no head-to-head comparison with other methods has been conducted. METHODS In a study concerned with physicians’ preferences for the medical management of subarachnoid haemorrhage post aneurysm repair, we included both an MDT and a best-worst choice experiment (BW-CE) to elicit acceptable trade-offs between four attributes: 1) the likelihood of delayed cerebral ischemia (DCI); 2) the overall risk of hypotension; 3) the overall risk of lung complications; and 4) the overall risk of dilutional anaemia. The instruments were developed and tested using a mixed methods approach that included interviews, a qualitative pilot, a quantitative pilot, and the main data collection. Within the BW-CE, physicians identified the best and worst of three hypothetical treatments. The performance of the three treatments on the attributes was varied based on a D-efficient design with 28 choice tasks that were split into two blocks.
The MDT required physicians to rank the largest possible improvement in all attributes from most to least desirable. This was followed by binary choice exercises in which the treatment performance on two attributes was adaptively varied by bisecting their normalised feasible utility space. Attributes were paired such that the most important attribute was traded off against the second-most important attribute, followed by trading off between the second-most and third-most important attributes, and so on. BW-CE data were analysed using a correlated mixed logit model. Attribute weights followed a log-normal distribution to reflect the ordinal nature of the attributes. Conditional parameter estimates were normalised to sum to one. MDT data were analysed by assuming attribute weights to follow a Dirichlet distribution. For each physician, linear trade-off constraints were derived from the ranking and the sequential trade-off questions to reduce the feasible weight space to an individual-specific 4-dimensional polytope, with the centroid being an estimator of the attribute weights. Given that the computation of the vertex centroid is #P-hard, a hit-and-run estimator was used. Minimum acceptable benefit (MAB) was calculated from the BW-CE conditional parameters and MDT centroids as the marginal rates of substitution between DCI and each of the three considered treatment risks. Estimates from both methods were compared on: 1) the attribute ranking as implied by attribute weights; 2) the distribution of normalised attribute weights; 3) the median MAB; and 4) the distribution of MAB. Differences between distributions were tested using a complete combinatorial convolution test as well as the relative differences in the length of the tails. RESULTS Overall, 350 physicians were recruited via a physician access panel. The sample consisted of 129 intensivists, 116 neurologists, and 105 neurosurgeons.
Physicians were on average 47 years old (SD 8 years), residents of the UK (N= 175; 50.0%) or the US (N= 175; 50.0%), and the majority (N= 181; 51.7%) had been practicing medicine for more than 10 years, with only a few (N= 15; 4.6%) practicing for less than 5 years. Most physicians varied their ‘best’ (N= 346; 98.9%) and ‘worst’ choices (N= 349; 99.9%) across the three alternatives in the BW-CE. A few physicians (N= 8; 2.3%) made choices in line with dominant preferences for lower levels of DCI in the BW-CE and were removed from the overall analysis to facilitate the methods comparison. The mixed logit model had a good data fit (Adjusted McFadden R2= 0.419) with all parameters being significant (p<0.001); significant standard deviations for each attribute suggested that preferences differed noticeably across physicians (average coefficient of variation= 2.18). When comparing normalised median attribute weights, results from both preference elicitation methods suggested that the likelihood of DCI (BW-CE: 0.443; MDT: 0.388) had the largest impact on physicians’ treatment choices, followed by the risk of lung complications (BW-CE: 0.346; MDT: 0.307), the risk of hypotension (BW-CE: 0.081; MDT: 0.118), and the risk of anaemia (BW-CE: 0.063; MDT: 0.075). These findings provided consistent insights into the relative importance of attributes. Investigating the distribution of normalised weights suggested that differences were driven by the length of the tails for the risks of hypotension and anaemia. This was supported by the complete combinatorial convolution test, which identified no significant differences in the weight distributions between MDT and BW-CE (smallest p-value = 0.406). Specifically, 13% (N= 40) of physicians had a larger attribute weight in the MDT than the largest corresponding conditional parameter estimated from the BW-CE data. Similarly, 21% (N= 67) of MDT weights were smaller than the smallest conditional BW-CE parameter for the corresponding attribute.
No significant differences were found between the MAB distributions obtained from MDT and BW-CE (smallest p-value = 0.513), but 10% of MAB estimates obtained from the MDT data were larger than the largest corresponding conditional BW-CE parameter, while 11% were smaller than the smallest corresponding conditional BW-CE parameter. CONCLUSION This was the first study to compare MDT to a stated preference instrument that aligns closely with the current state of practice in health preference research. Results from both instruments convey a comparable message about the relative importance of the considered attributes. Obtained distributions of acceptable attribute trade-offs were not significantly different. Future research should investigate the nature of differences in the tails of the obtained distributions. FUNDING The study was sponsored by Idorsia Pharmaceuticals Ltd., Allschwil, Switzerland. |
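The centroid estimation step in the MDT analysis, a hit-and-run sampler over an individual's constrained weight polytope, can be sketched as follows. This is a simplified illustration under my own implementation choices; the study's actual estimator may differ in burn-in, step rules, and convergence diagnostics:

```python
import numpy as np

def hit_and_run_centroid(A, b, w0, n_samples=4000, burn=500, seed=0):
    """Approximate the centroid of {w : w >= 0, sum(w) = 1, A @ w <= b}
    by averaging hit-and-run draws started from an interior point w0."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    K = w.size
    # Stack the non-negativity constraints (-w_i <= 0) with the trade-off
    # constraints derived from the ranking and threshold questions
    C = np.vstack([-np.eye(K), A])
    bounds = np.concatenate([np.zeros(K), b])
    draws = []
    for it in range(burn + n_samples):
        direction = rng.standard_normal(K)
        direction -= direction.mean()          # stay on the sum-to-one plane
        direction /= np.linalg.norm(direction)
        lo, hi = -np.inf, np.inf               # feasible step sizes t
        slopes = C @ direction
        gaps = bounds - C @ w
        for s, g in zip(slopes, gaps):
            if s > 1e-12:
                hi = min(hi, g / s)
            elif s < -1e-12:
                lo = max(lo, g / s)
        w = w + rng.uniform(lo, hi) * direction
        if it >= burn:
            draws.append(w.copy())
    return np.mean(draws, axis=0)
```

With no trade-off constraints (A empty), the draws are uniform over the weight simplex and the estimate approaches (1/K, ..., 1/K); each physician's constraints shrink this polytope to their individual weight region, whose centroid serves as the weight estimate.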
16:00 | Is there a hypothetical gap in experiments on the willingness to pay for sustainable funds? PRESENTER: Daniel Engler ABSTRACT. This paper analyzes hypothetical bias in discrete choice experiments (DCE) for sustainable investments. To this end, we compare incentivized and hypothetical decisions in DCE conducted among households’ financial decision makers in France and Germany. People choose differently in incentivized and hypothetical settings, and the extent of the differences strongly depends on the type of product, elicitation method, and sample considered (e.g., Murphy et al., 2005, Environmental and Resource Economics, 30 (3), 313–325). For example, Norwood and Lusk (2011, American Journal of Agricultural Economics, 93 (2), 528-534), among others, show that goods with normative attributes (e.g., environmentally and/or climate friendly) are chosen more often in hypothetical settings, and suggest social desirability as an explanation for the difference. Stated DCE are increasingly applied in the field of sustainable investments (e.g., Gutsche and Ziegler, 2019, Journal of Banking and Finance 102, 1155-1182), where investment products, for example, apply different normative (e.g., ecological) screens to avoid investments that are considered unsustainable. Results from such experiments are therefore particularly suspected to be affected by hypothetical bias (e.g., Bauer, Ruof, and Smeets, 2021, The Review of Financial Studies, 34 (8), 3976-4043). We thus ask three key questions: Do respondents have stronger preferences for sustainable investments in incentivized settings than in purely hypothetical settings? Do social desirability motives drive differences between incentivized and hypothetical choices? Are the determinants of sustainable investments identified in past studies the same in incentivized and hypothetical settings?
Our analysis is based on a pre-registered (no reference due to blind review) DCE among representative samples of 2,153 individual investors from France and Germany, conducted from May to July 2021. Individual investors are defined as financial decision makers in their household who own, have owned, or have experience with investment products. Participants are endowed with 500 Euros and choose six times among four real funds that are actually traded on the market, and a safe option (bank account). We employ four treatments that differ only in the information participants receive about incentives and the safe option. In treatment (1), participants make an incentivized choice among funds. After learning about the experimental setting, they are informed that one chosen fund is randomly selected and actually bought for 10 randomly selected study participants. The payoff is the value of the fund minus fees one year after the experiment is finished (i.e., August 2022). Treatment (2) is the same as (1) except that the choice is hypothetical. Participants are asked to choose as if they really invested their endowment and would receive the payoff in August 2022 minus fees. Treatment (3) is the same as (1), but with the safe option to choose a bank account, where participants receive 500 Euros in any case in August 2022. Treatment (4) is the same as (3), but hypothetical. The investment universe consists of 16 bond funds that are actually tradable on the capital market. Participants learn that all provided funds invest in similar financial products, accumulate earnings, have similar risk-return profiles, and are traded in Euros. In an unlabelled design, they see only information on the fees, the strength of sustainability, returns over the past two years, and the share of issuers of bonds from the European Union, which correspond to the actual values for the 16 funds.
It was ensured that the correlations between the fund attributes across all alternatives are approximately zero. We use mixed logit models with correlated random coefficients to analyze preferences for the strength of sustainability. Treatment effects are measured by interaction terms between the strength of sustainability and treatment dummies, estimated with fixed parameters. We consistently find that preferences for sustainability are not significantly stronger in the hypothetical than in the incentivized treatment, both in France and Germany, and independent of the provision of a safe option. In contrast to what the literature predicts, preferences for sustainability are even stronger in the incentivized treatment without a safe option in Germany. As participants in the incentivized treatments spend more time on the experiment, read the sustainability attribute description more often, and are more certain about their choices than in the hypothetical treatments, they might think more about the real-world consequences of their choice when it can entail an immediate real-world impact. These findings are robust to alternative explanations of hypothetical bias such as strategic motives, choice certainty, and perceived winning probability. Preferences for sustainable investments in the hypothetical treatments are not significantly stronger for participants with stronger social desirability motives compared to the incentivized treatments. However, in all treatments, preferences are similarly and slightly stronger for participants with stronger social desirability motives. Lastly, we find that the determinants of sustainable investments are qualitatively and quantitatively similar in the incentivized and hypothetical treatments. These determinants are also similar to those identified in past studies. Our paper makes four main contributions.
First, we contribute to the literature on differences between incentivized and hypothetical choices and consistently find that preferences for sustainable investments are, in contrast to literature-based expectations, not significantly stronger in the hypothetical treatments than in the incentivized treatments. Second, we provide, to the best of our knowledge, a novel application of choice modelling by using tradable real-world investment products as the basis for our experimental design, and combine it with a validated incentive mechanism from behavioural economics (e.g., Charness et al., 2016, Journal of Economic Behavior & Organization, 131, 141-150). Third, we provide evidence for the validity of past DCE in the fields of experimental and sustainable finance, show that the determinants of sustainable investments are the same in incentivized and hypothetical settings, and provide guidance for the design of future experiments in those fields. Fourth, using representative samples from two different countries enhances the external validity of the results. Taken together, these results improve our understanding of individual investor behavior, which is crucial from a policy perspective given that individual investors play an important role in financing the transition to a low-carbon economy (European Commission, 2021, https://ec.europa.eu/commission/presscorner/detail/en/ip_21_3405). |
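The treatment-effect specification described above, an interaction between the sustainability attribute and a treatment dummy with a fixed parameter, can be illustrated with a simplified conditional logit on simulated data. The paper uses mixed logit with correlated random coefficients; everything below, including the numbers, is an illustrative assumption:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N, J = 2000, 3                        # choice sets, alternatives per set

sustainability = rng.uniform(0, 1, (N, J))
fee = rng.uniform(0, 1, (N, J))
incentivized = rng.integers(0, 2, N)  # 1 = incentivized treatment

# Design: attribute effects plus a sustainability x treatment interaction
X = np.stack([sustainability, fee,
              sustainability * incentivized[:, None]], axis=2)
beta_true = np.array([1.0, -1.5, 0.5])
utility = X @ beta_true + rng.gumbel(size=(N, J))
chosen = utility.argmax(axis=1)

def neg_loglik(beta):
    v = X @ beta
    v = v - v.max(axis=1, keepdims=True)     # numerical stability
    p = np.exp(v)
    p = p / p.sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(N), chosen]).sum()

fit = minimize(neg_loglik, np.zeros(3), method="BFGS")
# A positive fit.x[2] indicates stronger sustainability preferences in the
# incentivized treatment, the direction the paper reports for Germany
```

The sign and significance of the interaction coefficient, rather than the main sustainability coefficient alone, is what identifies the hypothetical gap in this design.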
17:00 | Are These Responses Simple or Simplified? Recognising Low Commitment in a Survey on Daily Schedule Changes with Automated Vehicles PRESENTER: Baiba Pudāne ABSTRACT. Low commitment of respondents is an important concern for analysts who use survey data. When respondents are insufficiently motivated to consider questions carefully, they tend to rush through the survey and answer the questions in patterns that reduce their workload, but do not necessarily maximise the truthfulness of their responses. Several methods have been proposed to detect patterns that emerge when a survey is completed hastily. In attitude or opinion data, presented as a block of Likert-scale questions, it is common to check for ‘straightlining’ – i.e., selecting the same response for an entire block of rating questions. In stated-choice surveys, it is common to control for lexicographic, non-trading and inconsistent responses (Hess et al., 2010). A form of non-trading behaviour is status-quo bias – preserving current behaviour more insistently than in real-world settings. In activity-travel surveys, the analyst may pay attention to responses that do not contain any trips on a given day (Madre et al., 2007). In addition, response times can sometimes be directly used as indicators of respondents’ commitment. Recognising low commitment is more challenging in surveys that collect complex information, such as activity-travel diaries. First, while it can be expected that respondents would simplify their daily schedules to reduce workload, it is not obvious how to measure the simplicity of a schedule – e.g., by the number of activities in a day, by missing activities of specific types (e.g., no ‘getting ready’ between sleep and travel to work), or by the duration of the longest uninterrupted activity (e.g., 8 hours of work without a lunchbreak)?
Second, response times cannot be used as an indicator of commitment, since both genuinely simple and simplified behaviour would be registered in a short time. The third and most crucial challenge is to disentangle the reasons behind simple responses, and this issue may be even more pressing with complex data such as activity schedules. Indeed, a workday during the pandemic may appear as a simple schedule (e.g., with no trips) and yet be realistic. In this paper, we aim to disentangle the reasons behind simple responses in an activity-travel dataset. Our data come from an interactive stated activity-travel survey, which was designed to study the extent and diversity of expected daily schedule changes with automated vehicles. For details of the survey, see Pudāne et al. (2021; data available here: https://doi.org/10.4121/14125880.v1). The respondents were asked to design, in a graphical interface, a complete activity schedule of a recent workday and to redesign or adjust that schedule while imagining that all trips are made with a fully automated vehicle. We found that many of the reported present-day schedules were rather simple (the median number of activity fragments is seven) and that a large share of respondents (slightly more than half) did not indicate any changes in their schedules with automated vehicles. However, it is not clear whether these large segments reflect genuinely simple daily schedules and a lack of expected changes in them, or whether they display respondents’ minimum-effort approach to answering the survey. To distinguish, in other words, between simple behaviour and simplified reporting in our data, we adopt an approach akin to an instrumental variable: we seek an indicator that would reflect the level of respondents’ commitment but not be related to the simplicity of their true behaviour. Our proposed solution is to use the time that respondents spend on some of the non-central survey questions.
Specifically, we consider two survey parts: the introduction screen of the survey, which contained a medium-long passage of text, and the screen with an instruction video that shows how to complete the survey. We compare the recorded response times to the expected response times in these stages, which we compute using the average reading speed for the introduction screen and the duration of the video for the instruction screen. We interpret response times that are (considerably) shorter than expected as a lack of commitment to the survey. We proceed to use these response times, along with the indicators of response simplicity – the number of activity fragments and indicators for schedule changes – in a latent class cluster model (Magidson & Vermunt, 2004). Our results show that, indeed, there is a positive relationship among simple schedules, lack of schedule changes and short times spent on the aforementioned survey parts. The two largest clusters, which account for approx. 70% of the sample, contain respondents who spent approx. 38% and 75% of the theoretically expected time in the two survey parts, respectively. The largest of these clusters displays no schedule changes with automated vehicles, and the second cluster contains only minor changes in stationary activities. The remaining three clusters, with longer response times, represent various and well-interpretable schedule changes that necessarily involve on-board activities. This result lets us conclude that the simple responses in our data are indeed linked to simplifying efforts by the respondents. Although causality cannot be proven here, this finding casts new light on the substantive conclusion of the survey: although the majority of respondents did not report any schedule changes with automated vehicles, the majority of committed respondents reported varied and complex schedule changes.
Not accounting for the commitment levels would thus lead to an underestimation of the potential activity-travel impacts of automated vehicles.
References
Hess, S., Rose, J. M., & Polak, J. (2010). Non-trading, lexicographic and inconsistent behaviour in stated choice data. Transportation Research Part D: Transport and Environment, 15(7), 405-417.
Madre, J. L., Axhausen, K. W., & Brög, W. (2007). Immobility in travel diary surveys. Transportation, 34(1), 107-128.
Magidson, J., & Vermunt, J. K. (2004). Latent class models. In The Sage Handbook of Quantitative Methodology for the Social Sciences, 175-198.
Pudāne, B., van Cranenburgh, S., & Chorus, C. G. (2021). A day in the life with an automated vehicle: Empirical analysis of data from an interactive stated activity-travel survey. Journal of Choice Modelling, 39, 100286. |
17:00 | Using inferred valuation to disentangle cognitive biases in stated-preference discrete choice experiments PRESENTER: Ewa Zawojska ABSTRACT. When conducting stated-preference surveys to estimate monetary values of public goods, researchers face the challenging task of deriving valid value measures for decision-making processes. Survey-based value estimates are broadly used for benefit-cost analyses and natural resource damage assessments, among other applications. Stated-preference methods are typically the only valuation techniques capable of capturing non-use values of public goods. However, many threats to the validity of stated-preference valuation exist; the literature indicates a wide range of possible biases. We apply an inferred valuation approach to a stated-preference discrete choice experiment (DCE), coupled with varying the number of choice alternatives, to disentangle several sources of potential bias in the value estimates and to assess their magnitude. In particular, we differentiate between strategic response bias and social desirability bias in stated-preference value estimates. In the stated-preference literature, a single binary choice question is recognized as the most straightforward approach to obtaining valid, truthful responses. Despite the long-standing recommendation for its use, dating back to the National Oceanic and Atmospheric Administration report on contingent valuation (1993), many other formats continue to be applied in practice: while their incentives for truth-telling are weaker, they gain statistical efficiency in value estimation, as a single binary response reveals little information about a respondent’s preferences. Disparities in incentive structures across preference elicitation formats contribute to significant variation in the value estimates obtained through them, which undermines the perceived validity and robustness of stated-preference-based value measures. 
For example, a well-known result from the voting literature (related to Duverger’s law) suggests that, when answering a multiple-choice question, a respondent may benefit from selecting her second-best option, thus not disclosing her most-preferred one. She may be inclined to do so if she believes that her second-best has a better chance of winning than her first-best and there is a risk that another option, which she considers worse, could win. Such considerations leave room for strategic responses in DCEs involving more than two response options, which has been documented in some empirical stated-preference studies (e.g., Meginnis et al. 2021). Another source of preference misrepresentation in valuation surveys lies in social desirability bias. This arises when respondents want to “look good” and behave in a socially or morally acceptable manner. These considerations may encourage responding in ways that are viewed favorably by others, please the researcher, or avoid embarrassment. A large body of stated-preference literature reports that social desirability leads to overstatements of actual willingness to pay (e.g., Lopez-Becerra and Alcon 2021). In this study, we aim to understand the importance and magnitude of the two sources of bias—strategic responding and social desirability—as they occur in a DCE. To that end, our field stated-preference survey has the following two essential design characteristics. First, to capture social desirability bias, we employ the inferred valuation approach (Fisher 1993): respondents are asked to predict the responses of others in the survey, which turns attention away from the given respondent and thereby mitigates possible social desirability bias. 
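The strategic logic described above can be made concrete with a toy expected-payoff calculation; the utilities and pivotal beliefs below are hypothetical numbers chosen purely for illustration.

```python
# Toy illustration (hypothetical values) of strategic second-best choice:
# a respondent with true ranking A > B > C may pick her second-best B when
# B is far more likely than A to defeat the worst option C.

utilities = {"A": 3.0, "B": 2.0, "C": 0.0}   # A is the truly preferred option

# Assumed beliefs about the probability that the chosen option ends up winning.
win_prob = {"A": 0.05, "B": 0.60}

def expected_payoff(choice):
    # If the chosen option wins, its utility is realised; otherwise we assume
    # the remaining probability mass goes to the worst option C (utility 0).
    return (win_prob[choice] * utilities[choice]
            + (1 - win_prob[choice]) * utilities["C"])

# Under these beliefs the best response is B, not the sincere choice A.
best_response = max(["A", "B"], key=expected_payoff)
```

With these numbers the expected payoff of voting sincerely for A (0.15) falls well below that of the second-best B (1.2), which is exactly the incentive to misreport that a single binary question removes.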
In our study, each respondent participates in two series of choice tasks (in a randomized order): (1) twelve choice tasks asking the respondent to indicate her own most-preferred alternative and (2) six choice tasks asking her to guess the alternative most preferred by other respondents. Our second design feature aims to capture possible strategic responding and involves varying the number of choice options per task. Each respondent is randomly assigned to a treatment with two, three or four options, one of which is always the no-change (status quo) alternative. The number of choice alternatives is kept constant across all choice tasks displayed to a given respondent. In addition to these exogenous survey design characteristics, we include a range of follow-up questions to better understand respondents’ motivations in the DCE eliciting their own preferences. These questions help identify responses guided primarily by strategic and social desirability considerations, but also by other motivations, such as protesting and yea-saying. The evaluated good is a reduction in outdoor advertising in Warsaw, Poland. The policy options are described by percentage decreases in the shares of freestanding advertisements and advertisements on buildings in the city landscape, and by an obligatory fee charged to a household (e.g., a tax). The data were collected between December 2017 and February 2018 from a representative sample of 1,250 residents of Warsaw. Preliminary results of mixed logit models controlling for the perceived probability that others choose an alternative suggest that, on average, the more likely an alternative is perceived to be selected by others, the less likely a respondent is to choose it. This implies that alternatives expected to enjoy pronounced support among other respondents may receive less attention and be less likely to be selected when a respondent decides on her own most-preferred policy option. 
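The sign pattern just described can be sketched with a minimal specification in which perceived popularity enters an alternative's utility with a negative coefficient. The simple conditional logit below, and all coefficient values, are assumptions for illustration only; the study itself estimates mixed logit (and later hybrid choice) models.

```python
# Minimal sketch (assumed specification, not the authors' exact model):
# V_j = beta' x_j + gamma * Pr(others choose j), with gamma < 0 reproducing
# the reported finding that popular-looking alternatives are chosen less often.
import math

def choice_probs(attributes, perceived_popularity, beta, gamma=-1.0):
    """Conditional logit choice probabilities with a popularity covariate.

    attributes: list of attribute vectors, one per alternative
    perceived_popularity: perceived Pr(others choose j) for each alternative
    beta: taste coefficients; gamma: popularity coefficient (negative here)
    """
    v = [sum(b * x for b, x in zip(beta, xs)) + gamma * p
         for xs, p in zip(attributes, perceived_popularity)]
    denom = sum(math.exp(u) for u in v)
    return [math.exp(u) / denom for u in v]
```

For two otherwise identical alternatives, the one perceived as more popular receives the lower choice probability, matching the average effect reported above.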
When the effect is estimated separately for each of the three treatments varying in the number of alternatives, the negative effect of an alternative’s perceived popularity in society on the likelihood of choosing it remains statistically significant only in the four-alternative treatment. Nevertheless, the random-parameter estimates for all three treatments have significant standard deviations, pointing to substantial heterogeneity. These results are further investigated using hybrid choice models, which integrate the exogenously defined variations in the survey with the self-reported perceptions in order to identify and measure the magnitude of the considered biases. We believe this analysis sheds new light on the understanding of common biases in stated-preference studies and helps design surveys in ways that mitigate the biases and thus enhance the validity of stated-preference value estimates.
References
Fisher, R. J. (1993). Social desirability bias and the validity of indirect questioning. Journal of Consumer Research, 20(2), 303-315.
Lopez-Becerra, E. I., & Alcon, F. (2021). Social desirability bias in environmental economic valuation: An inferred valuation approach. Ecological Economics, 184, 106988.
Meginnis, K., Burton, M., Chan, R., & Rigby, D. (2021). Strategic bias in discrete choice experiments. Journal of Environmental Economics and Management, 102163. |