
09:15-09:30 Session 1

Opening session

Address from Scandinavian Simulation Society (SIMS)

  • Prof. Bernt Lie, SIMS President

Virtual SIMS EUROSIM 2021: organisations and program structure

  • Adj. prof. Esko Juuso, Conference Chair
09:30-10:30 Session 2

Keynote I: Martin Björnmalm, Digital Lead for the Process Industries division in North Europe at ABB, Sweden

How process automation is making the world more resource and energy efficient – future trends

ABSTRACT. The world is facing major challenges in coping with the supply of food and other necessities to all people, while ensuring a sustainable and environmentally friendly future. To improve industrial processes and meet the increasing efficiency, safety and quality demands, new digital technology and advanced automation are the most effective means. With relatively small investments, great effects can be achieved.

Today, data can be collected from the entire plant faster and more accurately, while at the same time being processed by various algorithms, advanced control systems or cloud services. This provides decision support to operators and enables predictive maintenance. It enables production planning based on customer orders, from raw materials through all production steps to distribution of the product, as well as model-based control and optimization of not only individual processes but entire factories, or even entire corporations via Collaborative Operation Centers staffed around the clock by the supplier's experts around the world.

10:45-12:15 Session 3A: Modelling 1
Development of a Surrogate-model Based Energy Efficiency Estimator for a Multi-step Chemical Process
PRESENTER: Markku Ohenoja

ABSTRACT. Energy efficiency is increasingly being considered a critical measure of process performance due to its importance both in production costs and in environmental footprint. In this work, an indirect energy efficiency estimator was developed for the Tennessee Eastman (TE) benchmark process for the first time. The TE model was first modified to provide the reference values of energy efficiency. A sophisticated model selection scheme was then applied to build the surrogate model. The results indicate reasonable model performance, with a mean absolute prediction error of around 1.7%. The results also highlight the limitations present in the training set, which are discussed in this work together with other practical implementation issues.
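The abstract does not specify the surrogate model family. As a minimal illustration of the workflow it describes (fit a surrogate on a calibration set, report mean absolute percentage error on held-out data), a linear least-squares surrogate on synthetic stand-in data might look as follows; all inputs, coefficients and data below are hypothetical, not taken from the paper:

```python
import numpy as np

def fit_surrogate(X, y):
    """Least-squares fit of a linear surrogate y ≈ [X, 1] @ coef."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    return A @ coef

# Synthetic stand-in for reference efficiency values from a process model
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 3))              # three process inputs
y = 50 + 10 * X[:, 0] - 5 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 0.3, 200)

coef = fit_surrogate(X[:150], y[:150])                # calibration set
pred = predict(coef, X[150:])                         # validation set
mape = float(np.mean(np.abs((y[150:] - pred) / y[150:])) * 100)
print(f"mean absolute prediction error: {mape:.2f}%")
```

In practice the "sophisticated model selection scheme" would compare several such candidate surrogates on the validation error rather than fit a single form.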

Modelling and Simulation of Detection Rates of Emergent Behaviours in System Integration Test Regimes

ABSTRACT. System level testing generally lacks coverage due to the cost of performing realistic tests on the “system as a whole”. This lack of test coverage gives rise to seemingly emergent behaviour at system level. The interactions between multiple sub-systems lead to “the whole being greater than the sum of its parts”, a famous saying dating back to the Greek philosopher Aristotle. Either we should test more extensively at system level, or we should test smarter. The company needs to validate its current test regime to see if the current way of testing detects the emergent behaviours in question. We seek to validate the company’s system integration test regime to see if it can detect a given set of emergent behaviours. This paper aims to find the probabilities of detecting specified types of emergent behaviour in the way the company performs system integration testing today and to compare that to alternative test regimes. A model is set up to find the probabilities of the emergent behaviour types in the different test regimes, and to simulate the corresponding detection rates and related uncertainties. The results show that the company could benefit from changing to an alternative test regime with a higher probability of detecting a given set of unwanted behaviours emerging through system integration testing.

Application of multivariate data analysis of Raman Spectroscopy spectra of 2-oxazolidinone
PRESENTER: Federico Mereu

ABSTRACT. Due to economic development and the subsequent increase in world population, the global demand for energy will continue to rise in the coming decades. The dependence on fossil fuels, the primary source of energy, emitting copious amounts of CO2, is the main cause of global warming. Even if large investments are underway to decarbonise world energy production, renewable electricity may not be suitable for certain applications, such as the cement, iron and steel, and chemical sectors. Carbon capture and storage (CCS), with its ability to avoid CO2 emissions at their source, represents a solution in the fight against climate change. Among all the alternatives, post-combustion capture using amine-based solvents is considered the most advanced technology [1]. 2-aminoethanol (MEA) is one of the main solvents due to its relatively low cost, commercial availability, good and fast absorption rate, and rich experience from industrial applications [2]. However, due to exposure to heat and products of the exhaust gas, MEA degrades to products that are unreactive towards CO2, such as 2-oxazolidinone (OZD), a heterocyclic five-membered ring organic compound. The formation of OZD starts with the reaction between MEA and CO2, which leads to the formation of a carbamate complex. Elimination of a water molecule from the carbamate complex during a ring closure reaction yields an OZD molecule. The formation of OZD is a problem because it is unstable and will react to give other waste products that must be purged from the system to prevent their build-up. For this purpose, it is essential to find a procedure for converting the molecule back to its precursor amine. This requires a preliminary identification and quantification step. Raman spectroscopy is a valuable technique for qualitative and quantitative analyses, since there is a relationship between the intensity of a Raman band, the chemical information and the concentration of the sample being analyzed [3].
Raman spectra are generally plotted as intensity against Raman shift (or wavenumber). The vibrations of the functional groups of a molecule appear in a Raman spectrum at characteristic Raman shifts, which are similar for all molecules containing the same functional group. Chemometric multivariate analysis is an advanced statistical method that can extract this wealth of information by building specific models for specific chemical species. Our approach in this paper started with the analysis of OZD samples at different concentrations using a Raman spectrometer. These samples were divided into two groups, i.e. calibration and validation sets. Principal component analysis (PCA) was then performed on these samples to check for any outliers. Afterwards, a model for OZD was built using partial least squares regression (PLS-R). In the full paper, a discussion on the suitability of using PCA and PLS-R for the analysis of OZD will be presented.

References

[1] S. Chi and G. T. Rochelle, “Oxidative degradation of monoethanolamine,” Industrial and Engineering Chemistry Research, vol. 41, no. 17, pp. 4178–4186, 2002, doi: 10.1021/ie010697c.

[2] A. Singh, Y. Sharma, Y. Wupardrasta, and K. Desai, “Selection of amine combination for CO2 capture in a packed bed scrubber,” Resource-Efficient Technologies, vol. 2, pp. S165–S170, Dec. 2016, doi: 10.1016/j.reffit.2016.11.014.

[3] P. Larkin, Infrared and Raman Spectroscopy. Elsevier, 2011.

Characterization of the Flow (breakup) Regimes in a Twin-Fluid Atomizer based on Nozzle Vibrations and Multivariate Analysis
PRESENTER: Raghav Sikka

ABSTRACT. Air-blast atomizers are widely used in a variety of applications such as the aerospace industry, internal combustion engines, spray drying, etc. An experimental setup including novel twin-fluid atomizers has been investigated with real-time monitoring of acoustic signal data. The approach is based on the fact that all flow processes emit energy output signals, which can be recorded and analyzed to extract reliable information. A new non-intrusive approach based on acoustic chemometrics, which includes vibration signal collection using glued-on accelerometers, was assessed for determining the different flow (breakup) regimes spanning a whole range of fluid (water and air) flow rates in this novel twin-fluid atomizer (one-analyte system). This study aims to determine the flow regimes based on the dimensionless number B, whose unique set of values corresponds to different flow (breakup) regimes. Principal component analysis (PCA) will be performed to visually classify the breakup regimes using PCA score plots. The preliminary results clearly showed the subsets for all the different flow regimes. Partial least squares regression (PLS-R) will be applied to the conditioned acoustic signals to obtain a prediction model using the reference (measured) data values. The predicted vs. reference plot shows a correlation within a certain permissible (%) limit for the root mean square error of prediction (RMSEP) value over a range of the dimensionless number B. This primary study will support the understanding of the complex physics behind the fluid flow regimes. A further detailed study is still necessary to assess the feasibility of acoustic studies for flow regime characterization for different types of atomizer design.

10:45-12:15 Session 3B: Computational Fluid Dynamic (CFD)
Composition of pyrolysis gas as basis for CPFD simulations of biomass gasification
PRESENTER: Ahmad Dawod

ABSTRACT. Pyrolysis of biomass is a green technology and has no adverse effects or emissions to the environment. Pyrolysis is the thermal decomposition of biomass occurring in the absence of oxygen and is a step in both the combustion and gasification processes. Pyrolysis is typically performed at temperatures ranging from 400 to 800°C. The process is endothermic, which means that a heat supply to the pyrolysis reactor is needed for the pyrolysis reactions to occur. The products from biomass pyrolysis include biochar, bio-oil and gases including methane, hydrogen, carbon monoxide, and carbon dioxide. The proportion of these products depends on the composition of the feedstock and the process parameters. In this project, the focus is on the product gas, also called synthesis gas. The aim of this study is to obtain a set of experimental pyrolysis data for different types of biomass. The data will further be used as input for simulations of biomass gasification. There are limited pyrolysis data on the composition of the volatiles available in the literature, and it is therefore crucial to perform these experiments and obtain valuable data for use in simulations.

3 Materials and methods

A lab-scale pyrolysis reactor is designed and constructed. A schematic of the reactor is presented in Figure 1. The biomass chamber is flushed with nitrogen during the pyrolysis experiments to prevent oxygen from entering the system. The reactor is placed in a furnace and heated up to the desired temperature. The pyrolysis gas is collected and analyzed using a gas chromatograph, and a set of gas compositions for grass and wood pellets at different temperatures is obtained. CPFD simulations are performed to study the composition of the synthesis gas obtained from gasification of grass and wood pellets. The model developed in Barracuda uses a three-dimensional multiphase particle-in-cell approach.
The reactions and reaction rates involved in the gasification process are defined in the chemistry module. The Lagrangian approach is used for the particle phase, and the Eulerian approach is used for the gas phase. The composition of the volatiles obtained from the pyrolysis experiments is used as input to Barracuda. Simulations of gasification of grass and wood pellets are performed, and the composition and the flow rate of the synthesis gas are monitored. The synthesis gas from gasification contains mainly CO and H2 and smaller amounts of CO2 and CH4. When doing computational particle fluid dynamic (CPFD) simulations of gasification, the gas composition from the pyrolysis step is used as input, and it is therefore crucial to have pyrolysis data from the current biomass feed to obtain gasification results that are comparable with experimental data. The pyrolysis data will further be used to develop a model to predict the composition of the pyrolysis gas as a function of the content of carbon, hydrogen, and oxygen in the feedstock. In addition to the experimental pyrolysis data obtained in this study, data from the literature [5] were used in the simulations of the gasification process. The results were compared and discussed. It was found that variations in the feedstock give variations in the pyrolysis gas, which in turn influence the synthesis gas quality. More experiments are needed to obtain a complete set of pyrolysis gas data for different types of biomass.

Fluidization of fine calciner raw meal particles by mixing with coarser inert particles – Experiments and CPFD simulations

ABSTRACT. Calcination (CaCO3 → CaO + CO2) is an important part of the pyroprocessing of raw meal in the production of cement clinker. Most cement plants apply an entrainment reactor to calcine the raw meal. However, a bubbling fluidized bed (BFB) may be an attractive alternative as a reactor.

The main advantage of using a BFB is the high heat transfer coefficient and the uniform distribution of temperature inside the calciner. These characteristics are particularly important in cases where the meal particles are to be indirectly heated by hot surfaces instead of heated by direct contact with a hot combustion gas.

The raw meal used in the cement industry typically ranges from 1-500 µm in size, with a significant weight fraction around 5-50 µm. Hence, the meal falls in the range of Geldart C particles. These very fine particles are difficult to fluidize due to their cohesive nature. One possible method to fluidize these particles is to mix them with Geldart B particles.

The goal of this paper is to answer the following key questions related to mixing of raw meal with coarser particles: 1. When does the mixture start to fluidize? 2. Does the mixture become a stable bubbling fluidized bed? 3. What is the fluidization characteristic of the resulting mixture?

Cold-flow lab-scale experiments are performed by mixing standard raw meal with sand (as Geldart B particles) at different ratios to determine the fluidization characteristics. Sand was chosen as it is an inert material and has a density quite similar to raw meal. A similar density will reduce the segregation effect of the resulting mixture while being fluidized.

Computational particle and fluid dynamics (CPFD) simulations are performed with the commercial software Barracuda®, version 20.0.0. The objective of the simulations is to understand the physics behind the fluidizing mixture. The results from simulations are compared to experimental results to get a better insight into the fluidization behavior. This insight will be helpful when designing a full-scale cement calciner based on the concept of a bubbling fluidized bed.
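As background to the first key question above ("When does the mixture start to fluidize?"), the classic Wen-Yu correlation estimates the minimum fluidization velocity from particle size and density. The sketch below uses textbook property values for air, sand and raw meal, not the paper's measurements, and ignores the mixture effects the paper actually studies:

```python
import math

def u_mf_wen_yu(d_p, rho_p, rho_g=1.2, mu=1.8e-5, g=9.81):
    """Minimum fluidization velocity [m/s] from the Wen-Yu correlation."""
    ar = rho_g * (rho_p - rho_g) * g * d_p ** 3 / mu ** 2   # Archimedes number
    re_mf = math.sqrt(33.7 ** 2 + 0.0408 * ar) - 33.7       # Reynolds number at U_mf
    return re_mf * mu / (rho_g * d_p)

# Illustrative values: 300 µm sand (Geldart B carrier) vs 20 µm raw meal (Geldart C)
u_sand = u_mf_wen_yu(300e-6, 2650.0)
u_meal = u_mf_wen_yu(20e-6, 2700.0)
print(f"sand (300 µm):    {u_sand:.4f} m/s")
print(f"raw meal (20 µm): {u_meal:.2e} m/s")
```

The orders-of-magnitude gap between the two velocities illustrates why the cohesive Geldart C meal cannot be fluidized alone and is instead mixed with the coarser carrier.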

CPFD simulations on a chlorination fluidized bed reactor for aluminum production: an optimization study
PRESENTER: Zahir Barahmand

ABSTRACT. Early CPFD simulation studies on designing a fluidized bed reactor for alumina chlorination showed that the model suffers from high particle outflow and dense-phase bed channeling. The present study aims to optimize the previous alumina chlorination fluidized bed reactor model through modified geometry, parameter modifications, and improved meshing. To optimize the performance of the reactor, a complex geometry with an extended top section was combined with a regular cylindrical reactor. In addition, the gas inlet pattern was changed from an ideal uniform distribution to a non-uniform one, the reactor’s inlet diameter was reduced, and the values for the particle sphericity and voidage were updated based on experimental observations. The results show that the new reactor with an extended cross-sectional area at the top has a significantly lower particle outflow, even with a higher inlet superficial gas velocity. The paper discusses the optimization steps and the resulting changes in reactor performance in detail.

Sensitivity and uncertainty analysis in a fluidized bed reactor modeling
PRESENTER: Zahir Barahmand

ABSTRACT. As in many real applications, in the world of fine powders and small particles there are, depending on the accuracy of the relevant method, uncertainties and vagueness in parameters such as particle size, sphericity, initial solid void fraction, envelope density, etc. In some cases there are different methods to measure a parameter; for particle size, for example, the measurement depends on the method (based on length, weight, or volume), and the measured values may differ significantly from each other. Therefore, in many cases there is no crisp or exactly known parameter, because of the inherently uncertain nature of fine powders. On the other hand, as is characteristic of dynamic systems, physical parameters such as temperature and pressure fluctuate but can be kept within an acceptable range, affecting the main design parameters such as fluid density and dynamic viscosity.

Most traditional tools and methods for simulating, modeling, and reasoning are crisp, deterministic, and precise, but the underlying values are estimated or changing (randomly or stochastically). Several approaches can describe this phenomenon, and when it comes to uncertainties, mathematical tools are probably the best solutions. With the fuzzy set theory method, linguistic variables or ranges can be converted to mathematical expressions and, instead of crisp values, applied to the equations. Uncertainty analysis becomes more important when the model is sensitive to a particular parameter. A preliminary sensitivity analysis on a fluidized bed application has shown that the operation is most sensitive to the solid void fraction and least sensitive to the fluid density. The theoretical uncertainty approach has been validated by CPFD simulation using Barracuda v20.1.0.

10:45-12:15 Session 3C: Energy
Formulation of Stochastic MPC to Balance Intermittent Solar Power with Hydro Power in Microgrid

ABSTRACT.

1 Background

In a microgrid connected to both intermittent and dispatchable sources, intermittency caused by sources such as solar power plants can be balanced by dispatching the required amount of hydro power into the grid. Both intermittent generation and consumption are stochastic in nature, not known perfectly, and require future prediction. The stochastic generation and consumption will cause the grid frequency to drift away from the required range. To improve performance, the operation should be optimized over some horizon, with the added problem that intermittent power varies randomly into the future. Optimal management of dynamic systems over a future horizon with disturbances is often posed as a Model Predictive Control (MPC) problem. Earlier work includes stochastic analysis of deterministic MPC for generating the hydro turbine flow-control signal in a microgrid of solar power, wind power, and hydro power plants in Pandey et al. (2021). This paper extends the work of Pandey et al. (2021).

2 Aims

In support of extending the work of Pandey et al. (2021), the use of MPC over traditional controllers such as Proportional-Integral (PI) controllers is emphasized in Avramiotis-Falireas et al. (2013), Bhagdev et al. (2019), and Reigstad & Uhlen (2020). The stochastic analysis of the deterministic MPC of Pandey et al. (2021) is further extended with the formulation of a multi-objective optimization (MOO) scheme for the stochastic MPC. In reality, changes in hydro power production are constrained by water inertia and rotating mass, and by the need to avoid wear and tear in actuators and other equipment. It is therefore of interest to study the relationship between the intermittent variation of solar power, the response time of the MPC controller, the grid frequency constraint, and the rate of change of the hydro-turbine valve.

3 Materials and methods

The microgrid will be constructed in the equation-based open-source Modelica language. Modelica offers several open-source libraries. For our task, we will use OpenHPL for modeling hydro power plants, PhotoVoltaics for solar power plants, and OpenIPSL for the electric grid and end-users of electricity. The formulation of the stochastic MPC will be performed in the modern scientific language Julia.


I. Avramiotis-Falireas, A. Troupakis, F. Abbaspourtorbati and M. Zima. An MPC strategy for automatic generation control with consideration of deterministic power imbalances. 2013 IREP Symposium Bulk Power System Dynamics and Control-IX Optimization, Security and Control of the Emerging Power Grid. IEEE, 1-8, 2013.

D. Bhagdev, R. Mandal and K. Chatterjee. Study and application of MPC for AGC of two area interconnected thermal-hydro-wind system. 2019 Innovations in Power and Advanced Computing Technologies (i-PACT), Vol. 1. IEEE, 2019.

M. Pandey, D. Winkler, R. Sharma and B. Lie. Using MPC to Balance Intermittent Wind and Solar Power with Hydro Power in Microgrids. Energies, 14(4), 874, 2021.
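The deterministic core of an MPC formulation like the one described above can be illustrated with a toy single-bus balance: a frequency-deviation integrator driven by the power imbalance, with a rate-of-change penalty and limit on the hydro dispatch. Everything below (the gain, forecast, horizon, weights) is invented for illustration; the paper's actual formulation is stochastic, multi-objective, and written in Julia:

```python
import numpy as np
from scipy.optimize import minimize

N = 10                    # prediction horizon [steps]
a = 0.05                  # frequency gain [Hz per MW per step] (hypothetical)
load = 12.0               # constant consumer load [MW]
du_max = 0.8              # hydro valve rate-of-change limit [MW/step]
solar = 5 + 2 * np.sin(np.linspace(0, np.pi, N))   # solar forecast [MW]

def simulate(u):
    """Frequency deviation trajectory for a hydro dispatch sequence u."""
    df, traj = 0.0, []
    for k in range(N):
        df += a * (u[k] + solar[k] - load)   # integrator model of the grid
        traj.append(df)
    return np.array(traj)

def cost(u):
    df = simulate(u)
    du = np.diff(u)
    return np.sum(df ** 2) + 0.1 * np.sum(du ** 2)   # tracking + valve wear

# |u[k] - u[k-1]| <= du_max, written as two smooth inequalities per step
cons = []
for k in range(1, N):
    cons.append({"type": "ineq", "fun": lambda u, k=k: du_max - (u[k] - u[k - 1])})
    cons.append({"type": "ineq", "fun": lambda u, k=k: du_max + (u[k] - u[k - 1])})

res = minimize(cost, x0=np.full(N, load - solar.mean()), constraints=cons)
max_dev = float(np.max(np.abs(simulate(res.x))))
print(f"max |Δf| over horizon: {max_dev:.4f} Hz")
```

In a receding-horizon implementation only the first element of the optimized sequence would be applied before re-solving with updated forecasts; the stochastic extension replaces the single solar forecast with an ensemble of scenarios.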

Droop Control of Hydro Power System in OpenHPL

ABSTRACT.

1 Background

In a hydro-electric power generation system, the generated electrical power is supplied to a consumer load through an electric grid. One of the prime requirements for the electric grid is to keep the grid frequency within a required range, normally around 5% above or below 50 Hz (or 60 Hz, depending on the system frequency). When there is a difference between the power generation and the consumer load, the grid frequency fluctuates. To restore the grid frequency to the required range, the volumetric flow of water through the hydro-turbine is controlled in such a way that generation and load are balanced. For a standalone generation system (i.e., the consumer load supplied through a single generator), a speed-governing mechanism (based on a Proportional-Integral (PI) controller) is employed to control the volumetric flow of water through the turbine. However, in the parallel operation of a multi-hydro-generator system, the PI controllers of the generators' speed-governing mechanisms fight each other to restore the grid frequency. More specifically, the generators with fast speed-governing mechanisms will supply more power into the grid, and vice versa. After numerous efforts by each of the controllers in the system, the grid frequency will be balanced. However, such a control mechanism in parallel operation is unreliable. In the first case, if the system restores the grid frequency, the power distribution among the generators is random (a generator with a high rating may supply less power, and a generator with a low rating may supply more power). This will also operate the machines at inefficient conditions, since a rotating machine increases its efficiency at higher loadings. In the latter case, the grid frequency may not be restored, and the entire system may collapse into a blackout in the region of the power supply (Schavemaker, 2017).
Thus, in a parallel-operated multi-generator hydro-electric system supplying a common load, an idea for reliable restoration of the grid frequency would be a central controller that can distribute the required change in generation and load power among the hydro-electric systems based on the ratings of the machines. Such a central controller is called a droop controller; it generates the volumetric flow control signal for each of the hydro-turbines based on their ratings, making the relation between the generation-load change and the grid frequency one-to-one. The droop control mechanism is employed in parallel-operated multi-generator systems for (i) distributing the generation-load imbalance proportionally among the hydro-generators based on their ratings, with the added advantage of efficient operation of the rotating machines, and (ii) smooth restoration of the grid frequency even after a larger disturbance in the generation-load imbalance.
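In steady state, the proportional sharing described above reduces to a simple rule: each unit picks up the imbalance in proportion to its rating divided by its per-unit droop setting, so with equal droop settings the sharing is proportional to the ratings alone. A minimal sketch with hypothetical ratings and a conventional 5% droop:

```python
def droop_share(delta_p, ratings, droops):
    """Distribute a generation-load imbalance delta_p [MW] among units.

    Each unit picks up load in proportion to rating/droop, so with equal
    per-unit droop settings the sharing is proportional to the ratings.
    """
    k = [s / r for s, r in zip(ratings, droops)]
    total = sum(k)
    return [delta_p * ki / total for ki in k]

# Three hydro units with 5% per-unit droop: a 30 MW load step is shared
# in proportion to the 100/50/50 MW ratings.
shares = droop_share(30.0, ratings=[100, 50, 50], droops=[0.05, 0.05, 0.05])
print(shares)   # [15.0, 7.5, 7.5]
```

The dynamic implementation in OpenHPL additionally has to translate each power share into a volumetric flow setpoint for the corresponding turbine governor.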

2 Aims

One of the prime aims of the paper is to implement the droop control mechanism of Sharma et al. (2012) as a feature extension of OpenHPL, an open-source hydropower library developed at the University of South-Eastern Norway.

3 Materials and methods

The parallel operation of hydro power will be constructed in the equation-based open-source Modelica language. Modelica offers an open-source hydro power library, OpenHPL, for modeling hydro power systems. The droop control mechanism will be formulated in the modern scientific language Julia.

Energy Reduction in Lithium-Ion Battery Manufacturing using Heat Pumps and Heat Exchanger Networks

ABSTRACT. Global electric mobility is rapidly expanding. Hence, the demand for lithium-ion batteries is also increasing fast. Therefore, understanding energy minimization options in this rapidly growing industry is crucial for reducing the environmental impact as well as developing low-cost and sustainable batteries. The biggest contribution to greenhouse gas emissions is the cell manufacturing process. The most energy-intensive steps of cell manufacturing are electrode drying and dry room conditioning. Therefore, we developed process models for these two systems that can be used for evaluating various energy optimization techniques such as heat pumps and heat exchanger networks. Further, various process options can be tested and benchmarked in terms of their overall energy consumption using these models. The results show that the power requirement may be reduced through all the options assessed and available energy efficiency measures may substantially lower the energy footprint of cell production with strong relevance for subsequent greenhouse gas footprints.

Develop a Cyber Physical Security Platform for Supporting Security Countermeasure for Digital Energy System
PRESENTER: Mike Mekkanen

ABSTRACT. This paper develops a cyber-physical system (CPS) security platform for supporting security countermeasures for digital energy systems, based on a real-time simulator. The CPS platform provides functions through which trainers and trainees can operate and test their scenarios with a state-of-the-art integrated solution running on a real-time simulator. The integrated solution includes energy system simulation software and communication system simulation/emulation software. The platform provides practical hands-on experience for participants, who are able to test, monitor and predict the behavior of both systems at the same time. The platform also helps achieve training objectives that meet the skill requirements of the future generation in both smart energy system evaluation and cyber-physical security. In particular, we present the CPS platform’s architecture and its functionalities. The developed CPS platform has also been validated and tested with different simulated threat cases and systems.

13:00-14:30 Session 4A: Data Analysis and Modelling 2
Dynamic modelling and simulation of raw meal calcination for isothermal boundary conditions

ABSTRACT. Raw meal is a finely ground mixture of raw materials used as the main feed in cement kilns. It usually contains 75-80 wt% calcium carbonate. One of the key reactions occurring in the kiln system is the calcination, in which the calcium carbonate is decomposed into calcium oxide and carbon dioxide (CaCO3(s) → CaO(s) + CO2). This reaction is endothermic and needs a temperature of around 900 °C to occur. After completion of calcination, further heating and partly melting of the solid material will take place. During the calcination, the other components in the meal (SiO2, Al2O3, Fe2O3, …) can be considered inert. The size of the raw meal particles typically ranges from 1 to 500 µm, and the median is typically 20-30 µm. The time it takes for a given particle to calcine will largely depend on its size. When designing new reactor types for the calcination process, it is necessary to understand the dynamic behavior of the calcining particles in order to size the reactor.

The purpose of this paper is to 1) develop a dynamic model of the heating and calcination of raw meal particles of different sizes, and 2) combine this model with experimentally determined data on the actual raw meal particle size distribution and chemical composition, in order to 3) determine the time required to obtain a certain calcination degree for an industrial raw meal exposed to a surface with a specified temperature.

The model is developed based on a mass and energy balance of a raw meal particle. The heat transfer from the isothermal wall to the particle mainly occurs through radiation. The supplied thermal energy is used for heating and calcining the particles. The calcination rate depends on the particle temperature and the partial pressure of CO2 in the gas surrounding the particles. The mass of the particle drops as the reaction proceeds. If the gas is a mixture of different components, for example, a combustion gas, then the CO2 partial pressure will increase as the calcination proceeds, and this reduces the driving force of the calcination process. However, a reduction in calcination rate due to high CO2 concentration can be counteracted by increased temperature as this will influence the kinetics of the reaction. Both the kinetics and the thermodynamic constraints can be expressed in terms of Arrhenius-like exponential expressions, giving a non-linear dynamic mathematical model of the calcination process.

The model equations were discretized using Euler’s forward method and solved numerically. Different particle sizes and different CaCO3 contents will give different results. Analyses of raw meal samples from a local cement plant were used as inputs to the model to determine the overall calcination degree of the aggregated raw meal as a function of time, considering both the particle size distribution and the chemical composition of different size classes.
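A minimal illustration of such a discretization is Euler's forward method applied to a first-order Arrhenius rate, dx/dt = k(T)(1 − x), which reproduces the expected temperature dependence of the calcination degree. The kinetic constants below are round placeholders, not the paper's fitted values, and the sketch ignores particle heating and the CO2 partial-pressure effect described above:

```python
import math

A = 1.0e8     # pre-exponential factor [1/s] (placeholder)
E = 1.8e5     # activation energy [J/mol] (placeholder)
R = 8.314     # gas constant [J/(mol K)]

def calcination_degree(T, t_end, dt=0.01):
    """Euler-forward integration of dx/dt = k(T) (1 - x), x(0) = 0."""
    k = A * math.exp(-E / (R * T))
    x, t = 0.0, 0.0
    while t < t_end:
        x += dt * k * (1.0 - x)   # Euler forward step
        t += dt
    return min(x, 1.0)

degrees = {T_c: calcination_degree(T_c + 273.15, t_end=2.0) for T_c in (850, 900, 950)}
for T_c, x in degrees.items():
    print(f"{T_c} °C: calcination degree {x:.3f}")
```

The full model sums such per-particle trajectories, weighted by the measured size distribution and per-size-class composition, to obtain the aggregate calcination degree over time.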

The results can be used as a basis for determining the required size of a potential new calciner reactor type. Developing a new reactor type is of particular interest in electrification of the calcination process, a concept which is currently being investigated aiming at a significant reduction in CO2 emissions from the cement plant.

Hygrothermal Simulation of Prefabricated Cold-formed Wall Panels
PRESENTER: Ayman Hamdallah

ABSTRACT. Steel structures are light and durable, but in the building envelope they can easily transfer heat from the building interior to the outside and hinder the energy performance of the building. In this study, we simulate the thermal performance of cold-formed steel panels that can be used as prefabricated units in building envelopes. More precisely, the thermal performance of hollow cold-formed steel elements filled with thermal insulation is studied with varying panel geometry. The focus is on stainless steel, but mild steel is also briefly considered. Attention is paid especially to the thermal bridges associated with the relatively high thermal conductivity of steel materials. The influence of the width, depth and height of the panel on thermal bridging is assessed, and panel geometries with reasonable thermal performance are found. By also considering moisture transport, the overall hygrothermal performance of the panels is then evaluated.

Resource simulator

ABSTRACT. There is a limited amount of resources available on Earth. Some of these are fossil, others renewable. Most resources utilized can be reused or recycled to a greater or smaller extent. The resource situation varies from country to country but can principally be grouped according to the UN's World Bank statistics (2020), where data for each country (213 in total) are collected and also grouped into “low income countries”, “middle income countries” and “high income countries”. We also look at regions of the world. These data are used in this paper for the resource simulation with respect to energy and environmental emissions. Data from many other sources are added as a complement, and more detailed specifications for specific factors are described in detail. These are then used for extrapolation to cover use today and possible scenarios for the future for countries in the three groups. In the simulation the structure is kept fixed, while the use of resources is varied.

Variable selection and grouping for large-scale data-driven modelling

ABSTRACT. Data-driven modelling always requires variable selection and grouping. In small systems, feasible groups can be identified with expert knowledge. Artificial intelligence and machine learning are moving towards large-scale systems, where the number of possible variable combinations is very large. The groups need to be understood, which is difficult when the number of variables is high. Variable grouping means finding feasible groups of variables for modelling. Systems can be divided into subsystems, but even then the number of available variables is often impractically high for data-based methods. This research aims to develop a reliable method for selecting and grouping variables in large-scale data-driven modelling. Variable selection and grouping methodologies can be classified into four categories: knowledge-based grouping, grouping with data analysis, decomposition, and model-based grouping and selection. The data analysis part consists of correlation analysis and handling of high-dimensional data with principal components. Domain expertise is used for removing useless combinations of variables, e.g. indirect measurements and control rules. Inappropriate groups are defined as non-groups, i.e. variables which should not be part of any acceptable group. Data-based methods are divided into three classes: data analysis, decomposition and modelling. Finally, models are used to complete the variable selection and grouping. Nonlinear scaling can improve many methodologies which are originally linear. The variable selection and grouping approach is tested in several applications. The case studies are based on integrated approaches which combine all the techniques presented in the classification of methodologies. Variables need to be selected and grouped efficiently to keep the models understandable. This approach reduces the risk of overfitting.
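The correlation-analysis step mentioned in the abstract can be sketched as a simple greedy grouping of variables by pairwise correlation; this is an illustrative thresholding scheme under assumed data, not the paper's full methodology.

```python
import numpy as np

# Sketch: grouping candidate variables by absolute pairwise correlation
# (illustrative greedy scheme, not the paper's methodology).
def correlation_groups(data, threshold=0.8):
    """data: (n_samples, n_vars). Greedily group variables whose absolute
    correlation with the group's seed variable exceeds the threshold."""
    corr = np.abs(np.corrcoef(data, rowvar=False))
    unassigned = set(range(corr.shape[0]))
    groups = []
    while unassigned:
        seed = min(unassigned)
        group = [v for v in sorted(unassigned) if corr[seed, v] >= threshold]
        for v in group:
            unassigned.discard(v)
        groups.append(group)
    return groups

# Two strongly correlated variables plus one independent variable
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
data = np.hstack([x, x + 0.05 * rng.normal(size=(200, 1)),
                  rng.normal(size=(200, 1))])
groups = correlation_groups(data)
```

In practice such data-driven groups would still be screened with domain expertise, as the abstract emphasizes, to discard indirect measurements and control-rule artifacts.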

13:00-14:30 Session 4B: Computational Particle Fluid Dynamics (CPFD)
Design of a medium-scale circulating fluidized bed reactor for chlorination of processed aluminum oxide
PRESENTER: Zahir Barahmand

ABSTRACT. Fluidization is a well-established and widely used technology in the process industry. The production stability and the large effective contact area between the active substances, resulting in high mass and heat transfer between the phases, are some of the main advantages of fluidization. However, this technology has not yet been adequately developed for alumina chlorination as a standard solution on an industrial scale. Although a circulating fluidized bed reactor design is complex by its nature, it is advantageous to simulate the process compared to running experiments at lab scale. Computational Particle Fluid Dynamics (CPFD) simulation lays a foundation for studying the given reaction process.

The reaction between the solid alumina particles and the gaseous chlorine and carbon monoxide yields aluminum chloride and carbon dioxide as products. The present study aims to design a circulating fluidized bed reactor by simulating the process in Barracuda®. Simulations with a simple geometry contributed to a better understanding of the reaction process. The simulation results are then compared with values from both a theoretical approach and parallel simulations in Aspen Plus®. The comparison revealed that the results from Barracuda® Virtual Reactor (VR), such as product flow rate, are within a reasonable range of what could be expected in a full-scale plant. These preliminary results imply that CPFD is a promising approach for future research on the design, optimization, and implementation of the industrial alumina chlorination process. The final design comprises a fluidized bed reactor with a 2.4 m internal diameter and a height of 8 m, with four parallel internal cyclones on top.

CPFD modeling to study the hydrodynamics of an industrial fluidized bed reactor for alumina chlorination
PRESENTER: Zahir Barahmand

ABSTRACT. Aluminum is now the world's second most used metal. Its unique combination of appealing properties allows for significant energy savings in many applications, such as vehicles and buildings. Although these energy savings lead to lower CO2 emissions, the production of aluminum still has a dramatic environmental impact. The process used almost exclusively in the aluminum industry is the Hall-Héroult process, which has a considerable carbon footprint and high energy consumption. The best alternative, Alcoa's process (which is not yet industrialized), is based on the chlorination of processed aluminum oxide and reduces the negative impacts of the traditional method. Continuing Alcoa's effort, this study investigates the possibility of a new low-carbon aluminum production process by designing an industrial fluidized bed reactor equipped with an external gas-solid separation unit (external due to high corrosion inside the reactor) to handle a total of 0.6 kg/s of solid reactants and produce aluminum chloride as the main product. The research focuses on determining the best bed height based on the available reaction rates, and the best reactor dimensions to reduce particle outflow under isothermal conditions (700°C). Autodesk Inventor® and Barracuda® are used for 3D modeling of the reactor and CPFD simulation of the multiphase (gas-solid) reaction, respectively. Although the results show that the bed aspect ratio (H/D, where H is the bed height and D the bed diameter) does not affect the reaction, it strongly affects the reactor's hydrodynamics and particle outflow. In the final design, the best hydrodynamics is obtained with a bed aspect ratio of 2.
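A first-order check on such bed hydrodynamics is the minimum fluidization velocity, here estimated with the standard Wen & Yu correlation; the particle and gas properties below are assumed illustrative values for alumina in hot gas, not the paper's inputs.

```python
import math

# Sketch: minimum fluidization velocity from the Wen & Yu correlation,
# Re_mf = sqrt(33.7^2 + 0.0408*Ar) - 33.7 (illustrative properties).
def u_mf_wen_yu(d_p, rho_p, rho_g, mu, g=9.81):
    """d_p: particle diameter [m]; rho_p, rho_g: solid/gas density [kg/m^3];
    mu: gas viscosity [Pa s]. Returns u_mf [m/s]."""
    ar = rho_g * (rho_p - rho_g) * g * d_p**3 / mu**2   # Archimedes number
    re_mf = math.sqrt(33.7**2 + 0.0408 * ar) - 33.7      # particle Reynolds no.
    return re_mf * mu / (rho_g * d_p)

# ~100 micron alumina particles in hot gas (assumed properties at ~700 C)
u_mf = u_mf_wen_yu(d_p=100e-6, rho_p=3950.0, rho_g=0.9, mu=4e-5)
```

For fine alumina powder this yields a minimum fluidization velocity of only a few mm/s, so the operating gas velocity, rather than u_mf, governs particle outflow.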

Study of the thermal performance of an industrial alumina chlorination reactor based on CPFD simulation
PRESENTER: Zahir Barahmand

ABSTRACT. As a part of the new sustainable aluminum production process under study, alumina chlorination plays a crucial role. The relevant process is an exothermic reaction in a fluidized bed reactor. The solid alumina reacts with chlorine and carbon monoxide and produces aluminum chloride and carbon dioxide as the main products; the carbon dioxide can then be separated efficiently. The optimum temperature for alumina chlorination is 700°C. The reactor temperature should be kept in the range of 650-850°C (preferably 700°C): below this range the reaction rate drops, and above it the alumina (usually γ-alumina) transforms into other alumina phases, which is undesirable for this purpose. Extending the authors' other simulation studies on alumina chlorination under isothermal conditions, the CPFD method is utilized to study the thermal behavior and simulate the overall heat transfer of the system, including convective fluid-to-wall, fluid-to-particle, and radiation heat transfer. Radial and axial heat transfer coefficient profiles at different levels show that almost all the heat must be transferred in the lower half of the reactor, making the design more challenging. At steady state, the fluid temperature inside the reactor was recorded in the range of 700-780°C.
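At these temperatures both convection and radiation contribute to the wall heat transfer; a minimal sketch of a combined coefficient, with an assumed convective coefficient and emissivity (not the paper's CPFD results), is:

```python
# Sketch: combined convective + linearized radiative wall heat transfer
# coefficient at reactor conditions (assumed values, not CPFD output).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W/m^2 K^4]

def h_radiative(emissivity, t_hot, t_cold):
    """Linearized radiation coefficient: eps*sigma*(T1^2+T2^2)*(T1+T2),
    temperatures in kelvin."""
    return emissivity * SIGMA * (t_hot**2 + t_cold**2) * (t_hot + t_cold)

def h_total(h_conv, emissivity, t_hot, t_cold):
    """Total wall coefficient as the sum of convective and radiative parts."""
    return h_conv + h_radiative(emissivity, t_hot, t_cold)

# Bed at ~973 K (700 C), wall at ~923 K, assumed emissivity 0.8
h = h_total(h_conv=250.0, emissivity=0.8, t_hot=973.0, t_cold=923.0)
```

At ~700°C the radiative term adds on the order of half again to a typical bed-to-wall convective coefficient, so it cannot be neglected in the overall heat transfer.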

The effect of impurities on γ-Alumina chlorination in a fluidized bed reactor: A CPFD study
PRESENTER: Zahir Barahmand

ABSTRACT. Alumina is one of the most widely used pure chemicals on the market today, with annual production totaling millions of tons of highly pure alumina. A large portion of this output is used to make aluminum, but a growing amount is used in ceramics, refractories, catalysts, and various products. In nature and under different thermal conditions, alumina is found in different phases, which can be transformed into each other at different temperatures. Among these, γ-alumina is used in the chlorination process in the aluminum production industry because of its higher reaction rates. α-Alumina has outstanding mechanical properties and superb thermal properties at high temperatures; polycrystalline α-alumina is used as a structural ceramic. As a result, this phase has much lower reaction rates in the chlorination process. Previously, the chlorination of pure γ-alumina has been considered in CPFD simulations. Extending previous research, the present study investigates the effect of a seven percent α-alumina impurity on the overall chlorination conversion, bed hydrodynamics, and composition of the reactor outflow using Barracuda® v20.1.0. The results show that, compared with the pure γ-alumina simulation, the impurity has no considerable effect on the chlorine concentration in the outlet. On the other hand, the mass balance of the bed shows an unfavorable accumulation of α-alumina in the fluidized bed reactor.

13:00-14:30 Session 4C: Oil production
Model based control and analysis of gas lifted oil field for optimal operation
PRESENTER: Nima Janatian

ABSTRACT. Distribution and control of the available lift gas is crucial for maximizing total oil production in a cluster of gas lifted wells in an oil field. This paper describes an improved dynamical model for a continuous gas lifted oil field with two oil wells. It is assumed that the fluid coming out of the reservoir is not pure oil but a mixture of oil, water, and gas. A global sensitivity analysis using the variance-based method is performed to identify the parameters that are simultaneously highly sensitive and uncertain. The improved model is further used to design a model-based predictive controller to optimally distribute a limited supply of lift gas shared among the oil wells. Several simulation cases are performed to study the performance of the optimal controller under varying operational scenarios. An increase in the total oil production from the field was observed when the deterministic nonlinear model predictive control was applied to the nominal model of the gas lifted oil field, and all the constraints were fully satisfied when perfect prediction was assumed. To study the effect of parametric uncertainty, the deterministic MPC based on the nominal model was applied to the plant model containing the uncertain parameters. In this case, some of the constraints were not satisfied, leading to an unachievable and unrealistic distribution of the lift gas supply to the two oil wells.
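The core allocation problem, splitting a fixed lift-gas supply between two wells to maximize total oil rate, can be sketched with toy concave gas-lift performance curves; the curve shapes and parameters below are assumptions for illustration, not the paper's well models.

```python
import numpy as np

# Sketch: splitting a limited lift-gas supply between two wells, assuming
# toy concave gas-lift performance curves (not the paper's dynamic model).
def oil_rate(q_gas, a, b):
    """Toy performance curve with diminishing returns in injected gas."""
    return a * np.log1p(b * q_gas)

def allocate(total_gas, params, n=1001):
    """Grid search over the split of total_gas between two wells."""
    q1 = np.linspace(0.0, total_gas, n)
    total = oil_rate(q1, *params[0]) + oil_rate(total_gas - q1, *params[1])
    best = int(np.argmax(total))
    return q1[best], total_gas - q1[best], total[best]

# Well 1 assumed more productive (larger 'a') than well 2
q1, q2, oil = allocate(10.0, [(5.0, 0.5), (3.0, 0.5)])
```

The optimum equalizes the marginal oil gain per unit of gas across the wells, so the more productive well receives the larger share; an MPC solves a constrained, dynamic version of this trade-off at every sampling instant.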

Sensitivity analysis of oil production models to reservoir rock and fluid properties
PRESENTER: Bikash Sharma

ABSTRACT. Improving the efficiency and optimization of oil recovery, with a special focus on digitalization, is in the spotlight. Achieving optimized and successful automatic production depends highly on the ability to monitor and control well performance. This requires a suitable dynamic model of the oil field and production equipment over the production lifetime. One of the main barriers to developing such dynamic models is that it is generally very difficult to observe and understand the dynamics of fluid flow in a porous medium, describe the physical processes, and measure all the parameters that influence the multiphase flow behavior inside a reservoir. Consequently, predicting the reservoir production over time and its response to different drive and displacement mechanisms carries a large degree of uncertainty. To develop long-term oil production models under uncertainty, it is crucial to have a clear understanding of the sensitivity of such models to the input parameters. This helps to identify the parameters with the greatest impact on model accuracy and limits the time spent on less important data. The main goal of this paper is to perform a sensitivity analysis investigating the effect of uncertainty in each reservoir parameter on the outputs of oil production models. Two simulation models for oil production have been developed using the OLGA-ROCX simulator. By perturbing the reservoir parameters, the sensitivity of the model outputs has been measured and analyzed. According to the simulation results after 200 days, the parameters affecting accumulated oil production the most were the oil density, with sensitivity coefficients of -1.667 and 1.610, and the relative permeability (-0.844 and 0.969). Therefore, decreasing the degree of uncertainty in these input parameters can greatly increase the accuracy of the outputs of oil production models.
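The normalized sensitivity coefficients reported above can be computed by perturbing one parameter at a time; here is a minimal sketch on a toy production model (the model and parameter values are assumptions, not the OLGA-ROCX setup).

```python
# Sketch: normalized (relative) sensitivity coefficient by finite
# perturbation of one parameter (toy model, assumed values).
def sensitivity_coefficient(model, params, name, rel_step=0.01):
    """S = (dY/Y) / (dX/X) for a rel_step relative perturbation of params[name]."""
    y0 = model(params)
    perturbed = dict(params, **{name: params[name] * (1.0 + rel_step)})
    y1 = model(perturbed)
    return ((y1 - y0) / y0) / rel_step

# Toy model: production scales with permeability, inversely with viscosity
def production(p):
    return p["perm"] / p["visc"]

base = {"perm": 100.0, "visc": 2.0}
s_perm = sensitivity_coefficient(production, base, "perm")   # about +1
s_visc = sensitivity_coefficient(production, base, "visc")   # about -1
```

A coefficient near +1 or -1 means the output responds roughly one-for-one (in relative terms) to that parameter, which is how the reported values for oil density and relative permeability should be read.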

Uncertainty analysis of a simplified 2D control-relevant oil reservoir model
PRESENTER: Ashish Bhattarai

ABSTRACT. In this paper, a simplified 2D control-relevant model for a slightly slanting wedge-shaped black oil reservoir is made more realistic by incorporating model uncertainty. The uncertainty in the model is computed via Monte Carlo simulation. Furthermore, based on this model with uncertainty, a Proportional + Integral (PI) controller is implemented to increase oil production while minimizing water production. The PI controller is used to control the valve opening of the Inlet Control Valves (ICVs) in the production well.
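A discrete PI controller for a valve opening constrained to [0, 1] can be sketched as follows; the gains and the first-order plant are illustrative assumptions, not the paper's reservoir model or tuning.

```python
# Sketch: discrete PI controller for a valve opening clamped to [0, 1]
# (illustrative gains and plant, not the paper's tuning).
class PIController:
    def __init__(self, kp, ki, dt, u_min=0.0, u_max=1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        if u > self.u_max or u < self.u_min:
            # Simple anti-windup: undo the integration on saturation
            self.integral -= error * self.dt
            u = min(max(u, self.u_min), self.u_max)
        return u

# Toy first-order plant: measured rate relaxes toward the valve opening
pi = PIController(kp=0.5, ki=0.2, dt=1.0)
y = 0.0
for _ in range(50):
    u = pi.step(0.7, y)
    y += 0.3 * (u - y)
```

The integral action removes the steady-state offset, while the clamp with anti-windup keeps the commanded opening physically realizable.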

Simulation of heavy oil production using smart wells

ABSTRACT. The application of long horizontal wells, especially in heavy oil reservoirs with a water drive, is associated with challenges including the early breakthrough of water into the well. To solve this challenge, smart horizontal wells completed with downhole flow control devices (FCDs) and zonal isolation are widely used today. Therefore, evaluating the functionality of different types of FCDs in reducing water cut is necessary to achieve a successful design of smart wells for heavy oil production. In this paper, heavy oil production from smart wells completed with the main types of FCDs is modeled and simulated through a case study. According to the obtained results, by using smart wells, more oil and less water can be produced from heavy oil reservoirs than with conventional wells. In comparison with inflow control devices (ICDs), autonomous inflow control devices (AICDs) and autonomous inflow control valves (AICVs) perform better in improving oil recovery and reducing water cut. It can also be concluded that among the main types of inflow control devices, AICVs have the best performance in achieving cost-effective heavy oil production.

14:45-16:00 Session 5

SIMS Annual Meeting

The Annual General Meeting with the General Assembly shall be held between 1 August and 20 October at a time and location decided by the Board. Normally, the meeting is organized at the Annual SIMS conference at the conference venue. The 2021 General Assembly is a virtual meeting.

Recent information will be given about the present activities and future plans of the oldest active simulation society in the world. Find out how to join this society of societies.

All participants of the conference are warmly welcome to join the meeting in Track A of the Zoom meeting.