SIMS EUROSIM 2021: FIRST SIMS EUROSIM CONFERENCE ON MODELLING AND SIMULATION
PROGRAM FOR WEDNESDAY, SEPTEMBER 22ND

09:15-10:15 Session 6

Keynote II: Jyri Lindholm, Head of Product Management, NAPCON, Neste Engineering Solutions Oy

 

09:15
How to lead the process industry to a safe and sustainable future?

ABSTRACT. Process safety is an integral part of sustainable business. Cooperation and communication between people throughout the shift are key to safety, and safety is only as strong as its weakest link. To improve safety, collaboration, and situational awareness, the entire operational shift practices together in the same training environment. Artificial intelligence can also be part of building operational safety, by utilizing high-fidelity simulators in machine learning training.

10:30-12:00 Session 7A: Machine learning
10:30
Consolidating Industrial Batch Process Data for Machine Learning
PRESENTER: Simon Mählkvist

ABSTRACT. The paradigm change of Industry 4.0 brings attention to data-driven modeling and the incentive to apply machine learning methods in the process industry. However, capitalizing on the large amount of available data is an arduous task. For batch processes, the dataset is in a three-way format (Batch × Sensor × Time). Depending on the process and the goal of the analysis, it might be necessary to aggregate batches. For this reason, a campaign unfolding structure is applied: campaigns are created by grouping the batches under new labels relevant to the analytical goal. These labels can be derived from periodical occurrences, such as the refurbishment of the refractory lining in the case study. To utilize the three-way batch format, the batches must be aligned. To address this, the feature-oriented approach Statistical Pattern Analysis (SPA) is applied. SPA derives statistics, e.g., mean, skewness, and kurtosis, from the time series, consequently aligning the batches. SPA and the campaign approach together create a dataset consisting of selected statistics instead of an irregular three-way array. Functional data analysis (FDA) is used to smooth the sensors in which functional behavior can be observed and to extract first- and second-order derivative information before creating features. Principal component analysis (PCA) is used to examine the final dataset. Furthermore, industrial processes are notoriously nonlinear, batch processes even more so. Therefore, kernel principal component analysis (KPCA) is also used to review the final dataset; KPCA can accommodate different underlying characteristics by modifying the kernel function used.
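
As a rough illustration of the SPA step described above, the sketch below collapses each (batch, sensor) time series into summary statistics, so batches of unequal length end up aligned in one feature table. The long-format column names ("batch", "sensor", "value") are assumptions for illustration, not the paper's actual data layout.

```python
# SPA-style feature extraction: one row per batch, one column per
# (sensor, statistic) pair; aligns batches regardless of their length.
import pandas as pd
from scipy.stats import skew, kurtosis

def spa_features(df: pd.DataFrame) -> pd.DataFrame:
    """Expects long-format columns: 'batch', 'sensor', 'value' (assumed names)."""
    stats = (
        df.groupby(["batch", "sensor"])["value"]
          .agg(mean="mean", std="std",
               skew=lambda s: skew(s, bias=False),
               kurt=lambda s: kurtosis(s, bias=False))
          .unstack("sensor")
    )
    stats.columns = [f"{sensor}_{stat}" for stat, sensor in stats.columns]
    return stats  # feed this to PCA / KPCA
```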

10:50
Modelling of snow depth and snow density based on capacitive measurements using machine learning methods

ABSTRACT. In the optimization of hydropower production, it is relevant to use information from the snowpack to estimate the water content released when it melts. This paper discusses the development of data-driven models based on capacitive measurements to estimate the snow density, snow depth, and snow water equivalent of a snowpack.

11:10
Increasing interpretability and prediction rate by combining self-organizing maps with modeling algorithms
PRESENTER: Ivan Ryzhikov

ABSTRACT. 1 Background Digital transformation makes it possible for industries to find answers to many questions with mathematical models. Machine learning algorithms, statistical analysis, and visualization reveal dependencies between production efficiency and process factors based on observed data. Mathematical models have become a cornerstone of decision-support platforms. Since the models are data-driven, production experts need to measure their adequacy, but there is no general way to provide this estimate. Nonlinear models can give a very high prediction rate and good generalization, but due to their complexity they are difficult, if not impossible, to interpret. On the other hand, simple models are interpretable, but in some cases give a lower prediction rate, so one cannot be confident in the modeling results and use the extracted knowledge. In this study we use a combination of a clustering approach, the self-organizing map, and simple modeling approaches, making the final model more flexible but still interpretable. A simple model may be poor at generalizing because the simple rules built into it meet contradictions in the data, but these contradictions can disappear if they are related to distinct patterns in the data.

2 Aims We propose an approach that outperforms simple modeling approaches but keeps their interpretability benefits. This approach increases our confidence in data-driven models and clarifies the effects between the target variable and the inputs. Applying self-organizing maps helps one to understand the main patterns in the data and to see which patterns can be predicted with simple models and which require nonlinear models. The proposed approach discovers whether the main factors differ between patterns in the data, which is the case in many applications, for example with seasonal effects or different input materials. The goal of this approach is to understand what one can do to improve the situation, and why.

3 Materials and methods In our research we apply linear modeling with and without regularization, tree modeling, and Kohonen's self-organizing maps. Linear models allow us to use well-known statistics such as p-values and the F-score. By applying regularization and cross-validation we reduce the number of variables without loss of generalization or prediction rate. Tree modeling gives a clear logical scheme of factor effects on the target variable. Self-organizing maps return clusters, which can be characterized by their profiles; a profile can be determined from the reference vectors, or from average or median values per cluster. First, we perform clustering analysis and reveal patterns in the data. Second, we solve the modeling problem separately for each pattern. Third, we examine the relation between the patterns in the data and the modeling results. Self-organizing maps provide interpretable visualizations: one can see the clusters and their properties, such as the number of elements, the model prediction rate on training or test data, the data pattern that describes the cluster, and the most influential variables for that cluster.
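
A minimal sketch of this cluster-then-model pipeline is shown below. The abstract's application is built in R; this illustration uses Python with the minisom package for brevity, and X, y are placeholder arrays rather than the study's data.

```python
# Combined approach: SOM clustering, then one simple (interpretable)
# linear model per cluster; inspect models[c].coef_ per data pattern.
import numpy as np
from minisom import MiniSom
from sklearn.linear_model import LinearRegression

def som_plus_linear(X: np.ndarray, y: np.ndarray, grid=(3, 3)):
    som = MiniSom(grid[0], grid[1], X.shape[1], sigma=1.0, learning_rate=0.5)
    som.train_random(X, 1000)                             # fit the map
    winners = np.array([som.winner(x) for x in X])        # (i, j) per sample
    cluster_ids = winners[:, 0] * grid[1] + winners[:, 1] # flatten grid index
    models = {}
    for c in np.unique(cluster_ids):
        mask = cluster_ids == c
        models[c] = LinearRegression().fit(X[mask], y[mask])
    return som, models
```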

4 Results The proposed approach increases the prediction rate in comparison to simple models but keeps the results interpretable. It introduces one more entity, the data pattern, which characterizes the inputs and fits many industrial applications. In this research we implemented the analytical application in R using the R Shiny framework and open-source modeling and visualization packages. As a result, the application can be deployed on any server or cloud so that analysts on the company network have easy access to it.

5 Conclusions and future development Further development concerns optimizing the parameters of the clustering approach, so that the combined approach maximizes the prediction rate, and optimizing the model type used for each cluster, so that the composite model is heterogeneous.

10:30-12:00 Session 7B: Waste water treatment and control for energy efficiency and nutrient utilization
10:30
Possible concepts for digital twin simulator for WWTP
PRESENTER: Tiina Komulainen

ABSTRACT. Application of advanced modeling and simulation technologies is essential to meet future requirements for higher wastewater treatment capacity and increased discharge water quality without large investments in construction projects. This article describes an industrial pre-project for a digital twin simulator for the Veas wastewater treatment plant in Norway. The desired main functionalities of the digital twin simulator were:
• Data- and model-based management as well as decision support for process operators
• Predictive operational support and process optimization for engineers
• Testing of process modifications, control system modifications, new procedures and other changes
• Competency building and knowledge transfer between the operators and engineers
Commercially available technologies were compared against the functional design specification, and four possible digital twin simulator concepts were developed for wastewater treatment facilities.

10:50
Averaging level control for urban drainage system
PRESENTER: Yongjie Wang

ABSTRACT. A buffer tank has been built as a simplified version of an equalization basin in one of the largest Water Resource Recovery Facilities (WRRF) in Norway. The basin functions as the main storage and buffer magazine for wastewater (combined sewer overflow, CSO) from several municipalities. The facility faces several challenges, including 1) unknown and varying inflow into the basin, which may cause an overflow of untreated CSO into water bodies near the urban area, 2) large variation of basin outflow, reducing the treatment efficiency in downstream processes, 3) increased energy cost due to excessive pump actions, and 4) intensive efforts for manual control of pump stations. In this work, averaging level control using model-based control and estimation algorithms is studied to maintain the water level in the buffer tank, which is equipped with a level transmitter. Implementation of Model Predictive Control (MPC) and a Proportional-Integral (PI) controller, together with a Kalman filter for state and parameter estimation, shows clear benefits and potential. Results show that: 1. Model-based control is recommended for the basin control problem; a mathematical model forms a basis for automatic control and optimal estimation. 2. Acceptable setpoint tracking of the water level in the basin under varying inflow can be achieved with both MPC and PI controllers; however, MPC outperforms PI in giving smoother pump actions. Python with open libraries for control, optimization, and interfacing is used as the simulation and testing environment in the research project.
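
A minimal sketch of the averaging-level-control idea: a PI controller manipulating pump outflow on a single mass-balance tank model. The paper additionally uses MPC and a Kalman filter, which are not shown here, and all parameter values below are illustrative.

```python
# PI averaging level control on a buffer tank: hold the level near a
# setpoint while keeping the pump outflow smooth under varying inflow.
import numpy as np

A = 1000.0              # tank cross-section area [m2] (assumed)
dt = 60.0               # time step [s]
Kp, Ki = 0.5, 0.001     # PI tuning (illustrative, not from the paper)
h, h_sp, integral = 2.0, 2.0, 0.0

levels, outflows = [], []
for k in range(1440):                                   # one day in 1-min steps
    q_in = 1.0 + 0.5 * np.sin(2 * np.pi * k / 720)      # varying inflow [m3/s]
    e = h_sp - h                                        # level error [m]
    integral += e * dt
    q_out = np.clip(1.0 - Kp * e - Ki * integral, 0.0, 3.0)  # pump flow [m3/s]
    h += (q_in - q_out) / A * dt                        # tank mass balance
    levels.append(h); outflows.append(q_out)
```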

11:10
Moving Bed Biofilm Process in Activated Sludge Model 1 for Reject Water Treatment
PRESENTER: Vasan Sivalingam

ABSTRACT. A moving bed biofilm (MBB) process was modelled in AQUASIM using the standard activated sludge model 1 (ASM1) as a baseline. The model was calibrated against experimental data from a pilot Hybrid Vertical Anaerobic Biofilm (HyVAB) reactor installed at Knarrdalstrand wastewater treatment plant, Porsgrunn, Norway. Removal of high ammonium concentrations from reject water was studied by applying different aeration schemes at the plant and in the modelling tool. Results show that the standard ASM1 model fitted the experimental data poorly. The simulation results point to missing biochemical mechanisms related to anaerobic ammonium oxidation (Anammox) and shortcut nitrogen removal processes. However, the essential simulation outputs, biofilm thickness, substrate concentration variation, and biomass distribution, were partially validated against experimental results. The model therefore helped to understand the nature of the bioprocess observed in the pilot reactor.

11:30
Detectability of Fault Signatures in a Wastewater Treatment Process

ABSTRACT. In a wastewater treatment plant, reliable fault detection is an integral component of process supervision and of ensuring safe operation of the process. Detecting and isolating process faults requires that sensors in the process can be used to uniquely identify such faults. However, sensors in the wastewater treatment process operate in hostile environments and often require expensive equipment and maintenance. This work addresses this problem by identifying a minimal set of sensors that can detect and isolate these faults in the Benchmark Simulation Model No. 1. Residual-based fault signatures are used to determine this sensor set with a graph-based approach; these fault signatures can be used in future work on developing fault detection methods. It is recommended that further work investigate which fault sizes are critical to detect, based on their potential effects on the process, as well as ways to select an optimal sensor set from multiple valid configurations.
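
The paper determines the sensor set with a graph-based approach; as a simple stand-in, the sketch below performs an exhaustive smallest-first search over a made-up binary fault-signature matrix (1 = the residual from that sensor reacts to that fault), which is not BSM1 data.

```python
# Smallest sensor set such that every fault is detectable (nonzero
# signature) and all faults are isolable (pairwise distinct signatures).
from itertools import combinations

signatures = {            # fault -> which sensors' residuals react (toy data)
    "f1": (1, 0, 1, 0),
    "f2": (1, 1, 0, 0),
    "f3": (0, 1, 1, 1),
}

def isolable(sensors):
    rows = {f: tuple(sig[s] for s in sensors) for f, sig in signatures.items()}
    return (len(set(rows.values())) == len(rows)      # pairwise distinct
            and all(any(r) for r in rows.values()))   # all detectable

n = len(next(iter(signatures.values())))
best = next(set(c) for k in range(1, n + 1)
            for c in combinations(range(n), k) if isolable(c))
print(best)   # e.g. {0, 1}: two sensors suffice for this toy matrix
```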

10:30-12:00 Session 7C: Modelling 3
10:30
Modeling of Artificial Snow Production Using an Annular Twin-Fluid Nozzle
PRESENTER: Joachim Lundberg

ABSTRACT. Nedsnødd AS develops a system for generating artificial snow. The concept is to optimize snow production for geographical locations where so-called 'marginal' conditions for snow production dominate the weather picture. To produce artificial snow, liquid water in a spray is exposed to cold air and becomes an agglomerate of frozen droplets. The basic idea is to improve the atomization of the water to enhance the snow production capability. This work develops a model of the cooling process of the water droplets to simulate the processes that determine the capability of snow production equipment. A model is developed to simulate the behavior of the Nedsnødd artificial snow nozzle, and the outcomes are discussed. The model is supported by experimental measurements.
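
As background for the droplet-cooling step such a model builds on, here is a lumped-capacitance sketch of a single droplet cooling in cold air. It covers sensible heat only (the paper's treatment of freezing and latent heat is not reproduced), and all parameter values are illustrative.

```python
# Newtonian (lumped) cooling of one spherical water droplet in cold air.
import numpy as np

def droplet_temperature(t, d=200e-6, T0=2.0, T_air=-10.0, h=500.0):
    """Droplet temperature [C] after flight time t [s]; d = diameter [m],
    h = convective heat transfer coefficient [W/m2K] (assumed)."""
    rho, cp = 1000.0, 4186.0              # water density, heat capacity
    area = np.pi * d**2                    # sphere surface area
    volume = np.pi * d**3 / 6.0
    tau = rho * cp * volume / (h * area)   # thermal time constant [s]
    return T_air + (T0 - T_air) * np.exp(-t / tau)

print(droplet_temperature(0.5))  # temperature after 0.5 s of flight
```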

10:50
Modeling and simulation of an electrified drop-tube calciner

ABSTRACT. In a modern cement kiln system, about 70% of the CO2 emissions are generated through calcination (decarbonation). The CaCO3 in the limestone is the primary source of CO2, and the rest comes from fuel combustion. Electrification of the calciner, i.e., replacing fuel combustion with electrically generated heat, will eliminate the fuel combustion exhaust gases. The exit gas will then be pure CO2, which removes the need for a separate CO2 capture plant.

Modern calciners are based on raw meal particles being entrained by hot combustion gases, which at the same time provide the required heat transfer to the particles. The purpose of this study is to investigate, through modeling and simulation, the technical feasibility of industrial calcination in an electrified drop-tube reactor (DTR).

The model equations are implemented in Python 3.8 to determine the feasibility of the DTR concept, as well as the diameter, length, and number of DTRs required to process the meal in a full-scale cement kiln system. A modified shrinking core model was implemented and used to calculate the time necessary to reach a calcination degree of 94%. The model suggests that a reaction time of about 7 seconds is required to calcine particles of size 180 μm. Clustering of the particles into effective sizes of 500 μm results in a conversion time of about 12 seconds.
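
The paper uses a modified shrinking core model; the textbook reaction-controlled form below merely illustrates the time-to-conversion calculation, with a placeholder full-conversion time rather than the paper's calibrated parameters.

```python
# Reaction-controlled shrinking core: t = tau * (1 - (1 - X)**(1/3)),
# where tau is the time for complete conversion of one particle
# (tau scales with particle radius in this regime).
def time_to_conversion(X, tau):
    """Time [s] to reach conversion X for full-conversion time tau [s]."""
    return tau * (1.0 - (1.0 - X) ** (1.0 / 3.0))

tau_180um = 10.0   # [s] assumed full-conversion time for 180 um particles
print(time_to_conversion(0.94, tau_180um))  # time to 94 % calcination
```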

11:10
Electrification of an entrainment calciner in a cement kiln system – heat transfer modelling and simulations
PRESENTER: Ron M Jacob

ABSTRACT. Carbon capture and storage may be applied to reduce the CO2 emissions from a cement plant. However, this often results in complex CO2 capture solutions. To simplify the capture process, an alternative is to electrify the cement calciner. Electrification of the calciner will reduce the emissions and improve carbon capture in two ways. Firstly, it will use carbon-neutral electrical energy (if the electrical energy is produced from renewable sources) as a replacement for fuel combustion, so there will be no fuel-generated CO2. Secondly, it will yield pure CO2 gas exiting the calciner, so that a separate CO2 capture plant can be avoided.

Most modern calciners are based on the principle of calcination of particles in entrainment mode: the raw meal is calcined by the hot gas produced from combustion while being entrained by these gases. We refer to this type of calciner as an entrainment calciner. This paper studies the possibility of electrifying an existing entrainment calciner using resistance heating technology, which converts electricity to heat with high efficiency and is well proven in other applications.

To assess the possibility of electrifying an existing entrainment calciner, we aim at answering the following key questions:
1. How can the particles be entrained in the absence of exhaust gas from fuel combustion?
2. What heat transfer area is required to transfer the required heat?
3. Will there be enough space to mount all heating elements in the calciner?
4. What mechanisms are involved in the heat transfer process, and what are the bottlenecks for transferring heat from the heating elements to the raw meal?
5. How will the surface temperature of the heating elements affect the space requirement inside the calciner?

A model of the electrified calciner has been developed in Python, and simulations have been performed to answer the key questions presented above. The modelling of the electrified calciner includes determination of the:
1. Amount of gas required to entrain the raw meal
2. Required heat for preheating and calcination
3. Radiation heat transfer through network modelling
4. Convective heat transfer
5. Overall heat transfer coefficient in the system
6. Heating element design
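
As a rough illustration of items 3-5 in the list above, the sketch below combines gray-surface radiation with Newtonian convection to size the heating-element area. The emissivity, convection coefficient, temperatures, and calciner duty are all assumptions, not the paper's values or its full radiation-network model.

```python
# Combined radiative + convective heat flux from a heating-element
# surface at T_s to the gas/raw-meal suspension at T_g, then a
# back-of-envelope required area for a given calciner duty.
SIGMA = 5.670e-8     # Stefan-Boltzmann constant [W/m2K4]

def element_heat_flux(T_s, T_g, eps=0.85, h_conv=50.0):
    """Heat flux [W/m2]; eps and h_conv are illustrative assumptions."""
    q_rad = eps * SIGMA * (T_s**4 - T_g**4)   # simple gray-surface radiation
    q_conv = h_conv * (T_s - T_g)             # Newtonian convection
    return q_rad + q_conv

Q_required = 50e6                              # calciner duty [W] (assumed)
area = Q_required / element_heat_flux(1500.0, 1173.0)
print(f"required element area ~ {area:.0f} m2")
```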

The simulations indicate that electrification of an existing entrainment calciner is technically possible. The results may be used as a reference when evaluating other potential reactor designs for electrified calcination.

11:30
The application of the Lattice Boltzmann Method in the calculation of the virtual mass

ABSTRACT. Virtual mass is an important quantity in the analysis of the unsteady motion of objects in water or other fluids, and of unsteady flow around bodies; for example, the virtual mass effect matters for the inertia of ships, floaters, swimmers' limbs, airplanes, and bubbles. The additional mass resulting from the fluid acting on the structure can be calculated by solving the equation of potential flow around the object. In this paper, a system is considered in which a square object is immersed in a channel of fluid and moves parallel to the wall. The corresponding virtual mass at a given distance S from the wall and for an object of size D (the side of the square) is calculated via the Lattice Boltzmann Method. D and S are varied separately to investigate their effects on the virtual mass. According to the simulation results, for systems in which the distance from the wall is more than four times the object size (S > 4D), the distance does not influence the added mass. Furthermore, the virtual mass rises as the object approaches the wall and reaches its maximum value at the wall (S → 0); in this case, the virtual mass is about 75% larger than for S = 4D. In addition, the simulations reveal that the virtual mass increases with the object dimension D and vice versa.

12:40-14:10 Session 8A: Machine learning 2
12:40
Modelling a Cement Precalciner by Machine Learning Methods

ABSTRACT. This work is a feasibility study of modelling the calcination process in a cement precalciner with machine learning algorithms. Calcination, the thermal decomposition of calcium carbonate into calcium oxide and carbon dioxide, plays a major role in determining the clinker quality, energy demand, and CO2 emissions of a cement production facility. Due to the complex nature of the calcination process, it has always been a challenge to model the precalciner system reasonably well. Six machine learning algorithms were tested to predict the apparent degree of calcination, the CO2 molar fraction, and the water molar fraction in the precalciner outlet stream. Fifteen input variables were used to train the algorithms; their values were obtained from a large simulated dataset generated by applying mass and energy balances to the precalciner system. An artificial neural network (ANN) showed better predictability for all three outputs than the other regression methods.
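
A minimal sketch of such an ANN regressor is shown below, using scikit-learn's MLPRegressor on placeholder arrays that stand in for the simulated mass- and energy-balance dataset; the paper's actual network architecture and data are not reproduced.

```python
# Multi-output ANN regression: 15 precalciner inputs -> 3 outputs
# (apparent calcination degree, CO2 and H2O molar fractions).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((5000, 15))     # 15 input variables (placeholder data)
y = rng.random((5000, 3))      # 3 outputs (placeholder data)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(64, 64),
                                 max_iter=2000, random_state=0))
ann.fit(X_tr, y_tr)
print("R2 on held-out data:", ann.score(X_te, y_te))
```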

13:00
On solving the fault detection problem and risk estimation monitoring with deep neural networks and postprocessing
PRESENTER: Ivan Ryzhikov

ABSTRACT. 1 Background A production system fault is a serious problem with severe consequences. The only way to avoid the damage is to prevent the fault, but that requires understanding of the current situation: is something wrong with the process? Unfortunately, some industrial processes do not have a set of variables with threshold values that can be used to identify whether the fault risk is growing and the situation is getting worse. At the same time, if the production expert has this information in advance and can monitor the risk, there is a chance to avoid the fault. But what is the risk, and how do we measure it? By risk we here mean a variable that characterizes the current process state. First, we use historical data, which contain the process observations and the dates and times of faults, to build data-driven mathematical models predicting risk. Second, we use postprocessing to transform the model's risk prediction into a characteristic of the process that the decision maker can use.

2 Aims In this study we develop and investigate an approach that solves the fault detection problem and provides production monitoring in terms of fault risk. The proposed approach is implemented as an application that allows the decision maker to see in real time whether the current situation is risky: is it alarming, regular, or suspicious? The study focuses on the aspects of modeling risk and predicting events that occur rarely. Another complexity comes from the nature of the risk: even if there was no fault, it does not mean that the situation was not alarming. To address these problems, we propose a specific construction of the risk variable and a criterion for estimating the model adequacy.

3 Materials and methods The proposed approach consists of three parts: constructing the risk variable, modeling, and postprocessing. First, we build a risk variable using the fault dates and times and our assumption on the earliest time before a fault at which we expect to see that something is wrong with the process. We consider risk as a function that equals 0 at all times, except that from some time point (the earliest time at which we expect to see a difference in the process) it increases monotonically until the fault time. Second, we define a criterion based on the mean square error, but weighted. The weights represent both our confidence in the data and the sensitivity of the model: weights for errors on intervals near the faults are greater than weights for errors at any other time. By using weights, we also allow high risk to appear even where it did not cause a fault; the weights help resolve possible contradictions when the same process state led to a fault on one occasion but not on another. We propose two different cross-validation schemes for this specific problem, with stratification by state: regular or leading to a fault. Finally, we use an additive transformation with a filter that converts the risk estimate into levels that the decision maker can interpret.
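
A minimal sketch of the risk-variable construction and the weighted criterion described above: zero far from a fault, rising monotonically over a chosen lead time up to the fault instant. The linear ramp and the weight values are illustrative choices, not the paper's; the approach itself is implemented in R.

```python
# Build the risk target from fault indices, then score a model with a
# weighted MSE that trusts intervals near faults more.
import numpy as np

def make_risk_target(n, fault_idx, lead):
    """Risk = 0 everywhere, except a 0->1 ramp over `lead` samples
    before each fault (monotone increase up to the fault time)."""
    risk = np.zeros(n)
    for f in fault_idx:
        start = max(0, f - lead)
        risk[start:f + 1] = np.linspace(0.0, 1.0, f - start + 1)
    return risk

def weighted_mse(y_true, y_pred, fault_idx, lead, w_near=10.0, w_far=1.0):
    w = np.full_like(y_true, w_far, dtype=float)
    for f in fault_idx:
        w[max(0, f - lead):f + 1] = w_near   # heavier weight near faults
    return np.average((y_true - y_pred) ** 2, weights=w)
```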

4 Results The proposed approach solves the fault detection problem for cases in which there are no specific variables from which one could conclude that something is wrong with the process. The model combined with the postprocessing system makes it possible to interpret the result and make a decision. In this research we implemented the analytical application in R using the R Shiny framework and open-source modeling and visualization packages. As a result, the application can be deployed on any server or cloud so that analysts on the company network have easy access to it.

5 Conclusions and future development Further development concerns optimizing the approach parameters (the times before and after the fault) and interpreting the model, so that one can understand which factors lead to the fault.

13:20
ANN-Based Correlations for Excess Properties to Represent Density and Viscosity of Aqueous Monoethanol Amine (MEA) Mixtures

ABSTRACT. The applicability of Artificial Neural Networks (ANNs) to represent excess properties is discussed. The excess molar volume and the excess free energy of activation for viscous flow were calculated from density and viscosity measured at different monoethanol amine (MEA) concentrations and temperatures. Different ANNs with multiple inputs and a single hidden layer were trained, validated, and tested to represent the excess molar volume and the excess free energy of activation for viscous flow. The developed ANN models fit the data well, giving R2 values of 0.99 and 0.98 for the excess molar volume and the excess free energy of activation for viscous flow, respectively, on the test data. The calculated average absolute relative deviations (AARD) for the two properties are 1.5% and 1.2%, respectively, on the test data, giving better density and viscosity predictions than regression with a Redlich-Kister polynomial. The ANN-based density and viscosity models built on the excess molar volume and excess free energy of activation for viscous flow achieve high accuracy, which is an advantage in many engineering applications.
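
For reference, the excess molar volume that serves as the ANN target is computed from measured densities by the standard relation V_E = (x1*M1 + x2*M2)/rho_mix - x1*M1/rho1 - x2*M2/rho2. The sketch below uses standard molar masses; the density values in the example call are placeholders, not the paper's measurements.

```python
# Excess molar volume of an MEA(1) + water(2) mixture from densities.
M_MEA, M_H2O = 61.08e-3, 18.02e-3     # molar masses [kg/mol]

def excess_molar_volume(x_mea, rho_mix, rho_mea, rho_h2o):
    """Excess molar volume [m3/mol]; densities in kg/m3, x_mea mole fraction."""
    x_h2o = 1.0 - x_mea
    v_ideal = x_mea * M_MEA / rho_mea + x_h2o * M_H2O / rho_h2o
    v_mix = (x_mea * M_MEA + x_h2o * M_H2O) / rho_mix
    return v_mix - v_ideal

print(excess_molar_volume(0.3, 1010.0, 1012.0, 998.0))  # placeholder densities
```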

12:40-14:10 Session 8B: Wastewater treatment and drainage systems
12:40
Challenges in connecting a wastewater treatment plant to a machine learning platform
PRESENTER: Christian Wallin

ABSTRACT. Treatment of wastewater is fundamental to protecting the environment and ensuring a healthy water supply. Higher demands are put on the treatment of the effluent from wastewater treatment plants (WWTPs) to reduce more pollutants and to remove pharmaceutical residues. To deliver better water quality, monitoring and control are important, but wastewater treatment is far behind many industrial processes when it comes to automation. Digital twins and machine learning could offer many benefits, but little work has been done in this field for wastewater treatment. How do you move from an existing traditional process automation system to an integrated machine learning platform? This paper investigates the challenges of implementing an integrated machine learning platform for a wastewater treatment plant. The paper is based on experience from a project in which a number of different processes, including a WWTP, were integrated into a machine learning platform in an online cloud environment. In this paper we focus on the integration of the WWTP. On the platform, a model is run in real time using process data. Machine learning algorithms are used to treat the process data and for sensor fault detection. The challenges and considerations are many, such as cyber-security with respect to data access and data transfer, and how to convert the process data to a format that can be used by the model. Multiple defining choices must be made along the way that can have a major impact on the final platform functionality. It is important not only to evaluate these choices but also to have enough knowledge and authority to make the right decisions, and to make them in time. Many projects run out of time and/or money for different reasons, and strategies for mitigating such risk factors are discussed.

13:00
A screening method for urban drainage zones
PRESENTER: Tiina Komulainen

ABSTRACT. Due to climate change, storms have intensified, making it hard for urban drainage systems and wastewater treatment plants to cope with the large water quantities. In this study we develop a data-based screening method to identify which drainage zones would benefit most from blue-green infrastructure to avoid spills of untreated water. First, the precipitation and drainage zone flow rate data are pre-processed and de-seasonalized to remove the flow rate due to consumer wastewater. Then, system identification is applied to the rain periods, and the transfer function parameters of a first-order-plus-time-delay model are collected. The screening index is calculated from the transfer function model parameters. The results show that the system is very nonlinear, but the mean values of the screening index are statistically significantly different between the drainage zones included in this study. The screening index clearly separates the different types of drainage zones and gives a reasonable suggestion of which drainage zones should be considered further for the implementation of blue-green infrastructure such as nature-based solutions.
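
A minimal sketch of the identification step: fitting a first-order-plus-time-delay (FOPTD) step response, y(t) = K*(1 - exp(-(t - theta)/tau)) for t >= theta, to de-seasonalized flow data from one rain period. The data here is synthetic, and the final index formula is a hypothetical stand-in; the paper's exact screening index is not reproduced.

```python
# Fit FOPTD parameters (K, tau, theta) per rain period with curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def foptd_step(t, K, tau, theta):
    """First-order-plus-time-delay step response."""
    y = K * (1.0 - np.exp(-(t - theta) / tau))
    return np.where(t >= theta, y, 0.0)

t = np.linspace(0.0, 120.0, 121)                 # minutes since rain onset
rng = np.random.default_rng(1)
y_meas = foptd_step(t, 2.0, 15.0, 5.0) + 0.05 * rng.normal(size=t.size)

(K, tau, theta), _ = curve_fit(foptd_step, t, y_meas, p0=[1.0, 10.0, 1.0])
index = K / (tau + theta)   # hypothetical index from the FOPTD parameters
```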

13:20
An Individual-based Model for Simulating Antibiotic Resistance Spread in Bacterial Flocs in Wastewater Treatment Plants
PRESENTER: Svein H. Stokka

ABSTRACT. Biological wastewater treatment plants (WWTPs) play a potentially important role in the spread of antibiotic resistance. WWTPs receive sewage from various sources, and this sewage can contain large numbers of resistant bacteria. Studies have shown that resistance levels stay high throughout conventional wastewater treatment processes. Resistant bacteria can proliferate in WWTPs, and they can also spread their resistance genes to other nonresistant bacteria through horizontal gene transfer. The potential for horizontal gene transfer is high because of the high cell density and because the bacteria spend much time packed together in flocs. This is worrisome, as resistance can spread from pathogenic bacteria that arrive with the wastewater to aquatic and soil bacteria that are well adapted to the WWTP environment and to the river and soil environments that receive WWTP effluents and biosolids. To better understand how resistance spreads by growth and horizontal gene transfer in a bacterial floc, we propose an individual-based model (IbM) and a solver algorithm that can be used to simulate and study this system. An IbM treats each bacterial cell as a single discrete entity with specific properties and a specific placement, and is a model type that captures local heterogeneity and local interactions. Our model is a simple case-specific model that includes only the most relevant bacterial properties and functions. The individual cells have functions that define how they move, how they grow by consuming nutrients, how they divide once they reach a certain size, how they exchange resistance genes with neighbouring cells, and how they stop growing and eventually die if they are starved of nutrients.
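
A minimal sketch of the IbM idea described above: each cell is a discrete entity with a position, a mass, and a resistance flag; per time step, cells grow, divide, and may receive a resistance gene from a close neighbour. All rates and thresholds are illustrative, not the paper's parameters, and nutrient fields and movement are omitted.

```python
# Toy individual-based model of growth, division and horizontal gene
# transfer in a floc of bacterial cells.
import random
from dataclasses import dataclass

@dataclass
class Cell:
    x: float
    y: float
    mass: float
    resistant: bool

def step(cells, growth=0.05, div_mass=2.0, transfer_p=0.01, radius=1.0):
    for c in cells:
        c.mass *= 1.0 + growth                      # growth on nutrients
    for c in [c for c in cells if c.mass >= div_mass]:
        c.mass /= 2.0                               # binary fission
        cells.append(Cell(c.x + random.uniform(-0.1, 0.1),
                          c.y + random.uniform(-0.1, 0.1),
                          c.mass, c.resistant))
    donors = [c for c in cells if c.resistant]
    for c in cells:                                 # horizontal gene transfer
        if (not c.resistant
                and any(abs(c.x - d.x) < radius and abs(c.y - d.y) < radius
                        for d in donors)
                and random.random() < transfer_p):
            c.resistant = True
    return cells
```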

13:40
Multi-objective Dynamic Optimization of Crops Irrigated with Reused Treated Wastewater
PRESENTER: Antoine Haddon

ABSTRACT. Among the prominent issues for water reuse in agriculture is the need to estimate the benefits and impacts of irrigation with treated wastewater, in order to take advantage of the nutrients contained in the wastewater under constraints of health and environmental hazards. Modeling and control of dynamical systems may be used to find new ways of optimizing the whole water reuse chain, such as dynamically adapting the wastewater treatment quality to meet the changing needs of plants throughout a growth cycle. In this context, we focus here on determining the water and nutrient requirements of crops irrigated with reuse water, taking into account the different objectives of maximizing crop biomass and minimizing environmental and irrigation costs. To deal with the high complexity of modern crop models, we propose a double modeling approach, in which we design a dynamic systems model, the 'control model', and calibrate it using simulation data obtained from a complex crop model. We then show how this allows us to recast the multi-objective optimization problem of reuse irrigation as a constrained optimal control problem that can be solved with a dynamic programming method. Numerical simulations of a case study illustrate the resolution.

12:40-14:10 Session 8C: Control
12:40
Developing a Dynamic Diesel Engine Model for Energy Optimal Control
PRESENTER: Viktor Leek

ABSTRACT. We develop a heavy-duty Euro 6 diesel engine model for energy optimal control. The modeling focus is on accuracy in the entire engine operating range, with attention to the region of highest efficiency and physically plausible extrapolation. The effect of the fuel-to-air ratio on combustion efficiency is studied, and it is demonstrated how this influences the energy optimal control. A convenient, physics-based method for pressure sensor bias estimation is also presented.

13:00
Intelligent Micro Grid Controller Development for Hardware-in-the-loop Micro Grid Simulation Subject to Cyber-Attacks
PRESENTER: Mike Mekkanen

ABSTRACT. This paper develops hardware-in-the-loop (HIL) simulation for studying cyber attacks. We design a lightweight intelligent electronic device (IED) that acts as a Micro Grid Controller (MGC); interfaces between the real-time simulation and the MGC are developed based on the International Electrotechnical Commission (IEC) 61850 GOOSE protocol. They are executed on two hardware platforms, a Field Programmable Gate Array (FPGA) and a BeagleBone Black. CSIL and CHIL tests are used to evaluate the Micro Grid (MG) behavior under different cyber attacks. We also evaluate the MGC's designed control function in accordance with the IEC 61850 GOOSE protocol. The results show that the lightweight MGC approach and the data modeling of the various IEC 61850 predefined data objects, data attributes, and logical nodes (LNs) are correct for the design of the power balance control/protection function in the various cyber-attack case studies.

13:20
Level measurements with computer vision - comparison of traditional and modern computer vision methods

ABSTRACT. Level measurement is important for a large number of applications in industry, science, and the commercial sector. Popular measurement technologies include guided radar, ultrasonic, capacitive, and flotation-based sensor principles.

In this work, the system of interest is a coffee machine that is being outfitted with an industrial robotic arm. It is of interest to estimate the level of coffee beans remaining in the coffee machine tank, which can be inspected through a transparent window in the machine. The possibility of visual inspection, together with the obvious need to ensure safe human consumption of the produced coffee, makes the use of vision-based sensor technology particularly interesting.

The goal of the project is to compare modern machine learning methods against traditional image processing techniques for the purpose of estimating the level of coffee beans in a transparent tank fitted to a coffee machine. Measurements using both approaches are to be compared against manual level measurements.
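
A minimal sketch of one traditional image-processing baseline: threshold the dark beans in a cropped view of the transparent window and take the highest row that is mostly "bean" as the fill level. The crop coordinates and threshold value are assumptions for illustration, not the project's calibration.

```python
# Threshold-based bean-level estimate from a grayscale image of the
# transparent tank window; returns a fill fraction in [0, 1].
import cv2
import numpy as np

def bean_level(image_path, crop=(100, 400, 200, 300), thresh=80):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    y0, y1, x0, x1 = crop
    roi = img[y0:y1, x0:x1]                                # tank window region
    _, mask = cv2.threshold(roi, thresh, 255, cv2.THRESH_BINARY_INV)  # beans dark
    row_fill = mask.mean(axis=1) / 255.0       # fraction of bean pixels per row
    filled = np.nonzero(row_fill > 0.5)[0]     # rows that are mostly beans
    if filled.size == 0:
        return 0.0
    return 1.0 - filled[0] / mask.shape[0]     # 1.0 = full, 0.0 = empty
```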

13:40
On Uncertainty Analysis of the Rate Controlled Production (RCP) Model
PRESENTER: Soheila Taghavi

ABSTRACT. RCP model, which is a general empirical equation, is being thoroughly used to simulate and investigate the performance of the oil wells completed by Autonomous Inflow Control Devices (AICD’s) and Autonomous Inflow Control valve (AICV). In this paper, a dimensionless version of the model was presented, and the parameters of the modified model were estimated. In addition, we demonstrated how the model and measurement uncertainties can be quantified within the Bayesian statistical inference framework. A Markov Chain Monte Carlo (MCMC) method known as Hamilton Monte Carlo (HMC) was used to estimate the joint posterior probability distribution. Results from the analysis confirmed that at the calibration step the model can describe most of the variations in the measurements. However, the results at validation step showed a slightly overprediction by the model in specific areas of the valve performance. The inadequacy in model could not be explained by the measurement noise or the uncertainty in the estimated parameters.