
09:00-10:30 Session 14A: Tutorial: Application, Analysis, and Development of Metaheuristic Algorithms with HeuristicLab, an Open-source Optimization Environment for Research and Education - Stefan Wagner

Tutorial: Application, Analysis, and Development of Metaheuristic Algorithms with HeuristicLab, an Open-source Optimization Environment for Research and Education 


HeuristicLab [1, 2] is an open-source environment for heuristic optimization that features several metaheuristic optimization algorithms as well as optimization problems. It is developed by the research group Heuristic and Evolutionary Algorithms Laboratory (HEAL) [3] of the University of Applied Sciences Upper Austria and is based on C# and Microsoft .NET. HeuristicLab serves as the development platform for several research and industry projects (for example, the Josef Ressel Centers SymReg [4] and adaptOp [5]) as well as for teaching metaheuristics in the study programs Software Engineering and Medical- and Bioinformatics in Hagenberg. Over the years, HeuristicLab has become increasingly well known within the metaheuristic optimization community and is used by researchers and lecturers at universities worldwide. This tutorial demonstrates how to apply, analyze, and develop metaheuristic optimization algorithms using HeuristicLab [1, 2]. It will be shown how to parameterize and execute evolutionary algorithms to solve combinatorial optimization problems (routing, scheduling) as well as data analysis problems (regression, classification). Participants will learn how to assemble different algorithms and parameter settings into a large-scale optimization experiment and how to execute such experiments on multi-core or cluster systems. Furthermore, the experiment results will be compared using HeuristicLab’s interactive charts for visual and statistical analysis to gain knowledge from the executed test runs. To conclude the tutorial, it will be briefly sketched how HeuristicLab can be extended with further optimization problems and how custom optimization algorithms can be developed.

Location: Room 1
09:00-10:30 Session 14B: Tutorial: Conditional Rewriting in Theorema 2.0 - Wolfgang Windsteiger

Tutorial: Conditional Rewriting in Theorema 2.0


The Theorema system is a proof assistant based on Mathematica, one of the leading systems for (not only) symbolic computation. The main aim of Theorema is “natural style”, both in input and in output. From Mathematica we mainly use its highly programmable notebook interface and the pattern-based programming language. In contrast to other provers in and around the Wolfram/Mathematica ecosystem, Theorema’s logical engine guiding the proofs does not use any of Mathematica’s built-in algorithms for manipulating symbolic expressions; instead, it has its own language for predicate logic and a proof calculus similar to natural deduction in order to automatically generate human-like proofs. In this tutorial, we focus on conditional rewriting and explain its implementation and its applications in the frame of Theorema 2.0. The tutorial will mainly be based on a live demo of the system showing concrete examples of proofs as they are used in teaching logic with the help of Theorema.
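The core idea of conditional rewriting, replacing a term by another only when a side condition is provable from the current knowledge base, can be illustrated with a tiny sketch. This is not Theorema's implementation; terms, rules, and facts are modeled as plain strings purely for illustration:

```python
def rewrite(term, rules, facts):
    """One step of conditional rewriting: replace `term` by `rhs` for the
    first rule (lhs, rhs, cond) whose left-hand side matches and whose
    condition is among the known facts; otherwise leave the term alone."""
    for lhs, rhs, cond in rules:
        if term == lhs and cond in facts:
            return rhs
    return term

# Two conditional rules for the absolute value:
#   |x| -> x   provided x >= 0      |x| -> -x  provided x < 0
rules = [("abs(x)", "x", "x>=0"), ("abs(x)", "-x", "x<0")]
```

Under the fact `x>=0` the term `abs(x)` rewrites to `x`; without a provable condition, no rule fires, which is exactly what distinguishes conditional rewriting from unconditional term rewriting.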

Location: Room 2
10:30-10:50 Coffee Break
10:50-11:50 Session 15A: Logic and Programming Session
Location: Room 1
FERPModels: A Certification Framework for Expansion-Based QBF Solving

ABSTRACT. Modern expansion-based solvers for quantified Boolean formulas (QBFs) are successful in many applications. However, no such solver supports the generation of proofs needed to independently validate the correctness of the solving result and for the extraction of winning strategies which encode concrete solutions to the application problems.

In this paper, we present a complete tool chain for proof generation, result validation, and for universal winning strategy extraction in the context of expansion-based solving. In particular, we introduce a proof format for the ForallExp-calculus on which expansion-based solving is founded, implement proof generation in a recent QBF solver, provide a checker for these proofs, and develop a new strategy extraction algorithm.

IPO-MAXSAT: The In-Parameter-Order Strategy combined with MaxSAT solving for Covering Array Generation

ABSTRACT. Covering arrays (CAs) are combinatorial designs that represent the backbone of combinatorial testing, which is applied most prominently in automated software testing. The generation of optimized CAs is a difficult combinatorial optimization problem being subject to ongoing research. Previous studies have shown that many different algorithmic approaches are best suited for different instances of CAs. In this paper we present the IPO-MAXSAT algorithm, which adopts the prominent IPO strategy for CA generation and uses MaxSAT solving to optimize the occurring sub-problems. We devise three different algorithmic variants that use a MaxSAT solver for different sub-problems. These variants are evaluated in an extensive set of experiments where we also consider the usage of different MaxSAT solvers. Further, we provide a comparison against various other algorithms realizing the IPO strategy as well as the state of the art.
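The object being optimized here, a covering array, is easy to check even when it is hard to construct: every t-way combination of values must occur in some row. A minimal brute-force coverage check (the function name and example array are illustrative, not part of the paper):

```python
from itertools import combinations

def covers_all(array, t, v):
    """Check that for every choice of t columns, every t-tuple of values
    from {0, ..., v-1} appears in some row restricted to those columns."""
    k = len(array[0])
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in array}
        if len(seen) < v ** t:   # some value combination is missing
            return False
    return True

# A classic optimal CA: 4 rows cover all pairs over 3 binary columns.
ca = [
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]
```

`covers_all(ca, 2, 2)` is true, while the naive two-row array `[[0,0,0],[1,1,1]]` misses mixed pairs; finding the *smallest* such array is the hard optimization problem the MaxSAT encoding targets.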

Maslov’s Inverse Method and Favorable M-sequent Calculus

ABSTRACT. In 1964, Sergei Maslov proposed the so-called Maslov inverse method (MIM) for establishing the deducibility of sequents of the form → D1, ..., Dn in classical first-order logic without function symbols and equality, where D1, ..., Dn are quantifier-free formulas in conjunctive normal form under the condition that all their variables are bound by existential quantifiers. The definition of MIM in the form of the so-called favorable ensembles calculus was given by S. Maslov in terms not used by logicians and researchers in automated reasoning. That is why some attempts were made to explain the MIM operation scheme in resolution terms. In contrast, this paper treats MIM as a special sequent-type calculus called a favorable m-sequent calculus, which allows MIM to be described in a form closer to the original one than its known resolution treatments. The soundness and completeness of the favorable m-sequent calculus, of independent interest, is proved for classical first-order logic with function symbols and without equality.

10:50-11:50 Session 15B: Workshop on Agents for Complex Systems + PhD Session
Location: Room 2
Intelligent Agent for Food Recognition in a Smart Fridge

ABSTRACT. Respecting an adequate diet is essential for a healthy lifestyle. However, keeping track of the daily consumed food and of what one has in the home fridge may be tedious and time-consuming if not backed up by user-friendly applications. In this context, the paper presents some of the main components of an intelligent agent and system aimed at creating a user-oriented application for recommending personalized food recipes based on the available ingredients and dietary restrictions in a domestic environment. The developed application includes a component to accurately detect the ingredients from an image taken inside a fridge equipped with mounted cameras, and allows the user to get information about each ingredient, such as the ingredient type, name, and associated calories. To facilitate the development of this application, our own data set of food ingredients was collected and annotated, and an ontology of recipes was developed.

Machine Learning Models to Predict Soil Moisture for Irrigation Schedule (online)

ABSTRACT. The agriculture industry must alter its operations in the context of climate change. Farmers can plan their irrigation operations more effectively and efficiently with exact measurement and forecasting of the moisture content in their fields. Sensor-based irrigation and machine learning algorithms have the potential to provide farmers with significantly more effective water management solutions. However, today's machine learning methods based on sensor data require a huge quantity of data for effective training, which poses a number of challenges, including affordability, battery life, internet availability, evaporation issues, and other factors. The purpose of this report is to find an efficient machine learning model, by performing metric evaluation and comparing R-squared values, that can predict soil moisture within a certain crop field scenario for the next couple of days using historical data.
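The selection criterion used here, the R-squared value, can be computed and compared without any ML framework. A minimal sketch with made-up held-out readings and two hypothetical forecasters (none of this is the report's actual data or models):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.
    1.0 is a perfect fit; values below 0 are worse than predicting the mean."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1 - ss_res / ss_tot

# Hypothetical soil-moisture readings and two candidate models' forecasts.
y_true  = [0.31, 0.29, 0.27, 0.26, 0.24]
model_a = [0.30, 0.29, 0.28, 0.25, 0.24]   # tracks the drying trend
model_b = [0.28, 0.28, 0.28, 0.28, 0.28]   # near-constant prediction

best = max(("model_a", model_a), ("model_b", model_b),
           key=lambda m: r_squared(y_true, m[1]))[0]
```

Model selection then reduces to keeping the candidate with the highest R-squared on held-out data, which is the comparison the report describes.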

Data cleansing in Continuous Glucose Monitor time series
PRESENTER: Bogdan Butunoi

ABSTRACT. The data coming from the growing number of Internet of Things (IoT) devices that generate various metrics needs to be processed before it becomes meaningful input data for machine learning algorithms. Similar to IoT devices, continuous glucose monitors (CGMs) have also created a new set of problems in current closed-loop systems research. A closed-loop system that overcomes these problems would increase the quality of life for the patient while also lowering the long-term side effects of diabetes.

12:00-13:30 Lunch Break
13:30-14:20 Session 16: Invited talk: Implementation Techniques for Mathematical Model Checking - Wolfgang Schreiner

Invited talk:  Implementation Techniques for Mathematical Model Checking


The aim of the RISCAL software is the analysis of theories and algorithms over discrete domains as they arise in computer science, discrete mathematics, logic, and algebra. For this purpose, RISCAL provides an expressive specification language based on a strongly typed variant of first-order logic with a rich set of types and operations in which theories can be formulated and algorithms can be specified; nevertheless, the validity of all formulas and the correctness of all algorithms is decidable. This is because all RISCAL types have finite sizes and thus the underlying model is finite; the sizes of the types, and thus of the model, are configurable by model parameters. RISCAL thus represents a tool for the automated validation and (more importantly) falsification of conjectures, in order to avoid fruitless attempts to prove invalid statements. The system has been developed mainly for educational purposes, but it has also been applied in research.

In this talk, we present and compare the various decision techniques that have been applied in the implementation of RISCAL. The original implementation was based on “semantic evaluation”, where all syntactic phrases of the specification language were translated into an executable representation of their (generally nondeterministic) denotational semantics; while this mechanism is reasonably efficient, its main advantage is actually its transparency: it easily gives rise to visual representations of the semantics and to the generation of counterexamples for invalid conjectures. Later, an alternative decision mechanism was developed (by Franz-Xaver Reichl) in the form of a translation of RISCAL into the SMT-LIB theory QF_UFBV (unquantified formulas over bit vectors with uninterpreted sort and function symbols), which can be decided by various SMT solvers. While this decision mechanism is generally much more efficient and allows the analysis of much larger domains, there are also situations in which it is inferior to the semantics-based approach; we discuss the pros and cons of both approaches based on both synthetic and real-world benchmarks.

These techniques are complemented by the recent addition of an implementation (by Ágoston Sütő) of the verification of transition systems with respect to linear temporal logic specifications by their translation to finite automata.

Location: Room 1
14:20-14:40 Coffee Break
14:40-16:00 Session 17A: Special session Computer Algebra and Computational Logic (1)
Location: Room 1
Approaches to the Skolem Problem
Proving properties of linear operators via ideal membership of noncommutative polynomials
14:40-16:00 Session 17B: Workshop on Digital Image Processing for Medical and Automotive Industry (1)
Location: Room 2
Learning networks hyper-parameter using multi-objective optimization of statistical performance metrics

ABSTRACT. Deep learning has enabled remarkable progress over the last years on several tasks, such as image and speech recognition and machine translation. Deep neural architectures are a main contributor to this progress. Current architectures have mostly been developed manually by engineers, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. In this paper we present a strategy for the optimization of network hyper-parameters using a multi-objective Non-dominated Sorting Genetic Algorithm combined with nested cross-validation to optimize statistical metrics of network performance. In order to illustrate the proposed hyper-parameter optimization, we have applied it to a use case that uses transformers to map abstract radiomic features to specific radiological annotations. Results obtained with the LUNA16 public database show the generalization power of the proposed optimization strategy for hyper-parameter setting.
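The heart of a non-dominated sorting approach is the Pareto dominance relation between candidate settings. A minimal sketch of extracting the first non-dominated front (the candidate tuples and objective choices are illustrative, not from the paper):

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly
    better on at least one; objectives here are minimized."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def first_front(points):
    """The non-dominated set: what NSGA-style sorting keeps as front 1."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (validation error, parameter count) pairs for candidates.
candidates = [(0.12, 4e6), (0.10, 9e6), (0.15, 2e6), (0.12, 6e6)]
```

Here `(0.12, 6e6)` is dominated by `(0.12, 4e6)` (equal error, fewer parameters) and drops out, while the remaining three represent genuine trade-offs between accuracy and model size; NSGA-II iterates this sorting over evolving populations.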

Improving the Diagnostic of Contrast Enhanced Ultrasound Imaging using Optical Flow for Lesion

ABSTRACT. Our proposal presents an automatic method for obtaining the ultrasound image of a region of interest based on optical flow computation. Combined with a kernel correlation filter tracking algorithm and an Xception deep convolutional neural network architecture, our solution provides state-of-the-art results (over 90% accuracy) in the automatic diagnosis of liver lesions using contrast enhanced ultrasound.

AI-based healthcare application for medical image processing and diagnosis prediction

ABSTRACT. The main focus of this paper is the implementation of an e-health software solution, web and mobile based, identified as VitaWise. The focus is on tackling some of the biggest challenges that we face right now when talking about e-health and personal health in general. The core idea of the application is to bring medics and health advice into everyone’s home. The application does that not only by using instant messaging, but also through instant health checks using machine learning algorithms such as the Decision Forest Regression model trained in the cloud using Microsoft Azure Machine Learning Studio, as well as image recognition using DenseNet.

Cancer is a leading cause of death and a major barrier to increasing life expectancy in every country in the world. According to World Health Organization (WHO) estimates for 2019, cancer is the first or second cause of death before the age of 70 in 112 of 183 countries and ranks third or fourth in 23 other countries. The increase in the prominence of cancer as the leading cause of death partially reflects significant decreases in mortality rates from stroke and coronary heart disease compared to cancer in many countries.

We collected a set of data consisting of radiographs with and without cancer from the Cancer Imaging Archive of the Department of Biomedical Informatics at the University of Arkansas for Medical Sciences. Transformations were applied to this dataset, using the DenseNet algorithm, a type of convolutional neural network, to drive the PyTorch model. The obtained model was compared with the initial data set to obtain a score, and in order to evaluate the radiographs in real time, an endpoint was created.

Hardware-software co-design

ABSTRACT. The industry’s dash to AI is described as “one of the biggest gold rushes” by Laurent Moll, ArterisIP’s Chief Operating Officer. Nowadays, a new era of open computing is triggered by RISC-V, while Chris Lattner announced a “Golden Age of Compilers”. A prospector, in the sense of the AI gold rush, geared with both a silicon and a compiler [for that silicon], enters the arena hoping to strike gold: high performance per watt, which brings AI inference from datacenters down to IoT devices. The prospectors include companies who “have been making silicon for a long time, and a lot of new people who have not made silicon [before]”.

16:00-16:20 Coffee Break
16:20-17:40 Session 18A: Workshop on Digital Image Processing for Medical and Automotive Industry (2)
Location: Room 2
Eye Detection For Drivers Using Convolutional Neural Networks With Automatically Generated Ground Truth Data

ABSTRACT. Eye detection is an essential feature for driver monitoring systems, acting as a base functionality for other algorithms like attention or drowsiness detection. Multiple methods for eye detection exist. The machine learning based methods involve a manual labeling process in order to generate training and testing datasets. This paper presents an eye detection algorithm based on convolutional neural networks trained using automatically generated ground truth data and proves that we can train very good machine learning models using automatically generated labels. Such an approach reduces the effort needed for manual labeling and data preprocessing.

Pedestrian Trajectory Prediction Based on Tree Method using Graph Neural Networks

ABSTRACT. Pedestrian trajectory prediction in real-world scenarios is a challenging task for several computer vision applications, such as autonomous driving, video surveillance, and robotic systems. It is not a trivial task due to the numerous potential trajectories. This article provides a tree-based approach to handle this multimodal prediction challenge. The tree is designed based on the observed data and is also used to predict future trajectories. In particular, an individual's potential future trajectory is represented by a root-to-leaf route in the tree. Compared to previous approaches that use implicit latent variables to describe possible future paths, movement behaviors may be directly represented by a path in the tree (e.g., go straight and then turn left), thus offering more socially suitable trajectories. The experimental results on the ETH, UCY, and Stanford Drone datasets show that this approach can exceed the performance of the state-of-the-art approaches. The solution is more efficient and compact, with a smaller model size and higher accuracy, and delivers better results with respect to the average displacement error (ADE) and final displacement error (FDE) metrics.
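The root-to-leaf representation of candidate futures can be sketched with a toy motion tree; node labels and the enumeration routine are illustrative only, not the paper's model (which operates on continuous coordinates):

```python
class TrajNode:
    """Node in a coarse motion tree; a root-to-leaf path is one candidate
    future trajectory, e.g. ["straight", "left"]."""
    def __init__(self, move, children=None):
        self.move = move
        self.children = children or []

def root_to_leaf_paths(node, prefix=()):
    """Enumerate every candidate trajectory encoded by the tree."""
    prefix = prefix + (node.move,)
    if not node.children:
        return [list(prefix)]
    paths = []
    for child in node.children:
        paths.extend(root_to_leaf_paths(child, prefix))
    return paths

# Illustrative tree: after going straight, a pedestrian may keep straight,
# turn left, or turn right and then stop.
tree = TrajNode("straight", [
    TrajNode("straight"),
    TrajNode("left"),
    TrajNode("right", [TrajNode("stop")]),
])
```

Each leaf yields one explicit, interpretable hypothesis, which is the contrast the abstract draws with implicit latent-variable models of multimodality.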

Semi-Supervised Pipeline for Human Sprites Neural Rendering

ABSTRACT. In this paper we discuss the possibility of a hybrid approach to renderable scenes. The main idea of the presented experiment is to render human actors by using existing videos of the characters. The input video is first converted into a sprite dataset. The dataset is generated with supervised techniques, but human intervention is also required. After that, we extract body and pose parameters. Lastly, we render novel poses using a GAN-based approach similar to pix2pix.

Motorage - Computer vision-based self-sufficient smart parking system

ABSTRACT. As devices get ever more interconnected, the concept of the Smart City is becoming less of a distant aspiration and more of a reality. One major discomfort which persists in traditional cities, from the perspective of a vehicle motorist, is the lack of guidance or coordination when in need of a parking space, be it in a shopping center, at the airport, at work, or at any other overcrowded point of interest. Motorists are frequently presented with little to no adequate information regarding parking space availability within a parking lot, and in turn end up spending excessive time in hopes of finding an open spot. This work tackles the problem of improving operational efficiency regarding the widely known problem of finding a parking space, through automation and surveillance, in hopes of providing higher quality allocation and management services. We propose a smart parking system centered around environment monitoring with the help of motion sensors and live-feed cameras, as well as efficient parking space management through a reservation system. As a result, the entire process is automated and the end user is kept up to date with regard to parking space availability, no longer needing to wander around in order to find a vacant spot. Given that the system does not require any human intervention, it efficiently manages the incoming traffic by ensuring all reservation expectations are met, and it does so effortlessly.
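The reservation side of such a system reduces to tracking each spot through a small state machine (free, reserved, occupied). A minimal sketch of that bookkeeping; the class, its methods, and the three-state model are assumptions for illustration, and the sensor/camera integration described in the abstract is out of scope here:

```python
class ParkingLot:
    """Toy reservation model: each spot is free, reserved, or occupied."""

    def __init__(self, n_spots):
        self.state = {i: "free" for i in range(n_spots)}

    def reserve(self):
        """Reserve the first free spot; return its id, or None if full."""
        for spot, s in self.state.items():
            if s == "free":
                self.state[spot] = "reserved"
                return spot
        return None

    def arrive(self, spot):
        """A driver may only occupy a spot they previously reserved."""
        if self.state.get(spot) == "reserved":
            self.state[spot] = "occupied"
            return True
        return False

    def available(self):
        """Live availability count, as shown to the end user."""
        return sum(s == "free" for s in self.state.values())

lot = ParkingLot(3)
spot = lot.reserve()   # driver books ahead; the spot is held for them
```

Because availability is updated at reservation time rather than arrival time, incoming drivers never compete for a spot that is already promised, which is the coordination gap the abstract describes.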

16:20-17:40 Session 18B: Special session on Computer Algebra and Computational Logic (2)
Location: Room 1
Proofs by Induction in a Saturation Theorem Prover

ABSTRACT. Many applications of theorem proving, for example program verification and analysis, require first-order reasoning with both quantifiers and theories such as arithmetic and datatypes. There is no complete procedure for reasoning in such theories but the state-of-the-art in automated theorem proving is still able to reason effectively with real-world problems from this rich domain. We complement theory reasoning by automating induction inside a saturation-based theorem prover. Our goal is to incorporate lightweight automated induction in a way that complements the saturation-based approach, allowing us to solve problems requiring a combination of first-order reasoning, theory reasoning, and inductive reasoning.

Why is quantifier elimination doubly exponential?