Peter Bentley - Building a Nature-Inspired Computer
09:00 | Building a Nature-Inspired Computer SPEAKER: Peter Bentley ABSTRACT. Since before the birth of computers we have strived to make intelligent machines that share some of the properties of our own brains. We have tried to make devices that quickly solve [...] For the last decade Peter Bentley and his group have made their own journey in this area. In order to overcome the observed incompatibilities between conventional architectures and [...]
Computation has always meant transformation in the past, whether it is the transformation of the position of beads on an abacus, or of electrons in a CPU. But this simple definition also allows us to call the sorting of pebbles on a beach, or the transcription of a protein, or the growth of dendrites in the brain, valid forms of computation. Such a definition is important, for it provides a common language for biology and computer science, enabling both to be understood in terms of computation. The systemic computer is designed to enable many features of natural computation and provide an effective platform for biological modeling and bio-inspired algorithms. Several [...] Through this work, many important lessons have been learned. In addition to the advances in bio-inspired computing, it is increasingly possible to see parallels between systemic computing and other techniques and architectures under development. High-performance graph-based computing or novel hardware based on memristors or neural modeling may provide excellent new substrates for systemic-style computation in the future. |
10:10 | A Surrogate-Based Strategy for Multi-Objective Tolerance Analysis in Electrical Machine Design SPEAKER: Alexandru-Ciprian Zavoianu ABSTRACT. By employing state-of-the-art automated design and optimization techniques from the field of evolutionary computation, engineers are able to discover electrical machine designs that are highly competitive with respect to several (usually conflicting) objectives like efficiency, material costs, torque ripple and others. Apart from being Pareto-optimal, a good electrical machine design must also be quite robust, i.e., it must not be sensitive with respect to its design parameters, as this would severely increase manufacturing costs or make the physical machine exhibit characteristics that are very different from those of its computer simulation model. Even when using a modern parallel/distributed computing environment, carrying out a (global) tolerance analysis of an electrical machine design is extremely challenging because of the number of evaluations that must be performed and because each evaluation requires (a series of) very time-intensive non-linear finite element (FE) simulations. In the present research, we describe how global surrogate models (ensembles of fast-to-train artificial neural networks) that are created in order to speed up the multi-objective evolutionary search can be easily reused to perform a fast tolerance analysis of the optimized designs. Using two industrial optimization scenarios, we show that the surrogate-based approach can offer very valuable insights (and sometimes very accurate predictions) regarding the local and global sensitivities of the considered objectives at a fraction of the computational cost required by an FE-based strategy. Encouraged by the good performance on individual designs, we also used the surrogate approach to track the average sensitivity of the Pareto front during the entire optimization procedure. Our results indicate that there is no general increase in sensitivity during the runs, i.e., the evolutionary algorithms used do not enter a stage where they discover electrical drive designs that trade robustness for quality. |
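The abstract above gives no implementation details; purely as an illustrative sketch of the reuse idea (not the authors' code, models or data), the following Python fragment trains a small ensemble of neural-network surrogates on a made-up objective and then uses it for a cheap local tolerance analysis of one hypothetical design. In the paper the surrogates stand in for time-intensive finite element simulations; here a cheap analytic function plays that role.

    # Illustrative sketch only (not the authors' implementation): reuse an ensemble
    # of neural-network surrogates for a cheap local tolerance analysis.
    # The objective function, design vector and tolerances are invented for demonstration.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(500, 4))                          # design parameters
    y = X[:, 0] ** 2 + 0.5 * np.sin(3 * X[:, 1]) + X[:, 2] * X[:, 3]   # stand-in objective

    # Ensemble of fast-to-train surrogates (ANNs replacing FE runs, as in the abstract).
    ensemble = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                             random_state=i).fit(X, y) for i in range(5)]

    def surrogate_eval(designs):
        """Average prediction of the ensemble."""
        return np.mean([m.predict(designs) for m in ensemble], axis=0)

    # Local tolerance analysis of one (hypothetical) Pareto-optimal design.
    design = np.array([0.2, -0.4, 0.7, 0.1])
    tolerance = 0.02                                                   # +/- per-parameter tolerance
    samples = design + rng.uniform(-tolerance, tolerance, size=(10_000, 4))
    predictions = surrogate_eval(samples)
    print("objective spread under tolerances:",
          predictions.min(), predictions.max(), predictions.std())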
10:30 | Lie Algebra-Valued Hopfield Neural Networks SPEAKER: Călin-Adrian Popa ABSTRACT. This paper introduces Lie algebra-valued Hopfield neural networks, for which the states, outputs, weights and thresholds are all from a Lie algebra. This type of network represents an alternative generalization of the real-valued neural networks, besides the complex-, hyperbolic-, quaternion-, and Clifford-valued neural networks that have been intensively studied over the last few years. The dynamics of these networks is studied from the energy function point of view, by giving the expression of such a function and proving that it is indeed an energy function for the proposed network. |
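For orientation only: the Lie algebra-valued energy function proposed in the paper is not reproduced here, but the classical real-valued discrete Hopfield energy that such generalizations extend has the well-known form (up to sign conventions)

    E(s) = -\frac{1}{2} \sum_{i \neq j} w_{ij}\, s_i s_j - \sum_i \theta_i s_i,

where $w_{ij}$ are symmetric weights with $w_{ii}=0$, $\theta_i$ are thresholds and $s_i$ the neuron states; $E$ is non-increasing under asynchronous updates, which is the kind of property the paper establishes for its generalized network.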
10:50 | Feature creation using genetic programming with application in malware detection SPEAKER: Razvan Benchea ABSTRACT. This paper extends the authors' previous research on a malware detection method, focusing on improving the accuracy of the perceptron-based One Side Class Perceptron algorithm via the use of Genetic Programming. We are concerned with finding a proper balance between the three basic requirements for malware detection algorithms: a) that their training time on large datasets falls below acceptable upper limits; b) that their false positive rate (clean/legitimate files/software wrongly classified as malware) is as close as possible to 0; and c) that their detection rate is as close as possible to 1. When the first two requirements are set as objectives for the design of detection algorithms, it often happens that the third objective is missed: the detection rate is low. This study focuses on improving the detection rate while preserving the small training time and the low rate of false positives. Another concern is to exploit the perceptron-based algorithm's good performance on linearly separable data by extracting new features from existing ones. In order to keep the overall training time low, the huge search space of possible extracted features is explored efficiently – in terms of time and memory footprint – using Genetic Programming; better separability is sought. For the experiments we used a dataset consisting of 350,000 executable files with an initial set of 300 Boolean features describing each of them. The feature-extraction algorithm is implemented in a parallel manner in order to cope with the size of the data set. We also tested different ways of controlling the growth in size of the variable-length chromosomes. The experimental results show that the features produced by this method are better than the best ones obtained through mapping, allowing for an increase in detection rate. |
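As a deliberately simplified, mutation-only sketch of the general idea of GP-style Boolean feature construction (not the authors' parallel algorithm; the dataset, operators and fitness below are toy stand-ins):

    # Simplified, mutation-only sketch of GP-style Boolean feature construction.
    import random
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.integers(0, 2, size=(2000, 300)).astype(bool)   # 300 Boolean features per sample
    labels = (X[:, 0] & X[:, 7]) | X[:, 42]                 # hidden toy target concept

    def random_tree(depth=3):
        """Random Boolean expression tree over the original features."""
        if depth == 0 or random.random() < 0.3:
            return ("feat", random.randrange(X.shape[1]))
        op = random.choice(["and", "or", "not"])
        if op == "not":
            return ("not", random_tree(depth - 1))
        return (op, random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree):
        kind = tree[0]
        if kind == "feat":
            return X[:, tree[1]]
        if kind == "not":
            return ~evaluate(tree[1])
        a, b = evaluate(tree[1]), evaluate(tree[2])
        return a & b if kind == "and" else a | b

    def fitness(tree):
        # Separability proxy: how differently the created feature fires on the two classes.
        values = evaluate(tree)
        return abs(values[labels].mean() - values[~labels].mean())

    def mutate(tree):
        return random_tree() if random.random() < 0.5 else ("not", tree)

    population = [random_tree() for _ in range(50)]
    for generation in range(30):
        population.sort(key=fitness, reverse=True)
        population = population[:25] + [mutate(random.choice(population[:25]))
                                        for _ in range(25)]
    best = max(population, key=fitness)
    print("best evolved feature:", best, "fitness:", fitness(best))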
11:10 | High Probability Mutation and Error Thresholds in Genetic Algorithms SPEAKER: Nicolae-Eugen Croitoru ABSTRACT. Error Threshold is a concept from molecular biology that has been introduced [G. Ochoa (2006) Error Thresholds in Genetic Algorithms. Evolutionary Computation Journal, 14:2, pp 157-182, MIT Press] into Genetic Algorithms and has been linked to the concept of Optimal Mutation Rate. In this paper, the author expands on previous work with a study of Error Thresholds near 1 (i.e. mutation probabilities of approx. 0.95), in the context of binary-encoded chromosomes. Comparative empirical tests are performed, and the author draws conclusions in the context of population consensus sequences, population size, error thresholds, selection pressure and the quality of the optima found. |
11:30 | A Study on Techniques for Proactively Identifying Malicious URLs SPEAKER: Dumitru Bogdan Prelipcean ABSTRACT. As most malware nowadays uses the Internet as its main doorway to infect a new system, it has become imperative for security vendors to provide cloud-based solutions that can filter and block malicious URLs. This paper presents different practical considerations related to this problem. The key points that we focus on are the usage of different machine learning techniques and unsupervised learning methods for detecting malicious URLs with respect to memory footprint. The database that we have used in this paper was collected over a period of 48 weeks and consists of approximately 6,000,000 benign and malicious URLs. We also evaluated how the detection rate and false positive rate evolved during that period and draw some conclusions related to the current malware landscape and Internet attack vectors. |
11:50 | Stock Market Trading Strategies - Applying Risk and Decision Analysis Models for Detecting Financial Turbulence SPEAKER: Monica Tirea ABSTRACT. Risk handling and evaluation plays an important role in optimizing an investment portfolio. This paper's goal is to describe a system that determines, classifies and handles the risk associated with any type of investment, based on sentiment analysis, price movement, information related to companies and certain of their characteristics, the trader's confidence level, and by measuring the potential loss over a certain period of time. This research implies analyzing the trader's risk, market risk, the risk associated with each evaluated company or financial group, and political and governmental risk. The system is able to create different types of portfolio options based on the investor/trader profile, which is built based on the user's tolerance to risk (determined by the results of an interactive quiz that the user must complete when entering the system). We propose a multi-agent system that uses different types of data (numerical, textual) to choose the appropriate mix of investments in order to minimize the risk and maximize the gain on a stock portfolio. In order to validate the results, a system was constructed. |
Stefan Woltran - Dynamic Programming on Tree Decompositions in Practice -- Some Lessons Learned
13:10 | Dynamic Programming on Tree Decompositions in Practice -- Some Lessons Learned SPEAKER: Stefan Woltran ABSTRACT. Many prominent NP-hard problems have been shown tractable for [...] In this talk, we first give a brief introduction to the [...] |
14:20 | Symbolic Derivation of Mean-Field PDEs from Lattice-Based Models SPEAKER: Helene Ranetbauer ABSTRACT. Transportation processes, which play a prominent role in the life and social sciences, are typically described by discrete models on lattices. For studying their dynamics, a continuous formulation of the problem via partial differential equations (PDE) is employed. In this paper we propose a symbolic computation approach to derive mean-field PDEs from a lattice-based model. We start with the microscopic equations, which give the probability of finding a particle at a given lattice site. Then the PDEs are formally derived by Taylor expansions of the probability densities and by passing to an appropriate limit as the time steps and the distances between lattice sites tend to zero. We present an implementation in a computer algebra system that performs this transition for a general class of models. In order to rewrite the mean-field PDEs in a conservative formulation, we adapt and implement symbolic integration methods that can handle unspecified functions in several variables. To illustrate our approach, we consider an application in crowd motion analysis where the dynamics of bidirectional flows are studied. However, the presented approach can be applied to various transportation processes of multiple species with variable size in any dimension, for example, to confirm several proposed mean-field models for cell motility. |
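The paper's implementation targets a general class of models; as a minimal illustration of the Taylor-expansion-and-limit step only, the following sympy fragment derives the heat equation from a 1D unbiased random walk (a far simpler lattice model than the bidirectional one studied in the paper):

    # Toy sympy illustration of the Taylor-expansion-and-limit step on the simplest
    # lattice model (a 1D unbiased random walk), not the paper's bidirectional model.
    import sympy as sp

    x, t, h, dt, D = sp.symbols("x t h dt D", positive=True)
    rho = sp.Function("rho")(x, t)

    def taylor(expr, var, step, order):
        """Taylor expansion of expr with var shifted by step, around step = 0."""
        return sum(sp.diff(expr, var, k) * step**k / sp.factorial(k)
                   for k in range(order + 1))

    # Microscopic equation: probability of occupying site x after one time step,
    # with jumps of size h to the left or right, each with probability 1/2.
    rhs = sp.Rational(1, 2) * taylor(rho, x, -h, 4) + sp.Rational(1, 2) * taylor(rho, x, h, 4)
    lhs = taylor(rho, t, dt, 1)

    # Difference quotient, diffusive scaling dt = h**2/(2*D), then h -> 0
    # (a plain substitution suffices here, since only nonnegative powers of h remain).
    balance = sp.expand((rhs - lhs) / dt).subs(dt, h**2 / (2 * D)).subs(h, 0)
    print(sp.Eq(balance, 0))   # the heat equation: D*rho_xx - rho_t = 0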
14:40 | Computation of GCD of Sparse Multivariate Polynomial by Extended Hensel Construction SPEAKER: unknown ABSTRACT. Let $F$ be a squarefree multivariate polynomial in main variable $x$ and subvariables $u,v,\dots$. If the leading coefficient (LC) of $F$ w.r.t. $x$ vanishes at the origin of the subvariables then we say that the LC of $F$ is singular. A representative algorithm for multivariate polynomial GCD is the EZ-GCD, which is based on the generalized Hensel construction (GHC). In order to apply the GHC, $F$ must be such that 1) the LC of $F$ is non-singular and 2) the initial Hensel factor of the GCD is ``lucky''. These requirements are usually satisfied by a ``nonzero substitution'', i.e., by shifting the origin of the subvariables. However, the origin shifting may cause a drastic increase in the number of terms of $F$ if $F$ is sparse. In 1993, Sasaki and Kako proposed the extended Hensel construction (EHC), which is the Hensel construction for multivariate polynomials with singular LC. Using the EHC, Inaba implemented an algorithm for multivariate polynomial factorization and verified that it is very useful for sparse polynomials. In this paper, we apply the EHC to the computation of the GCD of sparse multivariate polynomials. In order to find a lucky initial factor, we utilize the weighting of subvariables, etc. We report results of preliminary experiments which show that our method is promising. |
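To illustrate the motivation stated above (not the EHC itself), the following sympy fragment shows how a nonzero substitution destroys sparsity for an arbitrarily chosen sparse polynomial whose leading coefficient w.r.t. $x$ is singular:

    # Illustration of the motivation only: shifting the origin of the subvariables
    # can destroy sparsity.  The example polynomial is chosen arbitrarily.
    import sympy as sp

    x, u, v = sp.symbols("x u v")
    F = u * v * x**6 + u**9 * x**3 + v**9 * x + u**5 * v**5 + 1   # sparse; LC = u*v vanishes at the origin
    F_shifted = sp.expand(F.subs({u: u + 1, v: v + 1}))           # "nonzero substitution" u -> u+1, v -> v+1
    print(len(F.as_ordered_terms()), "terms before the shift")
    print(len(F_shifted.as_ordered_terms()), "terms after the shift")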
15:00 | Lagrange Inversion and series for Lambert W SPEAKER: unknown ABSTRACT. We show that Lagrange inversion can be used to obtain closed-form expressions for a number of series expansions of the Lambert $W$ function. Equivalently, we obtain expressions for the $n$th derivative. Various integer sequences related to the series expansions can now be expressed in closed form. |
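For orientation, the classical result of this type for the principal branch, recalled here in its standard form (the paper treats further expansions not reproduced here): applying Lagrange inversion to $x = W e^{W}$ gives

    [x^n]\, W_0(x) = \frac{1}{n}\,[t^{n-1}]\, e^{-nt} = \frac{(-n)^{n-1}}{n!},
    \qquad\text{so}\qquad
    W_0(x) = \sum_{n \ge 1} \frac{(-n)^{n-1}}{n!}\, x^n, \quad |x| < 1/e.

The first terms, $x - x^2 + \tfrac{3}{2}x^3 - \tfrac{8}{3}x^4 + \dots$, can be checked, e.g., against a computer algebra system's series expansion of LambertW.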
15:20 | Towards an automatic tool for multi-scale model derivation illustrated with a micro-mirror array SPEAKER: Walid Belkhir ABSTRACT. This paper reports recent advances in the development of a symbolic asymptotic modeling software package, called MEMSALab, which will be used for the automatic generation of asymptotic models for arrays of micro- and nanosystems. The main purpose of this software is to construct models incrementally so that model features can be included step by step. This idea, conceptualized under the name "by extension-combination", is presented for the first time, after the general functioning principles have been recalled. A user-friendly language recently introduced is also briefly discussed. We illustrate the mathematical operations that need to be implemented in MEMSALab with an example of an asymptotic model for the stationary heat equation in a Micro-Mirror Array developed for astrophysics. |
14:20 | Investigation on Parameter Effect for Semi-Automatic Contour Detection in Histopathological Image Processing SPEAKER: Ruxandra Stoean ABSTRACT. Histopathological image understanding is a demanding task for pathologists, involving the risky decision of confirming or denying the presence of cancer. What is more, the increased incidence of the disease, on the one hand, and the current prevention screening, on the other, result in an immense quantity of such pictures. For the colorectal cancer type in particular, a computational approach attempts to learn from small manually annotated portions of images and extend the findings to the complete ones. As the output of such techniques highly depends on the input variables, the current study conducts an investigation of the effect that the choices of parameter values have on automatic contour detection, from a cropped section to the complete image. |
14:40 | Generating Healthy Menus for Older Adults using a Hybrid Honey Bees Mating Optimization Approach SPEAKER: Cristina Bianca Pop ABSTRACT. This paper models the problem of generating healthy menu recommendations for older adults as an optimization problem and proposes a hybrid Honey Bees Mating Optimization method for solving it. The method hybridizes the state-of-the-art Honey Bees Mating Optimization meta-heuristic by injecting strategies inspired by Genetic Algorithms, Hill Climbing, Simulated Annealing, and Tabu Search into the steps that generate new solutions of the optimization problem. The method has been integrated in a food ordering system enabling older adults to order food daily. Experiments have been conducted on several hybridization configurations to identify the most appropriate hybridization, the one that leads to the healthy menu recommendation that best satisfies the older adult's diet recommended by the nutritionist, their culinary preferences, and time and price constraints. |
15:00 | NSC-PSO, a novel PSO variant without speeds and coefficients SPEAKER: George Anescu ABSTRACT. The paper introduces the principles of a new global optimization method, No Speeds and Coefficients Particle Swarm Optimization (NSC-PSO), applied to the Continuous Global Optimization Problem (CGOP). Inspired by existing meta-heuristic optimization methods from the Swarm Intelligence (SI) class, like canonical Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC), the proposed NSC-PSO method improves over canonical PSO by eliminating the need for the particle speeds and the coefficients specific to the method. To demonstrate the competitiveness of the proposed NSC-PSO method, it was compared with the ABC method on a test bed of 10 known multimodal optimization problems by applying an appropriate testing methodology. Experimental results showed overall increased success rates and increased efficiency of the NSC-PSO method over the ABC method and demonstrated that it is a promising approach to the CGOP. |
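For reference, the canonical PSO update that NSC-PSO dispenses with is, in its standard textbook form (not the paper's notation),

    v_i^{t+1} = \omega\, v_i^{t} + c_1 r_1 \,(p_i - x_i^{t}) + c_2 r_2\, (g - x_i^{t}),
    \qquad
    x_i^{t+1} = x_i^{t} + v_i^{t+1},

with inertia weight $\omega$, acceleration coefficients $c_1, c_2$, uniform random numbers $r_1, r_2$, personal best $p_i$ and global best $g$; the speeds $v_i$ and the coefficients $\omega, c_1, c_2$ are exactly what the proposed variant removes.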
15:20 | Unsupervised Aspect Level Sentiment Analysis Using Self-organizing Maps SPEAKER: Emil Stefan Chifu ABSTRACT. This paper presents an unsupervised method for aspect-level sentiment analysis that uses Growing Hierarchical Self-organizing Maps. Different sentences in a product review refer to different aspects of the reviewed product. We use the Growing Hierarchical Self-organizing Maps in order to classify the review sentences. This way we determine whether the various aspects of the target entity (e.g. a product) are opinionated with positive or negative sentiment in the review sentences. By classifying the sentences against a domain-specific, tree-like ontological taxonomy of aspects and of the sentiments associated with the aspects (positive/negative sentiments), we effectively classify the opinion polarity as expressed in sentences about the different aspects of the target object. The proposed approach has been tested on a collection of product reviews, more precisely reviews of photo cameras. |
16:00 | Computation of Stirling numbers and generalizations SPEAKER: unknown ABSTRACT. We consider the computation of Stirling numbers for positive and negative arguments. We describe a new computational scheme for Stirling cycle numbers that is more efficient than the one presently implemented in (for example) \textsc{Maple}. We also discuss generalizations of Stirling numbers, specifically those due to Flajolet and Prodinger. It becomes possible to evaluate Stirling numbers for negative arguments. The question of the value at the origin is also discussed; the point is a singular one, and different possibilities are considered. |
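As a baseline only (not the more efficient scheme or the negative-argument generalizations proposed in the paper), the unsigned Stirling cycle numbers satisfy the classical recurrence c(n,k) = c(n-1,k-1) + (n-1)*c(n-1,k), which a few lines of Python implement directly:

    # Baseline only: classical recurrence for unsigned Stirling cycle numbers
    # (Stirling numbers of the first kind).
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def stirling_cycle(n, k):
        """Number of permutations of n elements with exactly k cycles."""
        if n == 0 and k == 0:
            return 1
        if n == 0 or k == 0:
            return 0
        return stirling_cycle(n - 1, k - 1) + (n - 1) * stirling_cycle(n - 1, k)

    print(stirling_cycle(5, 2))   # 50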
16:20 | Solving SAT by an Iterative Version of the Inclusion-Exclusion Principle SPEAKER: Gabor Kusper ABSTRACT. Our goal is to present a basic, novel, and correct SAT solver algorithm, show its soundness, compare it with a standard SAT solver, and give some ideas about the cases in which it might be competitive. We do not present a fine-tuned, state-of-the-art SAT solver, only a new basic algorithm. We introduce \texttt{CCC}, a SAT solver algorithm which is an iterative version of the inclusion-exclusion principle. \texttt{CCC} stands for \texttt{C}ounting \texttt{C}lear \texttt{C}lauses. It counts those full-length (in our terminology: clear) clauses which are subsumed by the input SAT problem. Full-length clauses are $n$-clauses, where $n$ is the number of variables in the input problem. A SAT problem is satisfiable if it does not subsume all $n$-clauses. The idea is that in an $n$-clause each of the $n$ variables is present either as a positive literal or as a negative one, so we can represent an $n$-clause by $n$ bits. \texttt{CCC} is motivated by the inclusion-exclusion principle: it counts full-length clauses, as the principle does in the case of the SAT problem, but in an iterative way. It works in the following way: it sets its counter to $0$ and converts $0$ to an $n$-clause, which is the one with only negative literals. It checks whether this $n$-clause is subsumed by the input SAT problem. If yes, it increases the counter and repeats the loop. If not, we have a model, which is given by the negation of this $n$-clause. We show that almost always we can increase the counter by more than one. We show that this algorithm always stops and finds a model if there is one. We present a worst-case time complexity analysis and a lot of test results. The test results show that this basic algorithm can outperform a standard SAT solver, although its implementation is very simple, without any optimization. \texttt{CCC} is competitive if the input problem contains a lot of short clauses. Our implementation can be downloaded and the reader is welcome to make a better solver out of it. We believe that this new algorithm could serve as a good basis for parallel algorithms, because its memory usage is constant and no communication is needed between the nodes. |
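A direct, unoptimized transcription of the loop described above (the "increase the counter by more than one" refinement is omitted), with clauses represented as sets of signed integer literals:

    # Direct, unoptimized transcription of the described loop; not the authors' implementation.
    # Clauses are frozensets of integer literals: +i for variable i, -i for its negation.

    def counter_to_full_clause(counter, n):
        """Bit j of the counter decides the sign of variable j+1 (0 -> negative literal)."""
        return frozenset(j + 1 if (counter >> j) & 1 else -(j + 1) for j in range(n))

    def subsumed(full_clause, cnf):
        """The input CNF subsumes a full-length clause iff some clause is a subset of it."""
        return any(clause <= full_clause for clause in cnf)

    def ccc(cnf, n):
        """Return a model (as a set of true literals) or None if unsatisfiable."""
        counter = 0
        while counter < 2 ** n:
            full_clause = counter_to_full_clause(counter, n)
            if not subsumed(full_clause, cnf):
                return {-lit for lit in full_clause}   # negation of the n-clause is a model
            counter += 1
        return None

    # Tiny example: (x1 or x2) and (not x1 or x3)
    cnf = [frozenset({1, 2}), frozenset({-1, 3})]
    print(ccc(cnf, 3))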
16:00 | Measuring and Comparing the Scaling Behaviour of a High-Performance CFD Code on Different Supercomputing Infrastructures SPEAKER: unknown ABSTRACT. Parallel code design is a challenging task, especially when addressing petascale systems for massive parallel processing (MPP), i.e. parallel computations on several hundreds of thousands of cores. Our in-house computational fluid dynamics code was designed for such high-fidelity runs in order to exhibit excellent scalability. The basis for this code is an adaptive hierarchical data structure together with an efficient communication and (numerical) computation scheme that supports MPP. For a detailed scalability analysis, we performed several experiments on two of Germany's national supercomputers with up to 140,000 processes. In this paper, we show the results of those experiments and discuss the bottlenecks that were observed while solving engineering-based problems such as porous media flows or thermal comfort assessments for problem sizes of up to several hundred billion degrees of freedom. |
16:20 | Extensions over OpenCL for latency reduction and critical applications SPEAKER: Grigore Lupescu ABSTRACT. Hardware and software stack complexity makes programming GPGPUs difficult and limits application portability. This article first discusses the challenges imposed by the current hardware and software model in GPGPU systems, which relies heavily on the HOST device (CPU). We then identify system bottlenecks both in the hardware design and in the software stack, and present two ideas for extending the HOST and DEVICE sides of the OpenCL API with the aim of improving latency and device safety. Our first goal is HOST-side latency reduction using user synchronization directives. Our second goal is to improve DEVICE-side latency and add safety through a software layer which manages kernel execution. For both HOST- and DEVICE-side latency reduction we present concrete performance results. |
17:20 | Automatic Language Identification for Romance Languages using Stop Words and Diacritics SPEAKER: Ciprian-Octavian Truică ABSTRACT. Automatic language identification is a natural language processing problem that tries to determine the natural language of a given content. In this paper we present a statistical method for automatic language identification of written text using dictionaries containing stop words and diacritics. We propose different approaches that combine the two dictionaries to accurately determine the language of textual corpora. This method was chosen because stop words and diacritics are very specific to a language; although some languages share similar words and special characters, they are not all common. The languages taken into account were Romance languages, because they are very similar and it is usually hard to distinguish between them from a computational point of view. We have tested our method using a Twitter corpus and a news article corpus. Both corpora consist of UTF-8 encoded text, so the diacritics could be taken into account; in the case that the text has no diacritics, only the stop words are used to determine the language of the text. The experimental results show that the proposed method has an accuracy of over 90% for small texts and over 99.8% for large texts. |
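A minimal sketch of the scoring idea (count stop-word and diacritic hits per language and take the maximum); the word and character lists below are tiny illustrative fragments, not the dictionaries used in the paper, and the equal weighting is a guess:

    # Minimal sketch of the scoring idea; the dictionaries are tiny illustrative fragments.
    STOP_WORDS = {
        "romanian": {"și", "este", "care", "nu", "pentru"},
        "french":   {"et", "est", "dans", "pour", "les"},
        "spanish":  {"y", "es", "en", "para", "los"},
        "italian":  {"e", "è", "nel", "per", "gli"},
    }
    DIACRITICS = {
        "romanian": set("ăâîșț"),
        "french":   set("éèêàçù"),
        "spanish":  set("áéíóúñ"),
        "italian":  set("àèéìòù"),
    }

    def identify_language(text, stop_weight=1.0, diacritic_weight=1.0):
        words = text.lower().split()
        scores = {}
        for lang in STOP_WORDS:
            stop_hits = sum(1 for w in words if w in STOP_WORDS[lang])
            diacritic_hits = sum(1 for ch in text.lower() if ch in DIACRITICS[lang])
            scores[lang] = stop_weight * stop_hits + diacritic_weight * diacritic_hits
        return max(scores, key=scores.get), scores

    print(identify_language("este o zi frumoasă și liniștită"))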
17:40 | Improving malware detection response time with behavior-based statistical analysis techniques SPEAKER: Dumitru Bogdan Prelipcean ABSTRACT. Detection of malicious software is a current problem that can be approached in several ways. Among these we mention signature-based detection, heuristic detection and behavioral analysis. In the last year the number of malicious files has increased exponentially. At the same time, automated obfuscation methods (used to generate malicious files with similar behavior and different appearance) have grown significantly. In response to these new obfuscation methods, many security vendors have introduced file reputation techniques to quickly single out potentially clean and malicious samples. In this paper we present a statistics-based method that can be used to identify a specific dynamic behavior of a program. The main idea behind this solution is to analyze the execution flow of every file and to extract sequences of native system functions with a potentially malign outcome. This technique is reliable against most forms of malware polymorphism and is intended to work as a filtering system for different automated detection systems. We use a database consisting of approximately 50,000 malicious files gathered over the last three months and almost 3,000,000 clean files collected over a period of 3 years. Our technique proved to be an effective filtering method and helped us improve our detection response time against the most prevalent malware families discovered in the last year. |
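One plausible concrete form of the described filtering (not necessarily the authors' exact statistic) is to extract n-grams of native system call names from execution traces and keep those that appear only in known-malicious traces; the traces below are invented toy examples:

    # Sketch of n-gram extraction over native API call traces; traces are toy examples.
    from collections import Counter

    def call_ngrams(trace, n=3):
        return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

    malicious_traces = [
        ["NtOpenProcess", "NtWriteVirtualMemory", "NtCreateThreadEx", "NtClose"],
        ["NtOpenProcess", "NtWriteVirtualMemory", "NtCreateThreadEx", "NtDelayExecution"],
    ]
    clean_traces = [
        ["NtCreateFile", "NtReadFile", "NtClose"],
    ]

    malicious_counts = Counter(g for t in malicious_traces for g in call_ngrams(t))
    clean_counts = Counter(g for t in clean_traces for g in call_ngrams(t))

    # n-grams seen only in malicious traces are candidate behavioral signatures.
    suspicious = {g: c for g, c in malicious_counts.items() if g not in clean_counts}
    print(suspicious)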
18:00 | Business reviews classification using sentiment analysis SPEAKER: Andreea Salinca ABSTRACT. The research area of sentiment analysis (also known as opinion mining, sentiment mining or sentiment extraction) has been gaining popularity in recent years. Online reviews are becoming very important in measuring the quality of a business. This paper presents a sentiment analysis approach to business review classification using a large review dataset provided by Yelp: the Yelp Challenge dataset. In this work we propose several approaches for automatic sentiment classification, using two feature extraction methods and four machine learning models. We also present a comparative study of the effectiveness of ensemble methods for review sentiment classification. |
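The abstract names two feature extraction methods and four models without detail; a generic sketch of one plausible configuration (TF-IDF features plus a linear classifier via scikit-learn), on placeholder reviews rather than the Yelp Challenge data:

    # Generic sketch of one plausible configuration; not the paper's exact feature sets,
    # models, or data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    reviews = ["great food and friendly staff", "terrible service, never again",
               "lovely atmosphere, will come back", "cold meal and rude waiter"]
    labels = [1, 0, 1, 0]   # 1 = positive, 0 = negative

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(reviews, labels)
    print(model.predict(["the staff was friendly and the food great"]))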
18:20 | Comparative Analysis of Existing Architectures for General Game Agents SPEAKER: Ionel-Alexandru Hosu ABSTRACT. This paper addresses the development of general-purpose game agents able to learn a vast number of games using the same architecture. The article analyzes the main existing approaches to general game playing and reviews their results. Methods such as deep learning, reinforcement learning and evolutionary algorithms are considered for this problem. The testing platform is the popular video game console Atari 2600. Research into developing general-purpose agents for games is closely related to achieving artificial general intelligence (AGI). |
17:20 | Evaluation of geomorphons as a basis for quantifying contextual information SPEAKER: unknown ABSTRACT. Currently, landform classification and mapping is one of the most active areas of geomorphometry [1]. Based on the principle of pattern recognition rather than differential geometry, Stepinski and Jasiewicz (2011) proposed a new qualitative classification of landform types, called the geomorphon [2]. The geomorphon is a new concept for the visualization and analysis of landform elements at a broad range of scales by using line-of-sight based neighborhoods. However, there is still a lack of studies approaching the issue of classifying repeating patterns of landform types by analyzing digital elevation models (DEMs). The importance of this issue stems from the need to relate landforms to context. Considering this assumption, the delimitation of landform elements should be followed by contextual and topological analysis [3]. Therefore, our interest is to test the potential of geomorphons to produce landform elements that are suitable for quantifying landscape metrics [4]. Introduced in landscape ecology to evaluate the spatial structure of a landscape, landscape metrics were thought useful to complement local derivatives in creating geometric signatures of topography [5, 6]. This approach relies on the potential of landscape metrics to evaluate landform patterns and account for spatial context in geomorphometric analysis. The quantification of landscape metrics has been carried out on the geomorphon map. In order to exploit the advantages of the geomorphon method, a trial-and-error approach was used for several parameters which need to be set optimally for the given area of interest. Furthermore, a set of statistical analyses was carried out in order to summarize the available data, extract useful information and formulate hypotheses for further research. The statistical analysis is mostly focused on finding related variables and groupings of similar observations. Therefore, Principal Component Analysis (PCA) is used as a tool for dimensionality reduction, while the Self-organizing Map (SOM) is used as an alternative method for the optimal visualization and clustering of landscape metrics. The proposed methodology has been applied to freely available SRTM DEMs. The current approach provides a first prospect regarding the usefulness of geomorphons as a basis for the quantification of landscape metrics. We expect the additional information on pattern and context to be crucial in the ontology of landform types. |
17:40 | A comparison of pixel-based and geographic object-based image analysis for the classification of soil types SPEAKER: Andrei Dornik ABSTRACT. Geographic Object-Based Image Analysis (GEOBIA) is a new and evolving paradigm in remote sensing and geographic information systems, being not just a collection of segmentation, analysis and classification methods, but having its own specific tools, software, rules, and language (Blaschke et al, 2014). GEOBIA emerged as an alternative to pixel-based approaches, aiming to partition remote sensing imagery into homogeneous image-objects based on image segmentation. In addition, GEOBIA has recently been successfully applied to digital elevation models (DEMs) for landform classification. Despite numerous arguments, there have been no attempts to compare object-based and pixel-based approaches for digital soil type mapping, and very few attempts to exploit object-based analysis of DEM derivatives or remote sensing images in digital soil mapping. The main objective of this study is to assess the object-based approach through comparison of its results with the results of pixel-based classification for digital soil type mapping. Both approaches are based on the Random Forests (RF) classifier, using DEM derivatives and digital maps representing vegetation cover as soil covariates. Two DEM derivatives, valley depth and the SAGA wetness index, were segmented with the multi-resolution segmentation algorithm, resulting in homogeneous objects, which were further classified into soil types using the RF method. A pixel-based classification of soil types was also performed using the RF method. The resulting maps were assessed in terms of their accuracy using the control soil profile dataset. The overall accuracy of the object-based soil map was 58%, 10% higher than that of the pixel-based soil map, and its kappa coefficient was 0.41, higher by 0.14, respectively. The statistical results showed that the object-based soil map attains higher overall accuracy, kappa coefficient, producer's accuracy and user's accuracy than the pixel-based map for five soil types out of six. Probably due to the reduced number of training samples, four soil types out of ten were incorrectly predicted by both methods, with a kappa index of 0. The results of our experiments show that the object-based method using RF and environmental variables is superior to the pixel-based approach, leading to higher accuracy values for soil type classification. |
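A sketch of the pixel-based branch only, on synthetic covariates (not the authors' data or the GEOBIA workflow), showing the two accuracy measures reported above:

    # Sketch of the pixel-based branch: a Random Forest on synthetic terrain covariates,
    # evaluated with overall accuracy and Cohen's kappa.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, cohen_kappa_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    covariates = rng.normal(size=(1000, 3))          # e.g. valley depth, wetness index, vegetation
    soil_type = (covariates[:, 0] + 0.5 * covariates[:, 1] > 0).astype(int)  # toy 2-class target

    X_train, X_test, y_train, y_test = train_test_split(covariates, soil_type, random_state=0)
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    predicted = rf.predict(X_test)
    print("overall accuracy:", accuracy_score(y_test, predicted))
    print("kappa coefficient:", cohen_kappa_score(y_test, predicted))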
18:00 | Assessing the potential of segmentation methods applied on digital terrain models as support for soil mapping SPEAKER: Marinela Chetan ABSTRACT. Soil units are the fundamental elements used in soil mapping, and traditional delineation techniques require huge amounts of material and time resources. In recent decades, numerous studies have addressed the issue of automatic or semi-automatic extraction of soil units, this being a major objective for efficient soil management. These studies used pixel-based methods, at present the main paradigm in spatial analysis. Blaschke and Strobl (2001) argued that the pixel, although it represents geo-spatial information, is not grounded in spatial concepts. Relatively recently, object-based image analysis, which aims to delineate homogeneous spatial objects, was developed in the field of remote sensing. Object-based techniques have also been successfully applied on digital elevation models for landform classification. The main objective of this study is to assess the potential of three segmentation tools applied on digital terrain models for the automatic delineation of preliminary soil units. The three segmentation tools are: the original Physiographic Tool (PT), the PT in which Level 3 is based on slope (PT-slope), and version two of the Estimation of Scale Parameter (ESP2). The comparison was performed to determine the suitability of object-based methods, in particular PT and ESP2, for the delineation of soil units at a scale of 1:1,000,000. Evaluation of these units was conducted through visual and quantitative comparison with the soil units of Canada (SLC), a model obtained with traditional techniques. The results of this study show that the three segmentation tools applied on digital terrain models produced different results, making it difficult to determine which is the most appropriate. Since the analysis scale is very coarse, the first levels of all three tools are inappropriate for the delineation of preliminary soil units at this scale. The areal extent of the objects at Levels 2 and 3 obtained with ESP2 and PT-slope is similar to SLC, but they contain many heterogeneous objects, thus being inappropriate for the proposed objective. Regarding all aspects of the comparison, Levels 2 and 3 obtained with the PT produced the results most similar to SLC. Therefore we conclude that these are the most appropriate for the delineation of preliminary soil units at the 1:1,000,000 scale. |
18:20 | The impact of using a 4D data assimilation scheme in WRF-ARW Model SPEAKER: unknown ABSTRACT. The need for a weather prediction system in an area with a high flood risk is essential, because it can alert the authorities to upcoming weather phenomena that could pose a real threat. In order to achieve this, we developed the Rapid Refresh WRF (RR-WRF), a weather prediction system based on the WRF-ARW limited area model for Romania and the surrounding areas. For this system, the grid-point resolution and the data assimilation technique played a key role in the overall forecast accuracy. Initially we developed RR-WRF V1, which performed well in terms of accuracy, but because of the data assimilation technique used, some errors were introduced in the first part of the forecast and the model became numerically unstable in the first 12 hours of the forecast. To overcome this, we developed the second RR-WRF version (RR-WRF V2), in which another data assimilation and runtime methodology was used. The differences were modest in terms of overall average absolute error, but put into geographic perspective, we found that the RR-WRF V2 system performed better in non-mountainous areas. Also, we found that in zones that have fewer weather stations (which were used in the new data assimilation approach), the errors were considerably larger for both temperature and relative humidity. Although we did not perform any objective verification for the precipitation forecast, based on a subjective comparison between the forecast and Doppler radar data for a severe precipitation event, we found that with the new data assimilation and initialization methodology the spatial distribution of precipitation was improved. However, we need to carry out more studies and analyze more cases in order to reach a conclusive result. |