14:20 | Investigation on Parameter Effect for Semi-Automatic Contour Detection in Histopathological Image Processing SPEAKER: Ruxandra Stoean ABSTRACT. Histopathological image understanding is a demanding task for pathologists, involving the risky decision of confirming or denying the presence of cancer. What is more, the increased incidence of the disease, on the one hand, and current prevention screening, on the other, result in an immense quantity of such pictures. For colorectal cancer in particular, a computational approach attempts to learn from small manually annotated portions of images and extend the findings to the complete ones. As the output of such techniques depends heavily on the input variables, the current study investigates the effect that parameter value choices have on automatic contour detection, from a cropped section to the complete image.
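The abstract does not specify the contour-detection pipeline; as a minimal sketch of the kind of parameter study described, the snippet below compares OpenCV Canny/findContours output across edge-threshold choices on a cropped patch versus a full image. The thresholds, patch size, and synthetic input are assumptions, not the authors' setup.

```python
import cv2
import numpy as np

def count_contours(gray, lo, hi):
    """Count contours after Canny edge detection with thresholds (lo, hi)."""
    edges = cv2.Canny(gray, lo, hi)
    # OpenCV 4.x returns (contours, hierarchy)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return len(contours)

# Synthetic stand-in for a histopathological slide; real input would be an image file.
rng = np.random.default_rng(0)
full = (rng.random((512, 512)) * 255).astype(np.uint8)
crop = full[:128, :128]  # the small manually annotated portion

# Sweep threshold pairs and compare behaviour on the crop vs. the complete image.
for lo, hi in [(50, 150), (100, 200), (150, 250)]:
    print(f"thresholds ({lo},{hi}): crop={count_contours(crop, lo, hi)} "
          f"full={count_contours(full, lo, hi)}")
```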
14:40 | Generating Healthy Menus for Older Adults using a Hybrid Honey Bees Mating Optimization Approach SPEAKER: Cristina Bianca Pop ABSTRACT. This paper models the generation of healthy menu recommendations for older adults as an optimization problem and proposes a hybrid Honey Bees Mating Optimization method for solving it. The method hybridizes the state-of-the-art Honey Bees Mating Optimization meta-heuristic by injecting strategies inspired by Genetic Algorithms, Hill Climbing, Simulated Annealing, and Tabu Search into the steps that generate new solutions of the optimization problem. The method has been integrated into a food ordering system enabling older adults to order food daily. Experiments have been conducted on several hybridization configurations to identify the one that leads to the healthy menu recommendation best satisfying the diet recommended by the nutritionist, the older adult's culinary preferences, and time and price constraints.
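As a minimal sketch of the hybridization idea (the menu encoding, food items, and fitness scores below are hypothetical, not the paper's model), a hill-climbing step can be injected into the brood-improvement phase of a basic HBMO loop:

```python
import random

FOODS = ["oatmeal", "salad", "soup", "fish", "yogurt"]                  # hypothetical items
SCORE = {"oatmeal": 8, "salad": 9, "soup": 7, "fish": 9, "yogurt": 6}   # toy diet-fit scores

def fitness(menu):
    # Higher is better: sum of per-item diet-suitability scores (toy model).
    return sum(SCORE[item] for item in menu)

def hill_climb(menu, tries=10):
    # Injected local-search strategy: worker-bee brood improvement.
    best = list(menu)
    for _ in range(tries):
        cand = list(best)
        cand[random.randrange(len(cand))] = random.choice(FOODS)
        if fitness(cand) > fitness(best):
            best = cand
    return best

def hbmo(pop_size=10, generations=20, menu_len=3):
    drones = [[random.choice(FOODS) for _ in range(menu_len)] for _ in range(pop_size)]
    queen = max(drones, key=fitness)
    for _ in range(generations):
        mate = random.choice(drones)
        cut = random.randrange(1, menu_len)
        brood = queen[:cut] + mate[cut:]     # GA-style crossover of queen and drone
        brood = hill_climb(brood)            # hybridization: improve the brood locally
        if fitness(brood) > fitness(queen):
            queen = brood
    return queen

print(hbmo())
```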
15:00 | NSC-PSO, a novel PSO variant without speeds and coefficients SPEAKER: George Anescu ABSTRACT. The paper introduces the principles of a new global optimization method, No Speeds and Coefficients Particle Swarm Optimization (NSC-PSO), applied to the Continuous Global Optimization Problem (CGOP). Inspired by existing meta-heuristic optimization methods from the Swarm Intelligence (SI) class, such as canonical Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC), the proposed NSC-PSO method improves on canonical PSO by eliminating the particle speeds and the method-specific coefficients. To prove its competitiveness, NSC-PSO was compared with the ABC method on a test bed of 10 known multimodal optimization problems using an appropriate testing methodology. Experimental results showed overall higher success rates and higher efficiency for NSC-PSO than for ABC, demonstrating that it is a promising approach to the CGOP.
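The abstract does not give the actual update rule, so the following is only a sketch of the general idea of a velocity-free, coefficient-free swarm update: each particle samples a new position in the box spanned by its personal best and the global best, plus a shrinking perturbation, tested here on the Rastrigin function.

```python
import numpy as np

def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def nsc_pso_like(dim=5, n_particles=30, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.12, 5.12, (n_particles, dim))
    pbest = pos.copy()
    pbest_f = np.array([rastrigin(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for t in range(iters):
        for i in range(n_particles):
            # No velocities, no inertia/acceleration coefficients: sample a
            # point between pbest_i and gbest, plus a decaying perturbation.
            lo = np.minimum(pbest[i], gbest)
            hi = np.maximum(pbest[i], gbest)
            step = (1 - t / iters) * rng.normal(0, 0.1, dim)
            cand = rng.uniform(lo, hi) + step
            f = rastrigin(cand)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = cand, f
                if f < rastrigin(gbest):
                    gbest = cand.copy()
    return gbest, rastrigin(gbest)

print(nsc_pso_like())
```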
15:20 | Unsupervised Aspect Level Sentiment Analysis Using Self-organizing Maps SPEAKER: Emil Stefan Chifu ABSTRACT. This paper presents an unsupervised method for aspect-level sentiment analysis that uses Growing Hierarchical Self-organizing Maps. Different sentences in a product review refer to different aspects of the reviewed product. We use the Growing Hierarchical Self-organizing Maps to classify the review sentences, determining whether the various aspects of the target entity (e.g. a product) are opinionated with positive or negative sentiment in the review sentences. By classifying the sentences against a domain-specific, tree-like ontological taxonomy of aspects and the sentiments associated with them (positive/negative), we effectively classify the opinion polarity expressed in sentences about the different aspects of the target object. The proposed approach has been tested on a collection of product reviews, specifically reviews of photo cameras.
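A flat SOM can stand in for the hierarchical GHSOM in a minimal sketch of the clustering step; the sentences, TF-IDF features, and map size below are assumptions, and no aspect taxonomy is modeled.

```python
import numpy as np
from minisom import MiniSom          # pip install minisom
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical review sentences; real input would be photo-camera reviews.
sentences = [
    "the lens is sharp and bright",
    "battery life is disappointing",
    "autofocus is fast and accurate",
    "the battery drains too quickly",
]

# Vectorize sentences, then let a flat SOM (standing in for the hierarchical
# GHSOM) map them; sentences about the same aspect land on nearby units.
X = TfidfVectorizer().fit_transform(sentences).toarray()
som = MiniSom(3, 3, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=42)
som.train_random(X, 500)

for s, x in zip(sentences, X):
    print(som.winner(x), "<-", s)
```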
16:00 | Measuring and Comparing the Scaling Behaviour of a High-Performance CFD Code on Different Supercomputing Infrastructures SPEAKER: unknown ABSTRACT. Parallel code design is a challenging task, especially when addressing petascale systems for massive parallel processing (MPP), i.e. parallel computations on several hundreds of thousands of cores. Our in-house computational fluid dynamics code was designed for such high-fidelity runs in order to exhibit excellent scalability. The basis of this code is an adaptive hierarchical data structure together with an efficient communication and (numerical) computation scheme that supports MPP. For a detailed scalability analysis, we performed several experiments on two of Germany's national supercomputers with up to 140,000 processes. In this paper, we show the results of those experiments and discuss the bottlenecks observed while solving engineering problems, such as porous media flows or thermal comfort assessments, for problem sizes of up to several hundred billion degrees of freedom.
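For readers unfamiliar with scaling analysis, a short sketch of how speedup and parallel efficiency are derived from wall-clock timings; the timing values below are illustrative, not the paper's measurements.

```python
# Strong-scaling metrics from hypothetical wall-clock timings (seconds);
# the numbers are made up for illustration, not taken from the paper.
timings = {1_000: 900.0, 10_000: 95.0, 100_000: 11.0, 140_000: 8.5}

base_p, base_t = min(timings.items())         # smallest run is the baseline
for p, t in sorted(timings.items()):
    speedup = base_t / t                      # relative to the baseline run
    efficiency = speedup * base_p / p         # ideal speedup would be p / base_p
    print(f"{p:>7} processes: speedup={speedup:7.1f}  efficiency={efficiency:5.1%}")
```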
16:20 | Extensions over OpenCL for latency reduction and critical applications SPEAKER: Grigore Lupescu ABSTRACT. Hardware and software stack complexity makes programming GPGPUs difficult and limits application portability. This article first discusses challenges imposed by the current hardware and software model in GPGPU systems, which relies heavily on the HOST device (CPU). We then identify system bottlenecks both in the hardware design and in the software stack, and present two ideas to extend the HOST and DEVICE sides of the OpenCL API with the aim of improving latency and device safety. Our first goal is HOST-side latency reduction using user synchronization directives; our second is to improve DEVICE-side latency and add safety through a software layer that manages kernel execution. For both HOST- and DEVICE-side latency reduction we present concrete performance results.
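The paper's proposed extensions are not public, but the baseline HOST-side synchronization cost it targets can be illustrated with standard OpenCL events (via PyOpenCL here): waiting on exactly the commands you need rather than a queue-wide blocking finish. The kernel and buffer sizes are assumptions for illustration.

```python
import numpy as np
import pyopencl as cl   # pip install pyopencl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

src = """
__kernel void scale(__global float *a) {
    int i = get_global_id(0);
    a[i] = a[i] * 2.0f;
}
"""
prg = cl.Program(ctx, src).build()

host = np.arange(1024, dtype=np.float32)
mf = cl.mem_flags
buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=host)

# Per-command events let the HOST wait on exactly the work it depends on,
# instead of a queue-wide blocking finish() that stalls on unrelated commands.
evt = prg.scale(queue, host.shape, None, buf)
out = np.empty_like(host)
cl.enqueue_copy(queue, out, buf, wait_for=[evt]).wait()
print(out[:4])   # [0. 2. 4. 6.]
```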
17:20 | Evaluation of geomorphons as a basis for quantifying contextual information SPEAKER: unknown ABSTRACT. Currently, landform classification and mapping is one of the most active areas of geomorphometry [1]. Based on the principle of pattern recognition rather than differential geometry, Stepinski and Jasiewicz (2011) proposed a new qualitative classification of landform types, called the geomorphon [2]. The geomorphon is a new concept for the visualization and analysis of landform elements at a broad range of scales using line-of-sight based neighborhoods. However, there is still a lack of studies addressing the classification of repeating patterns of landform types by analyzing digital elevation models (DEMs). The importance of this issue stems from the need to relate landforms to context. Under this assumption, the delimitation of landform elements should be followed by contextual and topological analysis [3]. Therefore, our interest is to test the potential of geomorphons to produce landform elements that are suitable for quantifying landscape metrics [4]. Introduced in landscape ecology to evaluate the spatial structure of a landscape, landscape metrics were considered useful for complementing local derivatives in creating geometric signatures of topography [5, 6]. This approach relies on the potential of landscape metrics to evaluate landform patterns and account for spatial context in geomorphometric analysis. The quantification of landscape metrics was carried out on the geomorphon map. To exploit the advantages of the geomorphon method, a trial-and-error approach was used for several parameters that need to be set optimally for the given area of interest. Furthermore, a set of statistical analyses was carried out to summarize the available data, extract useful information, and formulate hypotheses for further research. The statistical analysis focuses mostly on finding related variables and groupings of similar observations: Principal Component Analysis (PCA) is used as a tool for dimensionality reduction, while the Self-organizing Map (SOM) is used as an alternative method for the optimal visualization and clustering of landscape metrics. The proposed methodology has been applied to freely available SRTM DEMs. The current approach provides a first prospect regarding the usefulness of geomorphons as a basis for the quantification of landscape metrics. We expect the additional information on pattern and context to be crucial for the ontology of landform types.
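A minimal sketch of the PCA-then-SOM analysis described above, run on a hypothetical table of landscape metrics (the metric values, table shape, and map size are all assumptions):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from minisom import MiniSom          # pip install minisom

# Hypothetical landscape-metric table: rows = map units, columns = metrics
# (e.g. patch density, edge density, mean patch area); values are made up.
rng = np.random.default_rng(1)
metrics = rng.random((200, 6))

# PCA as the dimensionality-reduction step over the metric space.
X = StandardScaler().fit_transform(metrics)
pcs = PCA(n_components=2).fit_transform(X)

# SOM as the visualization/clustering step on the reduced metrics.
som = MiniSom(5, 5, pcs.shape[1], sigma=1.0, learning_rate=0.5, random_seed=1)
som.train_random(pcs, 1000)
units = [som.winner(p) for p in pcs]
print("first five map units:", units[:5])
```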
17:40 | A comparison of pixel-based and geographic object-based image analysis for the classification of soil types SPEAKER: Andrei Dornik ABSTRACT. Geographic Object-Based Image Analysis (GEOBIA) is a new and evolving paradigm in remote sensing and geographic information systems, being not just a collection of segmentation, analysis, and classification methods but having its own specific tools, software, rules, and language (Blaschke et al, 2014). GEOBIA emerged as an alternative to pixel-based approaches, aiming to partition remote sensing imagery into homogeneous image-objects based on image segmentation. In addition, GEOBIA has recently been applied successfully to digital elevation models (DEMs) for landform classification. Despite numerous arguments in its favor, there have been no attempts to compare object-based and pixel-based approaches for digital soil type mapping, and very few attempts to exploit object-based analysis of DEM derivatives or remote sensing images in digital soil mapping. The main objective of this study is to assess the object-based approach by comparing its results with those of pixel-based classification for digital soil type mapping. Both approaches are based on the Random Forests (RF) classifier, using DEM derivatives and digital maps representing vegetation cover as soil covariates. Two DEM derivatives, valley depth and the SAGA wetness index, were segmented with the multi-resolution segmentation algorithm, resulting in homogeneous objects, which were further classified as soil types using the RF method. A pixel-based classification of soil types was also performed using the RF method. The resulting maps were assessed for accuracy using a control soil profile dataset. The overall accuracy of the object-based soil map was 58%, 10% higher than that of the pixel-based soil map, and its kappa coefficient was 0.41, higher by 0.14. The statistical results showed that the object-based soil map attains higher overall accuracy, kappa coefficient, producer's accuracy, and user's accuracy than the pixel-based map for five soil types out of six. Probably due to the reduced number of training samples, four soil types out of ten were incorrectly predicted by both methods, with a kappa index of 0. The results of our experiments show that the object-based method using RF and environmental variables is superior to the pixel-based approach, leading to higher accuracy values for soil type classification.
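The contrast between the two approaches can be sketched with scikit-learn: a per-pixel Random Forest versus one trained on per-segment averages of the covariates. The covariates, labels, and segmentation below are synthetic stand-ins, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pixels, n_segments = 1000, 50
covariates = rng.random((n_pixels, 2))          # stand-ins for valley depth, wetness index
soil = (covariates[:, 0] > 0.5).astype(int)     # synthetic soil-type labels
# Hypothetical segmentation: every segment gets exactly 20 pixels.
segment = rng.permutation(np.repeat(np.arange(n_segments), n_pixels // n_segments))

# Pixel-based: one training sample per pixel.
rf_pix = RandomForestClassifier(random_state=0).fit(covariates, soil)

# Object-based: average covariates and take the majority label per segment.
obj_X = np.array([covariates[segment == s].mean(axis=0) for s in range(n_segments)])
obj_y = np.array([np.bincount(soil[segment == s]).argmax() for s in range(n_segments)])
rf_obj = RandomForestClassifier(random_state=0).fit(obj_X, obj_y)

print("pixel-based training acc: ", rf_pix.score(covariates, soil))
print("object-based training acc:", rf_obj.score(obj_X, obj_y))
```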
18:00 | Assessing the potential of segmentation methods applied on digital terrain models as support for soil mapping SPEAKER: Marinela Chetan ABSTRACT. Soil units are the fundamental elements used in soil mapping, and their traditional delineation requires huge amounts of material and time. In recent decades, numerous studies have addressed the automatic or semi-automatic extraction of soil units, a major objective for efficient soil management. These studies used pixel-based methods, at present the main paradigm in spatial analysis. Blaschke and Strobl (2001) argued that the pixel, although it represents geo-spatial information, is not grounded in spatial concepts. Relatively recently, object-based image analysis was developed in the field of remote sensing, aiming to delineate homogeneous spatial objects. Object-based techniques have also been successfully applied to digital elevation models for landform classification. The main objective of this study is to assess the potential of three segmentation tools applied on digital terrain models for the automatic delineation of preliminary soil units. The three segmentation tools are the original Physiographic Tool (PT), a PT variant in which Level 3 is based on slope (PT-slope), and version two of the Estimation of Scale Parameter tool (ESP2). The comparison was performed to determine the suitability of object-based methods, in particular PT and ESP2, for the delineation of soil units at a scale of 1:1,000,000. Evaluation of these units was conducted through visual and quantitative comparison with the soil units of Canada (SLC), a model obtained with traditional techniques. The results of this study show that the three segmentation tools applied on digital terrain models produced different results, making it difficult to determine which is the most appropriate. Since the analysis scale is very coarse, the first levels of all three tools are inappropriate for delineating preliminary soil units at this scale. The areal extent of the objects in Levels 2 and 3 obtained with ESP2 and PT-slope is similar to SLC, but they contain many heterogeneous objects, thus being inappropriate for the proposed objective. Regarding all aspects of the comparison, Levels 2 and 3 obtained with the PT produced the results most similar to SLC. Therefore we conclude that they are the most appropriate for the delineation of preliminary soil units at the 1:1,000,000 scale.
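PT and ESP2 are dedicated tools not reproduced here; as a generic stand-in for the segmentation step, scikit-image's SLIC can partition a single-band terrain derivative into homogeneous objects, with its parameters playing the role of the scale parameters the tools must tune. The synthetic raster and parameter values are assumptions.

```python
import numpy as np
from skimage.segmentation import slic   # scikit-image >= 0.19 for channel_axis

# Synthetic stand-in for a digital terrain model derivative (e.g. slope),
# rescaled to [0, 1] as recommended for SLIC.
rng = np.random.default_rng(2)
x, y = np.meshgrid(np.linspace(0, 4, 128), np.linspace(0, 4, 128))
dtm = np.sin(x) * np.cos(y) + 0.05 * rng.random((128, 128))
dtm = (dtm - dtm.min()) / (dtm.max() - dtm.min())

# Segment the single-band raster into homogeneous objects; n_segments and
# compactness are the scale parameters that would need tuning per area.
labels = slic(dtm, n_segments=40, compactness=0.1, channel_axis=None)
print("objects delineated:", len(np.unique(labels)))
```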
18:20 | The impact of using a 4D data assimilation scheme in WRF-ARW Model SPEAKER: unknown ABSTRACT. A weather prediction system is essential in an area with high flood risk, because it can alert the authorities to upcoming weather phenomena that could pose a real threat. To achieve this we developed the Rapid Refresh WRF (RR-WRF), a weather prediction system based on the WRF-ARW limited area model for Romania and the surrounding areas. For this system, the grid-point resolution and the data assimilation technique played a key role in the overall forecast accuracy. Initially we developed RR-WRF V1, which performed well in terms of accuracy, but because of the data assimilation technique used, some errors were introduced in the first part of the forecast and the model became numerically unstable in the first 12 hours. To overcome this, we developed the second RR-WRF version (RR-WRF V2), in which a different data assimilation and runtime methodology was used. The differences were modest in terms of overall average absolute error, but put into geographic perspective, we found that the RR-WRF V2 system performed better in non-mountainous areas. We also found that in zones with fewer weather stations (which were used in the new data assimilation approach), the errors were considerably larger for both temperature and relative humidity. Although we did not perform any objective verification of the precipitation forecast, based on a subjective comparison between the forecast and Doppler radar data for a severe precipitation event, we found that with the new data assimilation and initialization methodology the spatial distribution of precipitation was improved. However, more studies and more analyzed cases are needed for a conclusive result.
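A minimal sketch of the station-based verification described above, computing the "overall average absolute error" as a mean absolute error of forecasts against observations; the station values below are hypothetical.

```python
import numpy as np

# Hypothetical 2 m temperature values (deg C) at five weather stations.
observed    = np.array([12.1, 14.3, 9.8, 11.0, 13.5])
forecast_v1 = np.array([13.0, 15.1, 8.2, 12.4, 14.9])   # RR-WRF V1 (illustrative)
forecast_v2 = np.array([12.5, 14.8, 9.1, 11.6, 13.9])   # RR-WRF V2 (illustrative)

def mae(fcst, obs):
    # Mean absolute error: the overall average absolute error of the abstract.
    return np.mean(np.abs(fcst - obs))

print(f"V1 MAE: {mae(forecast_v1, observed):.2f} degC")
print(f"V2 MAE: {mae(forecast_v2, observed):.2f} degC")
```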