11:00 | Compositions of Constant Weighted Extended Tree Transducers
PRESENTER: Malte Blattmann
ABSTRACT. Conjecture 11 of [Lagoutte, Maletti: Survey - Weighted extended top-down tree transducers - Part III: Composition. Proc. AFCS, LNCS 7020, pp. 272-308, Springer 2011] is confirmed. It is demonstrated that the composition of a constant weighted extended tree transducer with a linear weighted top-down tree transducer can be computed by a single weighted extended tree transducer. Whereas linearity and the top-down property are syntactic, the constant property is semantic. The decidability of the constant property is investigated in several restricted settings.
11:30 | Reducing local alphabet size in recognizable picture languages
PRESENTER: Pierluigi San Pietro
ABSTRACT. We start from the classical definition of the family of recognizable picture (i.e., 2D) languages as the projections of local picture languages defined by sets of two-by-two tiles, i.e., of 2-strictly-locally-testable (2-SLT) 2D languages. The same family is also obtained by projecting languages defined by larger $k \times k$ tiles, $k > 2$, i.e., $k$-SLT 2D languages. A basic measure of the descriptive complexity of a 2D language is the size of the 2-SLT alphabet, more precisely the so-called alphabetic ratio of sizes: 2-SLT alphabet / picture alphabet. We study how the alphabetic ratio changes when moving from $k = 2$ to larger values, and we obtain the following result: any recognizable picture language over an alphabet of size $n$ is the projection of an SLT language over an alphabet of size $2n$. Moreover, two is the minimal alphabetic ratio possible in general. This result extends to 2D a similar known property of the regular word languages (the Extended Medvedev theorem), concerning the minimal alphabetic ratio needed to define a language as the projection of an SLT word language.
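The tile-based definition underlying this abstract can be sketched directly: a picture belongs to the local (2-SLT) language if every two-by-two subblock of its bordered version is an allowed tile, and the recognizable language is obtained by a letter-to-letter projection. The sketch below is illustrative only; the border symbol, tile sets, and projection map are assumptions for the example, not constructions from the paper.

```python
# Minimal sketch of local picture languages and projection (assumed names).
BORDER = "#"

def bordered(picture):
    """Surround a picture (a list of equal-length strings) with the border symbol."""
    w = len(picture[0])
    top = BORDER * (w + 2)
    return [top] + [BORDER + row + BORDER for row in picture] + [top]

def is_local(picture, tiles):
    """True iff every 2x2 subblock of the bordered picture is an allowed tile."""
    b = bordered(picture)
    for i in range(len(b) - 1):
        for j in range(len(b[0]) - 1):
            block = (b[i][j], b[i][j + 1], b[i + 1][j], b[i + 1][j + 1])
            if block not in tiles:
                return False
    return True

def project(picture, pi):
    """Apply a letter-to-letter projection pi to a local picture."""
    return ["".join(pi[c] for c in row) for row in picture]
```

For instance, taking the tile set to be exactly the 2x2 subblocks of a bordered all-`a` square admits that square and rejects any picture containing another letter; the alphabetic ratio studied in the paper measures how large the local alphabet of such a tile system must be relative to the projected alphabet.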
12:00 | Active Learning of Sequential Transducers with Side Information about the Domain
PRESENTER: Raphaël Berthon
ABSTRACT. Active learning is a setting in which a student queries a teacher, through membership and equivalence queries, in order to learn a language. The performance of these algorithms is often measured by the number of queries required to learn a target, with an emphasis on the costly equivalence queries. In graybox learning, the learning process is accelerated by prior knowledge of some information about the target. Here, we consider graybox active learning of subsequential string transducers, where a regular overapproximation of the domain is known to the student. We show that there exists an algorithm, using string-equation solvers, that exploits this knowledge to learn subsequential string transducers with a better guarantee on the required number of equivalence queries than classical active learning.
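To illustrate how side information about the domain can shift work from costly equivalence queries onto cheaper membership queries, here is a toy sketch of the query protocol restricted to a finite domain. The `Teacher`/`learn` names and the brute-force strategy are hypothetical; the paper's actual algorithm handles infinite regular domain overapproximations via string-equation solvers.

```python
# Toy sketch of membership/equivalence queries with domain side information
# (assumed interface; not the paper's algorithm).
class Teacher:
    def __init__(self, target):
        # target: dict mapping input words to output words (finite toy transduction)
        self.target = target

    def membership(self, word):
        """Output of the target transducer on `word`, or None if outside its domain."""
        return self.target.get(word)

    def equivalence(self, hypothesis):
        """Return a counterexample input, or None if the hypothesis is correct."""
        for w, out in self.target.items():
            if hypothesis.get(w) != out:
                return w
        for w in hypothesis:
            if w not in self.target:
                return w
        return None

def learn(teacher, domain_overapprox):
    """Learn the target using membership queries on a known (finite) domain
    overapproximation; the side information leaves a single equivalence query."""
    hypothesis = {}
    for w in sorted(domain_overapprox):
        out = teacher.membership(w)
        if out is not None:
            hypothesis[w] = out
    # The target's domain is contained in the overapproximation, so one
    # equivalence query suffices to confirm the hypothesis.
    assert teacher.equivalence(hypothesis) is None
    return hypothesis
```

In this finite toy setting the overapproximation eliminates the counterexample-driven refinement loop of classical active learning entirely; the paper's contribution is a comparable reduction in equivalence queries for genuinely infinite regular domains.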