Turing-100: Papers with Abstracts

Papers
Abstract. If Turing were a first-year graduate student interested in computers,
he would probably migrate into the field of computational biology. During his studies, he presented
work on a mathematical and computational model of the morphogenesis process, in which chemical substances
react together. Moreover, a protein can be thought of as a computational element, i.e. a processing unit, able to
transform an input into an output signal. Thus, in a biochemical pathway, an enzyme reads the amount of reactants (substrates)
and converts them into products. In this work, we consider the biochemical pathway of unicellular organisms (e.g. bacteria) as a living computer, which we are able to program in order to obtain desired outputs.
The genome sequence is thought of as an executable code specified by a set of commands in a sort of ad-hoc low-level programming language. Each combination of genes is coded as a string of bits $y \in \{0,1\}^L$, each bit of which represents a gene set. By turning off a gene set, we turn off the chemical reaction associated with it. Through an optimal executable code stored in the ``memory'' of bacteria, we are able to simultaneously maximise the concentration of two or more metabolites of interest.
Finally, we use Robustness Analysis and a new Sensitivity Analysis method to investigate both the fragility of the computation carried out by bacteria and the most important entities in the mathematical relations used to model them.
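To make the encoding concrete, the following is a minimal sketch (not the paper's model) of the bit-string representation described above: each bit of $y \in \{0,1\}^L$ switches one gene set on or off, and a brute-force search maximises a toy objective standing in for the metabolite concentrations. The objective function, the value of L, and all names are hypothetical.

```python
# Minimal sketch (not the paper's model): each bit of y in {0,1}^L switches
# one gene set (and its associated reaction) on or off; we search for the
# configuration maximising a toy stand-in for two metabolite concentrations.
from itertools import product

L = 4  # number of gene sets (hypothetical, small enough for brute force)

def metabolite_yield(y):
    """Toy objective: a real study would simulate the biochemical pathway."""
    m1 = y[0] + y[1] - 0.5 * y[2]          # concentration of metabolite 1
    m2 = y[2] + y[3] - 0.5 * y[0] * y[1]   # concentration of metabolite 2
    return m1 + m2                          # maximise both simultaneously

best = max(product((0, 1), repeat=L), key=metabolite_yield)
print("best gene-set configuration:", best, "yield:", metabolite_yield(best))
```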
Abstract. Many computational modeling approaches of the mind seem to be characterized by an implicit strong physicalism, which frequently leads to confusion in philosophy of AI. This work aims at pointing out some fundamental aspects of this problem, both with respect to the relation between epistemological computationalism and physical realization, and the view of symbol manipulation as constrained computation.
Abstract. Modern Computation and Information theories have had significant impact on science in the 20th century - in theory and application. This influence is tracked (through a generalized, Information-laden scientific Style of Reasoning denoting the Information-theoretical and Computational turn in science), with a focus on the information processing and transfer metaphors and descriptive tools prevalent in current physics. Implementation of Information-Theoretical concepts produces such mathematical physical developments as Black-Hole Thermodynamics (BHTD) and the Black-Hole Wars (Susskind, 2008). The treatment of physical systems as information processing systems drives such branches of physics as Quantum Information Theory (QIT). The common Informational basis of computation and communication brings about a foundational shift in scientific reasoning with deep (potentially problematic as well as intriguing) philosophical ramifications. Models of computation and of physics
Abstract. We explain the differences between a number of representations that have been proposed in the literature to measure the structural complexity of an object. In particular, we propose the notion of process complexity, the notion of sophistication from a given perspective, the notion of sophisticated depth, and the notion of self-dissimilarity. We also propose the notion of an approximate process for an object and argue that for all practical purposes, an approximate process is as good as an exact process.
Abstract. Turing's involvement with computer building was popularized in the 1970s and later. Most notable are the books by Brian Randell (1973), Andrew Hodges (1983), and Martin Davis (2000). A central question is whether John von Neumann was influenced by Turing's 1936 paper when he helped build the EDVAC machine, even though he never cited Turing's work. This question remains unsettled to this day. As remarked by Charles Petzold, one standard history barely mentions Turing, while the other, written by a logician, makes Turing a key player.

Contrast these observations then with the fact that Turing's 1936 paper was cited and heavily discussed in 1959 among computer programmers. In 1966, the first Turing award was given to a programmer, not a computer builder, as were several subsequent Turing awards. An historical investigation of Turing's influence on computing, presented here, shows that Turing's 1936 notion of universality became increasingly relevant among programmers during the 1950s. The central thesis of this paper states that Turing's influence was felt more in programming after his death than in computer building during the 1940s.
Abstract. Using techniques from higher-type computability theory and proof theory we extend the well-known game-theoretic technique of backward induction to finite games of unbounded length. The main application is a closed formula for calculating strategy profiles in Nash equilibrium and subgame perfect equilibrium even in the case of games where the length of play is not fixed a priori.
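For background, here is a minimal sketch of classical backward induction on a finite game tree, the starting point that the paper generalises to games of unbounded length. The tree, payoffs, and dictionary representation are hypothetical.

```python
# Classical backward induction on a finite game tree. Leaves carry payoff
# tuples; internal nodes name the player to move. Toy example only.

def backward_induction(node):
    """Return (payoff_vector, choices) for the subgame rooted at node."""
    if "payoffs" in node:                          # leaf node
        return node["payoffs"], {}
    choices = {}
    options = []
    for action, child in node["children"].items():
        payoffs, sub_choices = backward_induction(child)
        choices.update(sub_choices)                # keep choices in all subgames
        options.append((action, payoffs))
    player = node["player"]
    best_action, best_payoffs = max(options, key=lambda o: o[1][player])
    choices[node["name"]] = best_action            # subgame perfect choice here
    return best_payoffs, choices

game = {"name": "root", "player": 0, "children": {
    "L": {"name": "n1", "player": 1, "children": {
        "l": {"payoffs": (3, 1)}, "r": {"payoffs": (0, 0)}}},
    "R": {"payoffs": (2, 2)}}}
print(backward_induction(game))   # ((3, 1), {'n1': 'l', 'root': 'L'})
```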
Abstract. A new computing model, called the active element machine (AEM), is presented that demonstrates Turing incomputable computation using quantum random input. The AEM deterministically executes a universal Turing machine (UTM) program η with random active element firing patterns. These firing patterns are Turing incomputable when the AEM executes a UTM having an unbounded number of computable steps. For an unbounded number of computable steps, if zero information is revealed to an adversary about the AEM’s representation of the UTM’s state and tape and the quantum random bits that help determine η’s computation and zero information is revealed about the dynamic connections between the active elements, then there does not exist a “reverse engineer” Turing machine that can map the random firing patterns back to the sequence of UTM instructions. This casts a new light on Turing’s notion of a computational procedure. In practical terms, these methods present an opportunity to build a new class of computing machines where the program’s computational steps are hidden. This non-Turing computing behavior may be useful in cybersecurity and in other areas such as machine learning where multiple, dynamic interpretations of firing patterns may be applicable.
Abstract. In his paper `Computing Machinery and Intelligence',
Turing introduces a rather trivial chess problem as a
conversation piece in an example Turing test.
The example is about much more than computers playing chess:
it carries a hidden message about resolving ambiguity
and knowledge representation,
probably inserted by Turing as an easter egg for us to find.
Abstract. We introduce a notion of ultrametric automata and Turing machines that use p-adic numbers to describe the random branching of the process of computation. These automata have properties similar to those of probabilistic automata, but the complexity of probabilistic automata and the complexity of ultrametric automata can differ greatly.
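As background for readers unfamiliar with p-adic numbers (this is not the paper's construction), a minimal sketch of the p-adic valuation and absolute value that underlie the ultrametric view of branching:

```python
# Background only: the p-adic absolute value |n|_p = p**(-v_p(n)), whose
# strong (ultrametric) triangle inequality motivates ultrametric automata.
def p_adic_valuation(n, p):
    """Largest k such that p**k divides n (treated as infinite for n = 0)."""
    if n == 0:
        return float("inf")
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def p_adic_norm(n, p):
    v = p_adic_valuation(n, p)
    return 0.0 if v == float("inf") else p ** (-v)

print(p_adic_norm(12, 2), p_adic_norm(12, 3))   # 0.25 and 1/3
```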
Abstract. We show how to exploit enzymatic saturation, a ubiquitous nonlinear effect in biochemistry, in order to process information in molecular networks. The networks rely on the linearity of DNA strand displacement and the nonlinearity of enzymatic replication. We propose a pattern-recognition network that is compact and should be robust to leakage.
Abstract. In this work a new phenomenon called polaractivation is introduced. Polaractivation is based on quantum polar encoding, and the result is similar to the superactivation effect: positive capacity can be achieved with zero-capacity quantum channels. However, polaractivation has many advantages over superactivation: it is limited neither by preliminary conditions on the quantum channel nor by the maps of the other channels involved in the joint channel structure. We prove that polaractivation works for arbitrary zero-private-capacity quantum channels, and we demonstrate that the symmetric private classical capacity of arbitrary zero-private-capacity quantum channels is polaractive.
Abstract. In the first decade of the 21st century, many revolutionary properties of quantum channels were discovered. These phenomena are purely quantum mechanical and completely unimaginable in classical systems. Recently, the most important discovery in Quantum Information Theory was the possibility of transmitting quantum information over zero-capacity quantum channels. In this work we prove that the possibility of superactivation of quantum channel capacities is determined by the mathematical properties of the quantum relative entropy function.
Abstract. Comparative tests work by finding the difference (or the absence of difference) between a reference subject and an evaluee. The Turing Test, in its standard interpretation, takes (a subset of) the human species as a reference.
Motivated by recent findings and developments in the area of machine intelligence evaluation, we discuss what it would be like to have a Turing Test where the reference and the interrogator subjects are replaced by Turing Machines.
This question sets the focus on several issues that are usually disregarded when dealing with the Turing Test, such as the degree of intelligence of the reference and the interrogator, the role of imitation (and not only prediction) in intelligence, and its view from the perspective of game theory, among others.
Around these issues, this paper finally brings the Turing Test to the realm of Turing machines.
Abstract. In this paper, we propose a probabilistic hybrid logic for the specification of data privacy requirements. The proposed logic is a combination of quantitative uncertainty logic and basic hybrid logic with a satisfaction operator. We show that it is expressive enough for the specification of many well-known data privacy requirements, such as $k$-anonymity, $l$-diversity and its precursor logical safety, $t$-closeness, and $\delta$-disclosure privacy. The main contribution of the work is twofold. On one hand, the logic provides a common ground to express and compare existing privacy criteria. On the other hand, the uniform framework can meet the specification needs of combining new criteria as well as existing ones.
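To illustrate one of the criteria the logic formalizes, here is a minimal, logic-free sketch of the $k$-anonymity check itself: every combination of quasi-identifier values must occur in at least $k$ records. The table and attribute names are hypothetical.

```python
# Minimal sketch of the k-anonymity criterion (independent of the hybrid
# logic): each quasi-identifier combination must appear in >= k records.
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    counts = Counter(tuple(r[a] for a in quasi_identifiers) for r in records)
    return all(c >= k for c in counts.values())

table = [
    {"zip": "130**", "age": "20-29", "disease": "flu"},
    {"zip": "130**", "age": "20-29", "disease": "cold"},
    {"zip": "148**", "age": "30-39", "disease": "flu"},
    {"zip": "148**", "age": "30-39", "disease": "asthma"},
]
print(is_k_anonymous(table, ["zip", "age"], k=2))   # True
```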
Abstract. We formulate and prove two Rice-like theorems
that characterize limitations on nameability of properties
within a given naming scheme for partial functions.
Such a naming scheme can be, but need not be, an executable formalism.
A programming language is an example of an executable naming scheme,
where the program text names the partial function it implements.
Halting is an example of a property
that is not nameable in that naming scheme.

The proofs reveal requirements on the naming scheme
to make the characterization work.
Universal programming languages satisfy these requirements,
but also other formalisms can satisfy them.
We present some non-universal programming languages
and a non-executable specification language
satisfying these requirements.
Our theorems have
Turing's well-known Halting Theorem and Rice's Theorem as special cases,
obtained by applying them to a universal programming language or to Turing Machines as the naming scheme.
Thus, our proofs separate the nature of the naming scheme
(which can, but need not, coincide with computability) from the diagonal argument.
This sheds further light on how far-reaching and simple the `diagonal' argument is in itself.
Abstract. Symmetries of combinatorial objects are known to complicate search algorithms, but such obstacles can often be removed by detecting symmetries early and discarding symmetric subproblems. Canonical labeling of combinatorial objects facilitates easy equivalence checking through quick matching. All existing canonical labeling software also finds symmetries, but the fastest symmetry-finding software does not perform canonical labeling. In this work, we contrast the two problems and dissect typical algorithms to identify their similarities and differences. Based on this analysis, we develop a novel approach to canonical labeling where symmetries are found first and then used to speed up the canonical labeling algorithms. Empirical results show that this approach outperforms state-of-the-art canonical labelers.
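For readers unfamiliar with canonical labeling, here is a brute-force sketch of the notion itself (not the paper's algorithm): take the lexicographically smallest adjacency matrix over all vertex relabelings, so that isomorphic graphs receive identical canonical forms. Real canonical labelers prune this search heavily, which is where the symmetry information comes in.

```python
# Illustration only: the naive definition of a canonical form, taking the
# lexicographically smallest adjacency matrix over all vertex relabelings.
# This brute force is exponential; practical tools prune it with symmetries.
from itertools import permutations

def canonical_form(n, edges):
    edge_set = {frozenset(e) for e in edges}
    best = None
    for perm in permutations(range(n)):
        matrix = tuple(
            tuple(1 if frozenset((perm[i], perm[j])) in edge_set else 0
                  for j in range(n))
            for i in range(n))
        if best is None or matrix < best:
            best = matrix
    return best

# Two isomorphic 4-cycles receive the same canonical form:
print(canonical_form(4, [(0, 1), (1, 2), (2, 3), (3, 0)]) ==
      canonical_form(4, [(0, 2), (2, 1), (1, 3), (3, 0)]))   # True
```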
Abstract. The purpose of this article is to recall three fundamental contributions by A. M. Turing to three important fields of contemporary science and engineering (theory of computation, artificial intelligence, and biocomputing), and to emphasize the connections between them. The article also recalls and formulates, respectively, three hypotheses related to these three initiatives and briefly discusses their mutual interrelatedness.
Abstract. We use notions originating in Computational Complexity to provide insight into the analogies between computational complexity and Higher Recursion Theory. We consider alternating Turing machines, but with a modified, global, definition of acceptance. We show that a language is accepted by such a machine iff it is $\Pi^1_1$. Moreover, total alternating machines, which either accept or reject each input, accept precisely the hyper-arithmetical ($\Delta^1_1$) languages. Also, by bounding the permissible number of alternations we obtain a characterization of the levels of the arithmetical hierarchy.

The novelty of these characterizations lies primarily in the use of finite computing devices, with finitary, discrete, computation steps. We thereby elucidate the correspondence between the polynomial-time and the arithmetical hierarchies, as well as that between the computably enumerable, the inductive ($\Pi^1_1$), and the PSpace languages.
Abstract. The reachability problem for Vector Addition Systems (VASs) is a central problem of net theory. The general problem is known to be decidable by algorithms based on the classical Kosaraju-Lambert-Mayr-Sacerdote-Tenney decomposition (KLMST decomposition). Recently from this decomposition, we deduced that a final configuration is not reachable from an initial one if and only if there exists a Presburger inductive invariant that contains the initial configuration but not the final one. Since we can decide whether a Presburger formula denotes an inductive invariant, we deduce from this result that there exist checkable certificates of non-reachability in Presburger arithmetic. In particular, there exists a simple algorithm for deciding the general VAS reachability problem based on two semi-algorithms: a first one that tries to prove reachability by enumerating finite sequences of actions, and a second one that tries to prove non-reachability by enumerating Presburger formulas. In another recent paper we provided the first proof of the VAS reachability problem that is not based on the KLMST decomposition. The proof is based on the notion of production relations that directly proves the existence of Presburger inductive invariants. In this paper we propose new intermediate results that further simplify this last proof.
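As a minimal illustration of the first of the two semi-algorithms, the sketch below breadth-first enumerates the action sequences of a toy VAS and stops when the target is reached. The complementary semi-algorithm, which enumerates Presburger formulas and checks that they are inductive invariants separating the initial from the final configuration, is only indicated in the comments; the instance is hypothetical.

```python
# Sketch of the "positive" semi-algorithm for VAS reachability: breadth-first
# enumeration of action sequences, terminating when the target is reachable.
# The complementary semi-algorithm (enumerating Presburger formulas and
# checking that they are inductive invariants) is not shown here.
from collections import deque

def vas_reachable_semi(initial, target, actions, max_steps=10**5):
    """Semi-decide reachability: return a witness action sequence if found."""
    queue = deque([(initial, [])])
    seen = {initial}
    for _ in range(max_steps):
        if not queue:
            return None
        config, path = queue.popleft()
        if config == target:
            return path
        for a in actions:
            nxt = tuple(c + d for c, d in zip(config, a))
            if all(x >= 0 for x in nxt) and nxt not in seen:  # stay in N^d
                seen.add(nxt)
                queue.append((nxt, path + [a]))
    return None  # budget exhausted; no conclusion either way

print(vas_reachable_semi((1, 0), (0, 2), [(-1, 2), (1, -1)]))  # [(-1, 2)]
```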
Abstract. Hoare logic (also known as Floyd-Hoare logic) can be used to formally verify the correctness of programs, while testing provides a practical way to detect errors in programs. Unfortunately, the former is rarely applied in practice, and the latter can hardly detect all existing errors. In this paper, we propose a novel technique that makes good use of Hoare logic to strengthen testing. The essential idea is first to use specification-based testing to discover all traversed program paths and then to use Hoare logic to prove their correctness. During the proof process, all errors on the paths can be detected. A case study is conducted to show its feasibility; an example taken from the case study is used to illustrate how the proposed method is applied; and discussion of the potential challenges to the method is presented.
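A minimal sketch (assuming sympy is available; not the paper's tool chain) of the per-path proof step: Hoare's assignment axiom applied backwards along one straight-line path discovered by testing, yielding a weakest precondition to compare against the claimed precondition. The path and conditions are a toy example.

```python
# Toy illustration of proving one discovered path with Hoare logic: apply the
# assignment axiom wp(v := e, Q) = Q[e/v] backwards along a straight-line path.
import sympy as sp

x, y = sp.symbols("x y", integer=True)

path = [(y, x + 1), (x, 2 * y)]     # discovered path: y := x + 1 ; x := 2*y
pre = sp.Ge(x, 0)                   # claimed precondition:  x >= 0
post = sp.Ge(x, 2)                  # required postcondition: x >= 2

wp = post
for var, expr in reversed(path):    # substitute backwards through the path
    wp = wp.subs(var, expr)

# wp is now 2*(x + 1) >= 2, i.e. x >= 0, so the claimed precondition implies
# it and the path satisfies the Hoare triple {x >= 0} path {x >= 2}.
print(sp.simplify(wp))
```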
Abstract. Turing’s o-machine, discussed in his PhD thesis, can perform all of the usual operations of a Turing machine and in addition, when it is in a certain internal state, can also query an oracle for an answer to a specific question that dictates its further evolution. In his thesis, Turing said 'We shall not go any further into the nature of this oracle apart from saying that it cannot be a machine.' There is a host of literature discussing the role of the oracle in AI, brain modeling, computing, and hyper-computing machines. In this paper, we take a broader view of the oracle machine inspired by the genetic computing model of cellular organisms and the self-organizing fractal theory. We describe a specific software architecture implementation that circumvents the halting and undecidability problems in a process workflow computation to introduce the architectural resiliency found in cellular organisms into distributed computing machines. A DIME (Distributed Intelligent Computing Element), recently introduced as the building block of the DIME computing model, exploits the concepts from Turing’s oracle machine and extends them to implement a recursive managed distributed computing network, which can be viewed as an interconnected group of such specialized oracle machines, referred to as a DIME network. The DIME network architecture provides architectural resiliency through auto-failover, auto-scaling, live-migration, and end-to-end transaction security assurance in a distributed system. We demonstrate these characteristics using prototypes without the complexity introduced by hypervisors, virtual machines and other layers of ad-hoc management software in today’s distributed computing environments.
Abstract. This paper contributes to the general understanding of the "geometrical model of concurrency" that was named higher dimensional automata (HDAs) by Pratt and van Glabbeek. In particular we provide some understanding of the modal logics for such models and their expressive power in terms of the bisimulation that can be captured.
The geometric model of concurrency is interesting for two main reasons: its generality and expressiveness, and the natural way in which autoconcurrency and action refinement are captured.
Logics for this model, though, have not been well investigated; a simple yet adequate modal logic over HDAs was introduced only recently.
As this modal logic, with two existential modalities, "during" and "after", captures only split bisimulation, which is rather low in the spectrum of van Glabbeek and Vaandrager, the immediate question was which small extension of this logic could capture the more fine-grained hereditary history preserving bisimulation (hh).

In response, the work in this paper provides several insights. One is that the geometrical aspect of HDAs makes it possible to capture hh-bisimulation with a standard modal logic that does not employ event variables, as opposed to the two logics (over less expressive models) with which we compare. The logic that we investigate here uses standard backward-looking modalities (i.e., past modalities) and extends the previously introduced logic (called HDML) that had only forward, action-labelled, modalities.

Since the direct proofs are rather intricate, we try to understand better the above issues by introducing a related model that we call ST-configuration structures, which extend the configuration structures of van Glabbeek and Plotkin. We relate this model to HDAs, and redefine and prove the earlier results in the light of this new model. These offer a different view on why the past modalities and geometrical concurrency capture the hereditary history preserving bisimulation.
Additional related insights are also gained.
Abstract. The game of chess has always been viewed as an iconic representation
of intellectual prowess. Since the very beginning of computer
science, the challenge of being able to program a computer capable
of playing chess and beating humans has been alive and used both as
a mark to measure hardware/software progresses and as an ongoing
programming challenge leading to numerous discoveries. In the early
days of computer science it was a topic for specialists. But as
computers were democratized, and the strength of chess engines
began to increase, chess players started to appropriate these new
tools for themselves. We show how these interactions between
the world of chess and information technologies have heralded the
broader social impacts of information technologies. The game of
chess, and more broadly the world of chess (chess players,
literature, computer software and websites dedicated to chess,
etc.), turns out to be a surprisingly sharp
indicator of the changes induced in our everyday life by the
information technologies. Moreover, in the same way that chess is a
model of war that captures the raw features of strategic
thinking, the chess world can be seen as a small society in which the
impact of information technologies is easier to analyze and to
grasp.
Abstract. Turing machines based on quantum logic can solve undecidable
problems. In this paper we will give a recursion-theoretic
characterization of the computational power of this kind of quantum
Turing machine. In detail, for the unsharp case, it is proved that
$\Sigma^0_1 \cup \Pi^0_1 \subseteq L^T_d(\varepsilon,\Sigma)\,(L^T_w(\varepsilon,\Sigma)) \subseteq \Pi^0_2$
when the truth-value lattice is locally finite and the operation $\wedge$
is computable, where
$L^T_d(\varepsilon,\Sigma)$ ($L^T_w(\varepsilon,\Sigma)$) denotes the
class of quantum languages accepted by these Turing machines in the
depth-first model (respectively, the width-first model);
for the sharp case, we can obtain similar results for usual orthomodular lattices.
Abstract. Many strategies have been exploited for the task of feature selection, in an effort to identify more compact and better quality feature subsets. Such techniques typically involve the use of an individual feature significance evaluation, or a measurement of feature subset consistency, that work together with a search algorithm in order to determine a quality subset. Feature selection ensemble aims to combine the outputs of multiple feature selectors, thereby producing a more robust result for the subsequent classifier learning tasks. In this paper, three novel implementations of the feature selection ensemble concept are introduced, generalising the ensemble approach so that it can be used in conjunction with many subset evaluation techniques and search algorithms. A recently developed heuristic algorithm, harmony search, is employed to demonstrate the approaches. Results of experimental comparative studies are reported in order to highlight the benefits of the present work. The paper ends with a proposal to extend the application of feature selection ensemble to aiding the development of biped robots (inspired by the authors’ involvement in the joint celebration of the Olympics and the centenary of the birth of Alan Turing).
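As a generic illustration of the ensemble idea (not the paper's harmony-search-based implementations), the sketch below combines the subsets returned by several base feature selectors by simple majority voting; selector and feature names are hypothetical.

```python
# Generic illustration of feature selection ensemble: combine the subsets
# returned by several base selectors by voting, keeping features chosen by
# a majority. Not the paper's harmony-search method.
from collections import Counter

def ensemble_select(subsets, min_votes=None):
    """subsets: list of feature-name sets from different base selectors."""
    if min_votes is None:
        min_votes = len(subsets) // 2 + 1          # simple majority
    votes = Counter(f for s in subsets for f in s)
    return {f for f, v in votes.items() if v >= min_votes}

selector_outputs = [
    {"age", "income", "zip"},        # e.g. a consistency-based selector
    {"age", "income"},               # e.g. a ranking cut-off
    {"income", "zip", "gender"},     # e.g. a wrapper-based search
]
print(ensemble_select(selector_outputs))   # {'age', 'income', 'zip'}
```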
Abstract. In the area of reasoning about actions, one of the key computational problems is the projection problem: to find whether a given logical formula is true after
performing a sequence of actions. This problem is undecidable in the general
situation calculus; however, it is decidable in some fragments. We consider
a fragment P of the situation calculus and Reiter's basic action theories (BAT)
such that the projection problem can be reduced to the satisfiability problem
in an expressive description logic $ALCO(U)$ that includes nominals ($O$),
the universal role ($U$), and constructs from the well-known logic $ALC$. It turns out
that our fragment P is more expressive than previously explored description logic
based fragments of the situation calculus. We explore some of the logical properties of our theories.
In particular, we show that the projection problem can be solved using regression
in the case where BATs include a general ``static'' TBox, i.e., an ontology that has
no occurrences of fluents. Thus, we propose seamless integration of traditional
ontologies with reasoning about actions. We also show that the projection
problem can be solved using progression if all actions have only local effects on
the fluents, i.e., in P, if one starts with an incomplete initial theory that
can be transformed into an $ALCO(U)$ concept, then its progression resulting from
execution of a ground action can still be expressed in the same language. Moreover,
we show that for a broad class of incomplete initial theories progression can be computed efficiently.
Abstract. Induction is a powerful proof technique adapted to reasoning about sets
with an unbounded number of elements. In a first-order setting, two
different methods are distinguished: the conventional induction,
based on explicit induction schemas, and the implicit induction,
based on reductive procedures. We propose a new cycle-based
induction method that keeps their best features, i.e. i) it performs
lazy induction, ii) it naturally accommodates mutual induction, and iii) it is
free of reductive constraints. The heart of the method is a proof
strategy that identifies in the proof script the subset of formulas
contributing to validate the application of induction hypotheses.
Conventional and implicit induction are particular cases of our
method.
Abstract. This paper uses an information-theoretic perspective to propose multi-locus informativeness measures for ancestry inference. These measures describe the potential for correct classification of unknown individuals to their source populations, given genetic data on population structure. Motivated by Shannon's axiomatic approach in deriving a unique information measure for communication (Shannon 1948), we first identify a set of intuitively justifiable criteria that any such quantitative information measure should satisfy, and then select measures that comply with these criteria. It is shown that standard information-theoretic measures such as multidimensional mutual information cannot completely account for informativeness when source populations differ in size, necessitating a decision-theoretic approach.
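For concreteness, here is a minimal sketch of the standard information-theoretic baseline mentioned above: the mutual information between source population and a single-locus allele, computed from a hypothetical joint frequency table. This is the kind of measure the paper argues is insufficient on its own when populations differ in size.

```python
# Mutual information I(population; allele) from a toy joint frequency table.
from math import log2

def mutual_information(joint):
    """joint[pop][allele] = P(pop, allele); returns I(pop; allele) in bits."""
    pop_marg = {p: sum(row.values()) for p, row in joint.items()}
    allele_marg = {}
    for row in joint.values():
        for a, pr in row.items():
            allele_marg[a] = allele_marg.get(a, 0.0) + pr
    mi = 0.0
    for p, row in joint.items():
        for a, pr in row.items():
            if pr > 0:
                mi += pr * log2(pr / (pop_marg[p] * allele_marg[a]))
    return mi

joint = {"pop1": {"A": 0.45, "a": 0.15},   # population sizes differ (0.6 vs 0.4)
         "pop2": {"A": 0.10, "a": 0.30}}
print(round(mutual_information(joint), 4))
```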
Abstract. Creativity – whether in humans or machines – is more than a matter of simple creation. To be “creative” implies an ability to do more than invent, but an ability to recognize and appreciate the inventions of others. After all, the ability to recognize surprising value in the efforts of others is the same ability we use to guide our own creative efforts. Solipsistic creativity is rare indeed, and most creativity relies on an audience that is creative enough to value our efforts. Of what value is an ability to e.g. speak ironically if we cannot also understand or appreciate the irony of others? The goal of imbuing computers with creative abilities must thus include a sub-goal of enabling computers to recognize and respond appropriately to the creativity of others. As computers are increasingly used to analyze the burgeoning texts of the world-wide-web, the ability to automatically detect and analyze the linguistic creativity of speakers has become more important than ever. In this paper we consider how speakers engage creatively with cliché, to achieve creative ends through the novel variation of familiar linguistic forms. Our computational analysis of a large collection of linguistic patterns on the Web shows that speakers are surprisingly conservative in their variation strategies, and novelty alone rarely leads to creativity. This conformity can make it easier for computers to detect when speakers are using familiar language in truly original ways.
Abstract. Timed transition systems are a widely studied model for real-time systems.
The intention of the paper is to show how several categorical (open maps, path-bisimilarity and coalgebraic) approaches to an abstract characterization of
bisimulation relate to each other and to the numerous suggested behavioral equivalences of linear time -- branching time spectrum, in the setting of timed transition systems.
Abstract. In the paper we prove in a new and simple way that Interaction
machines are more powerful than Turing machines. To do that
we extend the definition of Interaction machines to multiple interactive
components, where each component may perform simple computation.
The emerging expressiveness is due to the power of interaction and allows
such machines to accept languages not accepted by Turing machines. The main
result, that Interaction machines can accept arbitrary languages over a
given alphabet, sheds new light on the power of interaction. Despite
this, we do not claim that Interaction machines are complete. We claim
that a complete theory of computer science cannot exist and, in particular,
that neither Turing machines nor Interaction machines can be a complete model of
computation. However, complete models of computation may and should
be approximated indefinitely, and our contribution presents one such
attempt.
Abstract. We outline the logic of current approaches to the so-called
``frame problem'' (that is, the problem of predicting change in the
physical world by using logical inference), and we show that
these approaches are not completely extensional since
none of them is closed under uniform substitution. The underlying difficulty
is something known, in the philosophical community, as Goodman's ``new riddle of induction'' or the ``Grue paradox''. Although it seems, from the philosophical discussion, that this paradox cannot be solved in purely a priori terms and that a solution will require some form of real-world data, it nevertheless remains obscure both what the logical form of this real-world data might be, and also how this data actually interacts with logical deduction. We show, using work of McCain and Turner, that this data can be captured using the semantics of classical proofs developed by Bellin, Hyland and Robinson, and, consequently, that the appropriate arena for solutions of the frame problem lies in proof theory. We also give a very explicit model for the categorical semantics of classical proof theory using techniques derived from work on the frame problem.