4th Workshop on Logic, Language, Information and Computation (WoLLIC'97)
August 20-22, 1997
(Tutorial Day: August 19th)
Ponta Mar Hotel, Fortaleza (Ceará), Brazil

Sponsored by IGPL, FoLLI, ASL, SBC
Funded by CAPES, CNPq, FINEP, FUNCAP
Organised by UFC, UFPE



Abstracts of Invited Talks

Devlin | Edalat | van Glabbeek | Gurevich | Johnstone | Kossak | Lehmann | McDermott | Moortgat | Vardi | Veloso


Using Logic to Understand What Went Wrong

by Keith Devlin, St. Mary's College

The initial versions of newly designed information systems (indeed, of new systems of most kinds) rarely work the way the various groups involved expected them to. Improving the design requires an analysis of what went wrong the first (or second, or third, etc.) time around. The deepest problems usually involve contexts of use and human factors.
The traditional approach to human factors issues is to carry out an analysis with the methodologies of social science. This can lead to a deep understanding of the rich complexities that arise in the context of use of the system. But how do the results of such an analysis find their way into the formal specifications required to change the design of the system?
Together with Duska Rosenberg, I have been developing and testing an analytic methodology called "LFZ analysis" (layered formalism and zooming), which can capture some of the complexity of a social science description and at the same time allow the precision required for system design. The method is grounded in situation theory, and uses the process (but not the product) of logical formalization as a means to achieve increased understanding of the domain in question.
Reference: Keith Devlin and Duska Rosenberg, Language at Work: Analyzing Communication Breakdown in the Workplace to Inform Systems Design, Stanford, CA: CSLI Publications, 1996.

Exact Real Number Computation Using Linear Fractional Transformations

by Abbas Edalat, Imperial College

We introduce a feasible and incremental framework for exact real number computation which uses linear fractional transformations with non-negative coefficients. It leads to the notion of exact floating point, which unifies previous approaches to the subject.
A library of efficient algorithms for elementary functions has been developed in this framework.
(This is joint work with Peter John Potts.)
(See background material.)
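
To make the framework concrete, here is a minimal sketch (our illustration, not the Edalat-Potts library itself) of how 2x2 integer matrices with non-negative entries act as linear fractional transformations x |-> (a*x + b)/(c*x + d), and how composing them yields shrinking rational enclosures of a real; the continued fraction of the golden ratio serves as a toy example.

    from fractions import Fraction

    def compose(m, n):
        """Multiply two 2x2 matrices (a, b, c, d); the matrix product corresponds
        to composition of the LFTs x |-> (a*x + b) / (c*x + d)."""
        a, b, c, d = m
        e, f, g, h = n
        return (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

    # The golden ratio has continued fraction [1; 1, 1, ...]; each term is
    # absorbed as the digit matrix (1, 1, 1, 0).  Because all coefficients are
    # non-negative, the accumulated LFT maps [0, oo) onto the interval with
    # endpoints b/d and a/c, which shrinks towards the limit.
    acc = (1, 1, 1, 0)
    for _ in range(20):
        acc = compose(acc, (1, 1, 1, 0))
    a, b, c, d = acc
    low, high = sorted((Fraction(b, d), Fraction(a, c)))
    print(float(low), float(high))   # both approximate 1.6180339887...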

Translating Event Oriented Models of Concurrency into Petri nets by means of Propositional Logic

by Rob van Glabbeek, Stanford University

Event structures and Petri nets are both powerful models for the faithful description of parallel computation. By providing translations in both directions, Nielsen, Plotkin and Winskel have shown that a subclass of prime event structures can be seen as an abstraction of a subclass of safe Petri nets. In this talk I generalise this correspondence to arbitrary event structures (and generalisations thereof) and to non-safe nets. For this purpose it is beneficial to view event structures as (infinitary) propositional theories.

Based on joint work with Gordon Plotkin.
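
To illustrate the propositional view with a toy example (hypothetical event names, not taken from the talk): read each event as a propositional variable, add for every event the axiom "the event implies the conjunction of its causes" and for every conflicting pair the axiom "not both"; the models of the resulting theory are then exactly the configurations. A brute-force check in a few lines:

    from itertools import chain, combinations

    # Toy prime event structure: b causally depends on a; b and c are in conflict.
    events   = {"a", "b", "c"}
    causes   = {"a": set(), "b": {"a"}, "c": set()}
    conflict = [("b", "c")]

    def is_model(cfg):
        """Read cfg as a valuation (event true iff it is in cfg) and check the
        axioms  e -> /\ causes(e)  and  ~(e /\ f)  for each conflicting pair."""
        causally_closed = all(causes[e] <= cfg for e in cfg)
        conflict_free = all(not {e, f} <= cfg for e, f in conflict)
        return causally_closed and conflict_free

    subsets = chain.from_iterable(combinations(sorted(events), r)
                                  for r in range(len(events) + 1))
    print([set(s) for s in subsets if is_model(set(s))])
    # prints exactly the configurations: set(), {'a'}, {'c'}, {'a','b'}, {'a','c'}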

Choice-less Polynomial Time

by Yuri Gurevich, University of Michigan

This is joint work with Andreas Blass and Saharon Shelah. It is devoted to the central open question in finite model theory: does there exist a logic that captures polynomial time? Years ago, the speaker conjectured a negative answer to this question. We now propose a logic that captures more of polynomial time than any previous logic and that is maximal in a certain sense. The lecture requires some familiarity with Gurevich's abstract state machines (formerly evolving algebras).
(See background material.)
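
To convey the flavour of computing without choice (an illustrative sketch only, not the Blass-Gurevich-Shelah machine model itself): a choiceless program may manipulate sets of elements in parallel but never picks an arbitrary individual element, so its result cannot depend on any ordering of the input, as in this set-based reachability computation.

    # Reachability by parallel set operations alone: no single vertex is ever
    # arbitrarily chosen, so the outcome is independent of how the input graph
    # happens to be presented.  (Illustration only; hypothetical input data.)
    edges = {(1, 2), (2, 3), (3, 2), (4, 5)}
    reach = {1}
    while True:
        step = reach | {w for (v, w) in edges if v in reach}
        if step == reach:
            break
        reach = step
    print(reach)   # {1, 2, 3}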

Grothendieck Toposes as First-Order Theories

by Peter Johnstone, Cambridge University

It is by now well understood that a Grothendieck topos can be viewed as the `extensional embodiment' of a geometric theory, in the sense that (a) for every Grothendieck topos E there is a geometric theory T(E) such that E contains a generic T(E)-model, and (b) for every geometric theory T there is a Grothendieck topos E(T) containing a generic model of T. However, as Carsten Butz has recently emphasized, a Grothendieck topos can also be viewed as the embodiment of a certain (infinitary, non-classical) first-order theory, in the sense that it contains a model of this theory which is generic with respect to open maps of toposes. Elementary size considerations imply that not every infinitary first-order theory has a `classifying topos' in this sense; in this talk we shall explore what can be said about those theories which do have classifying toposes.
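
For orientation (a standard definition rather than material from the talk): a geometric theory consists of sequents between geometric formulas, which are built from atomic formulas using truth, finite conjunction, arbitrary set-indexed disjunction and existential quantification,

    \varphi \;::=\; A \;\mid\; \top \;\mid\; \varphi \wedge \varphi \;\mid\; \textstyle\bigvee_{i \in I} \varphi_i \;\mid\; \exists x\, \varphi,
    \qquad \text{axioms of the form } \varphi \vdash_{\vec{x}} \psi,

whereas the infinitary first-order theories considered here may in addition use infinitary conjunction, implication, negation and universal quantification.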

What is Model Theory of Arithmetic?

by Roman Kossak, City University of New York

There is little evidence that the study of nonstandard models of arithmetic can contribute to our knowledge of the standard natural numbers. The period of active development, initiated by the discovery of Paris and Harrington, ended after it turned out that the major problems of the model theory of fragments of arithmetic could be reduced to open, and apparently difficult, problems of number theory and computational complexity. However, for those whose main interest is model theory as an independent field of study, nonstandard models of arithmetic have a lot to offer. Peano Arithmetic can serve as an attractive test case for measuring the strength and limitations of model-theoretic techniques. The general problem is to determine how much of the structure of a single model, and how much of the variety of the class of all models, can be described relative to some basic isomorphism invariants. Even the countable case poses significant difficulties, and part of the problem is to decide which invariants are the essential and natural ones to consider.
In my lecture I want to discuss one particular direction - the study of recursively saturated models. I want to argue that, in a precise sense, the study of countable models of PA can be reduced to the study of a single recursively saturated model. I will then discuss what is known about the structure of a recursively saturated model of PA. I will concentrate on results directly related to general questions concerning automorphisms and automorphism groups, and I will try to show how this approach generates the "right" kind of model theory of arithmetic.
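
For convenience, the key notion in question (a standard definition, not part of the abstract): a model M of PA is recursively saturated if every recursive type over finitely many parameters that is finitely satisfiable in M is realized in M, i.e.

    \text{if } p(x, \vec{y}) \text{ is a recursive set of formulas, } \vec{a} \in M, \text{ and } M \models \exists x \textstyle\bigwedge q(x, \vec{a}) \text{ for every finite } q \subseteq p,
    \text{then } M \models \textstyle\bigwedge p(b, \vec{a}) \text{ for some } b \in M.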

Stereotypical Reasoning

by Daniel Lehmann, Hebrew University of Jerusalem

Stereotypical reasoning assumes that the situation at hand is of some familiar kind and that it enjoys the properties generally associated with situations of that kind.
It is one of the most basic forms of nonmonotonic reasoning. A formal model for stereotypical reasoning is proposed and the logical properties of this form of reasoning are studied. Stereotypical reasoning is shown to be cumulative under weak assumptions.
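
For reference, the cumulativity property alluded to above (the standard Kraus-Lehmann-Magidor notion, stated here for orientation): a nonmonotonic consequence relation |~ is cumulative when, besides reflexivity and rules for replacing logically equivalent premises and weakening conclusions, it satisfies

    \frac{\alpha \mathrel{|\!\sim} \beta \qquad \alpha \wedge \beta \mathrel{|\!\sim} \gamma}{\alpha \mathrel{|\!\sim} \gamma}\ (\mathrm{Cut})
    \qquad\qquad
    \frac{\alpha \mathrel{|\!\sim} \beta \qquad \alpha \mathrel{|\!\sim} \gamma}{\alpha \wedge \beta \mathrel{|\!\sim} \gamma}\ (\mathrm{Cautious\ Monotony}),

so adding an already-plausible conclusion to the premises neither destroys nor creates plausible conclusions.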

Why Nonmonotonicity is Trivial Even Though Nonmonotonic Logic is Impossible

by Drew McDermott, Yale University

Nonmonotonic logic has proven difficult to formalize and intractable to work with, but people persist in working on it because it is supposed to be a crucial part of the theory of knowledge representation. Why? Presumably because implementing reasoning computationally requires formalizing reasoning, which requires expressing it in terms of logic, and too much of it can't be expressed using standard, monotonic logic; hence, logic must be extended. Many people find this argument convincing. I show that it is fallacious. The principal fallacy is to suppose that formalism = logic, but there is a worse fallacy in the neighborhood, which is the belief that because computers are formal systems, every conclusion they draw must be true in some formal deductive system. The truth is that formalizing reasoning is simply a matter of expressing it algorithmically; computer science already provides an adequate theory of knowledge representation. From this point of view, nonmonotonicity is seen to be a simple and not terribly central aspect of reasoning.

Grammatical Resources: Logic, Structure, and Control

by Michael Moortgat, Utrecht University

A fusion of ideas from Girard's linear logic and from Lambek's type-logical tradition has led to the development of a logic of `grammatical resources', offering a deductive perspective on the composition of form and meaning in natural language. The architecture of this grammar logic consists of three components. The core deductive engine is a base logic characterizing the universally valid laws of grammatical composition, i.e. laws that are independent of non-logical properties of the composition relation. The structural component then defines packages of resource management postulates for various modes of composition. These packages are added to the base logic as `plug-in' modules. They give rise to structural variation within languages (differential management for distinct dimensions of grammatical organization), and cross-linguistically. Most importantly, the control component provides systematic means for imposing structural constraints and for licensing structural relaxation under the explicit control of logical constants --- so-called `modalities'. Viewed from the perspective of a Curry-Howard style `derivational' theory of meaning, the expressive power of the grammatical base logic is limited --- the price one pays for structure sensitivity. We discuss this tension between semantic expressivity and structural discrimination. We characterize a family of `synthetic' operators, defined in terms of the control modalities, that recover the lost semantic expressivity without destroying vital structural information.
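
As a concrete anchor (standard laws of the Lambek-style setup, stated here for orientation rather than taken from the abstract): the base logic is generated by the residuation laws linking grammatical composition to its two directional implications, and the control operators form a further residuated pair:

    A \to C / B \;\Longleftrightarrow\; A \bullet B \to C \;\Longleftrightarrow\; B \to A \backslash C,
    \qquad
    \Diamond A \to B \;\Longleftrightarrow\; A \to \Box B \quad (\text{hence } \Diamond\Box A \to A \to \Box\Diamond A).

A structural postulate such as associativity or commutativity of the product can then be admitted only for Diamond-marked material, which is one way structural relaxation is licensed under logical control.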

Computational Model Theory

by Moshe Y. Vardi, Rice University

Model theory and recursion theory are two distinct and quite separate branches of mathematical logic. On the other hand, finite-model theory and complexity theory, which constitute their analogues in computer science, are intimately interrelated, together giving rise to computational model theory. Underlying this theory is the extension of first-order logic with iteration constructs, in order to make the logic "computational". These iterative logics can be viewed as effective fragments of infinitary finite-variable logic, whose expressive power can be captured by means of pebble games. Alternatively, they can be viewed as complexity-bounded classes of relational machines, which are Turing machines equipped with a relational store. Thus, characterizing the expressive power of these logics is equivalent to understanding the relationship between complexity classes such as PTIME and PSPACE.
In this expository talk the speaker will survey 15 years of research in this area.
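
A standard example of such an iteration construct (given here for orientation): graph reachability is not first-order definable, but least fixed-point logic expresses it as

    \mathrm{reach}(u, v) \;\equiv\; \bigl[\mathbf{lfp}_{R, x, y}\ \bigl(x = y \,\vee\, \exists z\, (E(x, z) \wedge R(z, y))\bigr)\bigr](u, v),

and, by the Immerman-Vardi theorem, on ordered finite structures this logic expresses exactly the polynomial-time computable queries.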

Logical Specifications and Program Development

by Paulo Veloso, Universidade Federal do Rio de Janeiro

An overview of logical specifications is presented, emphasising the interplay between programming considerations and logical aspects arising from abstract analyses. This approach aims at concepts and methods that are theoretically sound and close to programming practice. The presentation starts with an analysis of the programming process, which provides motivations for logical specifications, and then examines some logical concepts and results suggested by this approach, illustrating their applications.
Stepwise program development can be seen as successive refinements of behavioural specifications, of programs and data. Also, a specification can be gradually constructed from smaller ones. In this process, one generally wishes to preserve, rather than revise, properties. This is the idea of modularity: compositionality of steps without undue extra effort [1,4]. Specification engineering involves methodical specification construction (management of software projects can also be seen in these terms) [4].
Logical formalisms are used in program analysis [2] and they appear to be adequate for specifying at the appropriate level of abstraction, i.e. without extraneous details [1].
Logical specifications are based on (many-sorted) first-order logic; they describe computational objects by abstracting their properties into axiomatic theory presentations. The central concepts concerning implementations and parameterised specifications are formulated in terms of interpretations and conservative extensions of theories. An implementation is an interpretation into a conservative extension; parameterised specifications are conservative extensions of their parameters, and the latter can be instantiated by interpretations to yield instantiated specifications [1,4]. Composition of implementations and of instantiated specifications hinges on a pushout construction, which should preserve conservativeness. Modularity results, connected to interpolation and joint consistency, establish the preservation of faithfulness (and so of conservative extensions) by pushouts [6,7].
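As a tiny illustration (a generic textbook-style example with made-up names, not drawn from the abstract): extending a presentation of addition by an explicitly defined operation,

    \mathrm{NAT} \;=\; \{\, x + 0 = x,\;\; x + s(y) = s(x + y) \,\},
    \qquad
    \mathrm{NAT}' \;=\; \mathrm{NAT} \cup \{\, \mathrm{double}(x) = x + x \,\},

is conservative: every sentence in the original language provable from NAT' is already provable from NAT. An implementation of, say, a specification of finite sets in terms of lists would then be an interpretation of the set axioms into a conservative extension of the list specification.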
Methods for constructing conservative extensions are of interest [1]. For new operations, the methods examined involve weakenings of classical definitions and fall into three classes according to their power. For new sorts, constructs akin to those in programming languages and in Intuitionistic Type Theory are formulated as definition-like extensions.
Logical specifications have also been instrumental in formalising problem-solving methods [5], as well as in providing tools for their application [3].

References
[1]. T.S.E. Maibaum and P.A.S. Veloso, Logical Specifications. In: S. Abramsky, D. Gabbay and T.S.E. Maibaum (eds.), Handbook of Logic in Computer Science, Oxford Univ. Press, Oxford (forthcoming).
[2]. Z. Manna, The Mathematical Theory of Computation, McGraw-Hill, New York, 1974.
[3]. D.R. Smith, KIDS: a semiautomatic program development system. IEEE Trans. Software Engin. SE-16(9):1024-1043, 1990.
[4]. W.M. Turski and T.S.E. Maibaum, The Specification of Computer Programs. Addison-Wesley, Wokingham, 1987.
[5]. P.A.S. Veloso, On the concepts of problem and problem-solving method. Decision Support Systems 3(2):59-66, 1987.
[6]. P.A.S. Veloso, On pushout consistency, modularity and interpolation for logical specifications. Inform. Proc. Letters 60(2):59-66, 1996.
[7]. P.A.S. Veloso and T.S.E. Maibaum, On the Modularization Theorem for Logical Specifications. Inform. Proc. Letters 53(5):287-293, 1995.