title
stringlengths 4
246
| id
stringlengths 32
39
| arxiv_url
stringlengths 32
39
| pdf_url
stringlengths 32
39
| published_date
stringlengths 10
10
| updated_date
stringlengths 10
10
| authors
sequencelengths 1
535
| affiliations
sequencelengths 1
535
| summary
stringlengths 23
3.54k
| comment
stringlengths 0
762
| journal_ref
stringlengths 0
545
| doi
stringlengths 0
151
| primary_category
stringclasses 156
values | categories
sequencelengths 1
11
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Improving Precision of Type Analysis Using Non-Discriminative Union | http://arxiv.org/abs/cs/0612063v1 | http://arxiv.org/abs/cs/0612063v1 | http://arxiv.org/pdf/cs/0612063v1 | 2006-12-12 | 2006-12-12 | [
"Lunjin Lu"
] | [
""
] | This paper presents a new type analysis for logic programs. The analysis is
performed with a priori type definitions; and type expressions are formed from
a fixed alphabet of type constructors. Non-discriminative union is used to join
type information from different sources without loss of precision. An operation
that is performed repeatedly during an analysis is to detect if a fixpoint has
been reached. This is reduced to checking the emptiness of types. Due to the
use of non-discriminative union, the fundamental problem of checking the
emptiness of types is more complex in the proposed type analysis than in other
type analyses with a priori type definitions. The experimental results,
however, show that use of tabling reduces the effect to a small fraction of
analysis time on a set of benchmarks.
Keywords: Type analysis, Non-discriminative union, Abstract interpretation,
Tabling | 47 pages, 5 tables, to appear in Theory and Practice of Logic
Programming | Theory and Practice of Logic Programming, 8 (1): 33-80, 2008 | cs.PL | [
"cs.PL"
] |
|
Predicate Abstraction via Symbolic Decision Procedures | http://arxiv.org/abs/cs/0612003v2 | http://arxiv.org/abs/cs/0612003v2 | http://arxiv.org/pdf/cs/0612003v2 | 2006-12-01 | 2007-04-24 | [
"Shuvendu K. Lahiri",
"Thomas Ball",
"Byron Cook"
] | [
"",
"",
""
] | We present a new approach for performing predicate abstraction based on
symbolic decision procedures. Intuitively, a symbolic decision procedure for a
theory takes a set of predicates in the theory and symbolically executes a
decision procedure on all the subsets over the set of predicates. The result of
the symbolic decision procedure is a shared expression (represented by a
directed acyclic graph) that implicitly represents the answer to a predicate
abstraction query.
We present symbolic decision procedures for the logic of Equality and
Uninterpreted Functions (EUF) and Difference logic (DIFF) and show that these
procedures run in pseudo-polynomial (rather than exponential) time. We then
provide a method to construct symbolic decision procedures for simple mixed
theories (including the two theories mentioned above) using an extension of the
Nelson-Oppen combination method. We present preliminary evaluation of our
Procedure on predicate abstraction benchmarks from device driver verification
in SLAM. | The final accepted paper for Logical Methods in Computer Science,
special issue on CAV 2005. Editor Sriram Rajamani (sriram@microsoft.com).
Please perform make to build the paper. The pdf file is paper.pdf, and the
comments for the referee's is present in referee_comments | Logical Methods in Computer Science, Volume 3, Issue 2 (April 24,
2007) lmcs:2218 | 10.2168/LMCS-3(2:1)2007 | cs.LO | [
"cs.LO",
"cs.PL",
"cs.SC",
"F.3.1; F.4.1"
] |
Knowledge Representation Concepts for Automated SLA Management | http://arxiv.org/abs/cs/0611122v1 | http://arxiv.org/abs/cs/0611122v1 | http://arxiv.org/pdf/cs/0611122v1 | 2006-11-23 | 2006-11-23 | [
"Adrian Paschke",
"Martin Bichler"
] | [
"",
""
] | Outsourcing of complex IT infrastructure to IT service providers has
increased substantially during the past years. IT service providers must be
able to fulfil their service-quality commitments based upon predefined Service
Level Agreements (SLAs) with the service customer. They need to manage, execute
and maintain thousands of SLAs for different customers and different types of
services, which needs new levels of flexibility and automation not available
with the current technology. The complexity of contractual logic in SLAs
requires new forms of knowledge representation to automatically draw inferences
and execute contractual agreements. A logic-based approach provides several
advantages including automated rule chaining allowing for compact knowledge
representation as well as flexibility to adapt to rapidly changing business
requirements. We suggest adequate logical formalisms for representation and
enforcement of SLA rules and describe a proof-of-concept implementation. The
article describes selected formalisms of the ContractLog KR and their adequacy
for automated SLA management and presents results of experiments to demonstrate
flexibility and scalability of the approach. | Paschke, A. and Bichler, M.: Knowledge Representation Concepts for
Automated SLA Management, Int. Journal of Decision Support Systems (DSS),
submitted 19th March 2006 | cs.SE | [
"cs.SE",
"cs.AI",
"cs.LO",
"cs.PL",
"I.2"
] |
||
Effectiveness of Garbage Collection in MIT/GNU Scheme | http://arxiv.org/abs/cs/0611093v2 | http://arxiv.org/abs/cs/0611093v2 | http://arxiv.org/pdf/cs/0611093v2 | 2006-11-20 | 2007-09-01 | [
"Amey Karkare",
"Amitabha Sanyal",
"Uday Khedker"
] | [
"",
"",
""
] | Scheme uses garbage collection for heap memory management. Ideally, garbage
collectors should be able to reclaim all dead objects, i.e. objects that will
not be used in future. However, garbage collectors collect only those dead
objects that are not reachable from any program variable. Dead objects that are
reachable from program variables are not reclaimed.
In this paper we describe our experiments to measure the effectiveness of
garbage collection in MIT/GNU Scheme. We compute the drag time of objects, i.e.
the time for which an object remains in heap memory after its last use. The
number of dead objects and the drag time together indicate opportunities for
improving garbage collection. Our experiments reveal that up to 26% of dead
objects remain in memory. The average drag time is up to 37% of execution time.
Overall, we observe memory saving potential ranging from 9% to 65%. | 7 figures, 3 tables | cs.PL | [
"cs.PL",
"cs.PF",
"D.3.2; D.3.4; D.4.2"
] |
||
Interactive Problem Solving in Prolog | http://arxiv.org/abs/cs/0611014v1 | http://arxiv.org/abs/cs/0611014v1 | http://arxiv.org/pdf/cs/0611014v1 | 2006-11-03 | 2006-11-03 | [
"Erik Braun",
"Rainer Luetticke",
"Ingo Gloeckner",
"Hermann Helbig"
] | [
"",
"",
"",
""
] | This paper presents an environment for solving Prolog problems which has been
implemented as a module for the virtual laboratory VILAB. During the problem
solving processes the learners get fast adaptive feedback. As a result
analysing the learner's actions the system suggests the use of suitable
auxiliary predicates which will also be checked for proper implementation. The
focus of the environment has been set on robustness and the integration in
VILAB. | 4 pages, 1 figure, accepted for publication: Interactive computer
aided learning (ICL) 2006, International Conference in Villach (Austria).
Paper was not published because the authors were not able to participate on
the conference | cs.HC | [
"cs.HC",
"cs.CY",
"cs.PL",
"K.3.1; K.3.2; D.1.6; I.2.6; H.5.2"
] |
||
Efficient constraint propagation engines | http://arxiv.org/abs/cs/0611009v1 | http://arxiv.org/abs/cs/0611009v1 | http://arxiv.org/pdf/cs/0611009v1 | 2006-11-02 | 2006-11-02 | [
"Christian Schulte",
"Peter J. Stuckey"
] | [
"",
""
] | This paper presents a model and implementation techniques for speeding up
constraint propagation. Three fundamental approaches to improving constraint
propagation based on propagators as implementations of constraints are
explored: keeping track of which propagators are at fixpoint, choosing which
propagator to apply next, and how to combine several propagators for the same
constraint. We show how idempotence reasoning and events help track fixpoints
more accurately. We improve these methods by using them dynamically (taking
into account current domains to improve accuracy). We define priority-based
approaches to choosing a next propagator and show that dynamic priorities can
improve propagation. We illustrate that the use of multiple propagators for the
same constraint can be advantageous with priorities, and introduce staged
propagators that combine the effects of multiple propagators with priorities
for greater efficiency. | 45 pages, 1 figure, 14 tables | ACM TOPLAS, 31(1) article 2, 2008 | cs.AI | [
"cs.AI",
"cs.PL",
"D.3.2; D.3.3"
] |
|
Complexity of Data Flow Analysis for Non-Separable Frameworks | http://arxiv.org/abs/cs/0610164v2 | http://arxiv.org/abs/cs/0610164v2 | http://arxiv.org/pdf/cs/0610164v2 | 2006-10-30 | 2006-10-31 | [
"Bageshri Karkare",
"Uday Khedker"
] | [
"",
""
] | The complexity of round robin method of intraprocedural data flow analysis is
measured in number of iterations over the control flow graph. Existing
complexity bounds realistically explain the complexity of only Bit-vector
frameworks which are separable. In this paper we define the complexity bounds
for non-separable frameworks by quantifying the interdependences among the data
flow information of program entities using an Entity Dependence Graph. | Published in the International Conference on Programming Languages
and Compilers (PLC) 2006, Las Vegas, U.S.A | cs.PL | [
"cs.PL"
] |
||
A type-based termination criterion for dependently-typed higher-order
rewrite systems | http://arxiv.org/abs/cs/0610062v1 | http://arxiv.org/abs/cs/0610062v1 | http://arxiv.org/pdf/cs/0610062v1 | 2006-10-11 | 2006-10-11 | [
"Frederic Blanqui"
] | [
""
] | Several authors devised type-based termination criteria for ML-like languages
allowing non-structural recursive calls. We extend these works to general
rewriting and dependent types, hence providing a powerful termination criterion
for the combination of rewriting and beta-reduction in the Calculus of
Constructions. | Colloque avec actes et comit\'{e} de lecture. internationale | Dans 15th International Conference on Rewriting Techniques and
Applications - RTA'04 (2004) 15 p | cs.LO | [
"cs.LO",
"cs.PL"
] |
|
Memory and compiler optimizations for low-power and -energy | http://arxiv.org/abs/cs/0610028v1 | http://arxiv.org/abs/cs/0610028v1 | http://arxiv.org/pdf/cs/0610028v1 | 2006-10-05 | 2006-10-05 | [
"Olivier Zendra"
] | [
""
] | Embedded systems become more and more widespread, especially autonomous ones,
and clearly tend to be ubiquitous. In such systems, low-power and low-energy
usage get ever more crucial. Furthermore, these issues also become paramount in
(massively) multi-processors systems, either in one machine or more widely in a
grid. The various problems faced pertain to autonomy, power supply
possibilities, thermal dissipation, or even sheer energy cost. Although it has
since long been studied in harware, energy optimization is more recent in
software. In this paper, we thus aim at raising awareness to low-power and
low-energy issues in the language and compilation community. We thus broadly
but briefly survey techniques and solutions to this energy issue, focusing on a
few specific aspects in the context of compiler optimizations and memory
management. | ICOOOLPS'2006 was co-located with the 20th European Conference on
Object-Oriented Programming (ECOOP'2006) | Dans 1st ECOOP Workshop on Implementation, Compilation,
Optimization of Object-Oriented Languages, Programs and Systems
(ICOOOLPS'2006). (2006) | cs.PL | [
"cs.PL",
"cs.PF"
] |
|
On Verifying Complex Properties using Symbolic Shape Analysis | http://arxiv.org/abs/cs/0609104v1 | http://arxiv.org/abs/cs/0609104v1 | http://arxiv.org/pdf/cs/0609104v1 | 2006-09-18 | 2006-09-18 | [
"Thomas Wies",
"Viktor Kuncak",
"Karen Zee",
"Andreas Podelski",
"Martin Rinard"
] | [
"",
"",
"",
"",
""
] | One of the main challenges in the verification of software systems is the
analysis of unbounded data structures with dynamic memory allocation, such as
linked data structures and arrays. We describe Bohne, a new analysis for
verifying data structures. Bohne verifies data structure operations and shows
that 1) the operations preserve data structure invariants and 2) the operations
satisfy their specifications expressed in terms of changes to the set of
objects stored in the data structure. During the analysis, Bohne infers loop
invariants in the form of disjunctions of universally quantified Boolean
combinations of formulas. To synthesize loop invariants of this form, Bohne
uses a combination of decision procedures for Monadic Second-Order Logic over
trees, SMT-LIB decision procedures (currently CVC Lite), and an automated
reasoner within the Isabelle interactive theorem prover. This architecture
shows that synthesized loop invariants can serve as a useful communication
mechanism between different decision procedures. Using Bohne, we have verified
operations on data structures such as linked lists with iterators and back
pointers, trees with and without parent pointers, two-level skip lists, array
data structures, and sorted lists. We have deployed Bohne in the Hob and Jahob
data structure analysis systems, enabling us to combine Bohne with analyses of
data structure clients and apply it in the context of larger programs. This
report describes the Bohne algorithm as well as techniques that Bohne uses to
reduce the ammount of annotations and the running time of the analysis. | cs.PL | [
"cs.PL",
"cs.LO",
"cs.SE"
] |
|||
Analysis of Equality Relationships for Imperative Programs | http://arxiv.org/abs/cs/0609092v1 | http://arxiv.org/abs/cs/0609092v1 | http://arxiv.org/pdf/cs/0609092v1 | 2006-09-16 | 2006-09-16 | [
"P. Emelyanov"
] | [
""
] | In this article, we discuss a flow--sensitive analysis of equality
relationships for imperative programs. We describe its semantic domains,
general purpose operations over abstract computational states (term evaluation
and identification, semantic completion, widening operator, etc.) and semantic
transformers corresponding to program constructs. We summarize our experiences
from the last few years concerning this analysis and give attention to
applications of analysis of automatically generated code. Among other
illustrating examples, we consider a program for which the analysis diverges
without a widening operator and results of analyzing residual programs produced
by some automatic partial evaluator. An example of analysis of a program
generated by this evaluator is given. | 31 pages, 10 figures, 2 tables, 1 appendix | cs.PL | [
"cs.PL",
"D.3.1; F.3.2"
] |
||
Nominal Logic Programming | http://arxiv.org/abs/cs/0609062v2 | http://arxiv.org/abs/cs/0609062v2 | http://arxiv.org/pdf/cs/0609062v2 | 2006-09-12 | 2007-08-20 | [
"James Cheney",
"Christian Urban"
] | [
"",
""
] | Nominal logic is an extension of first-order logic which provides a simple
foundation for formalizing and reasoning about abstract syntax modulo
consistent renaming of bound names (that is, alpha-equivalence). This article
investigates logic programming based on nominal logic. We describe some typical
nominal logic programs, and develop the model-theoretic, proof-theoretic, and
operational semantics of such programs. Besides being of interest for ensuring
the correct behavior of implementations, these results provide a rigorous
foundation for techniques for analysis and reasoning about nominal logic
programs, as we illustrate via examples. | 46 pages; 19 page appendix; 13 figures. Revised journal submission as
of July 23, 2007 | ACM Transactions on Programming Languages and Systems 30(5):26,
August 2008 | 10.1145/1387673.1387675 | cs.PL | [
"cs.PL",
"cs.LO",
"D.1.6; F.3.2; F.4.1"
] |
On the confluence of lambda-calculus with conditional rewriting | http://arxiv.org/abs/cs/0609002v2 | http://arxiv.org/abs/cs/0609002v2 | http://arxiv.org/pdf/cs/0609002v2 | 2006-09-01 | 2006-09-11 | [
"Frédéric Blanqui",
"Claude Kirchner",
"Colin Riba"
] | [
"",
"",
""
] | The confluence of untyped lambda-calculus with unconditional rewriting has
already been studied in various directions. In this paper, we investigate the
confluence of lambda-calculus with conditional rewriting and provide general
results in two directions. First, when conditional rules are algebraic. This
extends results of Muller and Dougherty for unconditional rewriting. Two cases
are considered, whether beta-reduction is allowed or not in the evaluation of
conditions. Moreover, Dougherty's result is improved from the assumption of
strongly normalizing beta-reduction to weakly normalizing beta-reduction. We
also provide examples showing that outside these conditions, modularity of
confluence is difficult to achieve. Second, we go beyond the algebraic
framework and get new confluence results using an extended notion of
orthogonality that takes advantage of the conditional part of rewrite rules. | 10.1007/11690634\_26 | cs.LO | [
"cs.LO",
"cs.PL"
] |
||
Decidability of Type-checking in the Calculus of Algebraic Constructions
with Size Annotations | http://arxiv.org/abs/cs/0608125v2 | http://arxiv.org/abs/cs/0608125v2 | http://arxiv.org/pdf/cs/0608125v2 | 2006-08-31 | 2006-09-11 | [
"Frédéric Blanqui"
] | [
""
] | Since Val Tannen's pioneer work on the combination of simply-typed
lambda-calculus and first-order rewriting (LICS'88), many authors have
contributed to this subject by extending it to richer typed lambda-calculi and
rewriting paradigms, culminating in calculi like the Calculus of Algebraic
Constructions. These works provide theoretical foundations for type-theoretic
proof assistants where functions and predicates are defined by oriented
higher-order equations. This kind of definitions subsumes induction-based
definitions, is easier to write and provides more automation. On the other
hand, checking that user-defined rewrite rules are strongly normalizing and
confluent, and preserve the decidability of type-checking when combined with
beta-reduction, is more difficult. Most termination criteria rely on the term
structure. In a previous work, we extended to dependent types and higher-order
rewriting, the notion of ``sized types'' studied by several authors in the
simpler framework of ML-like languages, and proved that it preserves strong
normalization. The main contribution of the present paper is twofold. First, we
prove that, in the Calculus of Algebraic Constructions with size annotations,
the problems of type inference and type-checking are decidable, provided that
the sets of constraints generated by size annotations are satisfiable and admit
most general solutions. Second, we prove the later properties for a size
algebra rich enough for capturing usual induction-based definitions and much
more. | 10.1007/11538363\_11 | cs.LO | [
"cs.LO",
"cs.PL"
] |
||
FOSS-Based Grid Computing | http://arxiv.org/abs/cs/0608122v4 | http://arxiv.org/abs/cs/0608122v4 | http://arxiv.org/pdf/cs/0608122v4 | 2006-08-30 | 2015-07-17 | [
"A. Mani"
] | [
""
] | In this expository paper we will be primarily concerned with core aspects of
Grids and Grid computing using free and open-source software with some emphasis
on utility computing. It is based on a technical report entitled
'Grid-Computing Using GNU/Linux' by the present author. This article was
written in 2006 and should be of historical interest. | 47 Pages. arXiv admin note: text overlap with arXiv:cs/0605056 by
other authors | cs.DC | [
"cs.DC",
"cs.PL",
"C.2.4; D.1.3"
] |
||
Heap Reference Analysis Using Access Graphs | http://arxiv.org/abs/cs/0608104v3 | http://arxiv.org/abs/cs/0608104v3 | http://arxiv.org/pdf/cs/0608104v3 | 2006-08-28 | 2007-09-01 | [
"Uday Khedker",
"Amitabha Sanyal",
"Amey Karkare"
] | [
"",
"",
""
] | Despite significant progress in the theory and practice of program analysis,
analysing properties of heap data has not reached the same level of maturity as
the analysis of static and stack data. The spatial and temporal structure of
stack and static data is well understood while that of heap data seems
arbitrary and is unbounded. We devise bounded representations which summarize
properties of the heap data. This summarization is based on the structure of
the program which manipulates the heap. The resulting summary representations
are certain kinds of graphs called access graphs. The boundedness of these
representations and the monotonicity of the operations to manipulate them make
it possible to compute them through data flow analysis.
An important application which benefits from heap reference analysis is
garbage collection, where currently liveness is conservatively approximated by
reachability from program variables. As a consequence, current garbage
collectors leave a lot of garbage uncollected, a fact which has been confirmed
by several empirical studies. We propose the first ever end-to-end static
analysis to distinguish live objects from reachable objects. We use this
information to make dead objects unreachable by modifying the program. This
application is interesting because it requires discovering data flow
information representing complex semantics. In particular, we discover four
properties of heap data: liveness, aliasing, availability, and anticipability.
Together, they cover all combinations of directions of analysis (i.e. forward
and backward) and confluence of information (i.e. union and intersection). Our
analysis can also be used for plugging memory leaks in C/C++ languages. | Accepted for printing by ACM TOPLAS. This version incorporates
referees' comments | ACM TOPLAS, 30(1), 2007 | 10.1145/1290520.1290521 | cs.PL | [
"cs.PL",
"cs.SE",
"D.3.4; F.3.2"
] |
Modules over Monads and Linearity | http://arxiv.org/abs/cs/0608051v2 | http://arxiv.org/abs/cs/0608051v2 | http://arxiv.org/pdf/cs/0608051v2 | 2006-08-11 | 2007-05-07 | [
"André Hirschowitz",
"Marco Maggesi"
] | [
"",
""
] | Inspired by the classical theory of modules over a monoid, we give a first
account of the natural notion of module over a monad. The associated notion of
morphism of left modules ("Linear" natural transformations) captures an
important property of compatibility with substitution, in the heterogeneous
case where "terms" and variables therein could be of different types as well as
in the homogeneous case. In this paper, we present basic constructions of
modules and we show examples concerning in particular abstract syntax and
lambda-calculus. | 15 pages, too many changes to be summarized | cs.LO | [
"cs.LO",
"cs.PL"
] |
||
Resource Usage Analysis for the Pi-Calculus | http://arxiv.org/abs/cs/0608035v2 | http://arxiv.org/abs/cs/0608035v2 | http://arxiv.org/pdf/cs/0608035v2 | 2006-08-07 | 2006-09-13 | [
"Naoki Kobayashi",
"Kohei Suenaga",
"Lucian Wischik"
] | [
"",
"",
""
] | We propose a type-based resource usage analysis for the π-calculus
extended with resource creation/access primitives. The goal of the resource
usage analysis is to statically check that a program accesses resources such as
files and memory in a valid manner. Our type system is an extension of previous
behavioral type systems for the π-calculus, and can guarantee the safety
property that no invalid access is performed, as well as the property that
necessary accesses (such as the close operation for a file) are eventually
performed unless the program diverges. A sound type inference algorithm for the
type system is also developed to free the programmer from the burden of writing
complex type annotations. Based on the algorithm, we have implemented a
prototype resource usage analyzer for the π-calculus. To the authors'
knowledge, ours is the first type-based resource usage analysis that deals with
an expressive concurrent language like the pi-calculus. | Logical Methods in Computer Science, Volume 2, Issue 3 (September
13, 2006) lmcs:2246 | 10.2168/LMCS-2(3:4)2006 | cs.PL | [
"cs.PL",
"cs.LO",
"F.3.1; D.3.1"
] |
|
On Quasi-Interpretations, Blind Abstractions and Implicit Complexity | http://arxiv.org/abs/cs/0608030v1 | http://arxiv.org/abs/cs/0608030v1 | http://arxiv.org/pdf/cs/0608030v1 | 2006-08-06 | 2006-08-06 | [
"Patrick Baillot",
"Ugo Dal Lago",
"Jean-Yves Moyen"
] | [
"",
"",
""
] | Quasi-interpretations are a technique to guarantee complexity bounds on
first-order functional programs: with termination orderings they give in
particular a sufficient condition for a program to be executable in polynomial
time, called here the P-criterion. We study properties of the programs
satisfying the P-criterion, in order to better understand its intensional
expressive power. Given a program on binary lists, its blind abstraction is the
nondeterministic program obtained by replacing lists by their lengths (natural
numbers). A program is blindly polynomial if its blind abstraction terminates
in polynomial time. We show that all programs satisfying a variant of the
P-criterion are in fact blindly polynomial. Then we give two extensions of the
P-criterion: one by relaxing the termination ordering condition, and the other
one (the bounded value property) giving a necessary and sufficient condition
for a program to be polynomial time executable, with memoisation. | 18 pages | cs.PL | [
"cs.PL",
"cs.CC",
"cs.LO"
] |
||
ACD Term Rewriting | http://arxiv.org/abs/cs/0608016v1 | http://arxiv.org/abs/cs/0608016v1 | http://arxiv.org/pdf/cs/0608016v1 | 2006-08-03 | 2006-08-03 | [
"Gregory J. Duck",
"Peter J. Stuckey",
"Sebastian Brand"
] | [
"",
"",
""
] | We introduce Associative Commutative Distributive Term Rewriting (ACDTR), a
rewriting language for rewriting logical formulae. ACDTR extends AC term
rewriting by adding distribution of conjunction over other operators.
Conjunction is vital for expressive term rewriting systems since it allows us
to require that multiple conditions hold for a term rewriting rule to be used.
ACDTR uses the notion of a "conjunctive context", which is the conjunction of
constraints that must hold in the context of a term, to enable the programmer
to write very expressive and targeted rewriting rules. ACDTR can be seen as a
general logic programming language that extends Constraint Handling Rules and
AC term rewriting. In this paper we define the semantics of ACDTR and describe
our prototype implementation. | 21 pages; 22nd International Conference on Logic Programming
(ICLP'06) | cs.PL | [
"cs.PL",
"cs.SC"
] |
||
Towards "Propagation = Logic + Control" | http://arxiv.org/abs/cs/0608015v1 | http://arxiv.org/abs/cs/0608015v1 | http://arxiv.org/pdf/cs/0608015v1 | 2006-08-03 | 2006-08-03 | [
"Sebastian Brand",
"Roland H. C. Yap"
] | [
"",
""
] | Constraint propagation algorithms implement logical inference. For
efficiency, it is essential to control whether and in what order basic
inference steps are taken. We provide a high-level framework that clearly
differentiates between information needed for controlling propagation versus
that needed for the logical semantics of complex constraints composed from
primitive ones. We argue for the appropriateness of our controlled propagation
framework by showing that it captures the underlying principles of manually
designed propagation algorithms, such as literal watching for unit clause
propagation and the lexicographic ordering constraint. We provide an
implementation and benchmark results that demonstrate the practicality and
efficiency of our framework. | 15 pages; 22nd International Conference on Logic Programming
(ICLP'06) | cs.PL | [
"cs.PL",
"cs.AI"
] |
||
Deriving Escape Analysis by Abstract Interpretation: Proofs of results | http://arxiv.org/abs/cs/0607101v2 | http://arxiv.org/abs/cs/0607101v2 | http://arxiv.org/pdf/cs/0607101v2 | 2006-07-24 | 2006-07-28 | [
"Patricia M. Hill",
"Fausto Spoto"
] | [
"",
""
] | Escape analysis of object-oriented languages approximates the set of objects
which do not escape from a given context. If we take a method as context, the
non-escaping objects can be allocated on its activation stack; if we take a
thread, Java synchronisation locks on such objects are not needed. In this
paper, we formalise a basic escape domain e as an abstract interpretation of
concrete states, which we then refine into an abstract domain er which is more
concrete than e and, hence, leads to a more precise escape analysis than e. We
provide optimality results for both e and er, in the form of Galois insertions
from the concrete to the abstract domains and of optimal abstract operations.
The Galois insertion property is obtained by restricting the abstract domains
to those elements which do not contain garbage, by using an abstract garbage
collector. Our implementation of er is hence an implementation of a formally
correct escape analyser, able to detect the stack allocatable creation points
of Java (bytecode) applications.
This report contains the proofs of results of a paper with the same title and
authors and to be published in the Journal "Higher-Order Symbolic Computation". | cs.PL | [
"cs.PL",
"F.2"
] |
|||
De l'oprateur de trace dans les jeux de Conway | http://arxiv.org/abs/math/0607462v1 | http://arxiv.org/abs/math/0607462v1 | http://arxiv.org/pdf/math/0607462v1 | 2006-07-19 | 2006-07-19 | [
"Nicolas Tabareau"
] | [
""
] | In this report, we propose a game semantics model of intuitionistic linear
logic with a notion of brackets and a trace operator. This model is a revised
version of Conway games augmented with an algebraicly defined gain which enable
to describe well bracketed strategies. We then show the existence of a free
cocommutative comonoid in the category of Conway. To conclude, we propose a new
model of an Algol-like language with higher-order using the presence of a trace
operator in our model to describe the memorial aspect of the language. | math.CT | [
"math.CT",
"cs.PL"
] |
|||
PALS: Efficient Or-Parallelism on Beowulf Clusters | http://arxiv.org/abs/cs/0607040v1 | http://arxiv.org/abs/cs/0607040v1 | http://arxiv.org/pdf/cs/0607040v1 | 2006-07-09 | 2006-07-09 | [
"Enrico Pontelli",
"Karen Villaverde",
"Hai-Feng Guo",
"Gopal Gupta"
] | [
"",
"",
"",
""
] | This paper describes the development of the PALS system, an implementation of
Prolog capable of efficiently exploiting or-parallelism on distributed-memory
platforms--specifically Beowulf clusters. PALS makes use of a novel technique,
called incremental stack-splitting. The technique proposed builds on the
stack-splitting approach, previously described by the authors and
experimentally validated on shared-memory systems, which in turn is an
evolution of the stack-copying method used in a variety of parallel logic and
constraint systems--e.g., MUSE, YAP, and Penny. The PALS system is the first
distributed or-parallel implementation of Prolog based on the stack-splitting
method ever realized. The results presented confirm the superiority of this
method as a simple yet effective technique to transition from shared-memory to
distributed-memory systems. PALS extends stack-splitting by combining it with
incremental copying; the paper provides a description of the implementation of
PALS, including details of how distributed scheduling is handled. We also
investigate methodologies to effectively support order-sensitive predicates
(e.g., side-effects) in the context of the stack-splitting scheme. Experimental
results obtained from running PALS on both Shared Memory and Beowulf systems
are presented and analyzed. | 63 pages, 32 figures, 5 tabels. Theory and Practice of Logic
Programming (to appear) | cs.DC | [
"cs.DC",
"cs.PL"
] |
||
An Analysis of Arithmetic Constraints on Integer Intervals | http://arxiv.org/abs/cs/0607016v2 | http://arxiv.org/abs/cs/0607016v2 | http://arxiv.org/pdf/cs/0607016v2 | 2006-07-06 | 2007-03-21 | [
"Krzysztof R. Apt",
"Peter Zoeteweij"
] | [
"",
""
] | Arithmetic constraints on integer intervals are supported in many constraint
programming systems. We study here a number of approaches to implement
constraint propagation for these constraints. To describe them we introduce
integer interval arithmetic. Each approach is explained using appropriate proof
rules that reduce the variable domains. We compare these approaches using a set
of benchmarks. For the most promising approach we provide results that
characterize the effect of constraint propagation. This is a full version of
our earlier paper, cs.PL/0403016. | 44 pages, to appear in 'Constraints' journal | cs.AI | [
"cs.AI",
"cs.PL",
"D.3.2; D.3.3"
] |
||
Applying and Combining Three Different Aspect Mining Techniques | http://arxiv.org/abs/cs/0607006v1 | http://arxiv.org/abs/cs/0607006v1 | http://arxiv.org/pdf/cs/0607006v1 | 2006-07-02 | 2006-07-02 | [
"Mariano Ceccato",
"Marius Marin",
"Kim Mens",
"Leon Moonen",
"Paolo Tonella",
"Tom Tourwe"
] | [
"",
"",
"",
"",
"",
""
] | Understanding a software system at source-code level requires understanding
the different concerns that it addresses, which in turn requires a way to
identify these concerns in the source code. Whereas some concerns are
explicitly represented by program entities (like classes, methods and
variables) and thus are easy to identify, crosscutting concerns are not
captured by a single program entity but are scattered over many program
entities and are tangled with the other concerns. Because of their crosscutting
nature, such crosscutting concerns are difficult to identify, and reduce the
understandability of the system as a whole.
In this paper, we report on a combined experiment in which we try to identify
crosscutting concerns in the JHotDraw framework automatically. We first apply
three independently developed aspect mining techniques to JHotDraw and evaluate
and compare their results. Based on this analysis, we present three interesting
combinations of these three techniques, and show how these combinations provide
a more complete coverage of the detected concerns as compared to the original
techniques individually. Our results are a first step towards improving the
understandability of a system that contains crosscutting concerns, and can be
used as a basis for refactoring the identified crosscutting concerns into
aspects. | 28 pages | cs.SE | [
"cs.SE",
"cs.PL"
] |
||
Formalizing typical crosscutting concerns | http://arxiv.org/abs/cs/0606125v1 | http://arxiv.org/abs/cs/0606125v1 | http://arxiv.org/pdf/cs/0606125v1 | 2006-06-29 | 2006-06-29 | [
"Marius Marin"
] | [
""
] | We present a consistent system for referring crosscutting functionality,
relating crosscutting concerns to specific implementation idioms, and
formalizing their underlying relations through queries. The system is based on
generic crosscutting concerns that we organize and describe in a catalog.
We have designed and implemented a tool support for querying source code for
instances of the proposed generic concerns and organizing them in composite
concern models. The composite concern model adds a new dimension to the
dominant decomposition of the system for describing and making explicit source
code relations specific to crosscutting concerns implementations.
We use the proposed approach to describe crosscutting concerns in design
patterns and apply the tool to an opensource system (JHotDraw). | 24 pages | cs.SE | [
"cs.SE",
"cs.PL"
] |
||
A common framework for aspect mining based on crosscutting concern sorts | http://arxiv.org/abs/cs/0606113v1 | http://arxiv.org/abs/cs/0606113v1 | http://arxiv.org/pdf/cs/0606113v1 | 2006-06-27 | 2006-06-27 | [
"Marius Marin",
"Leon Moonen",
"Arie van Deursen"
] | [
"",
"",
""
] | The increasing number of aspect mining techniques proposed in literature
calls for a methodological way of comparing and combining them in order to
assess, and improve on, their quality. This paper addresses this situation by
proposing a common framework based on crosscutting concern sorts which allows
for consistent assessment, comparison and combination of aspect mining
techniques. The framework identifies a set of requirements that ensure
homogeneity in formulating the mining goals, presenting the results and
assessing their quality.
We demonstrate feasibility of the approach by retrofitting an existing aspect
mining technique to the framework, and by using it to design and implement two
new mining techniques. We apply the three techniques to a known aspect mining
benchmark and show how they can be consistently assessed and combined to
increase the quality of the results. The techniques and combinations are
implemented in FINT, our publicly available free aspect mining tool. | 14 pages | Proceedings Working Conference on Reverse Engineering (WCRE), IEEE
Computer Society, 2006, pages 29-38 | 10.1109/WCRE.2006.6 | cs.SE | [
"cs.SE",
"cs.PL"
] |
Toward Functionality Oriented Programming | http://arxiv.org/abs/cs/0606102v3 | http://arxiv.org/abs/cs/0606102v3 | http://arxiv.org/pdf/cs/0606102v3 | 2006-06-24 | 2011-02-18 | [
"Chengpu Wang"
] | [
""
] | The concept of functionality oriented programming is proposed, and some of
its aspects are discussed, such as: (1) implementation independent basic types
and generic collection types; (2) syntax requirements and recommendations for
implementation independence; (3) unified documentation and code; (4)
cross-module interface; and (5) cross-language program making scheme. A
prototype example is given to demonstrate functionality oriented programming. | This paper has been withdrawn by the author. 21 Pages, 7 Figures | cs.PL | [
"cs.PL",
"cs.HC"
] |
||
On Typechecking Top-Down XML Tranformations: Fixed Input or Output
Schemas | http://arxiv.org/abs/cs/0606094v1 | http://arxiv.org/abs/cs/0606094v1 | http://arxiv.org/pdf/cs/0606094v1 | 2006-06-22 | 2006-06-22 | [
"Wim Martens",
"Frank Neven",
"Marc Gyssens"
] | [
"",
"",
""
] | Typechecking consists of statically verifying whether the output of an XML
transformation always conforms to an output type for documents satisfying a
given input type. In this general setting, both the input and output schema as
well as the transformation are part of the input for the problem. However,
scenarios where the input or output schema can be considered to be fixed, are
quite common in practice. In the present work, we investigate the computational
complexity of the typechecking problem in the latter setting. | cs.DB | [
"cs.DB",
"cs.PL"
] |
|||
Relational Parametricity and Control | http://arxiv.org/abs/cs/0606072v2 | http://arxiv.org/abs/cs/0606072v2 | http://arxiv.org/pdf/cs/0606072v2 | 2006-06-15 | 2006-07-27 | [
"Masahito Hasegawa"
] | [
""
] | We study the equational theory of Parigot's second-order
λμ-calculus in connection with a call-by-name continuation-passing
style (CPS) translation into a fragment of the second-order λ-calculus.
It is observed that the relational parametricity on the target calculus induces
a natural notion of equivalence on the λμ-terms. On the other hand,
the unconstrained relational parametricity on the λμ-calculus turns
out to be inconsistent with this CPS semantics. Following these facts, we
propose to formulate the relational parametricity on the λμ-calculus
in a constrained way, which might be called ``focal parametricity''. | 22 pages, for Logical Methods in Computer Science | Logical Methods in Computer Science, Volume 2, Issue 3 (July 27,
2006) lmcs:2245 | 10.2168/LMCS-2(3:3)2006 | cs.PL | [
"cs.PL",
"cs.LO",
"F.3.2"
] |
Parallel Evaluation of Mathematica Programs in Remote Computers
Available in Network | http://arxiv.org/abs/cs/0606023v3 | http://arxiv.org/abs/cs/0606023v3 | http://arxiv.org/pdf/cs/0606023v3 | 2006-06-06 | 2008-10-17 | [
"Santanu K. Maiti"
] | [
""
] | Mathematica is a powerful application package for doing mathematics and is
used almost in all branches of science. It has widespread applications ranging
from quantum computation, statistical analysis, number theory, zoology,
astronomy, and many more. Mathematica gives a rich set of programming
extensions to its end-user language, and it permits us to write programs in
procedural, functional, or logic (rule-based) style, or a mixture of all three.
For tasks requiring interfaces to the external environment, mathematica
provides mathlink, which allows us to communicate mathematica programs with
external programs written in C, C++, F77, F90, F95, Java, or other languages.
It has also extensive capabilities for editing graphics, equations, text, etc.
In this article, we explore the basic mechanisms of parallelization of a
mathematica program by sharing different parts of the program into all other
computers available in the network. Doing the parallelization, we can perform
large computational operations within a very short period of time, and
therefore, the efficiency of the numerical works can be achieved. Parallel
computation supports any version of mathematica and it also works as well even
if different versions of mathematica are installed in different computers. The
whole operation can run under any supported operating system like Unix,
Windows, Macintosh, etc. Here we focus our study only for the Unix based
operating system, but this method works as well for all other cases. | 10 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:cs/0605090 | cs.MS | [
"cs.MS",
"cs.PL"
] |
||
A synchronous pi-calculus | http://arxiv.org/abs/cs/0606019v2 | http://arxiv.org/abs/cs/0606019v2 | http://arxiv.org/pdf/cs/0606019v2 | 2006-06-05 | 2007-02-09 | [
"Roberto Amadio"
] | [
""
] | The SL synchronous programming model is a relaxation of the Esterel
synchronous model where the reaction to the absence of a signal within an
instant can only happen at the next instant. In previous work, we have
revisited the SL synchronous programming model. In particular, we have
discussed an alternative design of the model including thread spawning and
recursive definitions, introduced a CPS translation to a tail recursive form,
and proposed a notion of bisimulation equivalence. In the present work, we
extend the tail recursive model with first-order data types obtaining a
non-deterministic synchronous model whose complexity is comparable to the one
of the pi-calculus. We show that our approach to bisimulation equivalence can
cope with this extension and in particular that labelled bisimulation can be
characterised as a contextual bisimulation. | Journal of Information and Computation 205, 9 (2007) 1470-1490 | cs.LO | [
"cs.LO",
"cs.PL"
] |
||
Modeling Aspect Mechanisms: A Top-Down Approach | http://arxiv.org/abs/cs/0606003v1 | http://arxiv.org/abs/cs/0606003v1 | http://arxiv.org/pdf/cs/0606003v1 | 2006-06-01 | 2006-06-01 | [
"Sergei Kojarski",
"David H. Lorenz"
] | [
"",
""
] | A plethora of diverse aspect mechanisms exist today, all of which integrate
concerns into artifacts that exhibit crosscutting structure. What we lack and
need is a characterization of the design space that these aspect mechanisms
inhabit and a model description of their weaving processes. A good design space
representation provides a common framework for understanding and evaluating
existing mechanisms. A well-understood model of the weaving process can guide
the implementor of new aspect mechanisms. It can guide the designer when
mechanisms implementing new kinds of weaving are needed. It can also help teach
aspect-oriented programming (AOP). In this paper we present and evaluate such a
model of the design space for aspect mechanisms and their weaving processes. We
model weaving, at an abstract level, as a concern integration process. We
derive a weaving process model (WPM) top-down, differentiating a reactive from
a nonreactive process. The model provides an in-depth explanation of the key
subpro existing aspect mechanisms. | In Proceedings of the 28th International Conference on Software
Engineering (ICSE'06), pages 212--221, Shanghai, China, May 20-28, 2006 | cs.SE | [
"cs.SE",
"cs.PL",
"D.2.10; D.1.5; D.3.2"
] |
||
Parsing Transformative LR(1) Languages | http://arxiv.org/abs/cs/0605104v2 | http://arxiv.org/abs/cs/0605104v2 | http://arxiv.org/pdf/cs/0605104v2 | 2006-05-24 | 2006-07-26 | [
"Blake Hegerle"
] | [
""
] | We consider, as a means of making programming languages more flexible and
powerful, a parsing algorithm in which the parser may freely modify the grammar
while parsing. We are particularly interested in a modification of the
canonical LR(1) parsing algorithm in which, after the reduction of certain
productions, we examine the source sentence seen so far to determine the
grammar to use to continue parsing. A naive modification of the canonical LR(1)
parsing algorithm along these lines cannot be guaranteed to halt; as a result,
we develop a test which examines the grammar as it changes, stopping the parse
if the grammar changes in a way that would invalidate earlier assumptions made
by the parser. With this test in hand, we can develop our parsing algorithm and
prove that it is correct. That being done, we turn to earlier, related work;
the idea of programming languages which can be extended to include new
syntactic constructs has existed almost as long as the idea of high-level
programming languages. Early efforts to construct such a programming language
were hampered by an immature theory of formal languages. More recent efforts to
construct transformative languages relied either on an inefficient chain of
source-to-source translators; or they have a defect, present in our naive
parsing algorithm, in that they cannot be known to halt. The present algorithm
does not have these undesirable properties, and as such, it should prove a
useful foundation for a new kind of programming language. | 68 pages, 4 figures. Numerous stylistic and grammatical fixes; new
material in Sections 2.2 and 4.2 | cs.PL | [
"cs.PL",
"D.3.2; D.3.4; F.4.2; F.4.3"
] |
||
Mathematica: A System of Computer Programs | http://arxiv.org/abs/cs/0605090v4 | http://arxiv.org/abs/cs/0605090v4 | http://arxiv.org/pdf/cs/0605090v4 | 2006-05-20 | 2015-10-28 | [
"Santanu K. Maiti"
] | [
""
] | Starting from the basic level of mathematica here we illustrate how to use a
mathematica notebook and write a program in the notebook. Next, we investigate
elaborately the way of linking of external programs with mathematica, so-called
the mathlink operation. Using this technique we can run very tedious jobs quite
efficiently, and the operations become extremely fast. Sometimes it is quite
desirable to run jobs in background of a computer which can take considerable
amount of time to finish, and this allows us to do work on other tasks, while
keeping the jobs running. The way of running jobs, written in a mathematica
notebook, in background is quite different from the conventional methods i.e.,
the techniques for the programs written in other languages like C, C++, F77,
F90, F95, etc. To illustrate it, in the present article we study how to create
a mathematica batch-file from a mathematica notebook and run it in the
background. Finally, we explore the most significant issue of this article.
Here we describe the basic ideas for parallelizing a mathematica program by
sharing its independent parts into all other remote computers available in the
network. Doing the parallelization, we can perform large computational
operations within a very short period of time, and therefore, the efficiency of
the numerical works can be achieved. Parallel computation supports any version
of mathematica and it also works significantly well even if different versions
of mathematica are installed in different computers. All the operations studied
in this article run under any supported operating system like Unix, Windows,
Macintosh, etc. For the sake of our illustrations, here we concentrate all the
discussions only for the Unix based operating system. | 17 pages, 4 figures. arXiv admin note: substantial text overlap with
arXiv:cs/0603005, arXiv:cs/0604088 | cs.MS | [
"cs.MS",
"cs.PL"
] |
||
Continuations, proofs and tests | http://arxiv.org/abs/cs/0605043v1 | http://arxiv.org/abs/cs/0605043v1 | http://arxiv.org/pdf/cs/0605043v1 | 2006-05-09 | 2006-05-09 | [
"Stefano Guerrini",
"Andrea Masini"
] | [
"",
""
] | Continuation Passing Style (CPS) is one of the most important issues in the
field of functional programming languages, and the quest for a primitive notion
of types for continuation is still open. Starting from the notion of ``test''
proposed by Girard, we develop a notion of test for intuitionistic logic. We
give a complete deductive system for tests and we show that it is good to deal
with ``continuations''. In particular, in the proposed system it is possible to
work with Call by Value and Call by Name translations in a uniform way. | 32 pages, uses xy-pic | cs.LO | [
"cs.LO",
"cs.PL"
] |
||
Demand-driven Inlining in a Region-based Optimizer for ILP Architectures | http://arxiv.org/abs/cs/0604043v1 | http://arxiv.org/abs/cs/0604043v1 | http://arxiv.org/pdf/cs/0604043v1 | 2006-04-11 | 2006-04-11 | [
"Thomas P. Way",
"Lori L. Pollock"
] | [
"",
""
] | Region-based compilation repartitions a program into more desirable
compilation units using profiling information and procedure inlining to enable
region formation analysis. Heuristics play a key role in determining when it is
most beneficial to inline procedures during region formation. An ILP optimizing
compiler using a region-based approach restructures a program to better reflect
dynamic behavior and increase interprocedural optimization and scheduling
opportunities. This paper presents an interprocedural compilation technique
which performs procedure inlining on-demand, rather than as a separate phase,
to improve the ability of a region-based optimizer to control code growth,
compilation time and memory usage while improving performance. The
interprocedural region formation algorithm utilizes a demand-driven,
heuristics-guided approach to inlining, restructuring an input program into
interprocedural regions. Experimental results are presented to demonstrate the
impact of the algorithm and several inlining heuristics upon a number of
traditional and novel compilation characteristics within a region-based ILP
compiler and simulator. | 23 pages | cs.DC | [
"cs.DC",
"cs.PL",
"D.3.4"
] |
||
Enhanced Prolog Remote Predicate Call Protocol | http://arxiv.org/abs/cs/0603102v1 | http://arxiv.org/abs/cs/0603102v1 | http://arxiv.org/pdf/cs/0603102v1 | 2006-03-26 | 2006-03-26 | [
"Alin Suciu",
"Kalman Pusztai",
"Andrei Diaconu"
] | [
"",
"",
""
] | Following the ideas of the Remote Procedure Call model, we have developed a
logic programming counterpart, naturally called Prolog Remote Predicate Call
(Prolog RPC). The Prolog RPC protocol facilitates the integration of Prolog
code in multi-language applications as well as the development of distributed
intelligent applications. One use of the protocol's most important uses could
be the development of distributed applications that use Prolog at least
partially to achieve their goals. Most notably the Distributed Artificial
Intelligence (DAI) applications that are suitable for logic programming can
profit from the use of the protocol. After proving its usefulness, we went
further, developing a new version of the protocol, making it more reliable and
extending its functionality. Because it has a new syntax and the new set of
commands, we call this version Enhanced Prolog Remote Procedure Call. This
paper describes the new features and modifications this second version
introduced. | cs.NI | [
"cs.NI",
"cs.PL"
] |
|||
Prolog Server Pages | http://arxiv.org/abs/cs/0603101v1 | http://arxiv.org/abs/cs/0603101v1 | http://arxiv.org/pdf/cs/0603101v1 | 2006-03-26 | 2006-03-26 | [
"Alin Suciu",
"Kalman Pusztai",
"Andrei Vancea"
] | [
"",
"",
""
] | Prolog Server Pages (PSP) is a scripting language, based on Prolog, than can
be embedded in HTML documents. To run PSP applications one needs a web server,
a web browser and a PSP interpreter. The code is executed, by the interpreter,
on the server-side (web server) and the output (together with the html code in
witch the PSP code is embedded) is sent to the client-side (browser). The
current implementation supports Apache Web Server. We implemented an Apache web
server module that handles PSP files, and sends the result (an html document)
to the client. PSP supports both GET and POST http requests. It also provides
methods for working with http cookies. | cs.NI | [
"cs.NI",
"cs.PL"
] |
|||
Efficient Compression of Prolog Programs | http://arxiv.org/abs/cs/0603100v1 | http://arxiv.org/abs/cs/0603100v1 | http://arxiv.org/pdf/cs/0603100v1 | 2006-03-26 | 2006-03-26 | [
"Alin Suciu",
"Kalman Pusztai"
] | [
"",
""
] | We propose a special-purpose class of compression algorithms for efficient
compression of Prolog programs. It is a dictionary-based compression method,
specially designed for the compression of Prolog code, and therefore we name it
PCA (Prolog Compression Algorithm). According to the experimental results this
method provides better compression than state-of-the-art general-purpose
compression algorithms. Since the algorithm works with Prolog syntactic
entities (e.g. atoms, terms, etc.) the implementation of a Prolog prototype is
straightforward and very easy to use in any Prolog application that needs
compression. Although the algorithm is designed for Prolog programs, the idea
can be easily applied for the compression of programs written in other (logic)
languages. | cs.PL | [
"cs.PL"
] |
|||
A compositional Semantics for CHR | http://arxiv.org/abs/cs/0603079v1 | http://arxiv.org/abs/cs/0603079v1 | http://arxiv.org/pdf/cs/0603079v1 | 2006-03-20 | 2006-03-20 | [
"Maurizio Gabbrielli",
"Maria Chiara Meo"
] | [
"",
""
] | Constraint Handling Rules (CHR) are a committed-choice declarative language
which has been designed for writing constraint solvers. A CHR program consists
of multi-headed guarded rules which allow one to rewrite constraints into
simpler ones until a solved form is reached.
CHR has received a considerable attention, both from the practical and from
the theoretical side. Nevertheless, due the use of multi-headed clauses, there
are several aspects of the CHR semantics which have not been clarified yet. In
particular, no compositional semantics for CHR has been defined so far.
In this paper we introduce a fix-point semantics which characterizes the
input/output behavior of a CHR program and which is and-compositional, that is,
which allows to retrieve the semantics of a conjunctive query from the
semantics of its components. Such a semantics can be used as a basis to define
incremental and modular analysis and verification tools. | cs.PL | [
"cs.PL"
] |
|||
Packrat Parsing: Simple, Powerful, Lazy, Linear Time | http://arxiv.org/abs/cs/0603077v1 | http://arxiv.org/abs/cs/0603077v1 | http://arxiv.org/pdf/cs/0603077v1 | 2006-03-18 | 2006-03-18 | [
"Bryan Ford"
] | [
""
] | Packrat parsing is a novel technique for implementing parsers in a lazy
functional programming language. A packrat parser provides the power and
flexibility of top-down parsing with backtracking and unlimited lookahead, but
nevertheless guarantees linear parse time. Any language defined by an LL(k) or
LR(k) grammar can be recognized by a packrat parser, in addition to many
languages that conventional linear-time algorithms do not support. This
additional power simplifies the handling of common syntactic idioms such as the
widespread but troublesome longest-match rule, enables the use of sophisticated
disambiguation strategies such as syntactic and semantic predicates, provides
better grammar composition properties, and allows lexical analysis to be
integrated seamlessly into parsing. Yet despite its power, packrat parsing
shares the same simplicity and elegance as recursive descent parsing; in fact
converting a backtracking recursive descent parser into a linear-time packrat
parser often involves only a fairly straightforward structural change. This
paper describes packrat parsing informally with emphasis on its use in
practical applications, and explores its advantages and disadvantages with
respect to the more conventional alternatives. | 12 pages, 5 figures | International Conference on Functional Programming (ICFP '02),
October 2002, Pittsburgh, PA | cs.DS | [
"cs.DS",
"cs.CC",
"cs.PL",
"D.3.4; D.1.1; F.4.2"
] |
|
On the tree-transformation power of XSLT | http://arxiv.org/abs/cs/0603028v1 | http://arxiv.org/abs/cs/0603028v1 | http://arxiv.org/pdf/cs/0603028v1 | 2006-03-08 | 2006-03-08 | [
"Wim Janssen",
"Alexandr Korlyukov",
"Jan Van den Bussche"
] | [
"",
"",
""
] | XSLT is a standard rule-based programming language for expressing
transformations of XML data. The language is currently in transition from
version 1.0 to 2.0. In order to understand the computational consequences of
this transition, we restrict XSLT to its pure tree-transformation capabilities.
Under this focus, we observe that XSLT~1.0 was not yet a computationally
complete tree-transformation language: every 1.0 program can be implemented in
exponential time. A crucial new feature of version~2.0, however, which allows
nodesets over temporary trees, yields completeness. We provide a formal
operational semantics for XSLT programs, and establish confluence for this
semantics. | Acta Informatica, Volume 43, Number 6 / January, 2007 | 10.1007/s00236-006-0026-8 | cs.PL | [
"cs.PL",
"cs.DB",
"D.3.1; H.2.3; F.1.1"
] |
|
Language Support for Optional Functionality | http://arxiv.org/abs/cs/0603021v1 | http://arxiv.org/abs/cs/0603021v1 | http://arxiv.org/pdf/cs/0603021v1 | 2006-03-06 | 2006-03-06 | [
"Joy Mukherjee",
"Srinidhi Varadarajan"
] | [
"",
""
] | We recommend a programming construct - availability check - for programs that
need to automatically adjust to presence or absence of segments of code. The
idea is to check the existence of a valid definition before a function call is
invoked. The syntax is that of a simple 'if' statement. The vision is to enable
customization of application functionality through addition or removal of
optional components, but without requiring complete re-building. Focus is on
C-like compiled procedural languages and UNIX-based systems. Essentially, our
approach attempts to combine the flexibility of dynamic libraries with the
usability of utility (dependency) libraries. We outline the benefits over
prevalent strategies mainly in terms of development complexity, crudely
measured as lesser lines of code. We also allude to performance and flexibility
facets. A Preliminary implementation and figures from early experimental
evaluation are presented. | 6 pages, 8 figures | cs.PL | [
"cs.PL",
"cs.OS",
"cs.SE"
] |
||
Object-Oriented Modeling of Programming Paradigms | http://arxiv.org/abs/cs/0603016v2 | http://arxiv.org/abs/cs/0603016v2 | http://arxiv.org/pdf/cs/0603016v2 | 2006-03-03 | 2006-11-09 | [
"M. H. van Emden",
"S. C. Somosan"
] | [
"",
""
] | For the right application, the use of programming paradigms such as
functional or logic programming can enormously increase productivity in
software development. But these powerful paradigms are tied to exotic
programming languages, while the management of software development dictates
standardization on a single language.
This dilemma can be resolved by using object-oriented programming in a new
way. It is conventional to analyze an application by object-oriented modeling.
In the new approach, the analysis identifies the paradigm that is ideal for the
application; development starts with object-oriented modeling of the paradigm.
In this paper we illustrate the new approach by giving examples of
object-oriented modeling of dataflow and constraint programming. These examples
suggest that it is no longer necessary to embody a programming paradigm in a
language dedicated to it. | Re-written introduction and abstract, new title, some references
deleted; 10 pages; 4 figures | cs.SE | [
"cs.SE",
"cs.PL",
"D.1.5; D.2.2; D.2.3; D.3.3"
] |
||
A Basic Introduction on Math-Link in Mathematica | http://arxiv.org/abs/cs/0603005v4 | http://arxiv.org/abs/cs/0603005v4 | http://arxiv.org/pdf/cs/0603005v4 | 2006-03-01 | 2008-10-20 | [
"Santanu K. Maiti"
] | [
""
] | Starting from the basic ideas of mathematica, we give a detailed description
about the way of linking of external programs with mathematica through proper
mathlink commands. This article may be quite helpful for the beginners to start
with and write programs in mathematica.
In the first part, we illustrate how to use a mathemtica notebook and write a
complete program in the notebook. Following with this, we also mention
elaborately about the utility of the local and global variables those are very
essential for writing a program in mathematica. All the commands needed for
doing different mathematical operations can be found with some proper examples
in the mathematica book written by Stephen Wolfram \cite{wolfram}.
In the rest of this article, we concentrate our study on the most significant
issue which is the process of linking of {\em external programs} with
mathematica, so-called the mathlink operation. By using proper mathlink
commands one can run very tedious jobs efficiently and the operations become
extremely fast. | 14 pages, 2 figures | cs.MS | [
"cs.MS",
"cs.PL"
] |
||
Towards Applicative Relational Programming | http://arxiv.org/abs/cs/0602099v1 | http://arxiv.org/abs/cs/0602099v1 | http://arxiv.org/pdf/cs/0602099v1 | 2006-02-28 | 2006-02-28 | [
"H. Ibrahim",
"M. H. van Emden"
] | [
"",
""
] | Functional programming comes in two flavours: one where ``functions are
first-class citizens'' (we call this applicative) and one which is based on
equations (we call this declarative). In relational programming clauses play
the role of equations. Hence Prolog is declarative. The purpose of this paper
is to provide in relational programming a mathematical basis for the relational
analog of applicative functional programming. We use the cylindric semantics of
first-order logic due to Tarski and provide a new notation for the required
cylinders that we call tables. We define the Table/Relation Algebra with
operators sufficient to translate Horn clauses into algebraic form. We
establish basic mathematical properties of these operators. We show how
relations can be first-class citizens, and devise mechanisms for modularity,
for local scoping of predicates, and for exporting/importing relations between
programs. | 10 pages; no figures | cs.PL | [
"cs.PL",
"D.1.6; F.3.2"
] |
||
Compositional Semantics for the Procedural Interpretation of Logic | http://arxiv.org/abs/cs/0602098v2 | http://arxiv.org/abs/cs/0602098v2 | http://arxiv.org/pdf/cs/0602098v2 | 2006-02-28 | 2006-05-07 | [
"M. H. van Emden"
] | [
""
] | Semantics of logic programs has been given by proof theory, model theory and
by fixpoint of the immediate-consequence operator. If clausal logic is a
programming language, then it should also have a compositional semantics.
Compositional semantics for programming languages follows the abstract syntax
of programs, composing the meaning of a unit by a mathematical operation on the
meanings of its constituent units. The procedural interpretation of logic has
only yielded an incomplete abstract syntax for logic programs. We complete it
and use the result as basis of a compositional semantics. We present for
comparison Tarski's algebraization of first-order predicate logic, which is in
substance the compositional semantics for his choice of syntax. We characterize
our semantics by equivalence with the immediate-consequence operator. | 17 pages; no figures | cs.PL | [
"cs.PL",
"D.1.6; F.3.2"
] |
||
Quantum Predicative Programming | http://arxiv.org/abs/quant-ph/0602156v1 | http://arxiv.org/abs/quant-ph/0602156v1 | http://arxiv.org/pdf/quant-ph/0602156v1 | 2006-02-17 | 2006-02-17 | [
"Anya Tafliovich",
"E. C. R. Hehner"
] | [
"",
""
] | The subject of this work is quantum predicative programming -- the study of
developing of programs intended for execution on a quantum computer. We look at
programming in the context of formal methods of program development, or
programming methodology. Our work is based on probabilistic predicative
programming, a recent generalisation of the well-established predicative
programming. It supports the style of program development in which each
programming step is proven correct as it is made. We inherit the advantages of
the theory, such as its generality, simple treatment of recursive programs,
time and space complexity, and communication. Our theory of quantum programming
provides tools to write both classical and quantum specifications, develop
quantum programs that implement these specifications, and reason about their
comparative time and space complexity all in the same framework. | quant-ph | [
"quant-ph",
"cs.PL"
] |
|||
Explaining Constraint Programming | http://arxiv.org/abs/cs/0602027v1 | http://arxiv.org/abs/cs/0602027v1 | http://arxiv.org/pdf/cs/0602027v1 | 2006-02-07 | 2006-02-07 | [
"Krzysztof R. Apt"
] | [
""
] | We discuss here constraint programming (CP) by using a proof-theoretic
perspective. To this end we identify three levels of abstraction. Each level
sheds light on the essence of CP.
In particular, the highest level allows us to bring CP closer to the
computation as deduction paradigm. At the middle level we can explain various
constraint propagation algorithms. Finally, at the lowest level we can address
the issue of automatic generation and optimization of the constraint
propagation algorithms. | 15 pages, appeared in "Processes, Terms and Cycles: Steps on the Road
to Infinity", (A. Middeldorp, V. van Oostrom, F. van Raamsdonk, R. de Vrijer,
eds.), LNCS 3838, pp. 55-69. (2005) | cs.PL | [
"cs.PL",
"cs.AI",
"D.3.2; F.4.1"
] |
||
Demand Analysis with Partial Predicates | http://arxiv.org/abs/cs/0602008v1 | http://arxiv.org/abs/cs/0602008v1 | http://arxiv.org/pdf/cs/0602008v1 | 2006-02-04 | 2006-02-04 | [
"Julio Marino",
"Angel Herranz",
"Juan Jose Moreno-Navarro"
] | [
"",
"",
""
] | In order to alleviate the inefficiencies caused by the interaction of the
logic and functional sides, integrated languages may take advantage of
\emph{demand} information -- i.e. knowing in advance which computations are
needed and, to which extent, in a particular context. This work studies
\emph{demand analysis} -- which is closely related to \emph{backwards
strictness analysis} -- in a semantic framework of \emph{partial predicates},
which in turn are constructive realizations of ideals in a domain. This will
allow us to give a concise, unified presentation of demand analysis, to relate
it to other analyses based on abstract interpretation or strictness logics,
some hints for the implementation, and, more important, to prove the soundness
of our analysis based on \emph{demand equations}. There are also some
innovative results. One of them is that a set constraint-based analysis has
been derived in a stepwise manner using ideas taken from the area of program
transformation. The other one is the possibility of using program
transformation itself to perform the analysis, specially in those domains of
properties where algorithms based on constraint solving are too weak. | This is the extended version of a paper accepted for publication in a
forthcoming special issue of Theory and Practice of Logic Programming on
Multiparadigm and Constraint Programming (Falaschi and Maher, eds.)
Appendices are missing in the printed version | cs.PL | [
"cs.PL",
"cs.SC"
] |
||
Fast Frequent Querying with Lazy Control Flow Compilation | http://arxiv.org/abs/cs/0601072v1 | http://arxiv.org/abs/cs/0601072v1 | http://arxiv.org/pdf/cs/0601072v1 | 2006-01-16 | 2006-01-16 | [
"Remko Tronçon",
"Gerda Janssens",
"Bart Demoen",
"Henk Vandecasteele"
] | [
"",
"",
"",
""
] | Control flow compilation is a hybrid between classical WAM compilation and
meta-call, limited to the compilation of non-recursive clause bodies. This
approach is used successfully for the execution of dynamically generated
queries in an inductive logic programming setting (ILP). Control flow
compilation reduces compilation times up to an order of magnitude, without
slowing down execution. A lazy variant of control flow compilation is also
presented. By compiling code by need, it removes the overhead of compiling
unreached code (a frequent phenomenon in practical ILP settings), and thus
reduces the size of the compiled code. Both dynamic compilation approaches have
been implemented and were combined with query packs, an efficient ILP execution
mechanism. It turns out that locality of data and code is important for
performance. The experiments reported in the paper show that lazy control flow
compilation is superior in both artificial and real life settings. | cs.PL | [
"cs.PL",
"cs.AI",
"cs.SE"
] |
|||
Constraint Functional Logic Programming over Finite Domains | http://arxiv.org/abs/cs/0601071v1 | http://arxiv.org/abs/cs/0601071v1 | http://arxiv.org/pdf/cs/0601071v1 | 2006-01-16 | 2006-01-16 | [
"Antonio J. Fernandez",
"Teresa Hortala-Gonzalez",
"Fernando Saenz-Perez",
"Rafael del Vado-Virseda"
] | [
"",
"",
"",
""
] | In this paper, we present our proposal to Constraint Functional Logic
Programming over Finite Domains (CFLP(FD)) with a lazy functional logic
programming language which seamlessly embodies finite domain (FD) constraints.
This proposal increases the expressiveness and power of constraint logic
programming over finite domains (CLP(FD)) by combining functional and
relational notation, curried expressions, higher-order functions, patterns,
partial applications, non-determinism, lazy evaluation, logical variables,
types, domain variables, constraint composition, and finite domain constraints.
We describe the syntax of the language, its type discipline, and its
declarative and operational semantics. We also describe TOY(FD), an
implementation for CFLPFD(FD), and a comparison of our approach with respect to
CLP(FD) from a programming point of view, showing the new features we
introduce. And, finally, we show a performance analysis which demonstrates that
our implementation is competitive with respect to existing CLP(FD) systems and
that clearly outperforms the closer approach to CFLP(FD). | Accepted for publication in Theory and Practice of Logic programming
(TPLP); 47 pages | cs.PL | [
"cs.PL",
"D.3.2; D.3.3; F.3.2"
] |
||
A Constructive Semantic Characterization of Aggregates in ASP | http://arxiv.org/abs/cs/0601051v2 | http://arxiv.org/abs/cs/0601051v2 | http://arxiv.org/pdf/cs/0601051v2 | 2006-01-13 | 2006-02-08 | [
"Tran Cao Son",
"Enrico Pontelli"
] | [
"",
""
] | This technical note describes a monotone and continuous fixpoint operator to
compute the answer sets of programs with aggregates. The fixpoint operator
relies on the notion of aggregate solution. Under certain conditions, this
operator behaves identically to the three-valued immediate consequence operator
$\Phi^{aggr}_P$ for aggregate programs, independently proposed Pelov et al.
This operator allows us to closely tie the computational complexity of the
answer set checking and answer sets existence problems to the cost of checking
a solution of the aggregates in the program. Finally, we relate the semantics
described by the operator to other proposals for logic programming with
aggregates.
To appear in Theory and Practice of Logic Programming (TPLP). | 21 pages | cs.AI | [
"cs.AI",
"cs.LO",
"cs.PL",
"cs.SC",
"D.1.6; D.3.1; D.3.2; D.3.3"
] |
||
Removing Redundant Arguments Automatically | http://arxiv.org/abs/cs/0601039v1 | http://arxiv.org/abs/cs/0601039v1 | http://arxiv.org/pdf/cs/0601039v1 | 2006-01-10 | 2006-01-10 | [
"Maria Alpuente",
"Santiago Escobar",
"Salvador Lucas"
] | [
"",
"",
""
] | The application of automatic transformation processes during the formal
development and optimization of programs can introduce encumbrances in the
generated code that programmers usually (or presumably) do not write. An
example is the introduction of redundant arguments in the functions defined in
the program. Redundancy of a parameter means that replacing it by any
expression does not change the result. In this work, we provide methods for the
analysis and elimination of redundant arguments in term rewriting systems as a
model for the programs that can be written in more sophisticated languages. On
the basis of the uselessness of redundant arguments, we also propose an erasure
procedure which may avoid wasteful computations while still preserving the
semantics (under ascertained conditions). A prototype implementation of these
methods has been undertaken, which demonstrates the practicality of our
approach. | Accepted for publication in Theory and Practice of Logic Programming | cs.PL | [
"cs.PL",
"D.2.4; F.3.1; F.3.3; I.2.2; I.2.5"
] |
||
Constraint-based automatic verification of abstract models of
multithreaded programs | http://arxiv.org/abs/cs/0601038v1 | http://arxiv.org/abs/cs/0601038v1 | http://arxiv.org/pdf/cs/0601038v1 | 2006-01-10 | 2006-01-10 | [
"Giorgio Delzanno"
] | [
""
] | We present a technique for the automated verification of abstract models of
multithreaded programs providing fresh name generation, name mobility, and
unbounded control.
As high level specification language we adopt here an extension of
communication finite-state machines with local variables ranging over an
infinite name domain, called TDL programs. Communication machines have been
proved very effective for representing communication protocols as well as for
representing abstractions of multithreaded software.
The verification method that we propose is based on the encoding of TDL
programs into a low level language based on multiset rewriting and constraints
that can be viewed as an extension of Petri Nets. By means of this encoding,
the symbolic verification procedure developed for the low level language in our
previous work can now be applied to TDL programs. Furthermore, the encoding
allows us to isolate a decidable class of verification problems for TDL
programs that still provide fresh name generation, name mobility, and unbounded
control. Our syntactic restrictions are in fact defined on the internal
structure of threads: In order to obtain a complete and terminating method,
threads are only allowed to have at most one local variable (ranging over an
infinite domain of names). | To appear in Theory and Practice of Logic Programming | cs.LO | [
"cs.LO",
"cs.PL"
] |
||
Constraint-based verification of abstract models of multitreaded
programs | http://arxiv.org/abs/cs/0601037v1 | http://arxiv.org/abs/cs/0601037v1 | http://arxiv.org/pdf/cs/0601037v1 | 2006-01-10 | 2006-01-10 | [
"Giorgio Delzanno"
] | [
""
] | We present a technique for the automated verification of abstract models of
multithreaded programs providing fresh name generation, name mobility, and
unbounded control.
As high level specification language we adopt here an extension of
communication finite-state machines with local variables ranging over an
infinite name domain, called TDL programs. Communication machines have been
proved very effective for representing communication protocols as well as for
representing abstractions of multithreaded software.
The verification method that we propose is based on the encoding of TDL
programs into a low level language based on multiset rewriting and constraints
that can be viewed as an extension of Petri Nets. By means of this encoding,
the symbolic verification procedure developed for the low level language in our
previous work can now be applied to TDL programs. Furthermore, the encoding
allows us to isolate a decidable class of verification problems for TDL
programs that still provide fresh name generation, name mobility, and unbounded
control. Our syntactic restrictions are in fact defined on the internal
structure of threads: In order to obtain a complete and terminating method,
threads are only allowed to have at most one local variable (ranging over an
infinite domain of names). | To appear in Theory and Practice of Logic Programming | cs.CL | [
"cs.CL",
"cs.PL"
] |
||
Canonical Abstract Syntax Trees | http://arxiv.org/abs/cs/0601019v2 | http://arxiv.org/abs/cs/0601019v2 | http://arxiv.org/pdf/cs/0601019v2 | 2006-01-06 | 2006-11-21 | [
"Antoine Reilles"
] | [
""
] | This paper presents Gom, a language for describing abstract syntax trees and
generating a Java implementation for those trees. Gom includes features
allowing the user to specify and modify the interface of the data structure.
These features provide in particular the capability to maintain the internal
representation of data in canonical form with respect to a rewrite system. This
explicitly guarantees that the client program only manipulates normal forms for
this rewrite system, a feature which is only implicitly used in many
implementations. | Dans Workshop on Rewriting Techniques and Applications (2006) | cs.PL | [
"cs.PL"
] |
||
Forward slicing of functional logic programs by partial evaluation | http://arxiv.org/abs/cs/0601013v1 | http://arxiv.org/abs/cs/0601013v1 | http://arxiv.org/pdf/cs/0601013v1 | 2006-01-06 | 2006-01-06 | [
"Josep Silva",
"Germán Vidal"
] | [
"",
""
] | Program slicing has been mainly studied in the context of imperative
languages, where it has been applied to a wide variety of software engineering
tasks, like program understanding, maintenance, debugging, testing, code reuse,
etc. This work introduces the first forward slicing technique for declarative
multi-paradigm programs which integrate features from functional and logic
programming. Basically, given a program and a slicing criterion (a function
call in our setting), the computed forward slice contains those parts of the
original program which are reachable from the slicing criterion. Our approach
to program slicing is based on an extension of (online) partial evaluation.
Therefore, it provides a simple way to develop program slicing tools from
existing partial evaluators and helps to clarify the relation between both
methodologies. A slicing tool for the multi-paradigm language Curry, which
demonstrates the usefulness of our approach, has been implemented in Curry
itself. | To appear in Theory and Practice of Logic Programming (TPLP) | cs.PL | [
"cs.PL",
"cs.LO",
"D.3.4; I.2.2"
] |
||
Incremental copying garbage collection for WAM-based Prolog systems | http://arxiv.org/abs/cs/0601003v1 | http://arxiv.org/abs/cs/0601003v1 | http://arxiv.org/pdf/cs/0601003v1 | 2006-01-02 | 2006-01-02 | [
"Ruben Vandeginste",
"Bart Demoen"
] | [
"",
""
] | The design and implementation of an incremental copying heap garbage
collector for WAM-based Prolog systems is presented. Its heap layout consists
of a number of equal-sized blocks. Other changes to the standard WAM allow
these blocks to be garbage collected independently. The independent collection
of heap blocks forms the basis of an incremental collecting algorithm which
employs copying without marking (contrary to the more frequently used mark©
or mark&slide algorithms in the context of Prolog). Compared to standard
semi-space copying collectors, this approach to heap garbage collection lowers
in many cases the memory usage and reduces pause times. The algorithm also
allows for a wide variety of garbage collection policies including generational
ones. The algorithm is implemented and evaluated in the context of hProlog. | 33 pages, 22 figures, 5 tables. To appear in Theory and Practice of
Logic Programming (TPLP) | cs.PL | [
"cs.PL"
] |
||
Book review "The Haskell Road to Logic, Maths and Programming" | http://arxiv.org/abs/cs/0512096v2 | http://arxiv.org/abs/cs/0512096v2 | http://arxiv.org/pdf/cs/0512096v2 | 2005-12-24 | 2006-06-22 | [
"Ralf Laemmel"
] | [
""
] | The textbook by Doets and van Eijck puts the Haskell programming language
systematically to work for presenting a major piece of logic and mathematics.
The reader is taken through chapters on basic logic, proof recipes, sets and
lists, relations and functions, recursion and co-recursion, the number systems,
polynomials and power series, ending with Cantor's infinities. The book uses
Haskell for the executable and strongly typed manifestation of various
mathematical notions at the level of declarative programming. The book adopts a
systematic but relaxed mathematical style (definition, example, exercise, ...);
the text is very pleasant to read due to a small amount of anecdotal
information, and due to the fact that definitions are fluently integrated in
the running text. An important goal of the book is to get the reader acquainted
with reasoning about programs. | To appear in the JoLLI journal in 2006 | cs.PL | [
"cs.PL",
"cs.LO",
"D.1.1; F.3.1; G.0"
] |
||
Solving Partial Order Constraints for LPO Termination | http://arxiv.org/abs/cs/0512067v1 | http://arxiv.org/abs/cs/0512067v1 | http://arxiv.org/pdf/cs/0512067v1 | 2005-12-16 | 2005-12-16 | [
"Michael Codish",
"Vitaly Lagoon",
"Peter J. Stuckey"
] | [
"",
"",
""
] | This paper introduces a new kind of propositional encoding for reasoning
about partial orders. The symbols in an unspecified partial order are viewed as
variables which take integer values and are interpreted as indices in the
order. For a partial order statement on n symbols each index is represented in
log2 n propositional variables and partial order constraints between symbols
are modeled on the bit representations. We illustrate the application of our
approach to determine LPO termination for term rewrite systems. Experimental
results are unequivocal, indicating orders of magnitude speedups in comparison
with current implementations for LPO termination. The proposed encoding is
general and relevant to other applications which involve propositional
reasoning about partial orders. | 15 pages, 2 figures, 2 tables | Journal of Satisfiability, Boolean Modeling and Computation,
5:193-215: 2008 | cs.PL | [
"cs.PL",
"cs.LO",
"cs.SC",
"F.3.1; F.4.1"
] |
|
Tradeoffs in Metaprogramming | http://arxiv.org/abs/cs/0512065v1 | http://arxiv.org/abs/cs/0512065v1 | http://arxiv.org/pdf/cs/0512065v1 | 2005-12-15 | 2005-12-15 | [
"Todd L. Veldhuizen"
] | [
""
] | The design of metaprogramming languages requires appreciation of the
tradeoffs that exist between important language characteristics such as safety
properties, expressive power, and succinctness. Unfortunately, such tradeoffs
are little understood, a situation we try to correct by embarking on a study of
metaprogramming language tradeoffs using tools from computability theory.
Safety properties of metaprograms are in general undecidable; for example, the
property that a metaprogram always halts and produces a type-correct instance
is $\Pi^0_2$-complete. Although such safety properties are undecidable, they
may sometimes be captured by a restricted language, a notion we adapt from
complexity theory. We give some sufficient conditions and negative results on
when languages capturing properties can exist: there can be no languages
capturing total correctness for metaprograms, and no `functional' safety
properties above $\Sigma^0_3$ can be captured. We prove that translating a
metaprogram from a general-purpose to a restricted metaprogramming language
capturing a property is tantamount to proving that property for the
metaprogram. Surprisingly, when one shifts perspective from programming to
metaprogramming, the corresponding safety questions do not become substantially
harder -- there is no `jump' of Turing degree for typical safety properties. | 2006 ACM SIGPLAN Workshop on Partial Evaluation and Semantics-Based
Program Manipulation (PEPM 2006) | cs.PL | [
"cs.PL",
"D.3.1"
] |
||
Reactive concurrent programming revisited | http://arxiv.org/abs/cs/0512058v1 | http://arxiv.org/abs/cs/0512058v1 | http://arxiv.org/pdf/cs/0512058v1 | 2005-12-14 | 2005-12-14 | [
"Roberto Amadio",
"Gerard Boudol",
"Ilaria Castellani",
"Frederic Boussinot"
] | [
"",
"",
"",
""
] | In this note we revisit the so-called reactive programming style, which
evolves from the synchronous programming model of the Esterel language by
weakening the assumption that the absence of an event can be detected
instantaneously. We review some research directions that have been explored
since the emergence of the reactive model ten years ago. We shall also outline
some questions that remain to be investigated. | Workshop on Process Algebra (29/09/2006) 49-60 | cs.PL | [
"cs.PL"
] |
||
Resource Control for Synchronous Cooperative Threads | http://arxiv.org/abs/cs/0512057v1 | http://arxiv.org/abs/cs/0512057v1 | http://arxiv.org/pdf/cs/0512057v1 | 2005-12-14 | 2005-12-14 | [
"Roberto Amadio",
"Silvano Dal Zilio"
] | [
"",
""
] | We develop new methods to statically bound the resources needed for the
execution of systems of concurrent, interactive threads. Our study is concerned
with a \emph{synchronous} model of interaction based on cooperative threads
whose execution proceeds in synchronous rounds called instants. Our
contribution is a system of compositional static analyses to guarantee that
each instant terminates and to bound the size of the values computed by the
system as a function of the size of its parameters at the beginning of the
instant. Our method generalises an approach designed for first-order functional
languages that relies on a combination of standard termination techniques for
term rewriting systems and an analysis of the size of the computed values based
on the notion of quasi-interpretation. We show that these two methods can be
combined to obtain an explicit polynomial bound on the resources needed for the
execution of the system during an instant. As a second contribution, we
introduce a virtual machine and a related bytecode thus producing a precise
description of the resources needed for the execution of a system. In this
context, we present a suitable control flow analysis that allows to formulte
the static analyses for resource control at byte code level. | Journal of Theoretical Computer Science (TCS) 358 (15/08/2006)
229-254 | cs.PL | [
"cs.PL"
] |
||
Termination Analysis of General Logic Programs for Moded Queries: A
Dynamic Approach | http://arxiv.org/abs/cs/0512055v1 | http://arxiv.org/abs/cs/0512055v1 | http://arxiv.org/pdf/cs/0512055v1 | 2005-12-14 | 2005-12-14 | [
"Yi-Dong Shen",
"Danny De Schreye"
] | [
"",
""
] | The termination problem of a logic program can be addressed in either a
static or a dynamic way. A static approach performs termination analysis at
compile time, while a dynamic approach characterizes and tests termination of a
logic program by applying a loop checking technique. In this paper, we present
a novel dynamic approach to termination analysis for general logic programs
with moded queries. We address several interesting questions, including how to
formulate an SLDNF-derivation for a moded query, how to characterize an
infinite SLDNF-derivation with a moded query, and how to apply a loop checking
mechanism to cut infinite SLDNF-derivations for the purpose of termination
analysis. The proposed approach is very powerful and useful. It can be used (1)
to test if a logic program terminates for a given concrete or moded query, (2)
to test if a logic program terminates for all concrete or moded queries, and
(3) to find all (most general) concrete/moded queries that are most likely
terminating (or non-terminating). | 24 Pages | cs.LO | [
"cs.LO",
"cs.PL",
"D.1.6"
] |
||
Checking C++ Programs for Dimensional Consistency | http://arxiv.org/abs/cs/0512026v1 | http://arxiv.org/abs/cs/0512026v1 | http://arxiv.org/pdf/cs/0512026v1 | 2005-12-07 | 2005-12-07 | [
"I. Josopait"
] | [
""
] | I will present my implementation 'n-units' of physical units into C++
programs. It allows the compiler to check for dimensional consistency. | submitted to "Computing in Science and Engineering" | cs.PL | [
"cs.PL",
"D.1.2; I.2.2"
] |
||
A Machine-Independent port of the MPD language run time system to NetBSD | http://arxiv.org/abs/cs/0511094v1 | http://arxiv.org/abs/cs/0511094v1 | http://arxiv.org/pdf/cs/0511094v1 | 2005-11-28 | 2005-11-28 | [
"Ignatios Souvatzis"
] | [
""
] | SR (synchronizing resources) is a PASCAL - style language enhanced with
constructs for concurrent programming developed at the University of Arizona in
the late 1980s. MPD (presented in Gregory Andrews' book about Foundations of
Multithreaded, Parallel, and Distributed Programming) is its successor,
providing the same language primitives with a different, more C-style, syntax.
The run-time system (in theory, identical, but not designed for sharing) of
those languages provides the illusion of a multiprocessor machine on a single
Unix-like system or a (local area) network of Unix-like machines.
Chair V of the Computer Science Department of the University of Bonn is
operating a laboratory for a practical course in parallel programming
consisting of computing nodes running NetBSD/arm, normally used via PVM, MPI
etc.
We are considering to offer SR and MPD for this, too. As the original
language distributions were only targeted at a few commercial Unix systems,
some porting effort is needed. However, some of the porting effort of our
earlier SR port should be reusable.
The integrated POSIX threads support of NetBSD-2.0 and later allows us to use
library primitives provided for NetBSD's phtread system to implement the
primitives needed by the SR run-time system, thus implementing 13 target CPUs
at once and automatically making use of SMP on VAX, Alpha, PowerPC, Sparc,
32-bit Intel and 64 bit AMD CPUs.
We'll present some methods used for the impementation and compare some
performance values to the traditional implementation. | 6 pages | Christian Tschudin et al. (Eds.): Proceedings of the Fourth
European BSD Conference, 2005 Basel, Switzerland | cs.DC | [
"cs.DC",
"cs.PL",
"D.1.3; D.3.4"
] |
|
The SL synchronous language, revisited | http://arxiv.org/abs/cs/0511092v1 | http://arxiv.org/abs/cs/0511092v1 | http://arxiv.org/pdf/cs/0511092v1 | 2005-11-28 | 2005-11-28 | [
"Roberto Amadio"
] | [
""
] | We revisit the SL synchronous programming model introduced by Boussinot and
De Simone (IEEE, Trans. on Soft. Eng., 1996). We discuss an alternative design
of the model including thread spawning and recursive definitions and we explore
some basic properties of the revised model: determinism, reactivity, CPS
translation to a tail recursive form, computational expressivity, and a
compositional notion of program equivalence. | Journal of Logic and Algebraic Programming 70 (15/02/2007) 121-150 | cs.PL | [
"cs.PL"
] |
||
Integration of Declarative and Constraint Programming | http://arxiv.org/abs/cs/0511090v2 | http://arxiv.org/abs/cs/0511090v2 | http://arxiv.org/pdf/cs/0511090v2 | 2005-11-27 | 2006-01-14 | [
"Petra Hofstedt",
"Peter Pepper"
] | [
"",
""
] | Combining a set of existing constraint solvers into an integrated system of
cooperating solvers is a useful and economic principle to solve hybrid
constraint problems. In this paper we show that this approach can also be used
to integrate different language paradigms into a unified framework.
Furthermore, we study the syntactic, semantic and operational impacts of this
idea for the amalgamation of declarative and constraint programming. | 30 pages, 9 figures, To appear in Theory and Practice of Logic
Programming (TPLP) | cs.PL | [
"cs.PL",
"cs.AI",
"D.3.2"
] |
||
Semantics and simulation of communication in quantum programming | http://arxiv.org/abs/quant-ph/0511145v1 | http://arxiv.org/abs/quant-ph/0511145v1 | http://arxiv.org/pdf/quant-ph/0511145v1 | 2005-11-15 | 2005-11-15 | [
"Wolfgang Mauerer"
] | [
""
] | We present the quantum programming language cQPL which is an extended version
of QPL [P. Selinger, Math. Struct. in Comp. Sci. 14(4):527-586, 2004]. It is
capable of quantum communication and it can be used to formulate all possible
quantum algorithms. Additionally, it possesses a denotational semantics based
on a partial order of superoperators and uses fixed points on a generalised
Hilbert space to formalise (in addition to all standard features expected from
a quantum programming language) the exchange of classical and quantum data
between an arbitrary number of participants. Additionally, we present the
implementation of a cQPL compiler which generates code for a quantum simulator. | Master's thesis, 101 pages | quant-ph | [
"quant-ph",
"cs.PL"
] |
||
Practical Datatype Specializations with Phantom Types and Recursion
Schemes | http://arxiv.org/abs/cs/0510074v1 | http://arxiv.org/abs/cs/0510074v1 | http://arxiv.org/pdf/cs/0510074v1 | 2005-10-24 | 2005-10-24 | [
"Matthew Fluet",
"Riccardo Pucella"
] | [
"",
""
] | Datatype specialization is a form of subtyping that captures program
invariants on data structures that are expressed using the convenient and
intuitive datatype notation. Of particular interest are structural invariants
such as well-formedness. We investigate the use of phantom types for describing
datatype specializations. We show that it is possible to express
statically-checked specializations within the type system of Standard ML. We
also show that this can be done in a way that does not lose useful programming
facilities such as pattern matching in case expressions. | 25 pages. Appeared in the Proc. of the 2005 ACM SIGPLAN Workshop on
ML | cs.PL | [
"cs.PL",
"D.1.1; D.3.3; F.3.3"
] |
||
Semantics of UML 2.0 Activity Diagram for Business Modeling by Means of
Virtual Machine | http://arxiv.org/abs/cs/0509089v1 | http://arxiv.org/abs/cs/0509089v1 | http://arxiv.org/pdf/cs/0509089v1 | 2005-09-28 | 2005-09-28 | [
"Valdis Vitolins",
"Audris Kalnins"
] | [
"",
""
] | The paper proposes a more formalized definition of UML 2.0 Activity Diagram
semantics. A subset of activity diagram constructs relevant for business
process modeling is considered. The semantics definition is based on the
original token flow methodology, but a more constructive approach is used. The
Activity Diagram Virtual machine is defined by means of a metamodel, with
operations defined by a mix of pseudocode and OCL pre- and postconditions. A
formal procedure is described which builds the virtual machine for any activity
diagram. The relatively complicated original token movement rules in control
nodes and edges are combined into paths from an action to action. A new
approach is the use of different (push and pull) engines, which move tokens
along the paths. Pull engines are used for paths containing join nodes, where
the movement of several tokens must be coordinated. The proposed virtual
machine approach makes the activity semantics definition more transparent where
the token movement can be easily traced. However, the main benefit of the
approach is the possibility to use the defined virtual machine as a basis for
UML activity diagram based workflow or simulation engine. | 12 pages, 7 figures, Proceedings of the conference "EDOC 2005", 19-23
September 2005 | Valdis Vitolins, Audris Kalnins, Proceedings Ninth IEEE
International EDOC Enterprise Computing Conference, IEEE, 2005, pp. 181.-192 | 10.1109/EDOC.2005.29 | cs.CE | [
"cs.CE",
"cs.PL"
] |
Language embeddings that preserve staging and safety | http://arxiv.org/abs/cs/0509057v1 | http://arxiv.org/abs/cs/0509057v1 | http://arxiv.org/pdf/cs/0509057v1 | 2005-09-19 | 2005-09-19 | [
"Todd L. Veldhuizen"
] | [
""
] | We study embeddings of programming languages into one another that preserve
what reductions take place at compile-time, i.e., staging. A certain condition
-- what we call a `Turing complete kernel' -- is sufficient for a language to
be stage-universal in the sense that any language may be embedded in it while
preserving staging. A similar line of reasoning yields the notion of
safety-preserving embeddings, and a useful characterization of
safety-universality. Languages universal with respect to staging and safety are
good candidates for realizing domain-specific embedded languages (DSELs) and
`active libraries' that provide domain-specific optimizations and safety
checks. | cs.PL | [
"cs.PL",
"D.3.4"
] |
|||
Haskell's overlooked object system | http://arxiv.org/abs/cs/0509027v1 | http://arxiv.org/abs/cs/0509027v1 | http://arxiv.org/pdf/cs/0509027v1 | 2005-09-10 | 2005-09-10 | [
"Oleg Kiselyov",
"Ralf Laemmel"
] | [
"",
""
] | Haskell provides type-class-bounded and parametric polymorphism as opposed to
subtype polymorphism of object-oriented languages such as Java and OCaml. It is
a contentious question whether Haskell 98 without extensions, or with common
extensions, or with new extensions can fully support conventional
object-oriented programming with encapsulation, mutable state, inheritance,
overriding, statically checked implicit and explicit subtyping, and so on. We
systematically substantiate that Haskell 98, with some common extensions,
supports all the conventional OO features plus more advanced ones, including
first-class lexically scoped classes, implicitly polymorphic classes, flexible
multiple inheritance, safe downcasts and safe co-variant arguments. Haskell
indeed can support width and depth, structural and nominal subtyping. We
address the particular challenge to preserve Haskell's type inference even for
objects and object-operating functions. The OO features are introduced in
Haskell as the OOHaskell library. OOHaskell lends itself as a sandbox for typed
OO language design. | 79 pages; software available at
http://homepages.cwi.nl/~ralf/OOHaskell/ | cs.PL | [
"cs.PL",
"D.1.5; D.1.1; D.2.3; D.3.3"
] |
||
Temporal Phylogenetic Networks and Logic Programming | http://arxiv.org/abs/cs/0508129v1 | http://arxiv.org/abs/cs/0508129v1 | http://arxiv.org/pdf/cs/0508129v1 | 2005-08-30 | 2005-08-30 | [
"Esra Erdem",
"Vladimir Lifschitz",
"Don Ringe"
] | [
"",
"",
""
] | The concept of a temporal phylogenetic network is a mathematical model of
evolution of a family of natural languages. It takes into account the fact that
languages can trade their characteristics with each other when linguistic
communities are in contact, and also that a contact is only possible when the
languages are spoken at the same time. We show how computational methods of
answer set programming and constraint logic programming can be used to generate
plausible conjectures about contacts between prehistoric linguistic
communities, and illustrate our approach by applying it to the evolutionary
history of Indo-European languages.
To appear in Theory and Practice of Logic Programming (TPLP). | cs.LO | [
"cs.LO",
"cs.AI",
"cs.PL"
] |
|||
On Algorithms and Complexity for Sets with Cardinality Constraints | http://arxiv.org/abs/cs/0508123v1 | http://arxiv.org/abs/cs/0508123v1 | http://arxiv.org/pdf/cs/0508123v1 | 2005-08-28 | 2005-08-28 | [
"Bruno Marnette",
"Viktor Kuncak",
"Martin Rinard"
] | [
"",
"",
""
] | Typestate systems ensure many desirable properties of imperative programs,
including initialization of object fields and correct use of stateful library
interfaces. Abstract sets with cardinality constraints naturally generalize
typestate properties: relationships between the typestates of objects can be
expressed as subset and disjointness relations on sets, and elements of sets
can be represented as sets of cardinality one. Motivated by these applications,
this paper presents new algorithms and new complexity results for constraints
on sets and their cardinalities. We study several classes of constraints and
demonstrate a trade-off between their expressive power and their complexity.
Our first result concerns a quantifier-free fragment of Boolean Algebra with
Presburger Arithmetic. We give a nondeterministic polynomial-time algorithm for
reducing the satisfiability of sets with symbolic cardinalities to constraints
on constant cardinalities, and give a polynomial-space algorithm for the
resulting problem.
In a quest for more efficient fragments, we identify several subclasses of
sets with cardinality constraints whose satisfiability is NP-hard. Finally, we
identify a class of constraints that has polynomial-time satisfiability and
entailment problems and can serve as a foundation for efficient program
analysis. | 20 pages. 12 figures | cs.PL | [
"cs.PL",
"cs.LO",
"cs.SE"
] |
||
A Generic Framework for the Analysis and Specialization of Logic
Programs | http://arxiv.org/abs/cs/0508111v1 | http://arxiv.org/abs/cs/0508111v1 | http://arxiv.org/pdf/cs/0508111v1 | 2005-08-24 | 2005-08-24 | [
"German Puebla",
"Elvira Albert",
"Manuel Hermenegildo"
] | [
"",
"",
""
] | The relationship between abstract interpretation and partial deduction has
received considerable attention and (partial) integrations have been proposed
starting from both the partial deduction and abstract interpretation
perspectives. In this work we present what we argue is the first fully
described generic algorithm for efficient and precise integration of abstract
interpretation and partial deduction. Taking as starting point state-of-the-art
algorithms for context-sensitive, polyvariant abstract interpretation and
(abstract) partial deduction, we present an algorithm which combines the best
of both worlds. Key ingredients include the accurate success propagation
inherent to abstract interpretation and the powerful program transformations
achievable by partial deduction. In our algorithm, the calls which appear in
the analysis graph are not analyzed w.r.t. the original definition of the
procedure but w.r.t. specialized definitions of these procedures. Such
specialized definitions are obtained by applying both unfolding and abstract
executability. Our framework is parametric w.r.t. different control strategies
and abstract domains. Different combinations of such parameters correspond to
existing algorithms for program analysis and specialization. Simultaneously,
our approach opens the door to the efficient computation of strictly more
precise results than those achievable by each of the individual techniques. The
algorithm is now one of the key components of the CiaoPP analysis and
specialization system. | In A. Serebrenik and S. Munoz-Hernandez (editors), Proceedings of the
15th Workshop on Logic-based methods in Programming Environments October
2005, Sitges. cs.PL/0508078 | cs.PL | [
"cs.PL",
"cs.SE",
"D.2.6"
] |
||
Proving or Disproving likely Invariants with Constraint Reasoning | http://arxiv.org/abs/cs/0508108v1 | http://arxiv.org/abs/cs/0508108v1 | http://arxiv.org/pdf/cs/0508108v1 | 2005-08-24 | 2005-08-24 | [
"Tristan Denmat",
"Arnaud Gotlieb",
"Mireille Ducasse"
] | [
"",
"",
""
] | A program invariant is a property that holds for every execution of the
program. Recent work suggest to infer likely-only invariants, via dynamic
analysis. A likely invariant is a property that holds for some executions but
is not guaranteed to hold for all executions. In this paper, we present work in
progress addressing the challenging problem of automatically verifying that
likely invariants are actual invariants. We propose a constraint-based
reasoning approach that is able, unlike other approaches, to both prove or
disprove likely invariants. In the latter case, our approach provides
counter-examples. We illustrate the approach on a motivating example where
automatically generated likely invariants are verified. | In A. Serebrenik and S. Munoz-Hernandez (editors), Proceedings of the
15th Workshop on Logic-based methods in Programming Environments October
2005, Sitges. cs.PL/0508078 | cs.SE | [
"cs.SE",
"cs.PL",
"D.2.6"
] |
||
An Improved Non-Termination Criterion for Binary Constraint Logic
Programs | http://arxiv.org/abs/cs/0508106v1 | http://arxiv.org/abs/cs/0508106v1 | http://arxiv.org/pdf/cs/0508106v1 | 2005-08-24 | 2005-08-24 | [
"Etienne Payet",
"Fred Mesnard"
] | [
"",
""
] | On one hand, termination analysis of logic programs is now a fairly
established research topic within the logic programming community. On the other
hand, non-termination analysis seems to remain a much less attractive subject.
If we divide this line of research into two kinds of approaches: dynamic versus
static analysis, this paper belongs to the latter. It proposes a criterion for
detecting non-terminating atomic queries with respect to binary CLP clauses,
which strictly generalizes our previous works on this subject. We give a
generic operational definition and a logical form of this criterion. Then we
show that the logical form is correct and complete with respect to the
operational definition. | In A. Serebrenik and S. Munoz-Hernandez (editors), Proceedings of the
15th Workshop on Logic-based methods in Programming Environments October
2005, Sitges. cs.PL/0508078 | cs.PL | [
"cs.PL",
"D.2.6"
] |
||
Extending Prolog with Incomplete Fuzzy Information | http://arxiv.org/abs/cs/0508091v1 | http://arxiv.org/abs/cs/0508091v1 | http://arxiv.org/pdf/cs/0508091v1 | 2005-08-22 | 2005-08-22 | [
"Susana Munoz-Hernandez",
"Claudio Vaucheret"
] | [
"",
""
] | Incomplete information is a problem in many aspects of actual environments.
Furthermore, in many sceneries the knowledge is not represented in a crisp way.
It is common to find fuzzy concepts or problems with some level of uncertainty.
There are not many practical systems which handle fuzziness and uncertainty and
the few examples that we can find are used by a minority. To extend a popular
system (which many programmers are using) with the ability of combining crisp
and fuzzy knowledge representations seems to be an interesting issue.
Our first work (Fuzzy Prolog) was a language that models
$\mathcal{B}([0,1])$-valued Fuzzy Logic. In the Borel algebra,
$\mathcal{B}([0,1])$, truth value is represented using unions of intervals of
real numbers. This work was more general in truth value representation and
propagation than previous works.
An interpreter for this language using Constraint Logic Programming over Real
numbers (CLP(${\cal R}$)) was implemented and is available in the Ciao system.
Now, we enhance our former approach by using default knowledge to represent
incomplete information in Logic Programming. We also provide the implementation
of this new framework. This new release of Fuzzy Prolog handles incomplete
information, it has a complete semantics (the previous one was incomplete as
Prolog) and moreover it is able to combine crisp and fuzzy logic in Prolog
programs. Therefore, new Fuzzy Prolog is more expressive to represent real
world.
Fuzzy Prolog inherited from Prolog its incompleteness. The incorporation of
default reasoning to Fuzzy Prolog removes this problem and requires a richer
semantics which we discuss. | cs.PL | [
"cs.PL",
"cs.SE",
"D.2.6"
] |
|||
Proceedings of the 15th Workshop on Logic-based methods in Programming
Environments WLPE'05 -- October 5, 2005 -- Sitges (Barcelona), Spain | http://arxiv.org/abs/cs/0508078v4 | http://arxiv.org/abs/cs/0508078v4 | http://arxiv.org/pdf/cs/0508078v4 | 2005-08-17 | 2005-08-25 | [
"Alexander Serebrenik",
"Susana Munoz-Hernandez"
] | [
"",
""
] | This volume contains papers presented at WLPE 2005, 15th International
Workshop on Logic-based methods in Programming Environments.
The aim of the workshop is to provide an informal meeting for the researchers
working on logic-based tools for development and analysis of programs. This
year we emphasized two aspects: on one hand the presentation, pragmatics and
experiences of tools for logic programming environments; on the other one,
logic-based environmental tools for programming in general.
The workshop took place in Sitges (Barcelona), Spain as a satellite workshop
of the 21th International Conference on Logic Programming (ICLP 2005). This
workshop continues the series of successful international workshops on logic
programming environments held in Ohio, USA (1989), Eilat, Israel (1990), Paris,
France (1991), Washington, USA (1992), Vancouver, Canada (1993), Santa
Margherita Ligure, Italy (1994), Portland, USA (1995), Leuven, Belgium and Port
Jefferson, USA (1997), Las Cruces, USA (1999), Paphos, Cyprus (2001),
Copenhagen, Denmark (2002), Mumbai, India (2003) and Saint Malo, France (2004).
We have received eight submissions (2 from France, 2 Spain-US cooperations,
one Spain-Argentina cooperation, one from Japan, one from the United Kingdom
and one Sweden-France cooperation). Program committee has decided to accept
seven papers. This volume contains revised versions of the accepted papers.
We are grateful to the authors of the papers, the reviewers and the members
of the Program Committee for the help and fruitful discussions. | Seven accepted papers | cs.PL | [
"cs.PL",
"cs.LO",
"cs.SE",
"D.2.6; D.1.6"
] |
||
A probabilistic branching bisimulation for quantum processes | http://arxiv.org/abs/quant-ph/0508116v1 | http://arxiv.org/abs/quant-ph/0508116v1 | http://arxiv.org/pdf/quant-ph/0508116v1 | 2005-08-16 | 2005-08-16 | [
"Marie Lalire"
] | [
""
] | Full formal descriptions of algorithms making use of quantum principles must
take into account both quantum and classical computing components and assemble
them so that they communicate and cooperate.Moreover, to model concurrent and
distributed quantum computations, as well as quantum communication protocols,
quantum to quantum communications which move qubits physically from one place
to another must also be taken into account.
Inspired by classical process algebras, which provide a framework for
modeling cooperating computations, a process algebraic notation is defined,
which provides a homogeneous style to formal descriptions of concurrent and
distributed computations comprising both quantum and classical parts.Based upon
an operational semantics which makes sure that quantum objects, operations and
communications operate according to the postulates of quantum mechanics, a
probabilistic branching bisimulation is defined among processes considered as
having the same behavior. | 25 pages, 3 figures, submitted to a special issue of MSCS | quant-ph | [
"quant-ph",
"cs.PL"
] |
||
An Operational Foundation for Delimited Continuations in the CPS
Hierarchy | http://arxiv.org/abs/cs/0508048v4 | http://arxiv.org/abs/cs/0508048v4 | http://arxiv.org/pdf/cs/0508048v4 | 2005-08-08 | 2005-12-08 | [
"Malgorzata Biernacka",
"Dariusz Biernacki",
"Olivier Danvy"
] | [
"",
"",
""
] | We present an abstract machine and a reduction semantics for the
lambda-calculus extended with control operators that give access to delimited
continuations in the CPS hierarchy. The abstract machine is derived from an
evaluator in continuation-passing style (CPS); the reduction semantics (i.e., a
small-step operational semantics with an explicit representation of evaluation
contexts) is constructed from the abstract machine; and the control operators
are the shift and reset family. We also present new applications of delimited
continuations in the CPS hierarchy: finding list prefixes and normalization by
evaluation for a hierarchical language of units and products. | 39 pages | Logical Methods in Computer Science, Volume 1, Issue 2 (November
8, 2005) lmcs:2269 | 10.2168/LMCS-1(2:5)2005 | cs.LO | [
"cs.LO",
"cs.PL",
"D.1.1; F.3.2"
] |
Software Libraries and Their Reuse: Entropy, Kolmogorov Complexity, and
Zipf's Law | http://arxiv.org/abs/cs/0508023v3 | http://arxiv.org/abs/cs/0508023v3 | http://arxiv.org/pdf/cs/0508023v3 | 2005-08-03 | 2005-10-02 | [
"Todd L. Veldhuizen"
] | [
""
] | We analyze software reuse from the perspective of information theory and
Kolmogorov complexity, assessing our ability to ``compress'' programs by
expressing them in terms of software components reused from libraries. A common
theme in the software reuse literature is that if we can only get the right
environment in place-- the right tools, the right generalizations, economic
incentives, a ``culture of reuse'' -- then reuse of software will soar, with
consequent improvements in productivity and software quality. The analysis
developed in this paper paints a different picture: the extent to which
software reuse can occur is an intrinsic property of a problem domain, and
better tools and culture can have only marginal impact on reuse rates if the
domain is inherently resistant to reuse. We define an entropy parameter $H \in
[0,1]$ of problem domains that measures program diversity, and deduce from this
upper bounds on code reuse and the scale of components with which we may work.
For ``low entropy'' domains with $H$ near 0, programs are highly similar to one
another and the domain is amenable to the Component-Based Software Engineering
(CBSE) dream of programming by composing large-scale components. For problem
domains with $H$ near 1, programs require substantial quantities of new code,
with only a modest proportion of an application comprised of reused,
small-scale components. Preliminary empirical results from Unix platforms
support some of the predictions of our model. | Library-Centric Software Design (LCSD 2005), an OOPSLA 2005
workshop | cs.SE | [
"cs.SE",
"cs.IT",
"cs.PL",
"math.IT",
"D.2.8; E.4; D.2.13"
] |
||
Proof rules for purely quantum programs | http://arxiv.org/abs/cs/0507043v3 | http://arxiv.org/abs/cs/0507043v3 | http://arxiv.org/pdf/cs/0507043v3 | 2005-07-18 | 2006-03-16 | [
"Yuan Feng",
"Runyao Duan",
"Zhengfeng Ji",
"Mingsheng Ying"
] | [
"",
"",
"",
""
] | We apply the notion of quantum predicate proposed by D'Hondt and Panangaden
to analyze a purely quantum language fragment which describes the quantum part
of a future quantum computer in Knill's architecture. The denotational
semantics, weakest precondition semantics, and weakest liberal precondition
semantics of this language fragment are introduced. To help reasoning about
quantum programs involving quantum loops, we extend proof rules for classical
probabilistic programs to our purely quantum programs. | Now 12 pages, introduction and Section 3 rewritten, some errors
corrected | cs.PL | [
"cs.PL",
"quant-ph",
"D.3.1; F.3.1"
] |
||
Type Inference for Guarded Recursive Data Types | http://arxiv.org/abs/cs/0507037v1 | http://arxiv.org/abs/cs/0507037v1 | http://arxiv.org/pdf/cs/0507037v1 | 2005-07-14 | 2005-07-14 | [
"Peter J. Stuckey",
"Martin Sulzmann"
] | [
"",
""
] | We consider type inference for guarded recursive data types (GRDTs) -- a
recent generalization of algebraic data types. We reduce type inference for
GRDTs to unification under a mixed prefix. Thus, we obtain efficient type
inference. Inference is incomplete because the set of type constraints allowed
to appear in the type system is only a subset of those type constraints
generated by type inference. Hence, inference only succeeds if the program is
sufficiently type annotated. We present refined procedures to infer types
incrementally and to assist the user in identifying which pieces of type
information are missing. Additionally, we introduce procedures to test if a
type is not principal and to find a principal type if one exists. | cs.PL | [
"cs.PL",
"cs.LO"
] |
|||
Improved Inference for Checking Annotations | http://arxiv.org/abs/cs/0507036v1 | http://arxiv.org/abs/cs/0507036v1 | http://arxiv.org/pdf/cs/0507036v1 | 2005-07-14 | 2005-07-14 | [
"Peter J Stuckey",
"Martin Sulzmann",
"Jeremy Wazny"
] | [
"",
"",
""
] | We consider type inference in the Hindley/Milner system extended with type
annotations and constraints with a particular focus on Haskell-style type
classes. We observe that standard inference algorithms are incomplete in the
presence of nested type annotations. To improve the situation we introduce a
novel inference scheme for checking type annotations. Our inference scheme is
also incomplete in general but improves over existing implementations as found
e.g. in the Glasgow Haskell Compiler (GHC). For certain cases (e.g. Haskell 98)
our inference scheme is complete. Our approach has been fully implemented as
part of the Chameleon system (experimental version of Haskell). | cs.PL | [
"cs.PL",
"cs.LO"
] |
|||
Security Policies as Membranes in Systems for Global Computing | http://arxiv.org/abs/cs/0506061v5 | http://arxiv.org/abs/cs/0506061v5 | http://arxiv.org/pdf/cs/0506061v5 | 2005-06-14 | 2008-10-07 | [
"Daniele Gorla",
"Matthew Hennessy",
"Vladimiro Sassone"
] | [
"",
"",
""
] | We propose a simple global computing framework, whose main concern is code
migration. Systems are structured in sites, and each site is divided into two
parts: a computing body, and a membrane, which regulates the interactions
between the computing body and the external environment. More precisely,
membranes are filters which control access to the associated site, and they
also rely on the well-established notion of trust between sites. We develop a
basic theory to express and enforce security policies via membranes. Initially,
these only control the actions incoming agents intend to perform locally. We
then adapt the basic theory to encompass more sophisticated policies, where the
number of actions an agent wants to perform, and also their order, are
considered. | 23 pages; to appear in Logical Methods in Computer Science | Logical Methods in Computer Science, Volume 1, Issue 3 (December
20, 2005) lmcs:2262 | 10.2168/LMCS-1(3:2)2005 | cs.PL | [
"cs.PL",
"cs.LO",
"D.2.4; D.3.1; F.3.2; F.3.3; F.4.3"
] |
Fast Recompilation of Object Oriented Modules | http://arxiv.org/abs/cs/0506035v1 | http://arxiv.org/abs/cs/0506035v1 | http://arxiv.org/pdf/cs/0506035v1 | 2005-06-10 | 2005-06-10 | [
"Jerome Collin",
"Michel Dagenais"
] | [
"",
""
] | Once a program file is modified, the recompilation time should be minimized,
without sacrificing execution speed or high level object oriented features. The
recompilation time is often a problem for the large graphical interactive
distributed applications tackled by modern OO languages. A compilation server
and fast code generator were developed and integrated with the SRC Modula-3
compiler and Linux ELF dynamic linker. The resulting compilation and
recompilation speedups are impressive. The impact of different language
features, processor speed, and application size are discussed. | cs.PL | [
"cs.PL"
] |
|||
Programming Finite-Domain Constraint Propagators in Action Rules | http://arxiv.org/abs/cs/0506005v1 | http://arxiv.org/abs/cs/0506005v1 | http://arxiv.org/pdf/cs/0506005v1 | 2005-06-02 | 2005-06-02 | [
"Neng-Fa Zhou"
] | [
""
] | In this paper, we propose a new language, called AR ({\it Action Rules}), and
describe how various propagators for finite-domain constraints can be
implemented in it. An action rule specifies a pattern for agents, an action
that the agents can carry out, and an event pattern for events that can
activate the agents. AR combines the goal-oriented execution model of logic
programming with the event-driven execution model. This hybrid execution model
facilitates programming constraint propagators. A propagator for a constraint
is an agent that maintains the consistency of the constraint and is activated
by the updates of the domain variables in the constraint. AR has a much
stronger descriptive power than {\it indexicals}, the language widely used in
the current finite-domain constraint systems, and is flexible for implementing
not only interval-consistency but also arc-consistency algorithms. As examples,
we present a weak arc-consistency propagator for the {\tt all\_distinct}
constraint and a hybrid algorithm for n-ary linear equality constraints.
B-Prolog has been extended to accommodate action rules. Benchmarking shows that
B-Prolog as a CLP(FD) system significantly outperforms other CLP(FD) systems. | TPLP Vol 5(4&5) 2005 | cs.PL | [
"cs.PL"
] |
||
Improving PARMA Trailing | http://arxiv.org/abs/cs/0505085v1 | http://arxiv.org/abs/cs/0505085v1 | http://arxiv.org/pdf/cs/0505085v1 | 2005-05-31 | 2005-05-31 | [
"Tom Schrijvers",
"Maria Garcia de la Banda",
"Bart Demoen",
"Peter J. Stuckey"
] | [
"",
"",
"",
""
] | Taylor introduced a variable binding scheme for logic variables in his PARMA
system, that uses cycles of bindings rather than the linear chains of bindings
used in the standard WAM representation. Both the HAL and dProlog languages
make use of the PARMA representation in their Herbrand constraint solvers.
Unfortunately, PARMA's trailing scheme is considerably more expensive in both
time and space consumption. The aim of this paper is to present several
techniques that lower the cost.
First, we introduce a trailing analysis for HAL using the classic PARMA
trailing scheme that detects and eliminates unnecessary trailings. The
analysis, whose accuracy comes from HAL's determinism and mode declarations,
has been integrated in the HAL compiler and is shown to produce space
improvements as well as speed improvements. Second, we explain how to modify
the classic PARMA trailing scheme to halve its trailing cost. This technique is
illustrated and evaluated both in the context of dProlog and HAL. Finally, we
explain the modifications needed by the trailing analysis in order to be
combined with our modified PARMA trailing scheme. Empirical evidence shows that
the combination is more effective than any of the techniques when used in
isolation.
To appear in Theory and Practice of Logic Programming. | 36 pages, 7 figures, 8 tables | cs.PL | [
"cs.PL",
"cs.PF",
"D.3.4; D.1.6; D.3.3"
] |
||
Pluggable AOP: Designing Aspect Mechanisms for Third-party Composition | http://arxiv.org/abs/cs/0505004v1 | http://arxiv.org/abs/cs/0505004v1 | http://arxiv.org/pdf/cs/0505004v1 | 2005-04-30 | 2005-04-30 | [
"Sergei Kojarski",
"David H. Lorenz"
] | [
"",
""
] | Studies of Aspect-Oriented Programming (AOP) usually focus on a language in
which a specific aspect extension is integrated with a base language. Languages
specified in this manner have a fixed, non-extensible AOP functionality. In
this paper we consider the more general case of integrating a base language
with a set of domain specific third-party aspect extensions for that language.
We present a general mixin-based method for implementing aspect extensions in
such a way that multiple, independently developed, dynamic aspect extensions
can be subject to third-party composition and work collaboratively. | (new version) In Proceedings of the 20th Annual ACM SIGPLAN
Conference on Object Oriented Programming Systems Languages and Applications
(San Diego, CA, USA, October 16 - 20, 2005). OOPSLA '05. ACM Press, New York,
NY, 247-263. | 10.1145/1094811.1094831 | cs.SE | [
"cs.SE",
"cs.PL",
"D.1.5; D.2.10; D.2.12; D.3.1; D.3.4"
] |
|
A Scalable Stream-Oriented Framework for Cluster Applications | http://arxiv.org/abs/cs/0504051v1 | http://arxiv.org/abs/cs/0504051v1 | http://arxiv.org/pdf/cs/0504051v1 | 2005-04-13 | 2005-04-13 | [
"Tassos S. Argyros",
"David R. Cheriton"
] | [
"",
""
] | This paper presents a stream-oriented architecture for structuring cluster
applications. Clusters that run applications based on this architecture can
scale to tenths of thousands of nodes with significantly less performance loss
or reliability problems. Our architecture exploits the stream nature of the
data flow and reduces congestion through load balancing, hides latency behind
data pushes and transparently handles node failures. In our ongoing work, we
are developing an implementation for this architecture and we are able to run
simple data mining applications on a cluster simulator. | cs.DC | [
"cs.DC",
"cs.DB",
"cs.NI",
"cs.OS",
"cs.PL"
] |
|||
Mapping Fusion and Synchronized Hyperedge Replacement into Logic
Programming | http://arxiv.org/abs/cs/0504050v2 | http://arxiv.org/abs/cs/0504050v2 | http://arxiv.org/pdf/cs/0504050v2 | 2005-04-13 | 2006-01-15 | [
"Ivan Lanese",
"Ugo Montanari"
] | [
"",
""
] | In this paper we compare three different formalisms that can be used in the
area of models for distributed, concurrent and mobile systems. In particular we
analyze the relationships between a process calculus, the Fusion Calculus,
graph transformations in the Synchronized Hyperedge Replacement with Hoare
synchronization (HSHR) approach and logic programming. We present a translation
from Fusion Calculus into HSHR (whereas Fusion Calculus uses Milner
synchronization) and prove a correspondence between the reduction semantics of
Fusion Calculus and HSHR transitions. We also present a mapping from HSHR into
a transactional version of logic programming and prove that there is a full
correspondence between the two formalisms. The resulting mapping from Fusion
Calculus to logic programming is interesting since it shows the tight analogies
between the two formalisms, in particular for handling name generation and
mobility. The intermediate step in terms of HSHR is convenient since graph
transformations allow for multiple, remote synchronizations, as required by
Fusion Calculus semantics. | 44 pages, 8 figures, to appear in a special issue of Theory and
Practice of Logic Programming, minor revision | cs.LO | [
"cs.LO",
"cs.PL",
"F.1.1; F.4.1"
] |
||
Incorporating LINQ, State Diagrams Templating and Package Extension Into
Java | http://arxiv.org/abs/cs/0504025v15 | http://arxiv.org/abs/cs/0504025v15 | http://arxiv.org/pdf/cs/0504025v15 | 2005-04-07 | 2009-08-25 | [
"Raju Renjit. G"
] | [
""
] | This submission has been withdrawn at the request of the author. | cs.PL | [
"cs.PL"
] |
|||
Super Object Oriented Programming | http://arxiv.org/abs/cs/0504008v10 | http://arxiv.org/abs/cs/0504008v10 | http://arxiv.org/pdf/cs/0504008v10 | 2005-04-04 | 2009-08-25 | [
"Raju Renjit. G"
] | [
""
] | This submission has been withdrawn at the request of the author. | cs.PL | [
"cs.PL"
] |
|||
Contextual equivalence for higher-order pi-calculus revisited | http://arxiv.org/abs/cs/0503067v5 | http://arxiv.org/abs/cs/0503067v5 | http://arxiv.org/pdf/cs/0503067v5 | 2005-03-24 | 2006-01-24 | [
"Alan Jeffrey",
"Julian Rathke"
] | [
"",
""
] | The higher-order pi-calculus is an extension of the pi-calculus to allow
communication of abstractions of processes rather than names alone. It has been
studied intensively by Sangiorgi in his thesis where a characterisation of a
contextual equivalence for higher-order pi-calculus is provided using labelled
transition systems and normal bisimulations. Unfortunately the proof technique
used there requires a restriction of the language to only allow finite types.
We revisit this calculus and offer an alternative presentation of the labelled
transition system and a novel proof technique which allows us to provide a
fully abstract characterisation of contextual equivalence using labelled
transitions and bisimulations for higher-order pi-calculus with recursive types
also. | Logical Methods in Computer Science, Volume 1, Issue 1 (April 21,
2005) lmcs:2274 | 10.2168/LMCS-1(1:4)2005 | cs.PL | [
"cs.PL"
] |
|
Data-Structure Rewriting | http://arxiv.org/abs/cs/0503065v1 | http://arxiv.org/abs/cs/0503065v1 | http://arxiv.org/pdf/cs/0503065v1 | 2005-03-24 | 2005-03-24 | [
"Dominique Duval",
"Rachid Echahed",
"Frederic Prost"
] | [
"",
"",
""
] | We tackle the problem of data-structure rewriting including pointer
redirections. We propose two basic rewrite steps: (i) Local Redirection and
Replacement steps the aim of which is redirecting specific pointers determined
by means of a pattern, as well as adding new information to an existing data ;
and (ii) Global Redirection steps which are aimed to redirect all pointers
targeting a node towards another one. We define these two rewriting steps
following the double pushout approach. We define first the category of graphs
we consider and then define rewrite rules as pairs of graph homomorphisms of
the form "L <- K ->R". Unfortunately, inverse pushouts (complement pushouts)
are not unique in our setting and pushouts do not always exist. Therefore, we
define rewriting steps so that a rewrite rule can always be performed once a
matching is found. | cs.PL | [
"cs.PL",
"cs.DS",
"D1, D3, E1, F3.3, I1"
] |