Dataset schema (field name, type, and observed value ranges):

title: string, length 4–246
id: string, length 32–39
arxiv_url: string, length 32–39
pdf_url: string, length 32–39
published_date: string, length 10
updated_date: string, length 10
authors: sequence, length 1–535
affiliations: sequence, length 1–535
summary: string, length 23–3.54k
comment: string, length 0–762
journal_ref: string, length 0–545
doi: string, length 0–151
primary_category: string, 156 classes
categories: sequence, length 1–11
An Object-Oriented and Fast Lexicon for Semantic Generation
http://arxiv.org/abs/0905.3318v1
http://arxiv.org/abs/0905.3318v1
http://arxiv.org/pdf/0905.3318v1
2009-05-20
2009-05-20
[ "Maarten Hijzelendoorn", "Crit Cremers" ]
[ "", "" ]
This paper is about the technical design of a large computational lexicon, its storage, and its access from a Prolog environment. Traditionally, efficient access and storage of data structures is implemented by a relational database management system. In Delilah, a lexicon-based NLP system, efficient access to the lexicon by the semantic generator is vital. We show that our highly detailed HPSG-style lexical specifications do not fit well in the Relational Model, and that they cannot be efficiently retrieved. We argue that they fit more naturally in the Object-Oriented Model. Although storage of objects is redundant, we claim that efficient access is still possible by applying indexing and compression techniques from the Relational Model to the Object-Oriented Model. We demonstrate that it is possible to implement object-oriented storage and fast access in ISO Prolog.
Paper presented at the 18th Computational Linguistics In the Netherlands Meeting (CLIN), Nijmegen, 10 December 2007, 15pp
cs.CL
[ "cs.CL", "cs.DB", "cs.DS", "cs.IR", "cs.PL" ]
Empirical study of software quality evolution in open source projects using agile practices
http://arxiv.org/abs/0905.3287v1
http://arxiv.org/abs/0905.3287v1
http://arxiv.org/pdf/0905.3287v1
2009-05-20
2009-05-20
[ "Alessandro Murgia", "Giulio Concas", "Sandro Pinna", "Roberto Tonelli", "Ivana Turnu" ]
[ "", "", "", "", "" ]
We analyse the time evolution of two open source Java projects, Eclipse and Netbeans, both developed following agile practices, though to different extents. Our study is centered on quality analysis of the systems, measured as the absence of defects, and its relation with the evolution of software metrics. The two projects are described through a software graph in which nodes are represented by Java files and edges describe the existing relations between nodes. We propose a metrics suite for Java files based on the Chidamber and Kemerer suite, and use it to study software evolution and its relationship with bug count.
12 pages, 6 figures, 2 tables
cs.SE
[ "cs.SE", "cs.PL", "D.2.8" ]
An Analysis of Bug Distribution in Object Oriented Systems
http://arxiv.org/abs/0905.3296v1
http://arxiv.org/abs/0905.3296v1
http://arxiv.org/pdf/0905.3296v1
2009-05-20
2009-05-20
[ "Alessandro Murgia", "Giulio Concas", "Michele Marchesi", "Roberto Tonelli", "Ivana Turnu" ]
[ "", "", "", "", "" ]
We introduce a new approach to describing Java software as a graph, where nodes represent Java files, called compilation units (CUs), and edges represent the relations between them. The software system is characterized by the degree distribution of graph properties, such as in- or out-links, as well as by the distribution of the Chidamber and Kemerer metrics computed on its CUs. Every CU can be related to one or more bugs during its life. We find a relationship between the software system and the bugs hitting its nodes: the distributions of some metrics, and of the number of bugs per CU, exhibit power-law behavior in their tails, as does the number of CUs affected by a specific bug. We examine the evolution of software metrics across different releases to understand how the relationships between CU metrics and CU faultiness change with time.
17 pages, 8 figures, 10 tables
cs.SE
[ "cs.SE", "cs.PL", "D.2.8" ]
A Theory of Explicit Substitutions with Safe and Full Composition
http://arxiv.org/abs/0905.2539v3
http://arxiv.org/abs/0905.2539v3
http://arxiv.org/pdf/0905.2539v3
2009-05-15
2009-07-15
[ "Delia Kesner" ]
[ "" ]
Many different systems with explicit substitutions have been proposed to implement a large class of higher-order languages. Motivations and challenges that guided the development of such calculi in functional frameworks are surveyed in the first part of this paper. Then, very simple technology in named variable-style notation is used to establish a theory of explicit substitutions for the lambda-calculus which enjoys a whole set of useful properties such as full composition, simulation of one-step beta-reduction, preservation of beta-strong normalisation, strong normalisation of typed terms and confluence on metaterms. Normalisation of related calculi is also discussed.
29 pages. Special Issue: Selected Papers of the Conference "International Colloquium on Automata, Languages and Programming 2008", edited by Giuseppe Castagna and Igor Walukiewicz
Logical Methods in Computer Science, Volume 5, Issue 3 (July 15, 2009) lmcs:816
10.2168/LMCS-5(3:1)2009
cs.PL
[ "cs.PL", "cs.LO", "F.3.2; D.1.1; F.4.1" ]
The Distribution of Program Sizes and Its Implications: An Eclipse Case Study
http://arxiv.org/abs/0905.2288v1
http://arxiv.org/abs/0905.2288v1
http://arxiv.org/pdf/0905.2288v1
2009-05-14
2009-05-14
[ "Hongyu Zhang", "Hee Beng Kuan Tan", "Michele Marchesi" ]
[ "", "", "" ]
A large software system is often composed of many inter-related programs of different sizes. Using the public Eclipse dataset, we replicate our previous study on the distribution of program sizes. Our results confirm that the program sizes follow the lognormal distribution. We also investigate the implications of the program size distribution for size estimation and quality prediction. We find that the nature of the size distribution can be used to estimate the size of a large Java system. We also find that a small percentage of the largest programs account for a large percentage of defects, and that the number of defects across programs follows the Weibull distribution when the programs are ranked by their sizes. Our results show that the distribution of program sizes is an important property for understanding large and complex software systems.
10 pages, 2 figures, 6 tables
cs.SE
[ "cs.SE", "cs.PL", "D.2.8" ]
A protocol for instruction stream processing
http://arxiv.org/abs/0905.2257v1
http://arxiv.org/abs/0905.2257v1
http://arxiv.org/pdf/0905.2257v1
2009-05-14
2009-05-14
[ "J. A. Bergstra", "C. A. Middelburg" ]
[ "", "" ]
The behaviour produced by an instruction sequence under execution is a behaviour to be controlled by some execution environment: each step performed actuates the processing of an instruction by the execution environment and a reply returned at completion of the processing determines how the behaviour proceeds. In this paper, we are concerned with the case where the processing takes place remotely. We describe a protocol to deal with the case where the behaviour produced by an instruction sequence under execution leads to the generation of a stream of instructions to be processed and a remote execution unit handles the processing of that stream of instructions.
15 pages
cs.PL
[ "cs.PL", "cs.DC", "D.2.1; D.2.4; F.1.1; F.3.1" ]
Termination Prediction for General Logic Programs
http://arxiv.org/abs/0905.2004v1
http://arxiv.org/abs/0905.2004v1
http://arxiv.org/pdf/0905.2004v1
2009-05-13
2009-05-13
[ "Yi-Dong Shen", "Danny De Schreye", "Dean Voets" ]
[ "", "", "" ]
We present a heuristic framework for attacking the undecidable termination problem of logic programs, as an alternative to current termination/non-termination proof approaches. We introduce the idea of termination prediction, which predicts termination of a logic program in cases where neither a termination nor a non-termination proof is applicable. We establish a necessary and sufficient characterization of infinite (generalized) SLDNF-derivations with arbitrary (concrete or moded) queries, and develop an algorithm that predicts termination of general logic programs with arbitrary non-floundering queries. We have implemented a termination prediction tool and obtained quite satisfactory experimental results. Except for five programs that exceed the experiment's time limit, our prediction is 100% correct for all 296 benchmark programs of the Termination Competition 2007, eighteen of which cannot be proved by any of the existing state-of-the-art analyzers such as AProVE07, NTI, Polytool and TALP.
28 pages, 12 figures. To appear in Theory and Practice of Logic Programming (TPLP)
cs.PL
[ "cs.PL", "cs.AI", "cs.LO" ]
A FORTRAN coded regular expression Compiler for IBM 1130 Computing System
http://arxiv.org/abs/0905.0740v1
http://arxiv.org/abs/0905.0740v1
http://arxiv.org/pdf/0905.0740v1
2009-05-06
2009-05-06
[ "Gerardo Cisneros" ]
[ "" ]
REC (Regular Expression Compiler) is a concise programming language which allows students to write programs without knowledge of the complicated syntax of languages like FORTRAN and ALGOL. The language is recursive and contains only four elements for control. This paper describes an interpreter of REC written in FORTRAN.
This version of REC is an archaeological reconstruction of the REC/A language on the IBM 1130 Simulator (SIMH IBM 1130 Emulator and Disk Monitor System R2V12) from the Computer History Simulation Project (www.ibm1130.org); see also "REC language is a live on IBM1130 simulator" by Ignacio Vega-Paez
Acta Mexicana de Ciencia y Tecnologia Vol. IV No. 1, page 30-86, 1970
cs.CL
[ "cs.CL", "cs.PL" ]
REC language is a live on IBM1130 simulator, EL lenguaje REC esta vivo en el simulador de la IBM 1130
http://arxiv.org/abs/0905.0737v1
http://arxiv.org/abs/0905.0737v1
http://arxiv.org/pdf/0905.0737v1
2009-05-06
2009-05-06
[ "Ignacio Vega-Paez", "Jose Angel Ortega", "Georgina G. Pulido" ]
[ "", "", "" ]
REC (Regular Expression Compiler) is a concise programming language developed in major Mexican universities at the end of the 1960s, which allows students to write programs without knowledge of the complicated syntax of languages like FORTRAN and ALGOL. The language is recursive and contains only four elements for control. This paper describes the use of the interpreter of REC written in FORTRAN on the IBM 1130 Simulator from the Computer History Simulation Project.
This work is an archaeological reconstruction of the REC/A language
cs.PL
[ "cs.PL" ]
Simulating reachability using first-order logic with applications to verification of linked data structures
http://arxiv.org/abs/0904.4902v2
http://arxiv.org/abs/0904.4902v2
http://arxiv.org/pdf/0904.4902v2
2009-04-30
2009-05-27
[ "Tal Lev-Ami", "Neil Immerman", "Thomas Reps", "Mooly Sagiv", "Siddharth Srivastava", "Greta Yorsh" ]
[ "", "", "", "", "", "" ]
This paper shows how to harness existing theorem provers for first-order logic to automatically verify safety properties of imperative programs that perform dynamic storage allocation and destructive updating of pointer-valued structure fields. One of the main obstacles is specifying and proving (the absence of) reachability properties among dynamically allocated cells. The main technical contributions are methods for simulating reachability in a conservative way using first-order formulas--the formulas describe a superset of the set of program states that would be specified if one had a precise way to express reachability. These methods are employed for semi-automatic program verification (i.e., using programmer-supplied loop invariants) on programs such as mark-and-sweep garbage collection and destructive reversal of a singly linked list. (The mark-and-sweep example has been previously reported as being beyond the capabilities of ESC/Java.)
30 pages, LMCS
Logical Methods in Computer Science, Volume 5, Issue 2 (May 28, 2009) lmcs:680
10.2168/LMCS-5(2:12)2009
cs.LO
[ "cs.LO", "cs.PL", "F.3.1; F.3.2; F.4.1" ]
Characterizations of Stable Model Semantics for Logic Programs with Arbitrary Constraint Atoms
http://arxiv.org/abs/0904.4727v1
http://arxiv.org/abs/0904.4727v1
http://arxiv.org/pdf/0904.4727v1
2009-04-30
2009-04-30
[ "Yi-Dong Shen", "Jia-Huai You", "Li-Yan Yuan" ]
[ "", "", "" ]
This paper studies the stable model semantics of logic programs with (abstract) constraint atoms and their properties. We introduce a succinct abstract representation of these constraint atoms in which a constraint atom is represented compactly. We show two applications. First, under this representation of constraint atoms, we generalize the Gelfond-Lifschitz transformation and apply it to define stable models (also called answer sets) for logic programs with arbitrary constraint atoms. The resulting semantics turns out to coincide with the one defined by Son et al., which is based on a fixpoint approach. One advantage of our approach is that it can be applied, in a natural way, to define stable models for disjunctive logic programs with constraint atoms, which may appear in the disjunctive head as well as in the body of a rule. As a result, our approach to the stable model semantics for logic programs with constraint atoms generalizes a number of previous approaches. Second, we show that our abstract representation of constraint atoms provides a means to characterize dependencies of atoms in a program with constraint atoms, so that some standard characterizations and properties relying on these dependencies in the past for logic programs with ordinary atoms can be extended to logic programs with constraint atoms.
34 pages. To appear in Theory and Practice of Logic Programming (TPLP)
cs.AI
[ "cs.AI", "cs.LO", "cs.PL" ]
Software Model Checking via Large-Block Encoding
http://arxiv.org/abs/0904.4709v1
http://arxiv.org/abs/0904.4709v1
http://arxiv.org/pdf/0904.4709v1
2009-04-29
2009-04-29
[ "Dirk Beyer", "Alessandro Cimatti", "Alberto Griggio", "M. Erkan Keremoglu", "Roberto Sebastiani" ]
[ "", "", "", "", "" ]
The construction and analysis of an abstract reachability tree (ART) are the basis for a successful method for software verification. The ART represents unwindings of the control-flow graph of the program. Traditionally, a transition of the ART represents a single block of the program, and therefore, we call this approach single-block encoding (SBE). SBE may result in a huge number of program paths to be explored, which constitutes a fundamental source of inefficiency. We propose a generalization of the approach, in which transitions of the ART represent larger portions of the program; we call this approach large-block encoding (LBE). LBE may reduce the number of paths to be explored up to exponentially. Within this framework, we also investigate symbolic representations: for representing abstract states, in addition to conjunctions as used in SBE, we investigate the use of arbitrary Boolean formulas; for computing abstract-successor states, in addition to Cartesian predicate abstraction as used in SBE, we investigate the use of Boolean predicate abstraction. The new encoding leverages the efficiency of state-of-the-art SMT solvers, which can symbolically compute abstract large-block successors. Our experiments on benchmark C programs show that the large-block encoding outperforms the single-block encoding.
13 pages (11 without cover), 4 figures, 5 tables
cs.SE
[ "cs.SE", "cs.PL", "D.2.4; F.3.1" ]
On Constructor Rewrite Systems and the Lambda-Calculus (Long Version)
http://arxiv.org/abs/0904.4120v4
http://arxiv.org/abs/0904.4120v4
http://arxiv.org/pdf/0904.4120v4
2009-04-27
2012-08-02
[ "Ugo Dal Lago", "Simone Martini" ]
[ "", "" ]
We prove that orthogonal constructor term rewrite systems and lambda-calculus with weak (i.e., no reduction is allowed under the scope of a lambda-abstraction) call-by-value reduction can simulate each other with a linear overhead. In particular, weak call-by-value beta-reduction can be simulated by an orthogonal constructor term rewrite system in the same number of reduction steps. Conversely, each reduction in a term rewrite system can be simulated by a constant number of beta-reduction steps. This is relevant to implicit computational complexity, because the number of beta steps to normal form is polynomially related to the actual cost (that is, as performed on a Turing machine) of normalization, under weak call-by-value reduction. Orthogonal constructor term rewrite systems and lambda-calculus are thus both polynomially related to Turing machines, taking as notion of cost their natural parameters.
20 pages. Extended version of a paper in the proceedings of ICALP 2009, Track B
cs.PL
[ "cs.PL", "cs.LO", "F.1.1; F.4.1" ]
Formally Specifying and Proving Operational Aspects of Forensic Lucid in Isabelle
http://arxiv.org/abs/0904.3789v1
http://arxiv.org/abs/0904.3789v1
http://arxiv.org/pdf/0904.3789v1
2009-04-24
2009-04-24
[ "Serguei A. Mokhov", "Joey Paquet" ]
[ "", "" ]
A Forensic Lucid intensional programming language has been proposed for intensional cyberforensic analysis. In large part, the language is based on various predecessor and codecessor Lucid dialects bound by the higher-order intensional logic (HOIL) that is behind them. This work formally specifies the operational aspects of the Forensic Lucid language and compiles a theory of its constructs using Isabelle, a proof assistant system.
23 pages, 3 listings, 3 figures, 1 table, 1 Appendix with theorems, pp. 76--98. TPHOLs 2008 Emerging Trends Proceedings, August 18-21, Montreal, Canada. Editors: Otmane Ait Mohamed and Cesar Munoz and Sofiene Tahar. The individual paper's PDF is at http://users.encs.concordia.ca/~tphols08/TPHOLs2008/ET/76-98.pdf
cs.LO
[ "cs.LO", "cs.CR", "cs.PL", "D.3.1; D.3.2; D.3.3; D.3.4" ]
On the Cooperation of the Constraint Domains H, R and FD in CFLP
http://arxiv.org/abs/0904.2136v1
http://arxiv.org/abs/0904.2136v1
http://arxiv.org/pdf/0904.2136v1
2009-04-14
2009-04-14
[ "S. Estévez-Martín", "T. Hortalá-González", "Rodríguez-Artalejo", "R. del Vado-Vírseda", "F. Sáenz-Pérez", "A. J. Fernández" ]
[ "", "", "", "", "", "" ]
This paper presents a computational model for the cooperation of constraint domains and an implementation for a particular case of practical importance. The computational model supports declarative programming with lazy and possibly higher-order functions, predicates, and the cooperation of different constraint domains equipped with their respective solvers, relying on a so-called Constraint Functional Logic Programming (CFLP) scheme. The implementation has been developed on top of the CFLP system TOY, supporting the cooperation of the three domains H, R and FD, which supply equality and disequality constraints over symbolic terms, arithmetic constraints over the real numbers, and finite domain constraints over the integers, respectively. The computational model has been proved sound and complete w.r.t. the declarative semantics provided by the CFLP scheme, while the implemented system has been tested with a set of benchmarks and shown to behave quite efficiently in comparison to the closest related approach we are aware of. To appear in Theory and Practice of Logic Programming (TPLP)
113 pages, 5 figures, 18 tables
cs.PL
[ "cs.PL", "cs.SC" ]
The Derivational Complexity Induced by the Dependency Pair Method
http://arxiv.org/abs/0904.0570v5
http://arxiv.org/abs/0904.0570v5
http://arxiv.org/pdf/0904.0570v5
2009-04-03
2011-07-11
[ "Georg Moser", "Andreas Schnabl" ]
[ "", "" ]
We study the derivational complexity induced by the dependency pair method, enhanced with standard refinements. We obtain upper bounds on the derivational complexity induced by the dependency pair method in terms of the derivational complexity of the base techniques employed. In particular we show that the derivational complexity induced by the dependency pair method based on some direct technique, possibly refined by argument filtering, the usable rules criterion, or dependency graphs, is primitive recursive in the derivational complexity induced by the direct method. This implies that the derivational complexity induced by a standard application of the dependency pair method based on traditional termination orders like KBO, LPO, and MPO is exactly the same as if those orders were applied as the only termination technique.
Logical Methods in Computer Science, Volume 7, Issue 3 (July 13, 2011) lmcs:805
10.2168/LMCS-7(3:1)2011
cs.LO
[ "cs.LO", "cs.AI", "cs.CC", "cs.PL", "F.4.1, F.2.2, D.2.4, D.2.8" ]
A System F accounting for scalars
http://arxiv.org/abs/0903.3741v8
http://arxiv.org/abs/0903.3741v8
http://arxiv.org/pdf/0903.3741v8
2009-03-22
2012-02-28
[ "Pablo Arrighi", "Alejandro Diaz-Caro" ]
[ "", "" ]
The Algebraic lambda-calculus and the Linear-Algebraic lambda-calculus extend the lambda-calculus with the possibility of making arbitrary linear combinations of terms. In this paper we provide a fine-grained, System F-like type system for the linear-algebraic lambda-calculus. We show that this "scalar" type system enjoys both the subject-reduction property and the strong-normalisation property, our main technical results. The latter yields a significant simplification of the linear-algebraic lambda-calculus itself, by removing the need for some restrictions in its reduction rules. But the more important, original feature of this scalar type system is that it keeps track of 'the amount of a type' that is present in each term. As an example of its use, we show that it can serve as a guarantee that the normal form of a term is barycentric, i.e. that its scalars sum to one.
Logical Methods in Computer Science, Volume 8, Issue 1 (February 27, 2012) lmcs:846
10.2168/LMCS-8(1:11)2012
cs.LO
[ "cs.LO", "cs.PL", "quant-ph", "F.4.1" ]
On the Computational Complexity of Satisfiability Solving for String Theories
http://arxiv.org/abs/0903.2825v1
http://arxiv.org/abs/0903.2825v1
http://arxiv.org/pdf/0903.2825v1
2009-03-16
2009-03-16
[ "Susmit Jha", "Sanjit A. Seshia", "Rhishikesh Limaye" ]
[ "", "", "" ]
Satisfiability solvers are increasingly playing a key role in software verification, with particularly effective use in the analysis of security vulnerabilities. String processing is a key part of many software applications, such as browsers and web servers. These applications are susceptible to attacks through malicious data received over the network. Automated tools for analyzing the security of such applications thus need to reason about strings. For efficiency reasons, it is desirable to have a solver that treats strings as first-class types. In this paper, we present some theories of strings that are useful in a software security context and analyze the computational complexity of the presented theories. We use this complexity analysis to motivate a byte-blast approach, which reduces the string constraints, via a Boolean encoding, to a corresponding Boolean satisfiability problem.
cs.CC
[ "cs.CC", "cs.LO", "cs.PL" ]
Relations, Constraints and Abstractions: Using the Tools of Logic Programming in the Security Industry
http://arxiv.org/abs/0903.2353v1
http://arxiv.org/abs/0903.2353v1
http://arxiv.org/pdf/0903.2353v1
2009-03-13
2009-03-13
[ "Andy King" ]
[ "" ]
Logic programming is sometimes described as relational programming: a paradigm in which the programmer specifies and composes n-ary relations using systems of constraints. An advanced logic programming environment will provide tools that abstract these relations to transform, optimise, or even verify the correctness of a logic program. This talk will show that these concepts, namely relations, constraints and abstractions, also turn out to be important in the reverse engineering process that underpins the discovery of bugs within the security industry.
Paper presented as an invited talk at the 18th Workshop on Logic-based Methods in Programming Environments (WLPE2008) (Report-No: WLPE/2008). Paper submitted by a co-editor of the Workshop proceedings
cs.PL
[ "cs.PL" ]
Prolog Visualization System Using Logichart Diagrams
http://arxiv.org/abs/0903.2207v1
http://arxiv.org/abs/0903.2207v1
http://arxiv.org/pdf/0903.2207v1
2009-03-12
2009-03-12
[ "Yoshihiro Adachi" ]
[ "" ]
We have developed a Prolog visualization system that is intended to support Prolog programming education. The system uses Logichart diagrams to visualize Prolog programs. The Logichart diagram is designed to visualize the Prolog execution flow intelligibly and to enable users to easily correlate the Prolog clauses with its parts. The system has the following functions. (1) It visually traces Prolog execution (goal calling, success, and failure) on the Logichart diagram. (2) Dynamic change in a Prolog program by calling extra-logical predicates, such as `assertz' and `retract', is visualized in real time. (3) Variable substitution processes are displayed in a text widget in real time.
Paper presented at the 18th Workshop on Logic-based Methods in Programming Environments (WLPE2008, arXiv:0903.1598). Paper submitted by a co-editor of the Workshop proceedings
cs.PL
[ "cs.PL", "cs.HC", "cs.SE" ]
A Lightweight Combination of Semantics for Non-deterministic Functions
http://arxiv.org/abs/0903.2205v1
http://arxiv.org/abs/0903.2205v1
http://arxiv.org/pdf/0903.2205v1
2009-03-12
2009-03-12
[ "Francisco Javier Lopez-Fraguas", "Juan Rodriguez-Hortala", "Jaime Sanchez-Hernandez" ]
[ "", "", "" ]
The use of non-deterministic functions is a distinctive feature of modern functional logic languages. The semantics commonly adopted is call-time choice, a notion that at the operational level is related to the sharing mechanism of lazy evaluation in functional languages. However, there are situations where run-time choice, closer to ordinary rewriting, is more appropriate. In this paper we propose an extension of existing call-time choice based languages, to provide support for run-time choice in localized parts of a program. The extension is remarkably simple at three relevant levels: syntax, formal operational calculi and implementation, which is based on the system Toy.
Paper presented at the 18th Workshop on Logic-based Methods in Programming Environments (WLPE2008) (Report-No: WLPE/2008). Paper submitted by a co-editor of the Workshop proceedings
cs.PL
[ "cs.PL" ]
Improving Size-Change Analysis in Offline Partial Evaluation
http://arxiv.org/abs/0903.2202v1
http://arxiv.org/abs/0903.2202v1
http://arxiv.org/pdf/0903.2202v1
2009-03-12
2009-03-12
[ "Michael Leuschel", "Salvador Tamarit", "German Vidal" ]
[ "", "", "" ]
Some recent approaches for scalable offline partial evaluation of logic programs include a size-change analysis for ensuring both so-called local and global termination. In this work, inspired by experimental evaluation, we introduce several improvements that may increase the accuracy of the analysis and, thus, the quality of the associated specialized programs. We aim to achieve this while maintaining the complexity and scalability of the recent works.
Paper presented at the 18th Workshop on Logic-based Methods in Programming Environments (WLPE2008) (Report-No: WLPE/2008). Paper submitted by a co-editor of the Workshop proceedings
cs.PL
[ "cs.PL" ]
On the Generation of Test Data for Prolog by Partial Evaluation
http://arxiv.org/abs/0903.2199v1
http://arxiv.org/abs/0903.2199v1
http://arxiv.org/pdf/0903.2199v1
2009-03-12
2009-03-12
[ "Miguel Gomez-Zamalloa", "Elvira Albert", "German Puebla" ]
[ "", "", "" ]
In recent work, we have proposed an approach to Test Data Generation (TDG) of imperative bytecode by partial evaluation (PE) of CLP which consists of two phases: (1) the bytecode program is first transformed into an equivalent CLP program by means of interpretive compilation by PE; (2) a second PE is performed in order to supervise the generation of test-cases by execution of the CLP decompiled program. The main advantages of TDG by PE include the flexibility to handle new coverage criteria, the possibility to obtain test-case generators, and the simplicity of its implementation. In principle the approach can be directly applied for TDG of any imperative language. However, when trying to apply it to a declarative language like Prolog, the main difficulty we have found is the generation of test-cases which cover the more complex control flow of Prolog. Essentially, the problem is that an intrinsic feature of PE is that it only computes non-failing derivations, while in TDG for Prolog it is essential to generate test-cases associated with failing computations. Basically, we propose to transform the original Prolog program into an equivalent Prolog program with explicit failure by partially evaluating a Prolog interpreter which captures failing derivations w.r.t. the input program. Another issue that we discuss in the paper is that, while in the case of bytecode the underlying constraint domain only manipulates integers, in Prolog it should properly handle the symbolic data manipulated by the program. The resulting scheme is of interest for bringing the advantages inherent in TDG by PE to the field of logic programming.
Paper presented at the 18th Workshop on Logic-based Methods in Programming Environments (WLPE2008) (Report-No: WLPE/2008). Paper submitted by a co-editor of the Workshop proceedings
cs.PL
[ "cs.PL", "cs.SE" ]
Constraint solving for high-level WCET analysis
http://arxiv.org/abs/0903.2251v1
http://arxiv.org/abs/0903.2251v1
http://arxiv.org/pdf/0903.2251v1
2009-03-12
2009-03-12
[ "Adrian Prantl", "Jens Knoop", "Markus Schordan", "Markus Triska" ]
[ "", "", "", "" ]
The safety of our day-to-day life depends crucially on the correct functioning of the embedded software systems that control more and more technical devices. Many of these software systems are time-critical. Hence, computations performed need not only to be correct, but must also be issued in a timely fashion. Worst-case execution time (WCET) analysis is concerned with computing tight upper bounds for the execution time of a system in order to provide formal guarantees for its proper timing behaviour. Central to this is computing safe and tight bounds for loops and recursion depths. In this paper, we highlight the TuBound approach to this challenge, at whose heart is a constraint-logic-based approach for loop analysis.
Paper presented at the 18th Workshop on Logic-based Methods in Programming Environments (WLPE2008) (Report-No: WLPE/2008). Paper submitted by a co-editor of the Workshop proceedings
cs.PL
[ "cs.PL", "cs.LO" ]
A Semantics-Aware Editing Environment for Prolog in Eclipse
http://arxiv.org/abs/0903.2252v1
http://arxiv.org/abs/0903.2252v1
http://arxiv.org/pdf/0903.2252v1
2009-03-12
2009-03-12
[ "Jens Bendisposto", "Ian Endrijautzki", "Michael Leuschel", "David Schneider" ]
[ "", "", "", "" ]
In this paper we present a Prolog plugin for Eclipse based upon BE4, providing many features such as semantics-aware syntax highlighting, an outline view, error marking, content assist, hover information, documentation generation, and quick fixes. The plugin makes use of a Java parser for full Prolog with an integrated Prolog engine, and can be extended with further semantic analyses, e.g., based on abstract interpretation.
Paper presented at the 18th Workshop on Logic-based Methods in Programming Environments (WLPE2008) (Report-No: WLPE/2008). Paper submitted by a co-editor of the Workshop proceedings
cs.PL
[ "cs.PL", "cs.HC", "cs.SE" ]
Better Termination for Prolog with Constraints
http://arxiv.org/abs/0903.2168v1
http://arxiv.org/abs/0903.2168v1
http://arxiv.org/pdf/0903.2168v1
2009-03-12
2009-03-12
[ "Markus Triska", "Ulrich Neumerkel", "Jan Wielemaker" ]
[ "", "", "" ]
Termination properties of actual Prolog systems with constraints are fragile and difficult to analyse. The lack of the occurs-check, moded and overloaded arithmetical evaluation via is/2 and the occasional nontermination of finite domain constraints are all sources for invalidating termination results obtained by current termination analysers that rely on idealized assumptions. In this paper, we present solutions to address these problems on the level of the underlying Prolog system. Improved unification modes meet the requirements of norm based analysers by offering dynamic occurs-check detection. A generalized finite domain solver overcomes the shortcomings of conventional arithmetic without significant runtime overhead. The solver offers unbounded domains, yet propagation always terminates. Our work improves Prolog's termination and makes Prolog a more reliable target for termination and type analysis. It is part of SWI-Prolog since version 5.6.50.
Paper presented at the 18th Workshop on Logic-based Methods in Programming Environments (WLPE2008) (Report-No: WLPE/2008). Paper submitted by a co-editor of the Workshop proceedings
cs.PL
[ "cs.PL", "cs.SE" ]
Rfuzzy framework
http://arxiv.org/abs/0903.2188v1
http://arxiv.org/abs/0903.2188v1
http://arxiv.org/pdf/0903.2188v1
2009-03-12
2009-03-12
[ "Victor Pablos Ceruelo", "Susana Munoz-Hernandez", "Hannes Strass" ]
[ "", "", "" ]
Fuzzy reasoning is a very productive research field that during the last years has provided a number of theoretical approaches and practical implementation prototypes. Nevertheless, the classical implementations, like Fril, are not adapted to the latest formal approaches, like multi-adjoint logic semantics. Some promising implementations, like Fuzzy Prolog, are so general that the regular user/programmer does not feel comfortable because either representation of fuzzy concepts is complex or the results are difficult to interpret. In this paper we present a modern framework, Rfuzzy, that models multi-adjoint logic. It provides some extensions such as default values (to represent missing information, even partial default values) and typed variables. Rfuzzy represents the truth value of predicates through facts, rules and functions. Rfuzzy answers queries with direct results (instead of constraints) and it is easy to use for any person that wants to represent a problem using fuzzy reasoning in a simple way (by using the classical representation with real numbers).
Paper presented at the 18th Workshop on Logic-based Methods in Programming Environments (WLPE2008) (Report-No: WLPE/2008). Paper submitted by a co-editor of the Workshop proceedings
cs.PL
[ "cs.PL", "cs.LO" ]
18th Workshop on Logic-based methods in Programming Environments (WLPE 2008)
http://arxiv.org/abs/0903.1598v6
http://arxiv.org/abs/0903.1598v6
http://arxiv.org/pdf/0903.1598v6
2009-03-09
2009-03-26
[ "Puri Arenas", "Damiano Zanardini" ]
[ "", "" ]
This volume contains the papers presented at WLPE 2008: the 18th Workshop on Logic-based Methods in Programming Environments held on 12th December, 2008 in Udine, Italy. It was held as a satellite workshop of ICLP 2008, the 24th International Conference on Logic Programming.
cs.PL
[ "cs.PL", "D.2.6; D.1.6" ]
An Instruction Sequence Semigroup with Involutive Anti-Automorphisms
http://arxiv.org/abs/0903.1352v2
http://arxiv.org/abs/0903.1352v2
http://arxiv.org/pdf/0903.1352v2
2009-03-07
2009-11-07
[ "Jan A. Bergstra", "Alban Ponse" ]
[ "", "" ]
We introduce an algebra of instruction sequences by presenting a semigroup C in which programs can be represented without directional bias: in terms of the next instruction to be executed, C has both forward and backward instructions and a C-expression can be interpreted starting from any instruction. We provide equations for thread extraction, i.e., C's program semantics. Then we consider thread extraction compatible (anti-)homomorphisms and (anti-)automorphisms. Finally we discuss some expressiveness results.
36 pages, 1 table
Scientific Annals of Computer Science, 19:57-92, 2009
cs.PL
[ "cs.PL", "math.RA", "D.3.1; F.3.2; I.1.1" ]
A Domain-Specific Language for Programming in the Tile Assembly Model
http://arxiv.org/abs/0903.0889v1
http://arxiv.org/abs/0903.0889v1
http://arxiv.org/pdf/0903.0889v1
2009-03-05
2009-03-05
[ "David Doty", "Matthew J. Patitz" ]
[ "", "" ]
We introduce a domain-specific language (DSL) for creating sets of tile types for simulations of the abstract Tile Assembly Model. The language defines objects known as tile templates, which represent related groups of tiles, and a small number of basic operations on tile templates that help to eliminate the error-prone drudgery of enumerating such tile types manually or with low-level constructs of general-purpose programming languages. The language is implemented as a class library in Python (a so-called internal DSL), but is presented independently of Python or object-oriented programming, with emphasis on supporting the creation of visual editing tools for programmatically creating large sets of complex tile types without needing to write a program.
cs.SE
[ "cs.SE", "cs.PL" ]
A minimalistic look at widening operators
http://arxiv.org/abs/0902.3722v3
http://arxiv.org/abs/0902.3722v3
http://arxiv.org/pdf/0902.3722v3
2009-02-21
2009-11-23
[ "David Monniaux" ]
[ "" ]
We consider the problem of formalizing the familiar notion of widening in abstract interpretation in higher-order logic. It turns out that many axioms of widening (e.g. widening sequences are ascending) are not useful for proving correctness. After keeping only useful axioms, we give an equivalent characterization of widening as a lazily constructed well-founded tree. In type systems supporting dependent products and sums, this tree can be made to reflect the condition of correct termination of the widening sequence.
cs.LO
[ "cs.LO", "cs.PL", "F.3.2; F.4.1" ]
Transmission protocols for instruction streams
http://arxiv.org/abs/0902.2859v1
http://arxiv.org/abs/0902.2859v1
http://arxiv.org/pdf/0902.2859v1
2009-02-17
2009-02-17
[ "J. A. Bergstra", "C. A. Middelburg" ]
[ "", "" ]
Threads as considered in thread algebra model behaviours to be controlled by some execution environment: upon each action performed by a thread, a reply from its execution environment -- which takes the action as an instruction to be processed -- determines how the thread proceeds. In this paper, we are concerned with the case where the execution environment is remote: we describe and analyse some transmission protocols for passing instructions from a thread to a remote execution environment.
13 pages
In ICTAC 2009, pages 127--139. Springer-Verlag, LNCS 5684, 2009
10.1007/978-3-642-03466-4_8
cs.PL
[ "cs.PL", "cs.DC", "D.2.1; D.2.4; F.1.1; F.3.1" ]
Creating modular and reusable DSL textual syntax definitions with Grammatic/ANTLR
http://arxiv.org/abs/0902.2621v1
http://arxiv.org/abs/0902.2621v1
http://arxiv.org/pdf/0902.2621v1
2009-02-16
2009-02-16
[ "Andrey Breslav" ]
[ "" ]
In this paper we present Grammatic -- a tool for textual syntax definition. Grammatic serves as a front-end for parser generators (and other tools) and brings modularity and reuse to their development artifacts. It adapts techniques for separation of concerns from Aspect-Oriented Programming to grammars and uses templates for grammar reuse. We illustrate usage of Grammatic by describing a case study: bringing separation of concerns to the ANTLR parser generator, which is achieved without the common time- and memory-consuming technique of building an AST to separate semantic actions from a grammar definition.
Submitted to PSI'09
cs.PL
[ "cs.PL", "cs.SE" ]
A formally verified compiler back-end
http://arxiv.org/abs/0902.2137v3
http://arxiv.org/abs/0902.2137v3
http://arxiv.org/pdf/0902.2137v3
2009-02-12
2009-11-14
[ "Xavier Leroy" ]
[ "" ]
This article describes the development and formal verification (proof of semantic preservation) of a compiler back-end from Cminor (a simple imperative intermediate language) to PowerPC assembly code, using the Coq proof assistant both for programming the compiler and for proving its correctness. Such a verified compiler is useful in the context of formal methods applied to the certification of critical software: the verification of the compiler guarantees that the safety properties proved on the source code hold for the executable compiled code as well.
Journal of Automated Reasoning 43, 4 (2009) 363-446
10.1007/s10817-009-9155-4
cs.LO
[ "cs.LO", "cs.PL" ]
Compilation of extended recursion in call-by-value functional languages
http://arxiv.org/abs/0902.1257v1
http://arxiv.org/abs/0902.1257v1
http://arxiv.org/pdf/0902.1257v1
2009-02-07
2009-02-07
[ "Tom Hirschowitz", "Xavier Leroy", "J. B. Wells" ]
[ "", "", "" ]
This paper formalizes and proves correct a compilation scheme for mutually-recursive definitions in call-by-value functional languages. This scheme supports a wider range of recursive definitions than previous methods. We formalize our technique as a translation scheme to a lambda-calculus featuring in-place update of memory blocks, and prove the translation to be correct.
62 pages, uses pic
Higher-Order and Symbolic Computation 22, 1 (2009) 3-66
10.1007/s10990-009-9042-z
cs.PL
[ "cs.PL" ]
CPAchecker: A Tool for Configurable Software Verification
http://arxiv.org/abs/0902.0019v1
http://arxiv.org/abs/0902.0019v1
http://arxiv.org/pdf/0902.0019v1
2009-01-30
2009-01-30
[ "Dirk Beyer", "M. Erkan Keremoglu" ]
[ "", "" ]
Configurable software verification is a recent concept for expressing different program analysis and model checking approaches in one single formalism. This paper presents CPAchecker, a tool and framework that aims at easy integration of new verification components. Every abstract domain, together with the corresponding operations, is required to implement the interface of configurable program analysis (CPA). The main algorithm is configurable to perform a reachability analysis on arbitrary combinations of existing CPAs. The major design goal during the development was to provide a framework for developers that is flexible and easy to extend. We hope that researchers find it convenient and productive to implement new verification ideas and algorithms using this platform and that it advances the field by making it easier to perform practical experiments. The tool is implemented in Java and runs as command-line tool or as Eclipse plug-in. We evaluate the efficiency of our tool on benchmarks from the software model checker BLAST. The first released version of CPAchecker implements CPAs for predicate abstraction, octagon, and explicit-value domains. Binaries and the source code of CPAchecker are publicly available as free software.
8 pages (6 without cover), 2 figures, 2 tables, tool paper, Web page: http://www.cs.sfu.ca/~dbeyer/CPAchecker
cs.PL
[ "cs.PL", "cs.SE", "D.2.4; F.3.1" ]
A Program Transformation for Continuation Call-Based Tabled Execution
http://arxiv.org/abs/0901.3906v1
http://arxiv.org/abs/0901.3906v1
http://arxiv.org/pdf/0901.3906v1
2009-01-25
2009-01-25
[ "Pablo Chico de Guzman", "Manuel Carro", "Manuel V. Hermenegildo" ]
[ "", "", "" ]
The advantages of tabled evaluation regarding program termination and reduction of complexity are well known -- as are the significant implementation, portability, and maintenance efforts that some proposals (especially those based on suspension) require. This implementation effort is reduced by program transformation-based continuation call techniques, at some efficiency cost. However, the traditional formulation of this proposal by Ramesh and Cheng limits the interleaving of tabled and non-tabled predicates and thus cannot be used as-is for arbitrary programs. In this paper we present a complete translation for the continuation call technique which, using the runtime support needed for the traditional proposal, solves these problems and makes it possible to execute arbitrary tabled programs. We present performance results which show that CCall offers a useful tradeoff that can be competitive with state-of-the-art implementations.
Part of the proceedings of CICLOPS 2008
cs.PL
[ "cs.PL", "D.1.6, D.3.3" ]
Mechanized semantics for the Clight subset of the C language
http://arxiv.org/abs/0901.3619v1
http://arxiv.org/abs/0901.3619v1
http://arxiv.org/pdf/0901.3619v1
2009-01-23
2009-01-23
[ "Sandrine Blazy", "Xavier Leroy" ]
[ "", "" ]
This article presents the formal semantics of a large subset of the C language called Clight. Clight includes pointer arithmetic, "struct" and "union" types, C loops and structured "switch" statements. Clight is the source language of the CompCert verified compiler. The formal semantics of Clight is a big-step operational semantics that observes both terminating and diverging executions and produces traces of input/output events. The formal semantics of Clight is mechanized using the Coq proof assistant. In addition to the semantics of Clight, this article describes its integration in the CompCert verified compiler and several ways by which the semantics was validated.
Journal of Automated Reasoning (2009)
Journal of Automated Reasoning 43, 3 (2009) 263-288
10.1007/s10817-009-9148-3
cs.PL
[ "cs.PL" ]
Grammatic -- a tool for grammar definition reuse and modularity
http://arxiv.org/abs/0901.2461v1
http://arxiv.org/abs/0901.2461v1
http://arxiv.org/pdf/0901.2461v1
2009-01-16
2009-01-16
[ "Andrey Breslav" ]
[ "" ]
Grammatic is a tool for grammar definition and manipulation aimed at improving modularity and reuse of grammars and related development artifacts. It is independent from parsing technology and any other details of target system implementation. Grammatic provides a way for annotating grammars with arbitrary metadata (like associativity attributes, semantic actions or anything else). It might be used as a front-end for external tools like parser generators to make their input grammars modular and reusable. This paper describes the main principles behind Grammatic and gives an overview of the languages it provides and their ability to separate concerns and define reusable modules. It also presents sketches of possible use cases for the tool.
Submitted to DSL'09
cs.PL
[ "cs.PL", "cs.SE" ]
The Safe Lambda Calculus
http://arxiv.org/abs/0901.2399v3
http://arxiv.org/abs/0901.2399v3
http://arxiv.org/pdf/0901.2399v3
2009-01-16
2009-02-19
[ "William Blum", "C. -H. Luke Ong" ]
[ "", "" ]
Safety is a syntactic condition of higher-order grammars that constrains occurrences of variables in the production rules according to their type-theoretic order. In this paper, we introduce the safe lambda calculus, which is obtained by transposing (and generalizing) the safety condition to the setting of the simply-typed lambda calculus. In contrast to the original definition of safety, our calculus does not constrain types (to be homogeneous). We show that in the safe lambda calculus, there is no need to rename bound variables when performing substitution, as variable capture is guaranteed not to happen. We also propose an adequate notion of beta-reduction that preserves safety. In the same vein as Schwichtenberg's 1976 characterization of the simply-typed lambda calculus, we show that the numeric functions representable in the safe lambda calculus are exactly the multivariate polynomials; thus conditional is not definable. We also give a characterization of representable word functions. We then study the complexity of deciding beta-eta equality of two safe simply-typed terms and show that this problem is PSPACE-hard. Finally we give a game-semantic analysis of safety: We show that safe terms are denoted by `P-incrementally justified strategies'. Consequently pointers in the game semantics of safe lambda-terms are only necessary from order 4 onwards.
Logical Methods in Computer Science, Volume 5, Issue 1 (February 19, 2009) lmcs:1145
10.2168/LMCS-5(1:3)2009
cs.PL
[ "cs.PL", "cs.GT", "F.3.2; F.4.1" ]
Logical Algorithms meets CHR: A meta-complexity result for Constraint Handling Rules with rule priorities
http://arxiv.org/abs/0901.1230v1
http://arxiv.org/abs/0901.1230v1
http://arxiv.org/pdf/0901.1230v1
2009-01-09
2009-01-09
[ "Leslie De Koninck" ]
[ "" ]
This paper investigates the relationship between the Logical Algorithms language (LA) of Ganzinger and McAllester and Constraint Handling Rules (CHR). We present a translation schema from LA to CHR-rp: CHR with rule priorities, and show that the meta-complexity theorem for LA can be applied to a subset of CHR-rp via inverse translation. Inspired by the high-level implementation proposal for Logical Algorithms by Ganzinger and McAllester and based on a new scheduling algorithm, we propose an alternative implementation for CHR-rp that gives strong complexity guarantees and results in a new and accurate meta-complexity theorem for CHR-rp. It is furthermore shown that the translation from Logical Algorithms to CHR-rp, combined with the new CHR-rp implementation, satisfies the required complexity for the Logical Algorithms meta-complexity result to hold.
To appear in Theory and Practice of Logic Programming (TPLP)
cs.PL
[ "cs.PL", "cs.AI", "cs.CC" ]
On the Complexity of Deciding Call-by-Need
http://arxiv.org/abs/0901.0869v2
http://arxiv.org/abs/0901.0869v2
http://arxiv.org/pdf/0901.0869v2
2009-01-07
2011-11-28
[ "Irène Durand", "Aart Middeldorp" ]
[ "", "" ]
In a recent paper we introduced a new framework for the study of call by need computations to normal form and root-stable form in term rewriting. Using elementary tree automata techniques and ground tree transducers we obtained simple decidability proofs for classes of rewrite systems that are much larger than earlier classes defined using the complicated sequentiality concept. In this paper we show that we can do without ground tree transducers in order to arrive at decidability proofs that are phrased in direct tree automata constructions. This allows us to derive better complexity bounds.
cs.LO
[ "cs.LO", "cs.PL" ]
A Simple, Linear-Time Algorithm for x86 Jump Encoding
http://arxiv.org/abs/0812.4973v1
http://arxiv.org/abs/0812.4973v1
http://arxiv.org/pdf/0812.4973v1
2008-12-29
2008-12-29
[ "Neil G. Dickson" ]
[ "" ]
The problem of space-optimal jump encoding in the x86 instruction set, also known as branch displacement optimization, is described, and a linear-time algorithm is given that uses no complicated data structures, no recursion, and no randomization. The only assumption is that there are no array declarations whose size depends on the negative of the size of a section of code (Hyde 2006), which is reasonable for real code.
5 pages
cs.PL
[ "cs.PL" ]
Formalizing common sense for scalable inconsistency-robust information integration using Direct Logic(TM) reasoning and the Actor Model
http://arxiv.org/abs/0812.4852v103
http://arxiv.org/abs/0812.4852v103
http://arxiv.org/pdf/0812.4852v103
2008-12-28
2015-03-02
[ "Carl Hewitt" ]
[ "" ]
Because contemporary large software systems are pervasively inconsistent, it is not safe to reason about them using classical logic. The goal of Direct Logic is to be a minimal fix to classical mathematical logic that meets the requirements of large-scale Internet applications (including sense making for natural language) by addressing the following issues: inconsistency robustness, contrapositive inference bug, and direct argumentation. Direct Logic makes the following contributions over previous work: * Direct Inference (no contrapositive bug for inference) * Direct Argumentation (inference directly expressed) * Inconsistency-robust deduction without artifices such as indices (labels) on propositions or restrictions on reiteration * Intuitive inferences hold including the following: * Boolean Equivalences * Reasoning by splitting for disjunctive cases * Soundness * Inconsistency-robust Proof by Contradiction Since the global state model of computation (first formalized by Turing) is inadequate to the needs of modern large-scale Internet applications, the Actor Model was developed to meet this need. Using the Actor Model, this paper proves that Logic Programming is not computationally universal in that there are computations that cannot be implemented using logical inference. Consequently the Logic Programming paradigm is strictly less general than the Procedural Embedding of Knowledge paradigm.
Corrected: all types are strict
cs.LO
[ "cs.LO", "cs.PL", "cs.SE" ]
XML Static Analyzer User Manual
http://arxiv.org/abs/0812.3550v1
http://arxiv.org/abs/0812.3550v1
http://arxiv.org/pdf/0812.3550v1
2008-12-18
2008-12-18
[ "Pierre Geneves", "Nabil Layaida" ]
[ "", "" ]
This document describes how to use the XML static analyzer in practice. It provides informal documentation for using the XML reasoning solver implementation. The solver allows automated verification of properties that are expressed as logical formulas over trees. A logical formula may for instance express structural constraints or navigation properties (like e.g. path existence and node selection) in finite trees. Logical formulas can be expressed using the syntax of XPath expressions, DTD, XML Schemas, and Relax NG definitions.
cs.PL
[ "cs.PL", "cs.DB", "cs.LO", "cs.SE", "D.3.0; D.3.1; D.3.4; E.1; F.3.1; F.3.2; F.4.1; F.4.3; H.2.3; I.2.4;\n I.7.2" ]
New parallel programming language design: a bridge between brain models and multi-core/many-core computers?
http://arxiv.org/abs/0812.2926v1
http://arxiv.org/abs/0812.2926v1
http://arxiv.org/pdf/0812.2926v1
2008-12-15
2008-12-15
[ "Gheorghe Stefanescu", "Camelia Chira" ]
[ "", "" ]
The recurrent theme of this paper is that sequences of long temporal patterns, as opposed to sequences of simple statements, are to be fed into computation devices, be they (newly proposed) models for brain activity or multi-core/many-core computers. In such models, parts of these long temporal patterns are already committed while others are predicted. This combination of matching patterns and making predictions appears as a key element in producing intelligent processing in brain models and getting efficient speculative execution on multi-core/many-core computers. A bridge between these far-apart models of computation could be provided by appropriate design of massively parallel, interactive programming languages. Agapia is a recently proposed language of this kind, where user-controlled long high-level temporal structures occur at the interaction interfaces of processes. In this paper Agapia is used to link HTM brain models with TRIPS multi-core/many-core architectures.
To appear in: "From Natural Language to Soft Computing: New Paradigms in Artificial Intelligence,", L.A. Zadeh et.al (Eds.), Editing House of Romanian Academy, 2008
cs.PL
[ "cs.PL", "cs.AI" ]
Control software analysis, part II: Closed-loop analysis
http://arxiv.org/abs/0812.1986v1
http://arxiv.org/abs/0812.1986v1
http://arxiv.org/pdf/0812.1986v1
2008-12-10
2008-12-10
[ "Eric Feron", "Fernando Alegre" ]
[ "", "" ]
The analysis and proper documentation of the properties of closed-loop control software presents many distinct aspects from the analysis of the same software running open-loop. Issues of physical system representations arise, and it is desired that such representations remain independent from the representations of the control program. For that purpose, a concurrent program representation of the plant and the control processes is proposed, although the closed-loop system is sufficiently serialized to enable a sequential analysis. While dealing with closed-loop system properties, it is also shown by means of examples how special treatment of nonlinearities extends from the analysis of control specifications to code analysis.
16 pages, 2 figures
cs.SE
[ "cs.SE", "cs.PL" ]
Justifications for Logic Programs under Answer Set Semantics
http://arxiv.org/abs/0812.0790v1
http://arxiv.org/abs/0812.0790v1
http://arxiv.org/pdf/0812.0790v1
2008-12-03
2008-12-03
[ "Enrico Pontelli", "Tran Cao Son", "Omar Elkhatib" ]
[ "", "", "" ]
The paper introduces the notion of off-line justification for Answer Set Programming (ASP). Justifications provide a graph-based explanation of the truth value of an atom w.r.t. a given answer set. The paper extends also this notion to provide justification of atoms during the computation of an answer set (on-line justification), and presents an integration of on-line justifications within the computation model of Smodels. Off-line and on-line justifications provide useful tools to enhance understanding of ASP, and they offer a basic data structure to support methodologies and tools for debugging answer set programs. A preliminary implementation has been developed in ASP-PROLOG. (To appear in Theory and Practice of Logic Programming (TPLP))
59 pages, 7 figures
cs.AI
[ "cs.AI", "cs.PL" ]
Provenance Traces
http://arxiv.org/abs/0812.0564v1
http://arxiv.org/abs/0812.0564v1
http://arxiv.org/pdf/0812.0564v1
2008-12-02
2008-12-02
[ "James Cheney", "Umut Acar", "Amal Ahmed" ]
[ "", "", "" ]
Provenance is information about the origin, derivation, ownership, or history of an object. It has recently been studied extensively in scientific databases and other settings due to its importance in helping scientists judge data validity, quality and integrity. However, most models of provenance have been stated as ad hoc definitions motivated by informal concepts such as "comes from", "influences", "produces", or "depends on". These models lack clear formalizations describing in what sense the definitions capture these intuitive concepts. This makes it difficult to compare approaches, evaluate their effectiveness, or argue about their validity. We introduce provenance traces, a general form of provenance for the nested relational calculus (NRC), a core database query language. Provenance traces can be thought of as concrete data structures representing the operational semantics derivation of a computation; they are related to the traces that have been used in self-adjusting computation, but differ in important respects. We define a tracing operational semantics for NRC queries that produces both an ordinary result and a trace of the execution. We show that three pre-existing forms of provenance for the NRC can be extracted from provenance traces. Moreover, traces satisfy two semantic guarantees: consistency, meaning that the traces describe what actually happened during execution, and fidelity, meaning that the traces "explain" how the expression would behave if the input were changed. These guarantees are much stronger than those contemplated for previous approaches to provenance; thus, provenance traces provide a general semantic foundation for comparing and unifying models of provenance in databases.
Technical report
cs.PL
[ "cs.PL", "cs.DB" ]
Ensuring Query Compatibility with Evolving XML Schemas
http://arxiv.org/abs/0811.4324v1
http://arxiv.org/abs/0811.4324v1
http://arxiv.org/pdf/0811.4324v1
2008-11-26
2008-11-26
[ "Pierre Genevès", "Nabil Layaïda", "Vincent Quint" ]
[ "", "", "" ]
During the life cycle of an XML application, both schemas and queries may change from one version to another. Schema evolutions may affect query results and potentially the validity of produced data. Nowadays, a challenge is to assess and accommodate the impact of these changes in rapidly evolving XML applications. This article proposes a logical framework and tool for verifying forward/backward compatibility issues involving schemas and queries. First, it allows analyzing relations between schemas. Second, it allows XML designers to identify queries that must be reformulated in order to produce the expected results across successive schema versions. Third, it allows examining more precisely the impact of schema changes over queries, therefore facilitating their reformulation.
cs.PL
[ "cs.PL", "cs.SE", "D.3.0; D.3.1; D.3.4; E.1; F.3.1; F.3.2; F.4.1; F.4.3; H.2.3; I.2.4;\n I.7.2" ]
A Rational Deconstruction of Landin's SECD Machine with the J Operator
http://arxiv.org/abs/0811.3231v2
http://arxiv.org/abs/0811.3231v2
http://arxiv.org/pdf/0811.3231v2
2008-11-19
2008-11-28
[ "Olivier Danvy", "Kevin Millikin" ]
[ "", "" ]
Landin's SECD machine was the first abstract machine for applicative expressions, i.e., functional programs. Landin's J operator was the first control operator for functional languages, and was specified by an extension of the SECD machine. We present a family of evaluation functions corresponding to this extension of the SECD machine, using a series of elementary transformations (transformation into continuation-passing style (CPS) and defunctionalization, chiefly) and their left inverses (transformation into direct style and refunctionalization). To this end, we modernize the SECD machine into a bisimilar one that operates in lockstep with the original one but that (1) does not use a data stack and (2) uses the caller-save rather than the callee-save convention for environments. We also identify that the dump component of the SECD machine is managed in a callee-save way. The caller-save counterpart of the modernized SECD machine precisely corresponds to Thielecke's double-barrelled continuations and to Felleisen's encoding of J in terms of call/cc. We then variously characterize the J operator in terms of CPS and in terms of delimited-control operators in the CPS hierarchy. As a byproduct, we also present several reduction semantics for applicative expressions with the J operator, based on Curien's original calculus of explicit substitutions. These reduction semantics mechanically correspond to the modernized versions of the SECD machine and to the best of our knowledge, they provide the first syntactic theories of applicative expressions with the J operator.
Logical Methods in Computer Science, Volume 4, Issue 4 (November 29, 2008) lmcs:1112
10.2168/LMCS-4(4:12)2008
cs.PL
[ "cs.PL", "cs.LO", "D.1.1, D.3.3, F.1.1" ]
A Transformation-Based Approach for the Design of Parallel/Distributed Scientific Software: the FFT
http://arxiv.org/abs/0811.2535v1
http://arxiv.org/abs/0811.2535v1
http://arxiv.org/pdf/0811.2535v1
2008-11-15
2008-11-15
[ "Harry B. Hunt", "Lenore R. Mullin", "Daniel J. Rosenkrantz", "James E. Raynolds" ]
[ "", "", "", "" ]
We describe a methodology for designing efficient parallel and distributed scientific software. This methodology utilizes sequences of mechanizable algebra-based optimizing transformations. In this study, we apply our methodology to the FFT, starting from a high-level algebraic algorithm description. Abstract multiprocessor plans are developed and refined to specify which computations are to be done by each processor. Templates are then created that specify the locations of computations and data on the processors, as well as data flow among processors. Templates are developed in both the MPI and OpenMP programming styles. Preliminary experiments comparing code constructed using our methodology with code from several standard scientific libraries show that our code is often competitive and sometimes performs better. Interestingly, our code handled a larger range of problem sizes on one target architecture.
45 pages, 2 figures
cs.SE
[ "cs.SE", "cs.PL" ]
Compactly accessible categories and quantum key distribution
http://arxiv.org/abs/0811.2113v3
http://arxiv.org/abs/0811.2113v3
http://arxiv.org/pdf/0811.2113v3
2008-11-13
2016-04-19
[ "Chris Heunen" ]
[ "" ]
Compact categories have lately seen renewed interest via applications to quantum physics. Being essentially finite-dimensional, they cannot accommodate (co)limit-based constructions. For example, they cannot capture protocols such as quantum key distribution, that rely on the law of large numbers. To overcome this limitation, we introduce the notion of a compactly accessible category, relying on the extra structure of a factorisation system. This notion allows for infinite dimension while retaining key properties of compact categories: the main technical result is that the choice-of-duals functor on the compact part extends canonically to the whole compactly accessible category. As an example, we model a quantum key distribution protocol and prove its correctness categorically.
26 pages in Logical Methods in Computer Science, Volume 4, Issue 4 (November 17, 2008) lmcs:1129
Logical Methods in Computer Science, Volume 4, Issue 4 (November 17, 2008) lmcs:1129
10.2168/LMCS-4(4:9)2008
cs.LO
[ "cs.LO", "cs.PL", "quant-ph", "F.3.2" ]
Persistent Queries
http://arxiv.org/abs/0811.0819v1
http://arxiv.org/abs/0811.0819v1
http://arxiv.org/pdf/0811.0819v1
2008-11-05
2008-11-05
[ "Andreas Blass", "Yuri Gurevich" ]
[ "", "" ]
We propose a syntax and semantics for interactive abstract state machines to deal with the following situation. A query is issued during a certain step, but the step ends before any reply is received. Later, a reply arrives, and later yet the algorithm makes use of this reply. By a persistent query, we mean a query for which a late reply might be used. Syntactically, our proposal involves issuing, along with a persistent query, a location where a late reply is to be stored. Semantically, it involves only a minor modification of the existing theory of interactive small-step abstract state machines.
cs.PL
[ "cs.PL", "cs.LO" ]
Instruction sequences for the production of processes
http://arxiv.org/abs/0811.0436v2
http://arxiv.org/abs/0811.0436v2
http://arxiv.org/pdf/0811.0436v2
2008-11-04
2008-11-18
[ "J. A. Bergstra", "C. A. Middelburg" ]
[ "", "" ]
Single-pass instruction sequences under execution are considered to produce behaviours to be controlled by some execution environment. Threads as considered in thread algebra model such behaviours: upon each action performed by a thread, a reply from its execution environment determines how the thread proceeds. Threads in turn can be looked upon as producing processes as considered in process algebra. We show that, by apposite choice of basic instructions, all processes that can only be in a finite number of states can be produced by single-pass instruction sequences.
23 pages; acknowledgement corrected, reference updated
cs.PL
[ "cs.PL", "cs.LO", "D.1.4; F.1.1; F.1.2; F.3.2" ]
Automatic Modular Abstractions for Linear Constraints
http://arxiv.org/abs/0811.0166v1
http://arxiv.org/abs/0811.0166v1
http://arxiv.org/pdf/0811.0166v1
2008-11-02
2008-11-02
[ "David Monniaux" ]
[ "" ]
We propose a method for automatically generating abstract transformers for static analysis by abstract interpretation. The method focuses on linear constraints on programs operating on rational, real or floating-point variables and containing linear assignments and tests. In addition to loop-free code, the same method also applies for obtaining least fixed points as functions of the precondition, which permits the analysis of loops and recursive functions. Our algorithms are based on new quantifier elimination and symbolic manipulation techniques. Given the specification of an abstract domain, and a program block, our method automatically outputs an implementation of the corresponding abstract transformer. It is thus a form of program transformation. The motivation of our work is data-flow synchronous programming languages, used for building control-command embedded systems, but it also applies to imperative and functional programming.
cs.PL
[ "cs.PL", "cs.LO", "F.3.1; F.3.2; F.4.1" ]
Detection of parallel steps in programs with arrays
http://arxiv.org/abs/0810.5575v1
http://arxiv.org/abs/0810.5575v1
http://arxiv.org/pdf/0810.5575v1
2008-10-30
2008-10-30
[ "R. Nuriyev" ]
[ "" ]
The problem of detecting information and logically independent (DILD) steps in programs is key to equivalent program transformations. Here we consider the problem of independence of loop iterations, the concentration of massive data processing and hence the most challenging construction for parallelizing. We introduce a separated form of loops in which a loop's body is a sequence of procedures, each of which uses array elements selected in a previous procedure. We prove that any loop may be algorithmically represented in this form and that the number of such procedures is invariant. We show that for this form of loop the connections between steps are determined by certain integer equations, and hence the independence problem is algorithmically unsolvable if index expressions are more complex than cubical. We suggest a modification of index semantics that makes the connection equations trivial, so that loop iterations can be executed in parallel.
13 pages, 5 figures
cs.PL
[ "cs.PL" ]
The Mob core language and abstract machine (rev 0.2)
http://arxiv.org/abs/0810.4451v1
http://arxiv.org/abs/0810.4451v1
http://arxiv.org/pdf/0810.4451v1
2008-10-24
2008-10-24
[ "Herve Paulino", "Luis Lopes" ]
[ "", "" ]
Most current mobile agent systems are based on programming languages whose semantics are difficult to prove correct, as they lack an adequate underlying formal theory. In recent years, the development of the theory of concurrent systems, namely of process calculi, has for the first time allowed the modeling of mobile agent systems. Languages directly based on process calculi are, however, very low-level, and it is desirable to provide the programmer with higher-level abstractions while keeping the semantics of the base calculus. In this technical report we present the syntax and semantics of Mob, a scripting language for programming mobile agents. Mob is service-oriented, meaning that agents act both as servers and as clients of services, and that this coupling is done dynamically at run-time. The language is implemented on top of a process calculus, which allows us to prove that the framework is sound by encoding its semantics into the underlying calculus. This provides a form of language security not available to other mobile agent languages developed using a more ad-hoc approach.
33 pages
cs.PL
[ "cs.PL", "cs.DC" ]
Logics for XML
http://arxiv.org/abs/0810.4460v2
http://arxiv.org/abs/0810.4460v2
http://arxiv.org/pdf/0810.4460v2
2008-10-24
2014-05-24
[ "Pierre Geneves" ]
[ "" ]
This thesis describes the theoretical and practical foundations of a system for the static analysis of XML processing languages. The system relies on a fixpoint temporal logic with converse, derived from the mu-calculus, where models are finite trees. This calculus is expressive enough to capture regular tree types along with multi-directional navigation in trees, while having a single exponential time complexity. Specifically the decidability of the logic is proved in time 2^O(n) where n is the size of the input formula. Major XML concepts are linearly translated into the logic: XPath navigation and node selection semantics, and regular tree languages (which include DTDs and XML Schemas). Based on these embeddings, several problems of major importance in XML applications are reduced to satisfiability of the logic. These problems include XPath containment, emptiness, equivalence, overlap, coverage, in the presence or absence of regular tree type constraints, and the static type-checking of an annotated query. The focus is then given to a sound and complete algorithm for deciding the logic, along with a detailed complexity analysis, and crucial implementation techniques for building an effective solver. Practical experiments using a full implementation of the system are presented. The system appears to be efficient in practice for several realistic scenarios. The main application of this work is a new class of static analyzers for programming languages using both XPath expressions and XML type annotations (input and output). Such analyzers allow one to ensure at compile-time valuable properties such as type-safety and optimizations, for safer and more efficient XML processing.
Ph.D. dissertation, defended on December 4th, 2006
cs.PL
[ "cs.PL", "cs.DB", "cs.LO", "D.3.0; D.3.1; D.3.4; E.1; F.3.1; F.3.2; F.4.1; F.4.3; H.2.3; I.2.4; I.7.2" ]
Binding bigraphs as symmetric monoidal closed theories
http://arxiv.org/abs/0810.4419v2
http://arxiv.org/abs/0810.4419v2
http://arxiv.org/pdf/0810.4419v2
2008-10-24
2009-06-08
[ "Tom Hirschowitz", "Aurélien Pardon" ]
[ "", "" ]
Milner's bigraphs are a general framework for reasoning about distributed and concurrent programming languages. Notably, it has been designed to encompass both the pi-calculus and the Ambient calculus. This paper is only concerned with bigraphical syntax: given what we here call a bigraphical signature K, Milner constructs a (pre-) category of bigraphs BBig(K), whose main features are (1) the presence of relative pushouts (RPOs), which makes them well-behaved w.r.t. bisimulations, and that (2) the so-called structural equations become equalities. Examples of the latter include, e.g., in pi and Ambient, renaming of bound variables, associativity and commutativity of parallel composition, or scope extrusion for restricted names. Also, bigraphs follow a scoping discipline ensuring that, roughly, bound variables never escape their scope. Here, we reconstruct bigraphs using a standard categorical tool: symmetric monoidal closed (SMC) theories. Our theory enforces the same scoping discipline as bigraphs, as a direct property of SMC structure. Furthermore, it elucidates the slightly mysterious status of so-called links in bigraphs. Finally, our category is also considerably larger than the category of bigraphs, notably encompassing in the same framework terms and a flexible form of higher-order contexts.
17 pages, uses Paul Taylor's diagrams
cs.LO
[ "cs.LO", "cs.PL" ]
A Call-Graph Profiler for GNU Octave
http://arxiv.org/abs/0810.3468v1
http://arxiv.org/abs/0810.3468v1
http://arxiv.org/pdf/0810.3468v1
2008-10-20
2008-10-20
[ "Muthiah Annamalai", "Leela Velusamy" ]
[ "", "" ]
We report the design and implementation of a call-graph profiler for GNU Octave, a numerical computing platform. GNU Octave simplifies matrix computation for use in modeling or simulation. Our work provides a call-graph profiler, which is an improvement on the flat profiler. We elaborate design constraints of building a profiler for numerical computation, and benchmark the profiler by comparing it to the rudimentary timer start-stop (tic-toc) measurements, for a similar set of programs. The profiler code provides clean interfaces to internals of GNU Octave, for other (newer) profiling tools on GNU Octave.
6 pages, 2 figures, 1 table. Fix typos
cs.PF
[ "cs.PF", "cs.PL", "cs.SE" ]
A sound spatio-temporal Hoare logic for the verification of structured interactive programs with registers and voices
http://arxiv.org/abs/0810.3332v1
http://arxiv.org/abs/0810.3332v1
http://arxiv.org/pdf/0810.3332v1
2008-10-19
2008-10-19
[ "Cezara Dragoi", "Gheorghe Stefanescu" ]
[ "", "" ]
Interactive systems with registers and voices (shortly, "rv-systems") are a model for interactive computing obtained closing register machines with respect to a space-time duality transformation ("voices" are the time-dual counterparts of "registers"). In the same vein, AGAPIA v0.1, a structured programming language for rv-systems, is the space-time dual closure of classical while programs (over a specific type of data). Typical AGAPIA programs describe open processes located at various sites and having their temporal windows of adequate reaction to the environment. The language naturally supports process migration, structured interaction, and deployment of components on heterogeneous machines. In this paper a sound Hoare-like spatio-temporal logic for the verification of AGAPIA v0.1 programs is introduced. As a case study, a formal verification proof of a popular distributed termination detection protocol is presented.
21 pages, 8 figures, Invited submission for WADT'08 LNCS Proceedings
cs.PL
[ "cs.PL", "cs.LO", "F.1.2; F.3; D.2.4; D.3.2" ]
Periodic Single-Pass Instruction Sequences
http://arxiv.org/abs/0810.1151v2
http://arxiv.org/abs/0810.1151v2
http://arxiv.org/pdf/0810.1151v2
2008-10-07
2013-04-16
[ "Jan A. Bergstra", "Alban Ponse" ]
[ "", "" ]
A program is a finite piece of data that produces a (possibly infinite) sequence of primitive instructions. From scratch we develop a linear notation for sequential, imperative programs, using a familiar class of primitive instructions and so-called repeat instructions, a particular type of control instructions. The resulting mathematical structure is a semigroup. We relate this set of programs to program algebra (PGA) and show that a particular subsemigroup is a carrier for PGA by providing axioms for single-pass congruence, structural congruence, and thread extraction. This subsemigroup characterizes periodic single-pass instruction sequences and provides a direct basis for PGA's toolset.
16 pages, 3 tables, New title
cs.PL
[ "cs.PL", "D.3.1; F.3.2" ]
On the expressiveness of single-pass instruction sequences
http://arxiv.org/abs/0810.1106v3
http://arxiv.org/abs/0810.1106v3
http://arxiv.org/pdf/0810.1106v3
2008-10-07
2009-01-13
[ "J. A. Bergstra", "C. A. Middelburg" ]
[ "", "" ]
We perceive programs as single-pass instruction sequences. A single-pass instruction sequence under execution is considered to produce a behaviour to be controlled by some execution environment. Threads as considered in basic thread algebra model such behaviours. We show that all regular threads, i.e. threads that can only be in a finite number of states, can be produced by single-pass instruction sequences without jump instructions if use can be made of Boolean registers. We also show that, in the case where goto instructions are used instead of jump instructions, a bound to the number of labels restricts the expressiveness.
14 pages; error corrected, acknowledgement added; another error corrected, another acknowledgement added
Theory of Computing Systems, 50(2):313--328, 2012
10.1007/s00224-010-9301-8
cs.PL
[ "cs.PL", "D.1.4; D.3.3; F.1.1; F.3.3" ]
Definition and Implementation of a Points-To Analysis for C-like Languages
http://arxiv.org/abs/0810.0753v1
http://arxiv.org/abs/0810.0753v1
http://arxiv.org/pdf/0810.0753v1
2008-10-04
2008-10-04
[ "Stefano Soffia" ]
[ "" ]
The points-to problem is the problem of determining the possible run-time targets of pointer variables and is usually considered part of the more general aliasing problem, which consists in establishing whether and when different expressions can refer to the same memory address. Aliasing information is essential to every tool that needs to reason about the semantics of programs. However, due to well-known undecidability results, for all interesting languages that admit aliasing, the exact solution of nontrivial aliasing problems is not generally computable. This work focuses on approximated solutions to this problem by presenting a store-based, flow-sensitive points-to analysis, for applications in the field of automated software verification. In contrast to software testing procedures, which heuristically check the program against a finite set of executions, the methods considered in this work are static analyses, where the computed results are valid for all the possible executions of the analyzed program. We present a simplified programming language and its execution model; then an approximated execution model is developed using the ideas of abstract interpretation theory. Finally, the soundness of the approximation is formally proved. The aim of developing a realistic points-to analysis is pursued by presenting some extensions to the initial simplified model and discussing the correctness of their formulation. This work contains original contributions to the issue of points-to analysis, as it provides a formulation of a filter operation on the points-to abstract domain and a formal proof of the soundness of the defined abstract operations: these, as far as we know, are lacking from the previous literature.
135 pages
cs.PL
[ "cs.PL", "F.3.1" ]
Optimizing Binary Code Produced by Valgrind (Project Report on Virtual Execution Environments Course - AVExe)
http://arxiv.org/abs/0810.0372v1
http://arxiv.org/abs/0810.0372v1
http://arxiv.org/pdf/0810.0372v1
2008-10-02
2008-10-02
[ "Filipe Cabecinhas", "Nuno Lopes", "Renato Crisostomo", "Luis Veiga" ]
[ "", "", "", "" ]
Valgrind is a widely used framework for dynamic binary instrumentation, and it is mostly known for its memcheck tool. Valgrind's code generation module is far from producing optimal code. In addition, it has many backends for different CPU architectures, which makes code optimization in an architecture-independent way difficult. Our work focused on identifying sub-optimal code produced by Valgrind and optimizing it.
Technical report from INESC-ID Lisboa describing optimizations to code generation of the Valgring execution environment. Work developed in the context of a Virtual Execution Environments course (AVExe) at IST/Technical university of Lisbon
cs.PL
[ "cs.PL", "cs.OS" ]
Mechanistic Behavior of Single-Pass Instruction Sequences
http://arxiv.org/abs/0809.4635v1
http://arxiv.org/abs/0809.4635v1
http://arxiv.org/pdf/0809.4635v1
2008-09-26
2008-09-26
[ "Jan A. Bergstra", "Mark B. van der Zwaag" ]
[ "", "" ]
Earlier work on program and thread algebra detailed the functional, observable behavior of programs under execution. In this article we add the modeling of unobservable, mechanistic processing, in particular processing due to jump instructions. We model mechanistic processing preceding some further behavior as a delay of that behavior; we borrow a unary delay operator from discrete time process algebra. We define a mechanistic improvement ordering on threads and observe that some threads do not have an optimal implementation.
12 pages
cs.PL
[ "cs.PL", "cs.LO", "D.1.4; F.3.2; F.3.3" ]
How applicable is Python as first computer language for teaching programming in a pre-university educational environment, from a teacher's point of view?
http://arxiv.org/abs/0809.1437v1
http://arxiv.org/abs/0809.1437v1
http://arxiv.org/pdf/0809.1437v1
2008-09-09
2008-09-09
[ "Fotis Georgatos" ]
[ "" ]
This project report attempts to evaluate the educational properties of the Python computer language, in practice. This is done by examining computer language evolution history, related scientific background work, the existing educational research on computer languages, and Python's experimental application in higher secondary education in Greece during the first half of 2002. This Thesis Report was delivered in advance of a thesis defense for a Masters/Doctorandus (MSc/Drs) title with the Amstel Institute/Universiteit van Amsterdam, during the same year.
135 pages, 20 tables, 10 figures (incl. evolution of computer languages)
cs.PL
[ "cs.PL", "cs.CY", "D.3; K.3.2; I.2.6" ]
Realizing Fast, Scalable and Reliable Scientific Computations in Grid Environments
http://arxiv.org/abs/0808.3548v1
http://arxiv.org/abs/0808.3548v1
http://arxiv.org/pdf/0808.3548v1
2008-08-26
2008-08-26
[ "Yong Zhao", "Ioan Raicu", "Ian Foster", "Mihael Hategan", "Veronika Nefedova", "Mike Wilde" ]
[ "", "", "", "", "", "" ]
The practical realization of managing and executing large scale scientific computations efficiently and reliably is quite challenging. Scientific computations often involve thousands or even millions of tasks operating on large quantities of data, such data are often diversely structured and stored in heterogeneous physical formats, and scientists must specify and run such computations over extended periods on collections of compute, storage and network resources that are heterogeneous, distributed and may change constantly. We present the integration of several advanced systems: Swift, Karajan, and Falkon, to address the challenges in running various large scale scientific applications in Grid environments. Swift is a parallel programming tool for rapid and reliable specification, execution, and management of large-scale science and engineering workflows. Swift consists of a simple scripting language called SwiftScript and a powerful runtime system that is based on the CoG Karajan workflow engine and integrates the Falkon light-weight task execution service that uses multi-level scheduling and a streamlined dispatcher. We showcase the scalability, performance and reliability of the integrated system using application examples drawn from astronomy, cognitive neuroscience and molecular dynamics, all of which comprise a large number of fine-grained jobs. We show that Swift is able to represent dynamic workflows whose structures can only be determined during runtime and to greatly reduce the code size of various workflow representations using SwiftScript; schedule the execution of hundreds of thousands of parallel computations via the Karajan engine; and achieve up to 90% reduction in execution time when compared to traditional batch schedulers.
Book chapter in Grid Computing Research Progress, ISBN: 978-1-60456-404-4, Nova Publisher 2008
cs.DC
[ "cs.DC", "cs.PL", "D.1.3; D.4.7" ]
Proving Noninterference by a Fully Complete Translation to the Simply Typed lambda-calculus
http://arxiv.org/abs/0808.3307v2
http://arxiv.org/abs/0808.3307v2
http://arxiv.org/pdf/0808.3307v2
2008-08-25
2008-09-20
[ "Naokata Shikuma", "Atsushi Igarashi" ]
[ "", "" ]
Tse and Zdancewic have formalized the notion of noninterference for Abadi et al.'s DCC in terms of logical relations and given a proof of noninterference by reduction to parametricity of System F. Unfortunately, their proof contains errors in a key lemma that their translation from DCC to System F preserves the logical relations defined for both calculi. In fact, we have found a counterexample for it. In this article, instead of DCC, we prove noninterference for sealing calculus, a new variant of DCC, by reduction to the basic lemma of a logical relation for the simply typed lambda-calculus, using a fully complete translation to the simply typed lambda-calculus. Full completeness plays an important role in showing preservation of the two logical relations through the translation. Also, we investigate the relationship among sealing calculus, DCC, and an extension of DCC by Tse and Zdancewic, and show that the first and the last of the three are equivalent.
31 pages
Logical Methods in Computer Science, Volume 4, Issue 3 (September 20, 2008) lmcs:683
10.2168/LMCS-4(3:10)2008
cs.PL
[ "cs.PL", "cs.CR", "D.3.1; F.3.2; F.3.3" ]
Declarative Combinatorics: Isomorphisms, Hylomorphisms and Hereditarily Finite Data Types in Haskell
http://arxiv.org/abs/0808.2953v4
http://arxiv.org/abs/0808.2953v4
http://arxiv.org/pdf/0808.2953v4
2008-08-21
2009-01-19
[ "Paul Tarau" ]
[ "" ]
This paper is an exploration in a functional programming framework of {\em isomorphisms} between elementary data types (natural numbers, sets, multisets, finite functions, permutations, binary decision diagrams, graphs, hypergraphs, parenthesis languages, dyadic rationals, primes, DNA sequences etc.) and their extension to hereditarily finite universes through {\em hylomorphisms} derived from {\em ranking/unranking} and {\em pairing/unpairing} operations. An embedded higher order {\em combinator language} provides any-to-any encodings automatically. Besides applications to experimental mathematics, a few examples of ``free algorithms'' obtained by transferring operations between data types are shown. Other applications range from stream iterators on combinatorial objects to self-delimiting codes, succinct data representations and generation of random instances. The paper covers 59 data types and, through the use of the embedded combinator language, provides 3540 distinct bijective transformations between them. The self-contained source code of the paper, as generated from a literate Haskell program, is available at \url{http://logic.csci.unt.edu/tarau/research/2008/fISO.zip}. {\bf Keywords}: Haskell data representations, data type isomorphisms, declarative combinatorics, computational mathematics, Ackermann encoding, G\"{o}del numberings, arithmetization, ranking/unranking, hereditarily finite sets, functions and permutations, encodings of binary decision diagrams, dyadic rationals, DNA encodings
unpublished draft, revision 3, added various new encodings, with focus on primes and multisets, now 104 pages
cs.PL
[ "cs.PL", "cs.DS" ]
Coinductive big-step operational semantics
http://arxiv.org/abs/0808.0586v1
http://arxiv.org/abs/0808.0586v1
http://arxiv.org/pdf/0808.0586v1
2008-08-05
2008-08-05
[ "Xavier Leroy", "Hervé Grall" ]
[ "", "" ]
Using a call-by-value functional language as an example, this article illustrates the use of coinductive definitions and proofs in big-step operational semantics, enabling it to describe diverging evaluations in addition to terminating evaluations. We formalize the connections between the coinductive big-step semantics and the standard small-step semantics, proving that both semantics are equivalent. We then study the use of coinductive big-step semantics in proofs of type soundness and proofs of semantic preservation for compilers. A methodological originality of this paper is that all results have been proved using the Coq proof assistant. We explain the proof-theoretic presentation of coinductive definitions and proofs offered by Coq, and show that it facilitates the discovery and the presentation of the results.
Information and Computation (2007)
cs.PL
[ "cs.PL" ]
Logic Engines as Interactors
http://arxiv.org/abs/0808.0556v1
http://arxiv.org/abs/0808.0556v1
http://arxiv.org/pdf/0808.0556v1
2008-08-05
2008-08-05
[ "Paul Tarau" ]
[ "" ]
We introduce a new programming language construct, Interactors, supporting the agent-oriented view that programming is a dialog between simple, self-contained, autonomous building blocks. We define Interactors as an abstraction of answer generation and refinement in Logic Engines resulting in expressive language extension and metaprogramming patterns, including emulation of Prolog's dynamic database. A mapping between backtracking based answer generation in the callee and "forward" recursion in the caller enables interaction between different branches of the callee's search process and provides simplified design patterns for algorithms involving combinatorial generation and infinite answer streams. Interactors extend language constructs like Ruby, Python and C#'s multiple coroutining block returns through yield statements and they can emulate the action of monadic constructs and catamorphisms in functional languages. Keywords: generalized iterators, logic engines, agent oriented programming language constructs, interoperation with stateful objects, metaprogramming
unpublished draft
cs.PL
[ "cs.PL", "cs.MA" ]
Unfolding in CHR
http://arxiv.org/abs/0807.3979v1
http://arxiv.org/abs/0807.3979v1
http://arxiv.org/pdf/0807.3979v1
2008-07-25
2008-07-25
[ "Maurizio Gabbrielli", "Maria Chiara Meo", "Paolo Tacchella" ]
[ "", "", "" ]
Program transformation is an appealing technique which allows one to improve run-time efficiency and space consumption, and more generally to optimize a given program. Essentially it consists of a sequence of syntactic program manipulations which preserves some kind of semantic equivalence. One of the basic operations which is used by most program transformation systems is unfolding, which consists in the replacement of a procedure call by its definition. While there is a large body of literature on transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages and, to the best of our knowledge, no other has considered unfolding of CHR programs. This paper defines a correct unfolding system for CHR programs. We define an unfolding rule, show its correctness and discuss some conditions which can be used to delete an unfolded rule while preserving the program meaning. We prove that confluence and termination properties are preserved by the above transformations.
cs.PL
[ "cs.PL" ]
Quantifying Timing Leaks and Cost Optimisation
http://arxiv.org/abs/0807.3879v1
http://arxiv.org/abs/0807.3879v1
http://arxiv.org/pdf/0807.3879v1
2008-07-24
2008-07-24
[ "Alessandra Di Pierro", "Chris Hankin", "Herbert Wiklicky" ]
[ "", "", "" ]
We develop a new notion of security against timing attacks where the attacker is able to simultaneously observe the execution time of a program and the probability of the values of low variables. We then show how to measure the security of a program with respect to this notion via a computable estimate of the timing leakage and use this estimate for cost optimisation.
16 pages, 2 figures, 4 tables. A shorter version is included in the proceedings of ICICS'08 - 10th International Conference on Information and Communications Security, 20-22 October, 2008 Birmingham, UK
cs.CR
[ "cs.CR", "cs.PL" ]
A Non-Termination Criterion for Binary Constraint Logic Programs
http://arxiv.org/abs/0807.3451v3
http://arxiv.org/abs/0807.3451v3
http://arxiv.org/pdf/0807.3451v3
2008-07-22
2009-01-10
[ "Etienne Payet", "Fred Mesnard" ]
[ "", "" ]
On the one hand, termination analysis of logic programs is now a fairly established research topic within the logic programming community. On the other hand, non-termination analysis seems to remain a much less attractive subject. If we divide this line of research into two kinds of approaches: dynamic versus static analysis, this paper belongs to the latter. It proposes a criterion for detecting non-terminating atomic queries with respect to binary CLP rules, which strictly generalizes our previous works on this subject. We give a generic operational definition and an implemented logical form of this criterion. Then we show that the logical form is correct and complete with respect to the operational definition.
32 pages. Long version of a paper accepted for publication in Theory and Practice of Logic Programming (TPLP)
cs.PL
[ "cs.PL" ]
Flux: FunctionaL Updates for XML (extended report)
http://arxiv.org/abs/0807.1211v1
http://arxiv.org/abs/0807.1211v1
http://arxiv.org/pdf/0807.1211v1
2008-07-08
2008-07-08
[ "James Cheney" ]
[ "" ]
XML database query languages have been studied extensively, but XML database updates have received relatively little attention, and pose many challenges to language design. We are developing an XML update language called Flux, which stands for FunctionaL Updates for XML, drawing upon ideas from functional programming languages. In prior work, we have introduced a core language for Flux with a clear operational semantics and a sound, decidable static type system based on regular expression types. Our initial proposal had several limitations. First, it lacked support for recursive types or update procedures. Second, although a high-level source language can easily be translated to the core language, it is difficult to propagate meaningful type errors from the core language back to the source. Third, certain updates are well-formed yet contain path errors, or ``dead'' subexpressions which never do any useful work. It would be useful to detect path errors, since they often represent errors or optimization opportunities. In this paper, we address all three limitations. Specifically, we present an improved, sound type system that handles recursion. We also formalize a source update language and give a translation to the core language that preserves and reflects typability. We also develop a path-error analysis (a form of dead-code analysis) for updates.
Extended version of ICFP 2008 paper
cs.PL
[ "cs.PL", "cs.DB", "D.3.1; H.2.3" ]
Concept-Oriented Programming
http://arxiv.org/abs/0806.4746v2
http://arxiv.org/abs/0806.4746v2
http://arxiv.org/pdf/0806.4746v2
2008-06-29
2010-09-26
[ "Alexandr Savinov" ]
[ "" ]
Object-oriented programming (OOP) is aimed at describing the structure and behaviour of objects by hiding the mechanism of their representation and access in primitive references. In this article we describe an approach, called concept-oriented programming (COP), which focuses on modelling references, assuming that they also possess application-specific structure and behaviour accounting for a great deal or even most of the overall program complexity. References in COP are completely legalized and get the same status as objects, while the functions are distributed among both objects and references. In order to support this design we introduce a new programming construct, called concept, which generalizes conventional classes, and a concept inclusion relation generalizing class inheritance. The main advantage of COP is that it allows programmers to describe two sides of any program: explicitly used functions of objects, and intermediate functionality of references having a cross-cutting nature and executed implicitly behind the scenes during object access.
46 pages, 8 figures, 11 listings
cs.PL
[ "cs.PL" ]
Separability in the Ambient Logic
http://arxiv.org/abs/0806.3849v2
http://arxiv.org/abs/0806.3849v2
http://arxiv.org/pdf/0806.3849v2
2008-06-24
2008-09-04
[ "Daniel Hirschkoff", "Etienne Lozes", "Davide Sangiorgi" ]
[ "", "", "" ]
The {\it Ambient Logic} (AL) has been proposed for expressing properties of process mobility in the calculus of Mobile Ambients (MA), and as a basis for query languages on semistructured data. We study some basic questions concerning the discriminating power of AL, focusing on the equivalence on processes induced by the logic ($=_L$). As underlying calculi besides MA we consider a subcalculus in which an image-finiteness condition holds and that we prove to be Turing complete. Synchronous variants of these calculi are studied as well. In these calculi, we provide two operational characterisations of $=_L$: a coinductive one (as a form of bisimilarity) and an inductive one (based on structural properties of processes). After showing $=_L$ to be strictly finer than barbed congruence, we establish axiomatisations of $=_L$ on the subcalculus of MA (both the asynchronous and the synchronous version), enabling us to relate $=_L$ to structural congruence. We also present some (un)decidability results that are related to the above separation properties for AL: the undecidability of $=_L$ on MA and its decidability on the subcalculus.
logical methods in computer science, 44 pages
Logical Methods in Computer Science, Volume 4, Issue 3 (September 4, 2008) lmcs:682
10.2168/LMCS-4(3:4)2008
cs.LO
[ "cs.LO", "cs.MA", "cs.PL", "F.3.2; F.4.1" ]
An overview of QML with a concrete implementation in Haskell
http://arxiv.org/abs/0806.2735v2
http://arxiv.org/abs/0806.2735v2
http://arxiv.org/pdf/0806.2735v2
2008-06-17
2008-07-21
[ "Jonathan Grattage" ]
[ "" ]
This paper gives an introduction to and overview of the functional quantum programming language QML. The syntax of this language is defined and explained, along with a new QML definition of the quantum teleport algorithm. The categorical operational semantics of QML is also briefly introduced, in the form of annotated quantum circuits. This definition leads to a denotational semantics, given in terms of superoperators. Finally, an implementation in Haskell of the semantics for QML is presented as a compiler. The compiler takes QML programs as input, which are parsed into a Haskell datatype. The output from the compiler is either a quantum circuit (operational), an isometry (pure denotational) or a superoperator (impure denotational). Orthogonality judgements and problems with coproducts in QML are also discussed.
9 pages, final conference version (Quantum Physics and Logic 2008)
ENTCS: Proceedings of QPL V - DCV IV, 157-165, Reykjavik, Iceland, 2008
quant-ph
[ "quant-ph", "cs.PL" ]
Data-Oblivious Stream Productivity
http://arxiv.org/abs/0806.2680v5
http://arxiv.org/abs/0806.2680v5
http://arxiv.org/pdf/0806.2680v5
2008-06-16
2008-07-19
[ "Joerg Endrullis", "Clemens Grabmayer", "Dimitri Hendriks" ]
[ "", "", "" ]
We are concerned with demonstrating productivity of specifications of infinite streams of data, based on orthogonal rewrite rules. In general, this property is undecidable, but for restricted formats computable sufficient conditions can be obtained. The usual analysis disregards the identity of data, thus leading to approaches that we call data-oblivious. We present a method that is provably optimal among all such data-oblivious approaches. This means that in order to improve on the algorithm in this paper one has to proceed in a data-aware fashion.
cs.LO
[ "cs.LO", "cs.PL" ]
Logical Reasoning for Higher-Order Functions with Local State
http://arxiv.org/abs/0806.2448v2
http://arxiv.org/abs/0806.2448v2
http://arxiv.org/pdf/0806.2448v2
2008-06-15
2008-10-20
[ "Nobuko Yoshida", "Kohei Honda", "Martin Berger" ]
[ "", "", "" ]
We introduce an extension of Hoare logic for call-by-value higher-order functions with ML-like local reference generation. Local references may be generated dynamically and exported outside their scope, may store higher-order functions and may be used to construct complex mutable data structures. This primitive is captured logically using a predicate asserting reachability of a reference name from a possibly higher-order datum and quantifiers over hidden references. We explore the logic's descriptive and reasoning power with non-trivial programming examples combining higher-order procedures and dynamically generated local state. Axioms for reachability and local invariant play a central role for reasoning about the examples.
68 pages
Logical Methods in Computer Science, Volume 4, Issue 4 (October 20, 2008) lmcs:830
10.2168/LMCS-4(4:2)2008
cs.LO
[ "cs.LO", "cs.PL", "D.3.3; D.3.2; F.3.1; F.3.2; F.4.1" ]
Event Synchronization by Lightweight Message Passing
http://arxiv.org/abs/0805.4029v1
http://arxiv.org/abs/0805.4029v1
http://arxiv.org/pdf/0805.4029v1
2008-05-27
2008-05-27
[ "Avik Chaudhuri" ]
[ "" ]
Concurrent ML's events and event combinators facilitate modular concurrent programming with first-class synchronization abstractions. A standard implementation of these abstractions relies on fairly complex manipulations of first-class continuations in the underlying language. In this paper, we present a lightweight implementation of these abstractions in Concurrent Haskell, a language that already provides first-order message passing. At the heart of our implementation is a new distributed synchronization protocol. In contrast with several previous translations of event abstractions in concurrent languages, we remain faithful to the standard semantics for events and event combinators; for example, we retain the symmetry of $\mathtt{choose}$ for expressing selective communication.
cs.PL
[ "cs.PL", "cs.DC", "D.3.3; D.1.3; F.3.3" ]
The Complexity of Coverage
http://arxiv.org/abs/0804.4525v1
http://arxiv.org/abs/0804.4525v1
http://arxiv.org/pdf/0804.4525v1
2008-04-29
2008-04-29
[ "Krishnendu Chatterjee", "Luca de Alfaro", "Rupak Majumdar" ]
[ "", "", "" ]
We study the problem of generating a test sequence that achieves maximal coverage for a reactive system under test. We formulate the problem as a repeated game between the tester and the system, where the system state space is partitioned according to some coverage criterion and the objective of the tester is to maximize the set of partitions (or coverage goals) visited during the game. We show that the complexity of the maximal coverage problem for non-deterministic systems is PSPACE-complete, but is NP-complete for deterministic systems. For the special case of non-deterministic systems with a re-initializing ``reset'' action, which represents running a new test input on a re-initialized system, we show that the complexity is co-NP-complete. Our proof technique for reset games uses randomized testing strategies that circumvent the exponentially large memory requirement in the deterministic case.
15 Pages, 1 Figure
cs.PL
[ "cs.PL", "cs.SE" ]
Design and Implementation of a Tracer Driver: Easy and Efficient Dynamic Analyses of Constraint Logic Programs
http://arxiv.org/abs/0804.4116v1
http://arxiv.org/abs/0804.4116v1
http://arxiv.org/pdf/0804.4116v1
2008-04-25
2008-04-25
[ "Ludovic Langevine", "Mireille Ducasse" ]
[ "", "" ]
Tracers provide users with useful information about program executions. In this article, we propose a ``tracer driver''. From a single tracer, it provides a powerful front-end enabling multiple dynamic analysis tools to be easily implemented, while limiting the overhead of the trace generation. The relevant execution events are specified by flexible event patterns and a large variety of trace data can be given either systematically or ``on demand''. The proposed tracer driver has been designed in the context of constraint logic programming; experiments have been made within GNU-Prolog. Execution views provided by existing tools have been easily emulated with a negligible overhead. Experimental measures show that the flexibility and power of the described architecture lead to good performance. The tracer driver overhead is inversely proportional to the average time between two traced events. Whereas the principles of the tracer driver are independent of the traced programming language, it is best suited for high-level languages, such as constraint logic programming, where each traced execution event encompasses numerous low-level execution steps. Furthermore, constraint logic programming is especially hard to debug. The current environments do not provide all the useful dynamic analysis tools. They can significantly benefit from our tracer driver which enables dynamic analyses to be integrated at a very low cost.
To appear in Theory and Practice of Logic Programming (TPLP), Cambridge University Press. 30 pages,
cs.SE
[ "cs.SE", "cs.PL", "D.2.5; D.3.2" ]
Reasoning in Abella about Structural Operational Semantics Specifications
http://arxiv.org/abs/0804.3914v2
http://arxiv.org/abs/0804.3914v2
http://arxiv.org/pdf/0804.3914v2
2008-04-24
2008-06-03
[ "Andrew Gacek", "Dale Miller", "Gopalan Nadathur" ]
[ "", "", "" ]
The approach to reasoning about structural operational semantics style specifications supported by the Abella system is discussed. This approach uses lambda tree syntax to treat object language binding and encodes binding related properties in generic judgments. Further, object language specifications are embedded directly into the reasoning framework through recursive definitions. The treatment of binding via generic judgments implicitly enforces distinctness and atomicity in the names used for bound variables. These properties must, however, be made explicit in reasoning tasks. This objective can be achieved by allowing recursive definitions to also specify generic properties of atomic predicates. The utility of these various logical features in the Abella system is demonstrated through actual reasoning tasks. Brief comparisons with a few other logic based approaches are also made.
15 pages. To appear in LFMTP'08
cs.LO
[ "cs.LO", "cs.PL" ]
A Logic Programming Framework for Combinational Circuit Synthesis
http://arxiv.org/abs/0804.2095v1
http://arxiv.org/abs/0804.2095v1
http://arxiv.org/pdf/0804.2095v1
2008-04-14
2008-04-14
[ "Paul Tarau", "Brenda Luderman" ]
[ "", "" ]
Logic Programming languages and combinational circuit synthesis tools share a common "combinatorial search over logic formulae" background. This paper attempts to reconnect the two fields with a fresh look at Prolog encodings for the combinatorial objects involved in circuit synthesis. While benefiting from Prolog's fast unification algorithm and built-in backtracking mechanism, efficiency of our search algorithm is ensured by using parallel bitstring operations together with logic variable equality propagation, as a mapping mechanism from primary inputs to the leaves of candidate Leaf-DAGs implementing a combinational circuit specification. After an exhaustive expressiveness comparison of various minimal libraries, a surprising first-runner, Strict Boolean Inequality "<" together with constant function "1" also turns out to have small transistor-count implementations, competitive to NAND-only or NOR-only libraries. As a practical outcome, a more realistic circuit synthesizer is implemented that combines rewriting-based simplification of (<,1) circuits with exhaustive Leaf-DAG circuit search. Keywords: logic programming and circuit design, combinatorial object generation, exact combinational circuit synthesis, universal boolean logic libraries, symbolic rewriting, minimal transistor-count circuit synthesis
23rd International Conference on Logic Programming (ICLP), LNCS 4670, 2007, pages 180-194
cs.LO
[ "cs.LO", "cs.CE", "cs.DM", "cs.PL" ]
A classification of invasive patterns in AOP
http://arxiv.org/abs/0804.1696v2
http://arxiv.org/abs/0804.1696v2
http://arxiv.org/pdf/0804.1696v2
2008-04-10
2008-04-24
[ "Freddy Munoz", "Benoit Baudry", "Olivier Barais" ]
[ "", "", "" ]
Aspect-Oriented Programming (AOP) improves modularity by encapsulating crosscutting concerns into aspects. Some mechanisms for composing aspects allow invasiveness as a means to integrate concerns. Invasiveness means that AOP languages have unrestricted access to program properties. Such languages are interesting because they allow complex operations to be performed and functionalities to be introduced more effectively. In this report we present a classification of invasive patterns in AOP. This classification characterizes the invasive behavior of aspects and allows developers to reason abstractly about the incidence of aspects on the programs they crosscut.
cs.PL
[ "cs.PL", "cs.SE" ]
The Geometry of Interaction of Differential Interaction Nets
http://arxiv.org/abs/0804.1435v1
http://arxiv.org/abs/0804.1435v1
http://arxiv.org/pdf/0804.1435v1
2008-04-09
2008-04-09
[ "Marc de Falco" ]
[ "" ]
The purpose of the Geometry of Interaction is to give a semantics of proofs or programs that accounts for their dynamics. The initial presentation, translated as an algebraic weighting of paths in proofnets, led to a better characterization of optimal reduction in the lambda-calculus. Recently, Ehrhard and Regnier have introduced an extension of the Multiplicative Exponential fragment of Linear Logic (MELL) that is able to express non-deterministic behaviour of programs, together with a proofnet-like calculus: Differential Interaction Nets. This paper constructs a proper Geometry of Interaction (GoI) for this extension. We consider it both as an algebraic theory and as a concrete reversible computation. We draw links between this GoI and that of MELL. As a by-product we give, for the first time, an equational theory suitable for the GoI of the Multiplicative Additive fragment of Linear Logic.
20 pages, to be published in the proceedings of LICS08
cs.LO
[ "cs.LO", "cs.PL" ]
A Survey of Quantum Programming Languages: History, Methods, and Tools
http://arxiv.org/abs/0804.1118v1
http://arxiv.org/abs/0804.1118v1
http://arxiv.org/pdf/0804.1118v1
2008-04-07
2008-04-07
[ "Donald A. Sofge" ]
[ "" ]
Quantum computer programming is emerging as a new subject domain from multidisciplinary research in quantum computing, computer science, mathematics (especially quantum logic, lambda calculi, and linear logic), and engineering attempts to build the first non-trivial quantum computer. This paper briefly surveys the history, methods, and proposed tools for programming quantum computers circa late 2007. It is intended to provide an extensive but non-exhaustive look at work leading up to the current state-of-the-art in quantum computer programming. Further, it is an attempt to analyze the needed programming tools for quantum programmers, to use this analysis to predict the direction in which the field is moving, and to make recommendations for further development of quantum programming language tools.
6 pages
cs.PL
[ "cs.PL", "D.3.2" ]
Testing data types implementations from algebraic specifications
http://arxiv.org/abs/0804.0970v1
http://arxiv.org/abs/0804.0970v1
http://arxiv.org/pdf/0804.0970v1
2008-04-07
2008-04-07
[ "Marie-Claude Gaudel", "Pascale Le Gall" ]
[ "", "" ]
Algebraic specifications of data types provide a natural basis for testing data types implementations. In this framework, the conformance relation is based on the satisfaction of axioms. This makes it possible to formally state the fundamental concepts of testing: exhaustive test set, testability hypotheses, oracle. Various criteria for selecting finite test sets have been proposed. They depend on the form of the axioms, and on the possibilities of observation of the implementation under test. This last point is related to the well-known oracle problem. As the main interest of algebraic specifications is data type abstraction, testing a concrete implementation raises the issue of the gap between the abstract description and the concrete representation. The observational semantics of algebraic specifications bring solutions on the basis of the so-called observable contexts. After a description of testing methods based on algebraic specifications, the chapter gives a brief presentation of some tools and case studies, and presents some applications to other formal methods involving datatypes.
Formal Methods and Testing, Springer-Verlag (Ed.) (2008) 209-239
cs.PL
[ "cs.PL" ]
Semi-continuous Sized Types and Termination
http://arxiv.org/abs/0804.0876v2
http://arxiv.org/abs/0804.0876v2
http://arxiv.org/pdf/0804.0876v2
2008-04-05
2008-04-10
[ "Andreas Abel" ]
[ "" ]
Some type-based approaches to termination use sized types: an ordinal bound for the size of a data structure is stored in its type. A recursive function over a sized type is accepted if it is visible in the type system that recursive calls occur just at a smaller size. This approach is only sound if the type of the recursive function is admissible, i.e., depends on the size index in a certain way. To explore the space of admissible functions in the presence of higher-kinded data types and impredicative polymorphism, a semantics is developed where sized types are interpreted as functions from ordinals into sets of strongly normalizing terms. It is shown that upper semi-continuity of such functions is a sufficient semantic criterion for admissibility. To provide a syntactical criterion, a calculus for semi-continuous functions is developed.
33 pages, extended version of CSL'06
Logical Methods in Computer Science, Volume 4, Issue 2 (April 10, 2008) lmcs:1236
10.2168/LMCS-4(2:3)2008
cs.PL
[ "cs.PL", "cs.LO", "D.1.1; F.3.2; F.4.1" ]
Structure and Interpretation of Computer Programs
http://arxiv.org/abs/0803.4025v1
http://arxiv.org/abs/0803.4025v1
http://arxiv.org/pdf/0803.4025v1
2008-03-27
2008-03-27
[ "Ganesh M. Narayan", "K. Gopinath", "V. Sridhar" ]
[ "", "", "" ]
Call graphs depict the static, caller-callee relation between "functions" in a program. With most source/target languages supporting functions as the primitive unit of composition, call graphs naturally form the fundamental control flow representation available to understand/develop software. They are also the substrate on which various interprocedural analyses are performed and are integral part of program comprehension/testing. Given their universality and usefulness, it is imperative to ask if call graphs exhibit any intrinsic graph theoretic features -- across versions, program domains and source languages. This work is an attempt to answer these questions: we present and investigate a set of meaningful graph measures that help us understand call graphs better; we establish how these measures correlate, if any, across different languages and program domains; we also assess the overall, language independent software quality by suitably interpreting these measures.
9 pages, 10pt, double column, 15 figures
2nd IEEE International Symposium on Theoretical Aspects of Software Engineering, 2008, Nanjing, China
10.1109/TASE.2008.40
cs.SE
[ "cs.SE", "cs.PL", "D.2.8; D.2.3; D.2.5; D.2.10" ]
A Type System for Data-Flow Integrity on Windows Vista
http://arxiv.org/abs/0803.3230v2
http://arxiv.org/abs/0803.3230v2
http://arxiv.org/pdf/0803.3230v2
2008-03-21
2008-05-08
[ "Avik Chaudhuri", "Prasad Naldurg", "Sriram Rajamani" ]
[ "", "", "" ]
The Windows Vista operating system implements an interesting model of multi-level integrity. We observe that in this model, trusted code can be blamed for any information-flow attack; thus, it is possible to eliminate such attacks by static analysis of trusted code. We formalize this model by designing a type system that can efficiently enforce data-flow integrity on Windows Vista. Typechecking guarantees that objects whose contents are statically trusted never contain untrusted values, regardless of what untrusted code runs in the environment. Some of Windows Vista's runtime access checks are necessary for soundness; others are redundant and can be optimized away.
cs.CR
[ "cs.CR", "cs.OS", "cs.PL", "D.4.6; D.2.4; F.3.1" ]
Concurrent Composition and Algebras of Events, Actions, and Processes
http://arxiv.org/abs/0803.3099v1
http://arxiv.org/abs/0803.3099v1
http://arxiv.org/pdf/0803.3099v1
2008-03-21
2008-03-21
[ "Mark Burgin", "Marc L. Smith" ]
[ "", "" ]
There are many different models of concurrent processes. The goal of this work is to introduce a common formalized framework for current research in this area and to eliminate shortcomings of existing models of concurrency. Following up on previous research by the authors and other researchers on concurrency, we build a high-level metamodel EAP (event-action-process) for concurrent processes. This metamodel comprises a variety of other models of concurrent processes. We shape mathematical models for, and study, events, actions, and processes in relation to important practical problems, such as communication in networks, concurrent programming, and distributed computations. In the third section of the work, a three-level algebra of events, actions and processes is constructed and studied as a new stage of algebra for concurrent processes. Relations between EAP process algebra and other models of concurrency are considered in the fourth section of this work.
cs.LO
[ "cs.LO", "cs.PL", "D.4.1" ]
The Abella Interactive Theorem Prover (System Description)
http://arxiv.org/abs/0803.2305v2
http://arxiv.org/abs/0803.2305v2
http://arxiv.org/pdf/0803.2305v2
2008-03-15
2008-05-23
[ "Andrew Gacek" ]
[ "" ]
Abella is an interactive system for reasoning about aspects of object languages that have been formally presented through recursive rules based on syntactic structure. Abella utilizes a two-level logic approach to specification and reasoning. One level is defined by a specification logic which supports a transparent encoding of structural semantics rules and also enables their execution. The second level, called the reasoning logic, embeds the specification logic and allows the development of proofs of properties about specifications. An important characteristic of both logics is that they exploit the lambda tree syntax approach to treating binding in object languages. Amongst other things, Abella has been used to prove normalizability properties of the lambda calculus, cut admissibility for a sequent calculus and type uniqueness and subject reduction properties. This paper discusses the logical foundations of Abella, outlines the style of theorem proving that it supports and finally describes some of its recent applications.
7 pages, to appear in IJCAR'08
cs.LO
[ "cs.LO", "cs.PL" ]
Automated Termination Proofs for Logic Programs by Term Rewriting
http://arxiv.org/abs/0803.0014v2
http://arxiv.org/abs/0803.0014v2
http://arxiv.org/pdf/0803.0014v2
2008-03-02
2008-09-01
[ "P. Schneider-Kamp", "J. Giesl", "A. Serebrenik", "R. Thiemann" ]
[ "", "", "", "" ]
There are two kinds of approaches for termination analysis of logic programs: "transformational" and "direct" ones. Direct approaches prove termination directly on the basis of the logic program. Transformational approaches transform a logic program into a term rewrite system (TRS) and then analyze termination of the resulting TRS instead. Thus, transformational approaches make all methods previously developed for TRSs available for logic programs as well. However, the applicability of most existing transformations is quite restricted, as they can only be used for certain subclasses of logic programs. (Most of them are restricted to well-moded programs.) In this paper we improve these transformations such that they become applicable for any definite logic program. To simulate the behavior of logic programs by TRSs, we slightly modify the notion of rewriting by permitting infinite terms. We show that our transformation results in TRSs which are indeed suitable for automated termination analysis. In contrast to most other methods for termination of logic programs, our technique is also sound for logic programming without occur check, which is typically used in practice. We implemented our approach in the termination prover AProVE and successfully evaluated it on a large collection of examples.
49 pages
cs.LO
[ "cs.LO", "cs.AI", "cs.PL", "F.3.1; D.1.6; I.2.2; I.2.3" ]
Algebraic Pattern Matching in Join Calculus
http://arxiv.org/abs/0802.4018v2
http://arxiv.org/abs/0802.4018v2
http://arxiv.org/pdf/0802.4018v2
2008-02-27
2008-03-21
[ "Qin Ma", "Luc Maranget" ]
[ "", "" ]
We propose an extension of the join calculus with pattern matching on algebraic data types. Our initial motivation is twofold: to provide an intuitive semantics of the interaction between concurrency and pattern matching; to define a practical compilation scheme from extended join definitions into ordinary ones plus ML pattern matching. To assess the correctness of our compilation scheme, we develop a theory of the applied join calculus, a calculus with value passing and value matching. We implement this calculus as an extension of the current JoCaml system.
Logical Methods in Computer Science, Volume 4, Issue 1 (March 21, 2008) lmcs:770
10.2168/LMCS-4(1:7)2008
cs.PL
[ "cs.PL", "cs.DC", "D.1.3; D.3.3; F.3.2" ]
The RDF Virtual Machine
http://arxiv.org/abs/0802.3492v2
http://arxiv.org/abs/0802.3492v2
http://arxiv.org/pdf/0802.3492v2
2008-02-24
2010-03-25
[ "Marko A. Rodriguez" ]
[ "" ]
The Resource Description Framework (RDF) is a semantic network data model that is used to create machine-understandable descriptions of the world and is the basis of the Semantic Web. This article discusses the application of RDF to the representation of computer software and virtual computing machines. The Semantic Web is posited as not only a web of data, but also as a web of programs and processes.
keywords: Resource Description Framework, Virtual Machines, Distributed Computing, Semantic Web
Knowledge-Based Systems, 24(6), 890-903, August 2011
10.1016/j.knosys.2011.04.004
cs.PL
[ "cs.PL", "F.1.2; I.2.4; E.1" ]
Software graphs and programmer awareness
http://arxiv.org/abs/0802.2306v1
http://arxiv.org/abs/0802.2306v1
http://arxiv.org/pdf/0802.2306v1
2008-02-16
2008-02-16
[ "G. J. Baxter", "M. R. Frean" ]
[ "", "" ]
Dependencies between types in object-oriented software can be viewed as directed graphs, with types as nodes and dependencies as edges. The in-degree and out-degree distributions of such graphs have quite different forms, with the former resembling a power-law distribution and the latter an exponential distribution. This effect appears to be independent of application or type relationship. A simple generative model is proposed to explore the proposition that the difference arises because the programmer is aware of the out-degree of a type but not of its in-degree. The model reproduces the two distributions, and compares reasonably well to those observed in 14 different type relationships across 12 different Java applications.
9 pages, 8 figures
cs.SE
[ "cs.SE", "cs.PL" ]