Design of a Toolset for Dynamic Analysis of Concurrent Java Programs

Alessio Bechini
Dipartimento di Ingegneria dell'Informazione, Facoltà di Ingegneria - Università di Pisa
via Diotisalvi, 2, 56100 Pisa, Italy
[email protected]

Kuo-Chung Tai
Department of Computer Science, North Carolina State University
Raleigh, North Carolina 27695-7534, USA
[email protected]

This work was partially performed when the first author was a visitor at NCSU, supported by a fellowship from the University of Pisa and MURST, Italy. This work was supported in part by NSF grant CCR-9320992.

Abstract

The Java language supports the use of monitors, sockets, and remote method invocation for concurrent programming. Also, Java classes can be defined to simulate other types of concurrent constructs. However, concurrent Java programs, like other concurrent programs, are difficult to specify, design, code, test and debug. In this paper, we describe the design of a toolset, called JaDA (Java Dynamic Analyzer), that provides testing and debugging tools for concurrent Java programs. To collect run-time information or control program execution, JaDA requires transformation of a concurrent Java program into a slightly different Java program. We show that by modifying Java classes that support concurrent programming, Java application programs only need minor modifications. We also present a novel approach to managing threads that are needed for testing and debugging of concurrent Java programs.

1. Introduction

The Java language supports concurrent programming in several forms.
Java allows the use of monitors to express concurrency involving shared
variables. For distributed programming, Java provides classes and packages for
supporting the use of sockets and remote method invocation. Due to the nature of
object-oriented programming, Java classes and packages can be defined to
simulate other types of concurrent programming constructs. For example, a
package defined in [7] contains classes for simulating the semaphore construct
and various types of message passing constructs. Concurrent programs are
difficult to specify, design, code, test and debug. One reason is the existence
of race conditions due to the unpredictable rates of progress of concurrent
processes. As a consequence, multiple executions of a concurrent program with
the same input may exercise different sequences of interactions through shared
variables or messages and may produce different results. This nondeterministic
execution behavior makes it difficult to understand the behavior of concurrent
programs. There are two general approaches to analyzing the behavior of a
program. Static analysis of a program determines properties of the program
without executing the program. Dynamic analysis of a program involves executing
the program and analyzing the collected runtime information. In this paper, we
describe the design of a toolset, called JaDA, for dynamic analysis of
concurrent Java programs. In section 2, we give a review of dynamic analysis of
concurrent programs. In section 3, we describe the goals of JaDA. In section 4,
we present the architecture of JaDA and explain major design decisions. In
section 5, we report the current status of JaDA’s implementation. Finally, we
conclude this paper in section 6.

2. Review of dynamic analysis of concurrent programs

Dynamic analysis of concurrent programs has two major types of
analysis: testing and debugging, which are discussed in §2.1 and §2.2
respectively. Approaches to building dynamic analysis tools for concurrent
programs are described in §2.3. An execution of a concurrent program exercises
one or more synchronization events (or SYN-events). An actual or expected
execution of a concurrent program can be characterized by its sequence of
synchronization events, referred to as a synchronization sequence (or
SYN-sequence). The definitions of SYN-events and SYN-sequences of a concurrent
program are based on the concurrent programming constructs used in the program.
How to define SYN-events and SYN-sequences for various concurrent-programming
constructs has been shown in [3, 12, 13]. In the remainder of this paper,
general issues on dynamic analysis of concurrent programs are discussed in
terms of SYN-events and SYN-sequences. By doing so, this discussion can be
applied to different concurrent programming constructs. Let P be a concurrent
program. A SYN-sequence is said to be feasible for P with input X if this
SYN-sequence can possibly be exercised during an execution of P with input X. A
SYN-sequence is said to be valid for P with input X if, according to the
specification of P, this SYN-sequence is expected to be the SYN-sequence of an
execution of P with input X.

2.1 Testing of concurrent programs

Below we
briefly describe two basic approaches to testing concurrent programs. More
details about these and other approaches can be found in [13].

Nondeterministic testing of a concurrent program P involves the following steps:
1) Select a set of inputs of P.
2) For each selected input X, execute P with X many times and examine the result of each execution.
The purpose of multiple executions of P with input X is to exercise different feasible SYN-sequences and increase the chance of detecting faults. One technique for increasing the likelihood of exercising different SYN-sequences is to insert sleep (or delay) statements into P, with the length of sleep randomly chosen.
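For illustration, the random-sleep technique just described can be realized with a small helper like the one below, called immediately before selected synchronization operations; the helper name and delay bound are our own choices for this sketch, not part of any particular tool.

import java.util.Random;

// Sketch: a randomized delay inserted before selected synchronization
// operations to perturb thread interleavings during nondeterministic testing.
class TestDelay {
    private static final Random rnd = new Random();

    static void randomSleep(int maxMillis) {
        try {
            Thread.sleep(rnd.nextInt(maxMillis));  // sleep between 0 and maxMillis-1 ms
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();    // preserve the interrupt status
        }
    }
}

A transformed program would then call, for example, TestDelay.randomSleep(50); right before entering a synchronized block or sending a message.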

Deterministic testing of P involves the following steps:
1) Select a set of tests, each of the form (X, S), where X and S are an input and a SYN-sequence of P respectively.
2) For each selected test (X, S),
– force a deterministic execution of P with input X according to S. This forced execution determines whether S is feasible for P with input X.
– compare the actual and expected results (including the output, feasibility of S, and termination condition) of the forced execution. If the actual and expected results are different, a fault is detected.
Deterministic testing allows carefully selected SYN-sequences to be used to test specific portions or paths of P. Also, deterministic testing can detect the existence of invalid, feasible SYN-sequences of P and valid, infeasible SYN-sequences of P (nondeterministic testing can only detect the existence of invalid, feasible SYN-sequences of P).

2.2 Debugging of concurrent programs

Many debugging techniques for concurrent programs have been developed [9, 15]. Below we briefly describe debugging techniques that are related to JaDA:
· Collection of SYN-sequences: The collected SYN-sequences can be used for replay of previous executions (see below), and they can also be modified to generate new SYN-sequences for deterministic testing.
· Replay of SYN-sequences: The replay of a previous execution of a concurrent program can be accomplished by deterministic testing of this program with the input and SYN-sequence of the execution. Thus, replay is a special case of deterministic testing with the use of feasible SYN-sequences.
· Vector timestamps: To identify the causal relationships between events in a collected SYN-sequence, vector timestamps for these events are needed. The computation of vector timestamps for message-passing programs is discussed in [5, 11]. The use of vector timestamps for concurrent programs using shared variables is described in [10]. How to compute vector timestamps for programs using both messages and shared variables, according to strong and weak happened-before relations, is shown in [1].
· Race analysis: Race analysis of a collected SYN-sequence identifies race conditions in the SYN-sequence. The collected SYN-sequence of an execution of a message-passing program is a sequence of send and receive events. For a receive event r in a trace T, its race set is the set of messages in T that have a race with the message received at r and can be received at r during some possible executions of the same program with the same input. How to perform race analysis of SYN-sequences of message-passing programs is shown in [14]. How to perform race analysis for programs using shared variables is shown in [1].
Note that the computation of vector timestamps and race analysis for a collected SYN-sequence can be performed either on-the-fly (i.e., during the collection of the SYN-sequence) or post-mortem (i.e., after the collection of the SYN-sequence).

2.3 Approaches to building dynamic analysis tools for concurrent programs

To implement testing and
debugging techniques for concurrent programs requires the construction of tools.
Two basic approaches to building such tools for programs written in a
concurrent language L are discussed below. An implementation-based approach is
to modify one or more of the three components in the implementation of L: the
compiler, the run-time system, and the operating system. These modifications
enable the collection and forced execution of SYN-sequences during an execution
of a program written in L. For example, many implementation-based debuggers
allow the programmer to directly control execution by performing
“scheduler-control” operations such as setting breakpoints, selecting the next
running process, rearranging processes in various queues, and so on [9]. A
recent example of this approach, in which the layered structure of the run-time
system is used, is presented in [2]. A language-based approach for L has two
steps. The first step is to define the format of SYN-sequences for L in terms of
the synchronization constructs available in L. The second step is to develop
program transformation tools for L in order to support SYN-sequence collection
and execution. Language-based tools for supporting nondeterministic and
deterministic testing of concurrent Ada programs and concurrent programs using
the semaphore and monitor constructs have been implemented [3, 12].

3. Goals of JaDA (Java Dynamic Analyzer)

JaDA (Java Dynamic Analyzer) is a toolset for
dynamic analysis of concurrent Java programs. The goals of JaDA are the
following: a) to investigate the use of object-oriented technology for building
dynamic analysis tools for concurrent Java programs, b) to provide an integrated
and extensible environment that allows easy implementation of different dynamic
analysis techniques for concurrent Java programs, and c) to support empirical
studies of dynamic analysis techniques for concurrent Java programs. In the
remainder of this section, we discuss two issues of JaDA from the user’s
viewpoint. The next two sections describe the architecture and implementation of
JaDA.

Issue 1: Scope of concurrent Java programs

As mentioned earlier, Java
supports the use of monitors, sockets, and remote method invocation, and
additional Java classes and packages can be defined to simulate other types of
concurrent constructs. Thus, one design issue is the scope of concurrent Java
programs covered by JaDA. Our decision is to let JaDA cover built-in and
simulated concurrent constructs in Java. In the following discussion, packages
that simulate concurrent constructs are referred to as “communication packages”.

Issue 2: Transformation of concurrent Java programs

Since JaDA does not access the implementation of Java, the only choice is to apply the transformation-based (or language-based) approach mentioned in §2.3. JaDA applies the following two types of transformation for dynamic analysis:
a) If the user's program uses built-in concurrent constructs (e.g., synchronized statements and monitors), it is transformed in order to support dynamic analysis. This type of transformation is illustrated in fig. 1B.
b) If the user's program uses simulated concurrent constructs provided by communication packages, it is transformed and combined with transformed communication packages in order to support dynamic analysis. This type of transformation is illustrated in fig. 1C.

Fig. 1 - Representation of the structure of a Java program utilizing both built-in constructs and communication packages. Diagrams B) and C) illustrate the two types of transformation for dynamic analysis. The solution shown in C) reduces the number of changes in the user's program and makes most of such changes transparent to the programmer.

(If the user's program uses both built-in and simulated concurrent constructs, both types a) and b) of transformation need to be applied. For the sake of simplicity, we do not consider this case in the following discussion.)

For a Java program using simulated concurrent constructs, type a) transformation could be applied. However, type a) transformation has the following disadvantages:
· Type a) usually requires many changes in the user's program and thus makes the program difficult to understand.
· Automatic transformation for type a) may be difficult to do for some simulated concurrent constructs.
On the contrary, type b) transformation has the following advantages:
· Type b) requires very few changes in the user's program, due to the changes made in communication packages.
· The original and transformed packages can be used in different phases of software development.
So, for a Java program using simulated concurrent constructs, type b) transformation is more suitable than type a), and it allows most of the changes to be located in the communication packages. In fig. 1, it is worth noticing how the interfaces between different modules are involved in the transformations of type a) and b).

4. Architecture of JaDA

In this section we show the principal aspects of the
structure of JaDA, focusing on the problems we have found and the solutions we
have adopted.

JaDA is intended to provide the following capabilities, which are discussed in §2.2 (additional capabilities may be considered later):
a) collection of traces (or SYN-sequences)
b) replay of traces
c) computation of vector timestamps
d) race analysis
In JaDA, capabilities a), b) and c) are performed during
program execution, based on the use of program transformation, as discussed in
§3. The user’s program and communication packages are transformed only once. The
transformed program and packages allow the user to choose one of three
execution modes: normal execution, execution for collecting traces, and
execution for replay. In addition, the transformed program and packages allow
the user to specify whether computation of vector timestamps is performed
during an execution. Capability d) is performed on collected traces in
post-mortem fashion. One major design issue of JaDA is the use of centralized or
distributed control for performing capabilities a), b) and c). Our decision is
to use distributed control, in conjunction with partially ordered traces, in
order to get better performance. Distributed control allows concurrent accesses
to files for keeping a partially ordered trace. During replay of a partially
ordered trace, distributed control allows concurrent events to be repeated in an
arbitrary order and thus avoids unnecessary delay. Furthermore, distributed
control supports the use of JaDA on multiple JVMs. Fig. 3 shows how the
distribution of control is used in trace file management. Of course, in most cases a concurrent Java program is executed on a monoprocessor machine, so the benefit is negligible. However, if the program is spread over JVMs on different physical machines, the improvement coming from distributed control is significant.

4.1 Management of threads

In Java each thread is created through
an instance of class Thread, which has several static methods for managing
threads. For dynamic analysis, we need to keep additional information for each
thread. How to handle such information is a critical issue. Our solution is to define a new class called JKThread such that each thread becomes an instance of class JKThread. In order to make all the transformations as transparent to the programmer as possible, we have decided to "sandwich" JKThread between the Thread class (belonging to the java.lang package) and the class with the thread code. Fig. 2A and fig. 2B show this insertion. This is a good solution because we can create each thread directly as an instance of JKThread (instead of Thread). All the portions of the transformed program can access the additional data corresponding to the thread that is executing them in the following way (fig. 2C): the static currentThread() method in Thread returns a reference to a Thread object, but the new class JKThread extends it, so it is possible to apply a cast operator to get a reference to the current JKThread object (i.e., we can use "down-casting").
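Concretely, the down-cast described above (and shown in fig. 2C) looks as follows in the transformed code; the local variable name is arbitrary.

// Executed by a thread that was created as an instance of JKThread:
JKThread me = (JKThread) Thread.currentThread();  // down-cast from Thread to JKThread
// "me" now gives access to the additional per-thread data kept for dynamic
// analysis, such as the thread's vector timestamp.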

Class JKThread is implemented as follows. Constructors in class JKThread are expanded versions of the corresponding constructors in class Thread. For example, below is JKThread's constructor that has a Runnable object as a parameter:

public JKThread(Runnable target, int numThreads, int whatIAm) {
    super(target);
    vectClock = new VectTS(numThreads);
    [ ... ]
    this.whatIAm = whatIAm;
}

In the above constructor, the first statement calls the corresponding constructor in class Thread and the other statements assign values to variables for keeping additional information for the thread.
The direct use of class JKThread, however, is impossible for some threads. Below are two examples:
· The "main" thread of a Java program is created by the system and thus is not an instance of class JKThread. One solution to this problem is to disallow the main thread from performing any synchronization events.
· In a Java GUI environment, threads are created by the system to perform operations associated with external events such as clicking on a button. Such threads are not instances of class JKThread.
Considering these two facts, it is not always possible to use the JKThread class naively; additional mechanisms are sometimes needed in order to use this class properly in different contexts.

4.2 Definition of event formats

Every time we want to gather information about the events exercised during an execution, we must specify the types of the events and organize the event data in an appropriate format [13]. In our case, we can distinguish the following general types of event: synchronous send, synchronous receive, asynchronous send, asynchronous receive, read on a shared object, and write on a shared object. Depending on the specific packages used by the program, many variations are possible within these basic categories. Since different types of events require different sets of data, it is impossible to define a single event format for all types of events. Below are some commonly used data associated with an event:
· the thread executing the event
· the vector timestamp of the event
· the type of the event
· the name of the event
· the SYN-object accessed by the event (e.g., the variable accessed by a read or write, or the channel accessed by a send or receive)
One major design issue is whether the
size of a vector timestamp is fixed (i.e., whether the number of threads in a
Java program is fixed). JaDA allows the number of threads in a Java program to
change during an execution. This decision makes the implementation of vector
timestamps complicated, but allows more Java programs to use JaDA. We have
defined a class called JKSynEvent, which contains information common to all
types of events. For each type of event, we define a new class, which inherits
from JKSynEvent and contains additional data needed for the event type. By
doing so, we keep a hierarchical structure of event formats and retain the flexibility of defining new event formats if necessary.
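A minimal sketch of this event-format hierarchy is given below; JKSynEvent is the class named above, but its fields and the example subclass are illustrative assumptions rather than JaDA's actual definitions.

// Sketch only: data common to all SYN-events (field names are assumptions).
class JKSynEvent {
    String threadName;   // the thread executing the event
    int[]  timestamp;    // vector timestamp of the event
    String type;         // type of the event
    String name;         // name of the event
    String synObject;    // name of the SYN-object accessed by the event
}

// One possible specialization: a receive event carrying message-specific data.
class JKReceiveEvent extends JKSynEvent {
    String channel;      // channel on which the message was received
    String sender;       // thread that sent the received message
}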

4.3 Logging traces to file

Dynamic analysis involves dealing with information about the events performed during an execution. This information must be structured in such a way as to allow easy and quick access. As mentioned earlier, JaDA uses distributed control, in conjunction with partially ordered traces, for dynamic analysis.

Fig. 2 - Dynamic analysis requires additional information to be maintained for each thread (especially in the tracing phase). Such information can be placed in an instance of the class called JKThread, sandwiched between the classes of a specific thread and the class Thread (belonging to the java.lang package). In the transformed program, a cast operator on the static method currentThread() of class Thread is used to access the additional thread information. Diagram A) shows the class diagram for a typical class implementing the body of a Java thread; B) shows the insertion of class JKThread; C) shows how a reference to the current JKThread object is obtained through a cast on the currentThread() method.

There are two different types of partially ordered traces [13]: a) based on threads, and b) based on SYN-objects. A thread-based partially ordered trace has one "partial" file (or trace) for each thread, while a SYN-object-based partially ordered trace has one "partial" file for each SYN-object. JaDA chooses to use SYN-object-based partially ordered traces. We define a class called JKSyncObjCtrl, which contains methods for accessing SYN-object-based partial files. For each SYN-object, an instance of JKSyncObjCtrl is created to control accesses to this SYN-object. Fig. 3 shows the interactions between threads and SYN-object-based partial files. Each partial file has the extension ".pef," denoting "partial event file." Also, each SYN-object-based partially ordered trace contains a header file with information related to the trace.
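The following sketch suggests how a JKSyncObjCtrl-like controller might serialize accesses to a SYN-object's partial file; only the class name and the ".pef" extension come from the text above, while the method names and the textual record format are assumptions made for illustration.

import java.io.*;

// Illustrative sketch (not JaDA's actual code): one controller per SYN-object,
// appending that object's events to its own ".pef" partial event file.
class JKSyncObjCtrl {
    private final PrintWriter out;

    JKSyncObjCtrl(String synObjName) throws IOException {
        out = new PrintWriter(new FileWriter(synObjName + ".pef", true));  // append mode
    }

    // Called by transformed code whenever an event on this SYN-object occurs;
    // synchronized so that writes from different threads do not interleave.
    synchronized void logEvent(String threadName, String eventType, String timestamp) {
        out.println(threadName + " " + eventType + " " + timestamp);
        out.flush();
    }
}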

4.4 On-the-fly computation of vector timestamps

Every time we want to associate information with the events of an execution in order to understand their causal relationships, we have to compute a vector timestamp for each event [11]. This vector timestamp assignment can be done in two different ways: using either "on the fly" or "post mortem" algorithms [1]. Both solutions have advantages and drawbacks, but we have chosen the first one for several reasons:
· We do not consider real-time constraints, so we do not care about any "probe effect" or "heisenbugs" [6]. Clearly, assigning timestamps "on the fly" increases the interference with the original program, but a correct program must behave in the expected way despite any delay introduced by the instrumentation.
· Since trace files are usually large, it is convenient to assign timestamps during the execution, when the file must be accessed anyway, instead of manipulating the files twice (first for event logging, then for timestamp assignment).
· The on-the-fly assignment avoids a long wait before race analysis can start.
· The time overhead of timestamp computation and logging is distributed over the whole execution time.
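The VectTS object created in the JKThread constructor of §4.1 can be thought of as a standard vector clock. The sketch below follows the usual update rules; its internals, the extra constructor parameter, and the fixed vector size (which JaDA actually avoids, since the number of threads may change during an execution) are simplifications of ours, not JaDA's actual implementation.

// Simplified vector-clock sketch; the real VectTS class may differ.
class VectTS {
    private final int[] clock;
    private final int owner;           // index of the thread owning this clock

    VectTS(int numThreads, int owner) {
        this.clock = new int[numThreads];
        this.owner = owner;
    }

    // Called when the owning thread executes a local SYN-event.
    synchronized void tick() {
        clock[owner]++;
    }

    // Called when the owning thread receives a message or reads a shared object:
    // take the component-wise maximum, then advance the local entry.
    synchronized void merge(int[] other) {
        for (int i = 0; i < clock.length; i++) {
            clock[i] = Math.max(clock[i], other[i]);
        }
        clock[owner]++;
    }

    synchronized int[] snapshot() {
        return clock.clone();
    }
}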

4.5 Using templates for dealing with shared data

We have already discussed the reasons for placing most of the transformations in the packages used, instead of in the program itself. This approach is possible because of our focus on high-level events. Sometimes, however, our analysis must also consider accesses to shared data made through the direct use of Java's basic synchronization capabilities. Recall that Java objects have an implicit lock, which can be handled implicitly (through the synchronized statement) to ensure mutual exclusion when accessing them or specific parts of code (indications on how to use this and other basic Java concurrent constructs can be found in [8]). To deal with accesses to shared data in the analyzed program, we must insert additional code directly within the original program. This inserted code must provide the tracing and replay capabilities and must encompass the variables and procedures for handling vector timestamps, according to the algorithms described in [1]. Even in these cases, the transformed program should keep the original control structure visible. To ensure this property, we have designed templates for the additional code to be placed where shared accesses are present. In our approach, we assume that a synchronized class is used for hiding shared data, and that its methods provide read and write operations on the data (i.e., a synchronized class is used as a monitor-like construct). This way of coping with the problem of shared data is very similar to that shown in [3].

Fig. 3 - A schematic view of the interaction among threads and the partial files corresponding to SYN-objects. Each (transformed) thread has a private instance of the class JKThread, which contains the additional data required by dynamic analysis. Moreover, the methods to manage the partial files are placed inside the JKSyncObjCtrl class, and every thread must use them for accessing the partial files.
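As an illustration of such a template, consider the following monitor-like class hiding a shared counter; the hook methods stand for the tracing, replay, and timestamp code that the template would insert, and their names are placeholders rather than JaDA's actual API.

// Sketch of a transformed monitor-like class hiding a shared integer.
class SharedCounter {
    private int value;

    public synchronized int read() {
        replayControl("read");   // in replay mode, wait for this event's turn
        int v = value;
        traceEvent("read");      // log the event and update the vector timestamp
        return v;
    }

    public synchronized void write(int newValue) {
        replayControl("write");
        value = newValue;
        traceEvent("write");
    }

    // Placeholders for the code generated from the template.
    private void replayControl(String op) { }
    private void traceEvent(String op)    { }
}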

4.6 Race analysis of collected traces

Once a trace has been collected, we can use it to verify properties of the execution. A particular type of inspection of the characteristics of an execution can be done through race analysis, as previously mentioned in §2.2. Since Java threads can interact using both message passing and shared variables, in JaDA it is possible to compute race sets for both receive and read events. This operation is not particularly difficult, since the trace contains the vector timestamp of each event. Moreover, it is possible to use two different relations for causality detection: the weak and the strong happened-before relations [1]. Of course, the general algorithms presented in [14] and [1] had to be adapted to the particular structure of the Java environment, tailoring them to the peculiarities of the concurrent constructs and SYN-objects used in the program.
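For instance, with vector timestamps available, a basic check of the following kind (a sketch, not JaDA's actual routine) identifies pairs of events that are causally unordered and therefore candidates for a race set:

// Returns true if neither timestamp happened before the other,
// i.e., the two events are concurrent.
static boolean concurrent(int[] ts1, int[] ts2) {
    boolean firstLeqSecond = true;
    boolean secondLeqFirst = true;
    for (int i = 0; i < ts1.length; i++) {
        if (ts1[i] > ts2[i]) firstLeqSecond = false;
        if (ts2[i] > ts1[i]) secondLeqFirst = false;
    }
    return !firstLeqSecond && !secondLeqFirst;
}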

5. Current implementation of JaDA

At present, the JaDA implementation covers the basic features for shared data access, as described in §4.5, and some of the concurrent constructs in the Synchronization package of [7]. Without describing in depth the techniques used for the transformation of the package, we can say that the new code inserted in each class tries to leave the original code clearly visible, and that the added portions try to mimic the structure and behavior of the original code. It is worth underlining that the code for tracing is placed after the lines corresponding to an event, whereas the code for replay has to be placed directly before the event. Moreover, as a consequence of our philosophy of hiding the transformations from the original program, the number and types of the parameters in every public method must remain the same.
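The following sketch illustrates these constraints on a hypothetical class of a communication package: the public signature of send() is unchanged, the replay control precedes the event, and the tracing code follows it (the class and helper names are invented for the example).

// Sketch of a transformed asynchronous channel in a communication package.
class TracedChannel {
    private final java.util.List<Object> queue = new java.util.LinkedList<Object>();

    public synchronized void send(Object message) {
        replayControl();          // replay code placed directly before the event
        queue.add(message);       // the original code of the method, left visible
        traceSend(message);       // tracing code placed after the event
    }

    private void replayControl()            { /* in replay mode, wait for this send's turn */ }
    private void traceSend(Object message)  { /* log the send event to the SYN-object's partial file */ }
}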

JaDA is composed of the following three packages:
· JaDAkernel: contains classes and interfaces for supporting basic functions in dynamic analysis and providing services needed by the other JaDA packages.
· JaDAmsg: contains classes and interfaces that implement various message-passing constructs with dynamic analysis capabilities.
· JaDAvar: contains classes and interfaces that implement various variable-sharing constructs with dynamic analysis capabilities.
Currently, we have implemented classes in package JaDAkernel to provide the functions mentioned in §4.1 through §4.4. For package JaDAmsg, we have revised classes in [7] to add tracing and replay capabilities. For package JaDAvar, we plan to include classes and interfaces that simulate various types of semaphores and monitors. Considering the goals of JaDA, an academic study could be limited to a small portion of a single package, but that would leave no serious possibility of carrying out relevant empirical experiments using JaDA: the larger the scope of the tool, the greater the benefits we can obtain from it.

6. Conclusions

In recent years, several approaches have been proposed for analyzing, testing and debugging concurrent
programs. Since Java is becoming a major language for writing concurrent
programs, static and dynamic analysis of concurrent Java programs are important
research topics. Some issues on static analysis of concurrent Java programs are
discussed in [4]. In this paper, we have described the design (and some
implementation details) of JaDA, which provides testing and debugging tools for
concurrent Java programs. To collect run-time information or control program
execution, JaDA requires transformation of a concurrent Java program into a
slightly different Java program. We have shown that by modifying Java classes
that support concurrent programming, Java application programs only need minor
modifications. We have also presented a novel approach to managing threads that
are needed for testing and debugging of concurrent Java programs. As mentioned
in §3, JaDA is intended to accomplish the following goals for concurrent Java
programs: a) to investigate the use of object-oriented technology for building
dynamic analysis tools, b) to provide an integrated and extensible environment
that allows easy implementation of different dynamic analysis techniques, and
c) to support empirical studies of dynamic analysis techniques. In this paper,
we have addressed major issues related to the first two goals. We have made
significant progress in the implementation of JaDA. Soon we will use JaDA to
carry out some empirical studies of testing and debugging of concurrent Java
programs. We will also investigate extensions of JaDA to increase the quality
and reliability of concurrent Java programs.

Acknowledgments

We are grateful to Robert Harris, Bengi Karacali, Naveen Sarabu, and Jun Zhou for their contributions to the design and implementation of JaDA. We also wish to thank the anonymous reviewers for their helpful comments on an earlier version of this paper.

References

[1] A. Bechini and K. C. Tai, "Timestamps for Programs Using Messages and Shared Variables," in Proc. of 18th IEEE Inter. Conf. on Distributed Computing Systems, May 1998.
[2] A. Bechini, J. Cutajar, and C. A. Prete, "A Tool for Testing of Parallel and Distributed Programs in Message Passing Environments," in Proc. of 9th Mediterranean Electrotechnical Conf., May 1998.
[3] R. H. Carver and K. C. Tai, "Replay and Testing for Concurrent Programs," IEEE Software, March 1991, pp. 66-74.
[4] J. C. Corbett, "Constructing Compact Models of Concurrent Java Programs," in Proc. of ACM Inter. Symp. on Software Testing and Analysis (ACM Software Engineering Notes, Vol. 23, No. 2, March 1998), pp. 1-11.
[5] C. J. Fidge, "Logical Time in Distributed Systems," IEEE Computer, Aug. 1991, pp. 28-33.
[6] J. Gait, "A Probe Effect in Concurrent Programs," Software-Practice and Experience, Vol. 16, No. 3, March 1986, pp. 225-233.
[7] S. J. Hartley, "Concurrent Programming: The Java Programming Language," Oxford University Press, 1998.
[8] D. Lea, "Concurrent Programming in Java: Design Principles and Patterns," Addison-Wesley, 1997.
[9] C. E. McDowell and D. P. Helmbold, "Debugging Concurrent Programs," ACM Computing Surveys, Vol. 21, No. 4, Dec. 1989, pp. 593-622.
[10] R. H. B. Netzer, "Optimal Tracing and Replay for Debugging Shared-Memory Parallel Programs," in Proc. ACM/ONR Workshop on Parallel and Distributed Debugging, 1993, pp. 1-11.
[11] R. Schwarz and F. Mattern, "Detecting Causal Relationships in Distributed Computations: In Search of the Holy Grail," Distributed Computing, Vol. 7, 1994, pp. 149-174.
[12] K. C. Tai, R. H. Carver, and E. E. Obaid, "Debugging Concurrent Ada Programs by Deterministic Execution," IEEE Trans. Soft. Eng., Vol. 17, No. 1, Jan. 1991, pp. 45-63.
[13] K. C. Tai and R. H. Carver, "Testing of Distributed Programs," chapter 33 of Handbook of Parallel and Distributed Computing, edited by A. Zomaya, McGraw-Hill, 1996, pp. 955-978.
[14] K. C. Tai, "Race Analysis of Traces of Asynchronous Message-Passing Programs," Proc. 17th IEEE Inter. Conf. on Distributed Computing Systems, 1997, pp. 261-268.
[15] J. J. P. Tsai and S. J. H. Yang, eds., "Monitoring and Debugging of Distributed Real-Time Systems," IEEE Computer Society, 1995.
