The Fundamentals of Three-Phase Power Measurements
Application Note
Introduction
Although single-phase electricity is used to supply common domestic and office electrical appliances, three-phase alternating current (a.c.) systems are almost universally used to distribute electrical power and to supply electricity directly to higher power equipment.
This technical note describes the basic principles of three-phase systems and the difference between the different measurement connections that are possible.
Three-phase systems
Three-phase electricity consists of three ac voltages of identical frequency and similar amplitude. Each ac voltage 'phase' is separated by 120° from the others (Figure 1). This can be represented diagrammatically by both waveforms and a vector diagram (Figure 2).
Figure 1. Three-phase voltage waveform.
Figure 2. Three-phase voltage vectors.
Figure 3. Three single-phase supplies - six units of loss.
Figure 4. Three-phase supply, balanced load - 3 units of loss.
Three phase systems are used for two reasons:
1. The three vector-spaced voltages can be used to create a rotating field in a motor. Motors can thus be started without the need for additional windings.
2. A three-phase system can be connected to a load such that the amount of copper required (and thus the transmission losses) is half of what it would otherwise be.
Consider three single-phase systems each supplying 100W to a load (Figure 3). The total load is 3 x 100W = 300W. To supply the power, 1 amp flows through 6 wires and there are thus 6 units of loss. Alternatively, the three supplies can be connected to a common return, as shown in Figure 4. When the load current in each phase is the same the load is said to be balanced. With the load balanced, and the three currents phase shifted by 120° from each other, the sum of the current at any instant is zero and there is no current in the return line.
In a three-phase 120° system, only 3 wires are required to transmit the power that would otherwise require 6 wires. One half of the copper is required and the wire transmission losses will be halved.
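As a quick numerical check of the zero-return-current claim above, the short sketch below (not part of the original note; the frequency and amplitude are assumed) sums three balanced, 120°-shifted phase currents:

```python
import numpy as np

f = 50.0                                   # assumed supply frequency, Hz
t = np.linspace(0.0, 1.0 / f, 1000)        # one cycle
i_peak = 1.0                               # 1 A peak in each phase (assumed)

i_a = i_peak * np.sin(2 * np.pi * f * t)
i_b = i_peak * np.sin(2 * np.pi * f * t - 2 * np.pi / 3)   # -120 degrees
i_c = i_peak * np.sin(2 * np.pi * f * t + 2 * np.pi / 3)   # +120 degrees

print(np.max(np.abs(i_a + i_b + i_c)))     # return current is ~0 (rounding only)
```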
Figure 5. Wye or star connection - three- phase, four wires.
Wye or Star Connection
A three-phase system with a common connection is normally drawn as shown in Figure 5 and is known as a 'wye' or 'star' connection.
The common point is called the neutral point. This point is often grounded at the supply for safety reasons. In practice, loads are not perfectly balanced and a fourth 'neutral' wire is used to carry the resultant current. The neutral conductor may be considerably smaller than the three main conductors, if allowed by local codes and standards.
Figure 6. The sum of the instantaneous voltage at any time is zero.
Figure 7. Delta connection - three-phase, three wires.
Delta Connection
The three single-phase supplies discussed earlier could also be connected in series. The sum of the three 120° phase-shifted voltages at any instant is zero. If the sum is zero, then both end points are at the same potential and may be joined together. The connection is usually drawn as shown in Figure 7 and is known as a delta connection after the shape of the Greek letter delta, Δ.
Figure 8. V phase-phase = √3 x V phase-neutral.
Figure 9. Delta configuration with a "split-phase" or "center-tapped" winding.
Wye and Delta Comparison
The Wye configuration is used to distribute power to everyday single-phase appliances found in the home and office. Single-phase loads are connected to one leg of the wye between line and neutral. The total load on each phase is shared out as much as possible to present a balanced load to the primary three phase supply.
The wye configuration can also supply single or three-phase power to higher power loads at a higher voltage. The single-phase voltages are phase to neutral voltages. A higher phase to phase voltage is also available as shown by the black vector in Figure 8.
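The √3 relationship of Figure 8 can be verified with simple phasor arithmetic; the sketch below uses an assumed 120 V phase-to-neutral system purely for illustration:

```python
import cmath, math

v_ln = 120.0                                   # assumed 120 V phase-to-neutral
v_a = cmath.rect(v_ln, 0.0)                    # phase A phasor at 0 degrees
v_b = cmath.rect(v_ln, -2 * math.pi / 3)       # phase B phasor at -120 degrees

v_ab = v_a - v_b                               # phase-to-phase phasor
print(abs(v_ab), v_ln * math.sqrt(3))          # both about 207.8 V
```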
The delta configuration is most often used to supply higher power three-phase industrial loads. Different voltage combinations can be obtained from one three-phase delta supply, however, by making connections or 'taps' along the windings of the supply transformers. In the US, for example, a 240V delta system may have a split-phase or center-tapped winding to provide two 120V supplies (Figure 9). The center-tap may be grounded at the transformer for safety reasons. 208V is also available between the center tap and the third 'high leg' of the delta connection.
Figure 10. Single-phase, two-wire and DC measurements.
Power Measurements
Power is measured in ac systems using wattmeters. A modern digital sampling wattmeter, such as any of the Tektronix power analyzers, multiplies instantaneous samples of voltage and current together to calculate instantaneous watts and then takes an average of the instantaneous watts over one cycle to display the true power. A wattmeter will provide accurate measurements of true power, apparent power, volt-amperes reactive, power factor, harmonics and many others over a broad range of wave shapes, frequencies and power factors. In order for the power analyzer to give good results, you must correctly identify the wiring configuration and connect the analyzer's wattmeters accordingly.
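As a rough illustration of that sampling approach (not the internal implementation of any particular analyzer), the sketch below computes true power, apparent power and power factor from assumed simultaneous voltage and current samples over one cycle:

```python
import numpy as np

def wattmeter(v_samples, i_samples):
    p_true = np.mean(v_samples * i_samples)    # average of instantaneous watts
    v_rms = np.sqrt(np.mean(v_samples ** 2))
    i_rms = np.sqrt(np.mean(i_samples ** 2))
    s_apparent = v_rms * i_rms                 # volt-amperes
    return p_true, s_apparent, p_true / s_apparent   # watts, VA, power factor

# assumed test signals: ~230 Vrms, ~10 Arms lagging by 30 degrees, 50 Hz
t = np.linspace(0.0, 0.02, 2000, endpoint=False)      # one cycle of samples
v = 325.0 * np.sin(2 * np.pi * 50 * t)
i = 14.1 * np.sin(2 * np.pi * 50 * t - np.pi / 6)
print(wattmeter(v, i))
```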
Single-Phase Wattmeter Connection
Only one wattmeter is required, as shown in Figure 10. The system connection to the voltage and current terminals of the wattmeter is straightforward. The voltage terminals of the wattmeter are connected in parallel across the load and the current is passed through the current terminals which are in series with the load.
Single-Phase Three-Wire Connection
In this system, shown in Figure 11, the voltages are produced from one center-tapped transformer winding and all voltages are in phase. This is common in North American residential applications, where one 240 V and two 120V supplies are available and may have different loads on each leg. To measure the total power and other quantities, connect two wattmeters as shown in Figure 11.
Figure 11. Single-phase, three-wire.
Figure 13. Three-phase, three-wire, 2 wattmeter method.
Figure 14. Three-phase, three-wire (three wattmeter method - set analyzer to three-phase, four-wire mode).
Three-Phase Three-Wire Connection - Two Wattmeter Method
Where three wires are present, two wattmeters are required to measure total power. Connect the wattmeters as shown in Figure 13. The voltage terminals of the wattmeters are connected phase to phase.
Three-Phase Three-Wire Connection - Three Wattmeter Method
Although only two wattmeters are required to measure total power in a three-wire system as shown earlier, it is sometimes convenient to use three wattmeters. In the connection shown in Figure 14 a false neutral has been created by connecting the voltage low terminals of all three wattmeters together.
The three-wire, three-wattmeter connection has the advantages of indicating the power in each individual phase (not possible in the two-wattmeter connection) and phase to neutral voltages.
Figure 15. Three-phase, four-wire (three wattmeter method).
Three-Phase, Four-Wire Connection
Three wattmeters are required to measure total watts in a four-wire system. The voltages measured are the true phase to neutral voltages. The phase to phase voltages can be accurately calculated from the phase to neutral voltages' amplitude and phase using vector mathematics. A modern power analyzer will also use Kirchhoff's law to calculate the current flowing in the neutral line.
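The following sketch illustrates the kind of vector arithmetic described above, using assumed phasor values: phase-to-phase voltages derived from the measured phase-to-neutral phasors, and the neutral current obtained from Kirchhoff's current law:

```python
import cmath, math

def phasor(mag, deg):
    return cmath.rect(mag, math.radians(deg))

# assumed measured phase-to-neutral voltages and phase currents
v_an, v_bn, v_cn = phasor(120, 0), phasor(120, -120), phasor(120, 120)
i_a, i_b, i_c = phasor(10, -20), phasor(8, -140), phasor(12, 100)

# phase-to-phase voltages from the phase-to-neutral phasors
v_ab, v_bc, v_ca = v_an - v_bn, v_bn - v_cn, v_cn - v_an

# neutral current from Kirchhoff's current law (the four currents sum to zero)
i_n = -(i_a + i_b + i_c)
print(abs(v_ab), abs(i_n))
```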
Configuring Measurement Equipment
As shown in the sidebar, for a given number of wires, N, N-1 wattmeters are required to measure total quantities such as power. You must make sure you have a sufficient number of channels and connect them properly.
Modern multi-channel power analyzers will calculate total or sum quantities such as watts, volts, amps, volt-amperes and power factor directly, using appropriate built-in formulas. The formulas are selected based on the wiring configuration, so setting the wiring is critical to getting good total power measurements. A power analyzer with vector mathematics capability will also convert phase to neutral (or wye) quantities to phase to phase (or delta) quantities. The factor √3 can be used to convert between systems, or to scale the measurements of a single wattmeter, only on balanced, linear systems.
Understanding wiring configurations and making proper connections is critical to performing power measurements. Being familiar with common wiring systems, and remembering Blondel's Theorem, will help you get the connections right and results you can rely upon.
The role of soft computing in intelligent machines
By Clarence W. de Silva
Industrial Automation Laboratory, Department of Mechanical Engineering,
The University of British Columbia, 2054–2324 Main Mall,
Vancouver V6T 1Z4, Canada
Published online 18 June 2003
An intelligent machine relies on computational intelligence in generating its intelligent behaviour. This requires a knowledge system in which representation and processing of knowledge are central functions. Approximation is a 'soft' concept, and the capability to approximate for the purposes of comparison, pattern recognition, reasoning, and decision making is a manifestation of intelligence. This paper examines the use of soft computing in intelligent machines. Soft computing is an important branch of computational intelligence, where fuzzy logic, probability theory, neural networks, and genetic algorithms are synergistically used to mimic the reasoning and decision making of a human. This paper explores several important characteristics and capabilities of machines that exhibit intelligent behaviour. Approaches that are useful in the development of an intelligent machine are introduced. The paper presents a general structure for an intelligent machine, giving particular emphasis to its primary components, such as sensors, actuators, controllers, and the communication backbone, and their interaction. The role of soft computing within the overall system is discussed. Common techniques and approaches that will be useful in the development of an intelligent machine are introduced, and the main steps in the development of an intelligent machine for practical use are given. An industrial machine, which employs the concepts of soft computing in its operation, is presented, and one aspect of intelligent tuning, which is incorporated into the machine, is illustrated.
Keywords: soft computing; expert systems; intelligent machines; intelligent control; knowledge-based systems; artificial intelligence
1. Introduction
An 'intelligent' machine embodies artificial intelligence in a general sense, and as a result displays intelligent behaviour. The field of artificial intelligence evolved with the objective of developing computers that can think like humans. Just like the 20 billion neurons in a human brain, the hardware and software of a computer are themselves not intelligent, yet it has been demonstrated that a computer may be programmed to exhibit some intelligent characteristics of a human. Since intelligence is a soft and complex concept, it is rather unrealistic to expect a precise definition for the term. Narrow definitions can result in misleading interpretations, similar to how a proverbial group of blind people defined an elephant. Instead of trying to define intelligence, the paper will explore characteristics of intelligence and associated techniques of machine implementation.
It is the knowledge system of a machine that enables the machine to display intelligent characteristics. Representation and processing of knowledge are central functions of a knowledge system. Conventional machine intelligence has relied heavily on symbolic manipulation for processing descriptive information and 'knowledge', in realizing some degree of intelligent behaviour. Approximation is a 'soft' concept, and the capability of approximating for the purposes of comparison, pattern recognition, reasoning and decision making is a manifestation of intelligence. Human reasoning is predominantly approximate, qualitative and 'soft' in nature. Humans can effectively handle incomplete, imprecise and fuzzy information in making intelligent decisions (Zadeh 1984). Soft computing is an important branch of computational intelligence where fuzzy logic, probability theory, neural networks, and genetic algorithms are cooperatively used with the objective of mimicking the reasoning and decision-making processes of a human (Filippidis et al. 1999). Accordingly, this is an important branch of study in the area of intelligent machines, and it is the focus of the present paper.
Intelligent machines will exhibit an increased presence and significance in a wide variety of applications. Products with a 'brain' are found, for example, in household appliances, consumer electronics, transportation systems, industrial processes, manufacturing systems and services (de Silva 2003). In particular, for practical, economic and quality reasons, there exists a need to incorporate some degree of intelligence and a high level of autonomy into automated machines. This will require proper integration of such devices as sensors, actuators, signal conditioners, communicators and controllers into a machine, which themselves may have to be 'intelligent' and furthermore, appropriately distributed throughout the machine. The design, development, production and operation of intelligent machines are possible today through ongoing research and development in the field of intelligent systems and soft computing. The paper will discuss a general structure for an intelligent machine. Sensory perception and control are important functions within this architecture. Some practical issues pertaining to intelligent machines will be examined.
2. Intelligent machines
(a) Characteristics of intelligence
A complete and precise definition for intelligence is not available. Any interpretation of intelligence has to be closely related to the characteristics or outward 'appearance' of an intelligent behaviour. It is known that possession of a memory, ability to learn and thereby gain knowledge and expertise, ability to satisfactorily deal with unexpected and unfamiliar situations, and ability to reason, deduce, and infer from incomplete and qualitative information are all intelligent attributes. In particular, pattern recognition and classification play an important role in intelligent processing of information. For example, an intelligent system may be able to recognize and acquire useful information from an object that is aged or distorted, having been previously familiar with the original, undistorted object. Somewhat related is the concept of approximation, which leads one to treat the ability to approximate as a characteristic of intelligence (de Silva 1995). Approximation and qualitative or approximate reasoning fall within the subject of soft computing.
In the context of machine intelligence, the term machine is normally used to denote a computer or a computing machine. In this sense machine intelligence and computational intelligence are synonymous. An intelligent machine, however, takes a broader meaning than an intelligent computer. It may be used to represent any process, plant, system, device or machinery that possesses machine intelligence. Historically, machine intelligence and artificial intelligence (AI) have also come to mean the same thing. The developments in the field of soft computing have taken a somewhat different path from traditional AI, yet they have contributed to the general goal of realizing intelligent machines, thereby broadening and rationalizing the interpretation of machine intelligence.
Figure 1. The structure of an intelligent process.
(b) The structure of an intelligent machine
In general terms an intelligent machine may be treated as an intelligent process. The structure of an intelligent process is given in figure 1, where the important components of the system and the flow of signals/information are indicated. Sensing, sensory perception, actuation, control, and knowledge-based decision making are crucial for the proper performance of an intelligent process. In the general structure shown in figure 1, machines, materials, information and knowledge are used to generate products and services according to some specifications. Revenue generation will be an economic goal of the process. Quality of the products and services, speed of operation, reliability, environmental considerations, and so on, may be included as well within the overall goals. Assessment of the products and services and also the performance of the process would be crucial here. Since the process models and goals may not be mathematically precise, and since there could be unexpected and unknown inputs and disturbances to the process, concepts of artificial intelligence and soft computing would be useful in the tasks of modelling, control, assessment, adaptation and modification of the system.
No matter what the goals of an intelligent machine are, sensing will play a significant role in its operation. For example, if a machine possesses human emotions, then one could convincingly classify it as an intelligent machine. But the mechanism of decision making and control that would generate such characteristics needs to have a means of sensing these characteristics based on the machine operation. Both intelligent sensing and intelligent control are important in this respect. In particular, an intelligent sensor may consist of a conventional sensor with further processing of signals and knowledge-based 'intelligent' decision making, as indicated in figure 1.
Even the most sophisticated intelligent machines could benefit from a human interface, which would provide an interface for 'natural' intelligence. The interaction with human experts can be made both offline and online. System upgrading and innovation may result from research and development, and can be implemented through such communication interfaces.
(c) The technology needs
Even though a considerable effort has gone into the development of machines that somewhat mimic humans in their actions, the present generation of intelligent machines do not claim to possess all the capabilities of human intelligence, for example, common sense, display of emotions and inventiveness. Significant advances have been made, however, in machine implementation of characteristics of intelligence such as sensory perception, pattern recognition, knowledge acquisition and learning, inference from incomplete information, inference from qualitative or approximate information, ability to handle unfamiliar situations, adaptability to deal with new yet related situations, and inductive reasoning. Much research and development would be needed in these areas, pertaining to techniques, hardware, and software before a machine could reliably and consistently possess the level of intelligence of, say, a dog.
For instance, consider a handwriting recognition system, which is a practical example of an intelligent system. The underlying problem cannot be solved through simple template matching, which does not require intelligence. Handwriting of the same person can vary temporally, due to various practical shortcomings such as missing characters, errors, non-repeatability, physiological variations, sensory limitations and noise. It should be clear from this observation that a handwriting recognition system has to deal with incomplete information and unfamiliar objects (characters), and should possess capabilities of learning, pattern recognition, and approximate reasoning, which will assist in carrying out the intelligent functions of the system. Techniques of soft computing are able to address such needs of intelligent machines.
3. Artificial intelligence and soft computing
(a) Artificial intelligence
According to Marvin Minsky of the Massachusetts Institute of Technology (de Silva 1997), 'artificial intelligence is the science of making machines do things that would require intelligence if done by men'. In this context it is the outward behaviour of a machine rather than the physical makeup that is considered in ascertaining whether the machine is intelligent. An analogy may be drawn with the intelligent behaviour of a human whose brain itself is the physical entity that assists in realizing that behaviour. Because it is the thought process that leads to intelligent actions, substantial effort in AI has been directed at the development of artificial means of mimicking the human thought process. These endeavours fall within the field of cognitive science.
Conventional AI has relied heavily on symbolic manipulation for the processing of descriptive information and 'knowledge' in realizing some degree of intelligent behaviour. The knowledge itself may be represented in a special high-level language. The decisions that are made through processing of such 'artificial' knowledge, perhaps in response to data such as sensory signals, should possess characteristics of intelligent decisions of a human. Knowledge-based systems and related expert systems are an outcome of the efforts made by the AI community in their pursuit of intelligent computers and intelligent machines.
In the problem of knowledge-based decision making, sensory information and any other available data on the process are evaluated against a knowledge base concerning the specific application, making use of an inference-making procedure. Typically, this procedure consists of some form of 'matching' of the abstracted data with the knowledge base (de Silva 1995). In particular, for a knowledge base K and a set of data or information D on the particular process, the procedure of arriving at a decision or inference I may be expressed as (de Silva 1997)
I = M[P(D),K], (3.1)
in which the 'pre-processing operator' P(·) converts the context information on the process into a form that is compatible with K. Knowledge-base matching of the pre-processed information is performed using the matching operation M[·]. Knowledge-based systems, such as rule-based systems and fuzzy decision-making systems in particular, follow this model.
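A minimal sketch of this model, with illustrative rules and thresholds that are not from the paper, might look as follows: raw data D are pre-processed into symbolic facts and matched against a small rule base K to produce an inference I:

```python
def preprocess(data):
    # P(.): abstract raw data into symbolic facts (thresholds are assumed)
    facts = set()
    if data["temperature_C"] > 80:
        facts.add("temperature is high")
    if data["coolant_lpm"] >= 30:
        facts.add("coolant flow is maximum")
    return facts

# K: knowledge base as (condition set, action) rules -- illustrative content
K = [
    ({"temperature is high"}, "increase coolant flow"),
    ({"temperature is high", "coolant flow is maximum"}, "shut down plant"),
]

def match(facts, rules):
    # M[., .]: fire every rule whose condition part is satisfied by the facts
    return [action for condition, action in rules if condition <= facts]

D = {"temperature_C": 95, "coolant_lpm": 12}   # context data (assumed)
I = match(preprocess(D), K)                    # the inference I of equation (3.1)
print(I)                                       # ['increase coolant flow']
```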
(b) Knowledge-based systems
The structure of a typical knowledge-based system is shown in figure 2. Knowledge and expertise form the 'knowledge base' of an intelligent system. This may be represented as a set of if–then rules (productions). New information that is generated or arrives from external sources such as sensors and human interfaces is stored in the database. These data represent the 'context' of the decision-making process. The knowledge-based system is able to make perceptions (e.g. sensory perception) and new inferences or decisions using its reasoning mechanism (inference engine), by interpreting the meaning and implications of the new information within the capabilities of the existing knowledge base. These inferences may form the outputs of the knowledge-based system. The associated decision-making task is an intelligent processing activity, which in turn may lead to enhancement, refinement and updating
of the knowledge base itself.
Production systems are rule-based systems, which are appropriate for the representation and processing of knowledge (i.e. knowledge-based decision making) in intelligent machines (de Silva 1995; Filippidis et al. 1999). An expert system is a good example of a production system. Knowledge representation using a set of if–then rules is not an unfamiliar concept. For example, a maintenance or trouble-shooting manual of a machine (e.g. automobile) contains such rules, perhaps in tabular form. Also, a production system may be used as a simple model for human reasoning: sensory data fire rules in the short-term memory, which will lead to the firing of more complex rules in the long-term memory.
Figure 2. The structure of a knowledge-based system.
The operation (processing) of a typical rule-based system proceeds as follows: new data are generated (say, from sensors or external commands) and stored in appropriate locations within the database of the system. This is the new context. The inference engine tries to match the new data with the condition part (i.e. the if part or the antecedent) of the rules in the knowledge base. This is called rule searching. If the condition part of a rule matches the data, that rule is 'fired', which generates an action dictated by the action part (i.e. the then part or the consequent) of the rule. In fact, firing of a rule amounts to the generation (inference) of new facts, and this in turn may form a context that will lead to the satisfaction (firing) of other rules.
(c) Reasoning and conflict resolution
Reasoning is the procedure which a knowledge-based system adopts in making an inference. The inference engine is responsible for carrying out reasoning. In a rule-based system, appropriate rules are fired to generate the inferences, and this involves
some form of matching the rules with data. In a production system, the following two strategies are commonly employed to perform reasoning and make inferences:
(i) forward chaining;
(ii) backward chaining.
Forward chaining is a data-driven search method. Here, the rule base is searched to match an 'if' part of a rule (condition) with known facts or data (context), and if a match is detected, the corresponding rule is fired (i.e. the 'then' part or 'action' part of the rule is activated). Obviously, this is a direct and bottom-up strategy. Actions
could include creation, deletion and updating of data in the database. One action can lead to firing of one or more new rules. The inference engine is responsible for sequencing the matching (searching) and action cycles. A production system that uses forward chaining is termed a forward production system (FPS). This type of system is particularly useful in knowledge-based control, for example, in driving an intelligent machine.
In backward chaining, which is a top-down search process, a hypothesized conclusion is matched with rules in the knowledge base in order to determine the context (facts) that supports the particular conclusion. If enough facts support a hypothesis, that particular hypothesis is accepted. Backward chaining is useful in situations that demand a logical explanation or justification for each action (for example, in fault diagnosis of an intelligent machine, and in theorem proving). A production system that uses backward chaining is called a backward chaining system (BCS).
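A compact forward-chaining sketch, with assumed rule content, is shown below; satisfied rules fire, and the inferred facts may in turn satisfy further rules:

```python
rules = [
    ({"temperature is high"}, "coolant demand is high"),
    ({"coolant demand is high", "coolant flow is maximum"}, "shut down plant"),
]

def forward_chain(facts, rules):
    fired = True
    while fired:                               # repeat until no rule fires
        fired = False
        for condition, consequent in rules:
            if condition <= facts and consequent not in facts:
                facts.add(consequent)          # firing a rule infers a new fact
                fired = True
    return facts

context = {"temperature is high", "coolant flow is maximum"}
print(forward_chain(set(context), rules))
```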
When the context data are matched with the condition parts of the rules in a rule base, it may be possible that more than one rule is satisfied. The set of rules that is satisfied in this manner is termed the conflict set. A method of conflict resolution has to be invoked to select from the conflict set the rule that would be fired. Methods of conflict resolution include the following:
(1) first match;
(2) toughest match;
(3) privileged match;
(4) most recent match.
In the first method, the very first rule that is satisfied during searching will be fired. This is a very simple strategy but may not produce the best performance in general. In the second method, the rule containing the most condition elements, within the conflict set, will be fired. For example, suppose that the conflict set has the following two rules.
(i) If the temperature is high then increase the coolant flow rate.
(ii) If the temperature is high and the coolant flow rate is maximum, then shut down the plant.
Here, the toughest match is the second rule.
The rules in a rule base may be assigned various weightings and priorities depending on their significance. The privileged match is the rule with the highest priority, within the conflict set. For example, a priority may be assigned in proportion to the toughness of the match. In this case methods 2 and 3 listed above are identical. Alternatively, a priority may be assigned to a rule on the basis of the significance or consequences of its acts.
The most recent match is the rule in the conflict set whose condition part satisfies the most recent entries of data. In this method, a higher priority is given to more recently arrived data in the database.
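Two of these strategies can be sketched as simple selection functions over a conflict set (the rule data below are assumed for illustration):

```python
# conflict set: (condition elements, action, assigned priority)
conflict_set = [
    ({"temperature is high"}, "increase coolant flow", 1),
    ({"temperature is high", "coolant flow is maximum"}, "shut down plant", 5),
]

def toughest_match(rules):
    # the rule with the most condition elements wins
    return max(rules, key=lambda r: len(r[0]))

def privileged_match(rules):
    # the rule with the highest assigned priority wins
    return max(rules, key=lambda r: r[2])

print(toughest_match(conflict_set)[1])         # 'shut down plant'
print(privileged_match(conflict_set)[1])       # 'shut down plant'
```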
(d) Approximation and intelligence
Approximation is a 'soft' concept and is related to intelligence (Zadeh 1984; Filippidis et al. 1999). The capability of approximation for the purpose of comparison,
pattern recognition, reasoning and decision making is a manifestation of intelligence (e.g. dealing with incomplete and subjective information, unfamiliar situations, comparison and judgment). There are many concepts of approximation, examples of which include imprecision, uncertainty, fuzziness and belief. These concepts are not identical even though the associated methods of information representation and processing may display analytical similarities.
Several concepts of approximation (Klir & Folger 1988) that may be useful in knowledge-based reasoning are summarized in table 1. In particular, uncertainty deals with crisp sets, while belief and plausibility may be defined with respect to sets with either a crisp or non-crisp (fuzzy) boundary. It is possible to incorporate and employ these various concepts into a knowledge base in a complementary, rather than mutually exclusive, manner. In particular, uncertainty and belief levels may be included as qualifiers in a rule, for example (de Silva 1997),
(i) if A then B with probability p;
(ii) if A then B with belief b.
In each case, the antecedent A and the consequent B can be either crisp or non-crisp (fuzzy) quantities. In the fuzzy case the level of fuzziness of a parameter is determined by the degree of membership of that parameter in the associated fuzzy set. Apart from that, a level of certainty p, a truth value m, a level of belief b, etc., may be associated with each rule, so as to qualify the exactness or confidence of each rule on the basis of additional statistical information, perception, evidence, and so on. In this manner, the concepts of approximation may be used in a complementary way to more realistically and authentically represent a knowledge base and to make valid inferences using it.
It is important to recognize the characteristic features of the concepts of approximation listed in table 1, so that one could avoid using them incorrectly and inappropriately. In table 1 the main characteristic that differentiates each concept from the others is indicated, and an example illustrating that characteristic is also given. In representing and analysing these concepts, one may treat a condition as membership within a set. The set itself may be either fuzzy or crisp, depending on the concept. In particular, it should be noted that both uncertainty and imprecision deal with crisp sets and fuzziness uses fuzzy sets. Belief and plausibility are applicable to both crisp and fuzzy sets.
(e) Soft computing and approximate reasoning
Soft computing has effectively complemented conventional AI in the area of machine intelligence. Computing paradigms of fuzzy logic, neural networks and genetic algorithms are the main constituents of soft computing, which can be used in intelligent machines either separately or synergistically. Fuzzy logic is useful in representing human knowledge in a specific domain of application and in reasoning with that knowledge to make useful inferences or actions. For example, fuzzy logic may be employed to represent, as a set of 'fuzzy rules', the knowledge of a human in operating a machine. This is the process of knowledge representation. Then, a rule of inference in fuzzy logic may be used in conjunction with this 'fuzzy' knowledge base to make decisions on operation of the machine for a given set of machine observations (conditions or context). This task concerns 'knowledge processing' (de Silva 1995). In this sense, fuzzy logic serves to represent and process the knowledge of a human in operating a given machine.
Artificial neural networks (NNs) are massively connected networks of computational 'neurons'. By adjusting a set of weighting parameters of an NN, it may be 'trained' to approximate an arbitrary nonlinear function to a required degree of accuracy. The biological analogy here is the neuronal architecture of a human brain. Since machine intelligence involves a special class of highly nonlinear decision making, NNs may be appropriately employed there, either separately or in conjunction with other techniques such as fuzzy logic (Jang et al. 1997). Fuzzy-neural techniques
are applicable in intelligent decision making, in particular when machine learning is involved and parameter changes and unknown disturbances have to be compensated for.
Genetic algorithms (GAs) are derivative-free optimization techniques that can evolve through procedures analogous to biological evolution (Filippidis et al. 1999). GAs belong to the area of evolutionary computing. They represent an optimization approach where a search is made to 'evolve' a solution algorithm that will retain the 'most-fit' components in a procedure that is analogous to biological evolution through natural selection, crossover and mutation. It follows that GAs are applicable in machine intelligence, particularly when optimization is an objective.
To summarize the biological analogies of fuzzy, neural, and genetic approaches: fuzzy techniques attempt to approximate human knowledge and the associated reasoning process; NNs are a simplified representation of the neuron structure of a brain; genetic algorithms follow procedures that are crudely similar to the process of evolution in biological species.
Techniques of soft computing are powerful by themselves in achieving the goals of machine intelligence. Furthermore, they have a particular appeal in view of the biological analogies that exist. This is no accident, because in machine intelligence it is the behaviour of human intelligence that would be mimicked. Biological analogies and key characteristics of several techniques of machine intelligence are listed in table 2. Conventional AI, which typically uses symbolic and descriptive representations and procedures for knowledge-based reasoning, is listed as well for completeness.
Consider the general problem of approximate reasoning. In this case the knowledge base K is represented in an 'approximate' form, for example, by a set of if–then rules with antecedent and consequent variables that have approximate descriptors. First, the data D are pre-processed according to
FD = FP(D), (3.2)
which is a data abstraction procedure. In the situation of fuzzy logic, this corresponds to a 'fuzzification' and establishes the membership functions or membership grades of D. Then with an 'approximate' knowledge base FK, the inference FI is obtained through approximate reasoning, as denoted by
FI = FK ∘ FD. (3.3)
In the case of fuzzy decision making, what is used is fuzzy-predicate approximate reasoning, which is usually based on the compositional rule of inference (de Silva 1995).
4. Techniques of soft computing
The primary techniques of soft computing fall within four areas: fuzzy logic, NNs, genetic algorithms (or evolutionary computing) and probabilistic methods. The mixed or hybrid techniques, which synergistically exploit the advantages of two or more of these areas, are quite effective. Techniques of soft computing are widely used in knowledge representation and decision making associated with intelligent machines. The basics of the four areas of soft computing are given in the following subsections.
(a) Fuzzy logic
Fuzzy logic is useful in representing human knowledge in a specific domain of application and in reasoning with that knowledge to make useful inferences or actions. The conventional binary (bivalent) logic is crisp and allows for only two states. This logic cannot handle fuzzy descriptors, examples of which are 'fast', which is a fuzzy quantifier, and 'weak', which is a fuzzy predicate. They are generally qualitative, descriptive and subjective and may contain some overlapping degree of a neighbouring quantity, for example, some degree of 'slowness' in the case of the fuzzy quantity 'fast'. Fuzzy logic allows for a realistic extension of binary, crisp logic to qualitative, subjective and approximate situations, which often exist in problems of intelligent machines where techniques of artificial intelligence are appropriate.
In fuzzy logic, the knowledge base is represented by if–then rules of fuzzy descriptors (de Silva 1995). An example of a fuzzy rule would be 'if the speed is slow and the target is far, then moderately increase the power', which contains the fuzzy descriptors slow, far and moderate. A fuzzy descriptor may be represented by a membership function, which is a function that gives a membership grade between 0 and 1 for each possible value of the fuzzy descriptor it represents.
Mathematically, a fuzzy set A is represented by a membership function, or a 'possibility function' (Dubois & Prade 1980) of the form
Fz[x ∈ A] = μA(x) : Re → [0, 1], (4.1)
in which each element of A, as denoted by a point x on the real line Re, is mapped to a value μ, which may lie anywhere in the real interval of 0 to 1. This value is the 'grade of membership' of x in A. If the membership grade is greater than zero but less than unity, then the membership is not crisp (i.e. it is fuzzy), and the element has some possibility of being within the set and also some complementary possibility of being outside the set. In that case, the element falls on the fuzzy boundary of the set. The conventional binary (bivalent) logic allows for only two states, represented by 0 and 1, of set membership: an element is either completely inside or completely outside a set. Through the use of membership grades which lie between 0 and 1, fuzzy sets and associated fuzzy logic allow for a realistic extension and generalization of binary, crisp logic.
A fuzzy rule itself may be represented as a grouping of membership functions. An example of two rules,
(i) if A1 and B1 then C1,
(ii) if A2 and B2 then C2,
Figure 3. An illustration of fuzzy decision making.
is sketched in figure 3, where triangular membership functions are used. Here A1 and A2 may represent two fuzzy states such as 'near' and 'far' of the variable 'destination'; B1 and B2 may represent two fuzzy states such as 'fast' and 'slow' of the variable 'speed'; and C1 and C2 may represent the fuzzy actions 'decrease' and 'increase' of the variable 'power'. If the actual distance to the target is x0 and the actual speed is y0, then each fuzzy rule will contribute an action as shown by the shaded region of the membership functions for C1 and C2 in figure 3. The net power-change action (decision) corresponding to these readings of distance and speed is given by the overall shaded region, indicated by C' in the figure. This is arrived at according to a particular decision-making procedure: the sup–min composition, which is commonly used in fuzzy logic (de Silva 1995). If a crisp decision is required, one may use the centroid z0 of the decision membership function C' as shown.
Here, the fuzzy matching operator FM that corresponds to M in equation (3.1) is in fact the composition operator ∘ and may be expressed as
μI = sup_{x ∈ X} min(μK, μD), (4.2)
where μK is the multidimensional membership function of the fuzzy rule base, μD is the membership function of the fuzzified data, μI is the membership function of the fuzzy inference, and x ∈ X is the common set of context variables used in knowledge-base matching.
Also, the supremum (sup) and minimum (min) operations are given by the usual definitions (de Silva 1995). Often, a crisp value ĉ is needed for the intelligent action that has to be carried out in the physical process (intelligent machine) and may be determined by the centroid method; thus,
ĉ = ∫_S c μI(c) dc / ∫_S μI(c) dc, (4.3)
where c is the independent variable of the inference I, and S is the support set (or the region of interest) of the inference membership function.
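A minimal sketch of the two-rule inference of figure 3, with assumed triangular membership functions, min activation of each rule, max aggregation of the clipped consequents and centroid defuzzification as in equation (4.3), is given below:

```python
import numpy as np

def tri(x, a, b, c):
    # triangular membership function with feet at a and c and peak at b
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# universe of discourse for the output variable "power change"
z = np.linspace(-1.0, 1.0, 1001)
C1 = tri(z, -1.0, -0.5, 0.0)      # C1: "decrease power"
C2 = tri(z, 0.0, 0.5, 1.0)        # C2: "increase power"

x0, y0 = 40.0, 45.0               # crisp readings: distance x0, speed y0 (assumed)

# rule activations: min of the antecedent membership grades
w1 = min(tri(x0, -100, 0, 100), tri(y0, 0, 60, 120))    # if near and fast then C1
w2 = min(tri(x0, 0, 100, 200), tri(y0, -60, 0, 60))     # if far and slow then C2

# clip each consequent by its activation and aggregate with max
mu_C = np.maximum(np.minimum(w1, C1), np.minimum(w2, C2))

# centroid defuzzification (equation (4.3)) gives the crisp decision z0
z0 = np.sum(z * mu_C) / np.sum(mu_C)
print(z0)
```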
Fuzzy logic is commonly used in direct control of processes and machinery. In this case the inferences of a fuzzy decision-making system form the control inputs to the
process. These inferences are arrived at by using the process responses as the inputs (context data) to the fuzzy system. The structure of a direct fuzzy controller is shown in figure 4. The control scheme shown here may be used to drive the components of an intelligent machine.
The simple example of goal pursuit (vehicle driving) shown in figure 3 was given to illustrate the similarity of fuzzy decision making to approximate reasoning, which is commonly used by humans. One may argue that there is no intelligence in the associated decision-making process because the rules are fixed and no learning and self-improvement are involved. Even in the example of figure 2, however, there exists
a process of 'approximation' and the use of approximate reasoning. These, as commonly performed by humans, may be interpreted as manifestations of intelligence.
Problems of knowledge-based decision making that are much more complex would be involved in intelligent machines for practical applications. Use of fuzzy logic is valuable and appealing in such applications, particularly because of its incorporation of approximate reasoning. Learning may be incorporated through a process of reinforcement, where valid rules in the rule base are retained and new rules are added (learned), while inappropriate rules are removed. Techniques of NNs may be used as means of learning in this context (de Silva 2000). Application areas of fuzzy logic include smart appliances, supervisory control of complex processes, and expert systems.
(b) Neural networks
Artificial NNs are massively connected networks of computational 'neurons', and represent parallel-distributed processing structures. The inspiration for NNs has come from the biological architecture of neurons in the human brain. A key characteristic of NNs is their ability to approximate arbitrary nonlinear functions. Since machine intelligence involves a special class of highly nonlinear decision making, NNs would be effective there. Furthermore, the process of approximation of a nonlinear function (i.e. system identification) by interacting with a system and employing data on its behaviour may be interpreted as 'learning'. Through the use of NNs, an intelligent machine would be able to learn and perform high-level cognitive tasks (de Silva 2000). For example, an intelligent machine would only need to be presented a goal; it could achieve its objective through continuous interaction with its environment and evaluation of the responses by means of NNs.
A NN consists of a set of nodes, usually organized into layers, and connected through weight elements called synapses. At each node, the weighted inputs are summed (aggregated), thresholded, and subjected to an activation function in order to generate the output of that node (Gupta & Rao 1994; Filippidis et al. 1999). These operations are shown in figure 5. The analogy to the operations in a biological neuron is highlighted. Specifically, in a biological neuron, the dendrites receive information from other neurons. The soma (cell body) collects and combines this information, which is transmitted to other neurons using a channel (tubular structure) called an axon. This biological analogy, apart from the ability to learn by example, approximation of highly nonlinear functions, massive computing power, and memory, may be a root reason for inherent 'intelligence' in a NN. If the weighted sum of the inputs to a node (neuron) exceeds a threshold value w0, then the neuron is fired and an output y(t) is generated according to
y(t) = φ[Σi wi xi(t) − w0], (4.4)
where xi are the neuron inputs, wi are the synaptic weights and φ[·] is the activation function.
As indicated in figure 6, there are two main classes of NNs known as feedforward networks (or static networks) and feedback networks (or recurrent networks). In a feedforward network, an example of which is a multilayer perceptron, the signal flow from a node to another node takes place in the forward direction only. There are no feedback paths. Figure 7 shows a multilayer perceptron consisting of an input layer, a hidden layer and an output layer. Another example of a feedforward network is the radial basis function network. Here there are only three layers. Furthermore, only the hidden layer uses nonlinear activation functions in its nodes. These functions are called radial basis functions, the Gaussian distribution function being a popular example. These functions form the basis for the capability of the NN to approximate any nonlinear function. In a feedforward NN, learning is achieved through example. This is known as supervised learning. Specifically, first a set of input–output data of the actual process is determined (e.g. by measurement). The input data are fed into the NN. The network output is compared with the desired output (experimental data) and the synaptic weights of the NN are adjusted using a gradient (steepest descent) algorithm until the desired output is achieved.
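In the spirit of this description, the sketch below trains a single sigmoid neuron by steepest-descent weight updates on an assumed data set (a logical-AND truth table); it illustrates supervised learning only, not a full multilayer perceptron:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs x_i
d = np.array([0.0, 0.0, 0.0, 1.0])                           # desired outputs

rng = np.random.default_rng(0)
w = rng.normal(size=2)            # synaptic weights w_i
w0 = 0.0                          # threshold (bias)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

eta = 1.0                         # learning rate (assumed)
for _ in range(10000):
    y = sigmoid(X @ w - w0)       # neuron outputs for all training inputs
    grad = (y - d) * y * (1.0 - y)
    w -= eta * (X.T @ grad)       # steepest-descent update of the weights
    w0 += eta * np.sum(grad)      # threshold enters with a negative sign

print(np.round(sigmoid(X @ w - w0), 2))   # approaches [0, 0, 0, 1]
```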
In feedback NNs, the outputs of one or more nodes (say, in the output layer) are fed back to one or more nodes in a previous layer (say, the hidden layer or input layer) or even to the same node. The feedback provides the capability of ??memory?ˉ to the network. An example is the Hopfield network. It consists of two layers: the input layer and the Hopfield layer. Each node in the input layer is directly connected to only one node in the Hopfield layer. The outputs of the network are fed back to the input nodes via a time delay (providing memory) and synaptic weights. Nodes in the Hopfield layer have nonlinear activation functions such as sigmoidal functions.
Feedback NNs commonly use unsupervised-learning algorithms. In these learning schemes, the synaptic weights are adjusted based on the input values to the network and the reactions of individual neurons, and not by comparing the network output with the desired output data. Unsupervised learning is called self-organization (or open-loop adaptation), because the output characteristics of the network are determined internally and locally by the network itself, without any data on desired outputs. This type of learning is particularly useful in pattern classification and grouping of data. Hebbian learning and competitive learning are examples of unsupervised-learning algorithms. In the Hebbian-learning algorithm, the weight between a neuron and an input is strengthened (increased) if the neuron is fired by the input. In competitive learning, weights are modified to enhance a node (neuron) having the largest output. An example is the Kohonen network, which uses a winner-takes-all approach. A Kohonen network has two layers, the input layer and the output layer (Kohonen layer). In the operation of the network, the node in the Kohonen layer with weights that most closely resemble the current input is assigned an output of 1 (the winner), and the outputs of all the remaining nodes are set to zero. In this manner, the input nodes organize themselves according to the pattern of the input data while the output nodes compete among themselves to be activated.
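The winner-takes-all behaviour of a Kohonen layer can be sketched as follows (the weights, input and learning rate are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.random((3, 2))                     # 3 Kohonen-layer nodes, 2 inputs

def kohonen_step(x, W, eta=0.2):
    distances = np.linalg.norm(W - x, axis=1)
    winner = int(np.argmin(distances))     # node whose weights best match the input
    outputs = np.zeros(len(W))
    outputs[winner] = 1.0                  # winner takes all; the rest stay at 0
    W[winner] += eta * (x - W[winner])     # pull the winner's weights toward the input
    return outputs, W

outputs, W = kohonen_step(np.array([0.9, 0.1]), W)
print(outputs)
```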
(c) Genetic algorithms
GAs are derivative-free optimization techniques, which can evolve through procedures analogous to biological evolution (Anderson 1984; de Silva & Wickramarachchi 1997). GAs belong to the area of evolutionary computing (Davis 1991). They represent an optimization approach where a search is made to 'evolve' a solution algorithm, which will retain the 'most-fit' components in a procedure that is analogous to biological evolution through natural selection, crossover and mutation. In the present context of intelligent machines, intellectual fitness rather than physical fitness is what is important for the evolutionary process. Evolutionary computing can play an important role in the development of an optimal and self-improving intelligent machine.
Evolutionary computing has the following characteristics (Filippidis et al. 1999):
(i) it is based on multiple searching points or solution candidates (population based search);
(ii) it uses evolutionary operations such as crossover and mutation;
(iii) it is based on probabilistic operations.
The basic operations of a genetic algorithm are indicated in figure 8. The algorithm works with a population of individuals, each representing a possible solution to a given problem. Each individual is assigned a fitness score according to how good its solution to the problem is. The highly fit (in an intellectual sense) individuals are given opportunities to reproduce by crossbreeding with other individuals in the population. This produces new individuals as offspring, who share some features taken from each parent. The least-fit members of the population are less likely to get selected for reproduction and will eventually die out. An entirely new population of possible solutions is produced in this manner, by mating the best individuals (i.e. individuals with best solutions) from the current generation. The new generation will contain a higher proportion of the characteristics possessed by the 'fit' members of the previous generation. In this way, over many generations, desirable characteristics are spread throughout the population, while being mixed and exchanged with other desirable characteristics, in the process. By favouring the mating of the individuals who are more fit (i.e. who can provide better solutions), the most promising areas of the search space would be exploited. A GA determines the next set of searching points using the fitness values of the current searching points, which are widely distributed throughout the searching space. It uses the mutation operation to escape from a local minimum.
Two important activities of a GA are selection and reproduction. Selection is the operation which will choose parent solutions. New solution vectors in the next generation are calculated from them. Since it is expected that better parents generate better offspring, parent solution vectors that possess higher fitness values will have a higher probability of selection. There are several methods of selection. In the method of roulette-wheel selection (Davis 1991), the probability of winning is proportional to the areal rate of a chosen number on a roulette wheel. In this manner, the selection procedure assigns a selection probability to individuals in proportion to their fitness values. In the elitist strategy, the best parents are copied into the next generation. This strategy prevents the best fitness value of the offspring generation from becoming worse than that in the present generation.
During the reproductive phase of a GA, individuals are selected from the population and recombined, producing offspring, who in turn will make up the next generation. Parents are selected randomly from the population, using a scheme that favours the individuals who are more fit. After two parents are selected, their chromosomes are recombined using the mechanism of crossover and mutation. Crossover takes two individuals and cuts their chromosome strings at some randomly chosen position to produce two ??head?ˉ segments and two ??tail?ˉ segments. The tail segments are then swapped over to produce two new full-length chromosomes. Each of the two offspring will inherit some genes from each parent. This is known as a single-point crossover. Crossover is not usually applied to all pairs of individuals that are chosen for mating. A random choice is made, where the likelihood of the crossover being applied is typically between 0.6 and 1.0. Mutation is applied individually to each child, after crossover. The mutation operation alters each gene at random. The associated probability of change is quite low. Mutation provides a small degree of random search and helps ensure that every point in the search space has some probability of being examined.
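A small genetic-algorithm sketch combining roulette-wheel selection, single-point crossover and low-probability mutation is given below; the bit-string target and all parameter values are assumed for illustration:

```python
import random

random.seed(0)
TARGET = [1] * 16                          # assumed optimum (all-ones string)

def fitness(ind):
    return sum(a == b for a, b in zip(ind, TARGET))

def roulette(pop):
    # fitness-proportional selection of two parents
    weights = [fitness(ind) + 1e-9 for ind in pop]
    return random.choices(pop, weights=weights, k=2)

def crossover(p1, p2):
    # single-point crossover: swap the tail segments
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(ind, rate=0.02):
    # low-probability random alteration of each gene
    return [1 - g if random.random() < rate else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
for _ in range(60):                        # generations
    offspring = []
    while len(offspring) < len(pop):
        a, b = roulette(pop)
        c1, c2 = crossover(a, b) if random.random() < 0.8 else (a[:], b[:])
        offspring += [mutate(c1), mutate(c2)]
    # keep the fittest individuals (a simple elitist truncation)
    pop = sorted(offspring, key=fitness, reverse=True)[:len(pop)]

print(max(fitness(ind) for ind in pop))    # approaches 16
```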
(d) Probabilistic reasoning
Uncertainty and the associated concept of probability are linked to approximation. One can justify that probabilistic reasoning should be treated within the area of soft computing. Probabilistic approximate reasoning may be viewed in an analogous manner to fuzzy-logic reasoning, considering uncertainty in place of fuzziness as the concept of approximation that is applicable. Probability distribution/density functions are employed in place of membership functions. The formula of knowledge-based decision making that corresponds to equation (3.1), in this case, depends on the specific type of probabilistic reasoning that is employed. The Bayesian approach (Anderson 1984; Pao 1989) is commonly used. This may be interpreted as a classification problem.
Suppose that an observation d is made, and it may belong to one of several classes
ci. The Bayes relation states (de Silva 1995)
P(ci | d) = P(d | ci) P(ci) / P(d), (4.5)
where: P(ci | d) is the probability that, given the observation is d, it belongs to class ci (the a posteriori conditional probability); P(d | ci) is the probability that, given that the observation belongs to the class ci, the observation is d (the class conditional probability); P(ci) is the probability that a particular observation belongs to class ci, without knowing the observation itself (the a priori probability); and P(d) is the probability that the observation is d without any knowledge of the class.
In the Bayes decision-making approach, for a given observation (datum) d, a posteriori probabilities P(ci | d) are computed for all possible classes (i = 1, 2, . . . , n), using equation (4.5). The class that corresponds to the largest of these a posteriori probability values is chosen as the class of d, thereby solving the classification problem. The remaining n − 1 a posteriori probabilities represent the error in this decision.
Note the analogy between equations (4.5) and (3.1). Specifically, P(d) represents the 'pre-processed' probabilistic data that correspond to the observation d. The knowledge base itself constitutes the two sets of probabilities:
(i) P(d | ci) of occurrence of data d if the class (decision) is ci, i = 1, 2, . . . , n;
(ii) P(ci) of class (decision) ci, i = 1, 2, . . . , n, without any knowledge of the observation (data) itself.
The knowledge-base matching is carried out, in this case, by computing the expression on the right-hand side of (4.5) for all possible i and then picking out the maximum value. Application areas of probabilistic decision making include forecasting, signal analysis and filtering, and parameter estimation and system identification.
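A minimal sketch of this decision rule, with assumed priors and class-conditional probabilities, follows:

```python
priors = {"c1": 0.7, "c2": 0.3}                # P(ci): a priori class probabilities
class_conditional = {"c1": 0.10, "c2": 0.60}   # P(d | ci) for the observation d

def bayes_classify(priors, class_conditional):
    # P(d): total probability of the observation
    evidence = sum(class_conditional[c] * priors[c] for c in priors)
    # a posteriori probabilities P(ci | d) from equation (4.5)
    posteriors = {c: class_conditional[c] * priors[c] / evidence for c in priors}
    # choose the class with the largest a posteriori probability
    return max(posteriors, key=posteriors.get), posteriors

decision, posteriors = bayes_classify(priors, class_conditional)
print(decision, posteriors)                    # here class c2 wins (0.72 vs 0.28)
```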
It is important to compare the concepts of uncertainty and fuzziness. Uncertainty is statistical inexactness due to random events. Fuzziness arises when the decision of whether a particular object belongs to a given set is a matter of perception, which can be subjective. Probabilistic and statistical techniques may be employed in conjunction with fuzzy logic in a complementary manner. Conceptual differences remain despite the analytical similarities of the two methods. The two concepts are compared in table 3 with respect to their advantages and disadvantages in practical applications.
Formulae that correspond to equation (3.1) may be identified for knowledge-based decision making involving other concepts of approximation (e.g. belief, plausibility, truth value) as well. For example, in the case of belief, the knowledge base may constitute a set of belief values for a set of rules. Then, knowing the belief value of an observation, the decision-making problem will become a matter of picking the segment of the rule base that matches the data and also has the highest belief. More often, however, such concepts of approximation are used as modifiers within a knowledge base involving a different concept of approximation (e.g. fuzzy or probabilistic) rather than stand-alone tools of decision making, as alluded to previously.
5. Practical issues of intelligent machines
(a) System components and architecture
In broad terms, an intelligent machine may be viewed to consist of a knowledge system and a structural system. The knowledge system effects and manages intelligent behaviour of the machine, loosely analogous to the brain, and consists of various knowledge sources and reasoning strategies. Just as the neurons in a brain are not themselves intelligent although certain behaviours effected by those neurons are, the physical elements of a machine are not intelligent, but the machine can be programmed to behave in an intelligent manner. The structural system consists of physical hardware and devices that are necessary to perform the machine objectives yet do not necessarily need a knowledge system for their individual functions. Sensors, actuators, controllers (non-intelligent), communication interfaces, mechanical devices and other physical components fall into this category, and they will work cooperatively in generating an intelligent behaviour by the machine. Sensing with an understanding or 'feeling' of what is sensed is known as sensory perception, and this is very important for intelligent behaviour. Humans use vision, smell, hearing and touch (tactile sensing) in the context of their intelligent behaviour. Intelligent machines too should possess some degree of sensory perception. The 'mind' of an intelligent machine is represented by machine intelligence. For proper functioning of an intelligent machine it should have effective communication links between various components, and an interface to the external world. By taking these various requirements into consideration, a general-purpose structure of an intelligent machine is given in figure 9.
The broad division of the structure of an intelligent machine, as mentioned above, is primarily functional rather than physical. In particular, the knowledge system may be distributed throughout the machine, and individual components by themselves may be interpreted as being ‘intelligent’ as well (for example, intelligent sensors, intelligent controllers, intelligent multi-agent systems, and so on). It needs to be emphasized that an actual implementation of an intelligent system will be domain specific, and much more detail than that alluded to in figure 9 may have to be incorporated into the system structure. Even from the viewpoint of system efficiency, domain-specific and special-purpose implementations are preferred over general-purpose systems.
A hierarchical structure can facilitate efficient control and communication in an intelligent machine. A three-level hierarchy is shown in figure 10. The bottom level consists of machine components with component-level sensing and control. Machine actuation and direct feedback control are carried out at this level. The intermediate level uses intelligent pre-processors for abstraction of the information generated by the component-level sensors. The sensors and their intelligent pre-processors together perform tasks of intelligent sensing. The state of performance of the machine components may be evaluated by this means, and component tuning and component-group control may be carried out as a result. The top level of the hierarchy performs task-level activities including planning, scheduling, machine performance monitoring and supervisory control. Resources such as materials and expertise may be provided at this level, and a human–machine interface would be available. Knowledge-based decision making is carried out at both the intermediate and top levels, and techniques of soft computing would be applicable in these knowledge-based activities. The resolution of the information that is involved will generally decrease as the hierarchical level increases, while the level of ‘intelligence’ that would be needed in decision making will increase.
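The flow of information through such a hierarchy can be sketched in outline form as follows; the class names, data values and tuning threshold are invented for illustration and do not correspond to any particular implementation in the paper.

# Outline of a three-level hierarchy: raw component-level data are abstracted
# by an intelligent pre-processor and passed to a supervisory level.
# Names, values and the tuning threshold are illustrative only.

class ComponentLevel:
    """Bottom level: direct sensing, actuation and feedback control."""
    def read_sensors(self):
        return [20.1, 19.8, 20.3]  # raw, high-resolution measurements

class IntelligentPreprocessor:
    """Intermediate level: abstracts raw data into features for monitoring and tuning."""
    def abstract(self, raw):
        return {"mean": sum(raw) / len(raw), "spread": max(raw) - min(raw)}

class SupervisoryLevel:
    """Top level: task-level monitoring and supervisory control decisions."""
    def decide(self, features):
        return "retune component controller" if features["spread"] > 1.0 else "continue"

raw = ComponentLevel().read_sensors()
features = IntelligentPreprocessor().abstract(raw)
print(SupervisoryLevel().decide(features))  # -> 'continue' for these values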
Within the overall system, the communication protocol provides a standard interface between various components such as sensors, actuators, signal conditioners and controllers, and also with the machine environment. The protocol will not only allow highly flexible implementations, but also enable the system to use distributed intelligence to perform pre-processing and information understanding. The communication protocol should be based on an application-level standard. In essence, it should outline what components can communicate with each other and with the environment, without defining the physical data link and network levels. The communication protocol should allow for different component types and different data abstractions to be interchanged within the same framework. It should also allow for information from geographically removed locations to be communicated to the control and communication system of the machine.
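As a rough indication of what an application-level message might carry, one possible structure is sketched below; the field names and values are assumptions made for illustration rather than part of any standard described here.

# Hypothetical application-level message structure for component-to-component
# communication. Field names and example values are assumptions.
from dataclasses import dataclass, field
from typing import Any
import time

@dataclass
class ComponentMessage:
    source: str             # e.g. an intelligent sensor
    destination: str        # e.g. a controller or supervisory module
    abstraction_level: str  # 'raw', 'feature' or 'decision'
    payload: Any            # the data, at the stated level of abstraction
    timestamp: float = field(default_factory=time.time)

msg = ComponentMessage("vision-sensor-1", "position-controller",
                       "feature", {"set_point_mm": 142.5})
print(msg)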
Intelligent machines rely heavily on intelligent control in their operation. The term intelligent control may be loosely used to denote a control technique that can be carried out using the ‘intelligence’ of a human who is knowledgeable in the particular domain of control (de Silva 1995). In this definition, constraints pertaining to limitations of sensory and actuation capabilities and information-processing speeds of humans are not considered. It follows that if a human in the control loop can properly control a machine, then that machine would be a good candidate for intelligent control. The need for intelligent control is evident from the emergence of engineering systems exhibiting an increasingly greater degree of complexity and sophistication. When the process is complex and incompletely known, yet some human expertise or experience is available on a practical scheme to control the process, the techniques of intelligent control may be particularly appealing (de Silva 1995). The field of intelligent control has grown in recent years at an impressive rate. This is due primarily to the proven capabilities of the algorithms and techniques developed in the field, particularly with soft computing, to deal with complex systems that operate in ill-defined and dynamically varying environments. Techniques of intelligent control are useful in autonomous control and supervision of complex machinery and processes. Intelligent machines are called upon to perform complex tasks with high accuracy, under ill-defined conditions. Conventional control techniques may not be quite effective under these conditions, whereas intelligent control has tremendous potential (de Silva 1995; Filippidis et al. 1999). Unlike conventional control, intelligent-control techniques possess capabilities for effectively dealing with incomplete information concerning the machine and its environment, and with unexpected or unfamiliar conditions. Information abstraction and knowledge-based decision making, which incorporates abstracted information, are considered important in intelligent control.
Advances in digital electronics, technologies of semiconductor processing, and micro-electromechanical systems have set the stage for the integration of intelligence into sensors, actuators and controllers (de Silva 2000). The physical segregation between these devices may well be lost in due time as it becomes possible to perform diversified functionalities such as sensing, conditioning (filtering, amplification, processing, modification, etc.), transmission of signals, and intelligent control all within the same physical device. Due to the absence of adequate analytical models, sensing assumes an increased importance in the operation and control of complex systems such as intelligent machines. The trend in the applications of intelligent machines has been towards mechatronic-type technology where intelligence is embedded at the component level, particularly in sensors and actuators (Jain & de Silva 1999), and distributed throughout the system.
(b) Expert systems for intelligent machines
An expert system is a knowledge-based system, which may form the ‘brain’ of an intelligent machine. It contains the traditional constituents of an intelligent decision-making system such as a knowledge base, a database, an inference engine and a human–machine interface, as indicated in figure 11. The system interface is used for both development, particularly knowledge acquisition, and use of the expert system. The knowledge base of an expert system embodies human knowledge and understanding, somewhat imitating a human expert in the particular domain of expertise (e.g. specialist, engineer, scientist, financial consultant). The inference engine is a ‘driver’ program that traverses the knowledge base in response to observations and other inputs from the external world (and possibly previous inferences and results
from the expert system itself), and will identify one or more possible outcomes or conclusions (Forsyth 1984; de Silva 1995; Filippidis et al. 1999). This task of making inferences for arriving at solutions will involve ‘processing’ of knowledge. It follows that representation and processing of knowledge are central to the functioning of an expert system (de Silva 1995). Techniques of soft computing are directly applicable here. The data structure selected for the specific form of knowledge representation determines the nature of the program that serves as an inference engine. Monitors and keyboards, sensors and transducers, and even output from other computer programs including expert systems, usually provide communication links between an expert system and the external world. An expert system is intended to take the place of a human expert in an advisory capacity. It receives a problem description and provides advice through knowledge-based reasoning. An expert system should be able to explain its decisions; that is, it should possess an explanation facility.
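To make the role of the inference engine concrete, a toy forward-chaining engine over a production-rule knowledge base is sketched below; it is not the implementation discussed by the authors, and the rules and facts are invented.

# Toy forward-chaining inference engine over production rules.
# Each rule is (set of required facts, concluded fact). All content is invented.

knowledge_base = [
    ({"motor_temperature_high"},        "reduce_load"),
    ({"reduce_load", "vibration_high"}, "schedule_maintenance"),
]

def infer(facts, rules):
    """Fire rules whose conditions are satisfied until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"motor_temperature_high", "vibration_high"}, knowledge_base))
# -> the inferred facts include 'reduce_load' and 'schedule_maintenance'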
The development of an expert system requires the combined effort of domain experts, who are human experts in the field of interest, and knowledge engineers, who acquire and represent/program the knowledge in a suitable form. This process will require a knowledge-acquisition facility. Other engineers, programmers, etc., are needed for the development of the system interface, integration, and so on. The experts will be involved in defining the domain of the problem to be solved and in providing the necessary expertise and knowledge for the knowledge base, which will lead to the solution of a class of problems in that domain. The knowledge engineers are involved in acquiring and representing the available knowledge and in the overall process of programming and debugging the system. They organize the acquired knowledge into the form required by the particular expert system tool (Hart 1986). Forms of knowledge representation may include logic and fuzzy logic, production systems (rules), semantic scripts, semantic primitives, frames and symbolic representations (de Silva 1995). Hybrid soft-computing techniques such as neuro-fuzzy evolutionary approaches are particularly applicable (Jang et al. 1997; Vonk et al. 1997). Both experts and knowledge engineers will be involved in testing and evaluating the system. Typically, what is commercially available for developing expert systems is an expert system ‘shell’. It will consist of the required software programs, but will not contain a knowledge base. It is the responsibility of the user, then, to organize the creation of the required knowledge base, which should satisfy the requirements of the system with respect to the form of knowledge representation that is used and the structure of the knowledge base.
While the quality and performance of an expert system are highly dependent on the knowledge base contained within it, this knowledge alone does not constitute a powerful expert system. A poorly chosen formalism may limit the extent and nature of the knowledge; accordingly, the structure of knowledge representation plays an important role. The formalism that is employed should closely resemble how an expert uses knowledge and performs reasoning. The skill of both the domain expert and the knowledge engineer will be crucial here. The techniques of representation and processing will be dependent on the type of knowledge. The reasoning scheme that is employed will directly affect the search speed and how conflicts are resolved. In particular, retrieval and processing of information will not be fast or efficient without a good formalism for representing knowledge and a suitable inference-making scheme to process it. In real-time expert systems, which are particularly applicable in intelligent machines, the speed of decision making is a very crucial factor, and it depends on hardware/software characteristics, the inference-making scheme and the particular technique of knowledge representation.
Knowledge engineering includes the following tasks.
(i) Knowledge that is pertinent is ascertained from different sources.
(ii) The knowledge is interpreted and integrated (from various sources and in different forms).
(iii) The knowledge is represented within the knowledge-based system. (A suitable structure, language, etc., has to be chosen. Here, one should consider aspects of incomplete knowledge, presence of analytical models, accessibility of system variables for measurement, and availability of past experience.)
(iv) Knowledge is processed in order for inferences to be made. (This operation has to be compatible with the knowledge-base, objectives of the system, etc. The speed of decision making is crucial, particularly in real-time applications of intelligent machines. The form of the inferences should be consistent with the machine objectives.)
(v) The machine needs and constraints (accuracy, implications of incomplete knowledge and the cost of an incorrect inference) are taken into consideration.
(vi) Economic considerations (development cost and cost to the user in comparison with the benefits) are made.
Performance goals of the next-generation expert systems for intelligent machines are the automatic generation of code and knowledge representation, automatic learning and system enhancement from experience, voice recognition, communication through a natural language, automated translation of documents into knowledge bases, cooperative problem-solving architectures, generic problem-solving shells and multi-level reasoning systems.
(c) Development steps of an intelligent machine
Development of an intelligent machine will require a parallel development of the knowledge system and the structural system of the machine. It may be the case that the structural system (a non-intelligent machine) is already available. Still, modifications might be necessary in the structural system in order to accommodate the needs of the intelligent system, for example, new sensors, transducers and communication links for information acquisition. In any event, the development of an intelligent machine is typically a multidisciplinary task often involving the collaboration of a group of professionals such as engineers (electrical, mechanical, etc.), domain experts, computer scientists, programmers, software engineers and technicians. The success of a project of this nature will depend on proper planning of the necessary activities. The tasks involved will be domain specific and depend on many factors, particularly the objectives and specifications of the machine itself. The main steps would be
(1) conceptual development;
(2) system design;
(3) prototyping;
(4) testing and refinement;
(5) technology transfer and commercialization.
It should be noted that generally these are not sequential and independent activities; furthermore, several iterations of multiple steps may be required before satisfactory completion of any single step.
Conceptual development will usually evolve from a specific application need. A general concept needs to be expanded into an implementation model of some detail. A preliminary study of feasibility, costs and benefits should be made at this stage. The interdisciplinary groups that would form the project team should be actively consulted and their input should be incorporated into the process of concept development. Major obstacles and criticism that may arise from both prospective developers and users of the technology should be seriously addressed in this stage. The prospect of abandoning the project altogether for valid reasons such as infeasibility, time constraints and cost–benefit factors should not be overlooked.
Once a satisfactory conceptual model has been developed, and the system goals are found to be realistic and feasible, the next logical step of development would be the system design. Here the conceptual model is refined and sufficient practical details for implementation of the machine are identified. The structural design may follow traditional practices. Commercially available components and tools should be identified. In the context of an intelligent machine, careful attention needs to be given to the design of the knowledge system and the human–machine interface. System architecture, types of knowledge that are required, appropriate techniques of knowledge representation, reasoning strategies and related tools of knowledge engineering should be identified at this stage. Considerations in the context of the user interface would include: graphic displays; interactive use; input types including visual, vocal and other sensory inputs; voice recognition; and natural language processing. These considerations will be application specific to a large degree and will depend on what technologies are available and feasible. A detailed design of the overall system should be finalized after obtaining input from the entire project team, and the cost–benefit analysis should be refined. At this stage the financial sponsors and managers of the project as well as the developers and users of the technology should be convinced of the project outcomes.
Prototype development may be carried out in two stages. First, a research prototype may be developed in the laboratory as a proof of concept. For this prototype it is not necessary to adhere strictly to industry standards and specifications. Drawing on the experience gained in this manner, a practical prototype should then be developed, in close collaboration with industrial users. Actual operating conditions and performance specifications should be carefully taken into account for this prototype. Standard components and tools should be used whenever possible.
Testing of the practical prototype should be done under normal operating conditions and preferably at an actual industrial site. During this exercise, prospective operators and users of the machine should be involved cooperatively with the project team. This should be used as a good opportunity to educate, train, and generally prepare the users and operators for the new technology. Unnecessary fears and prejudices can kill a practical implementation of advanced technology. Familiarization with the technology is the most effective way of overcoming such difficulties. The shortcomings of the machine should be identified through thorough testing and performance evaluation, where all interested parties should be involved. The machine should be refined (and possibly redesigned) to overcome any shortcomings.
Once the machine has been tested and refined and satisfies the required performance specifications, the processes of technology transfer to industry and commercialization may begin. An approved business plan, the necessary infrastructure and funds should be in place for commercial development and marketing. According to existing practice, engineers, scientists and technicians provide minimal input into these activities. This situation is not desirable and needs to be greatly improved. The lawyers, financiers and business managers should work closely with the technical team during the processes of production planning and commercialization. Typically, the industrial sponsors of the project will have the right of first refusal of the developed technology. The process of technology transfer would normally begin during the stage of prototype testing and continue through commercial development and marketing. A suitable plan and infrastructure for product maintenance and upgrading are essential for sustaining any commercial product.
(d) Practical example
It will be necessary for intelligent machines to perform their tasks with minimal intervention from humans, to maintain consistency and repeatability of operation, and to cope with disturbances and unexpected variations in the machine, its operating environment and performance objectives. In essence, these machines should be autonomous and should have the capability to accommodate rapid reconfiguration and adaptation. For example, a production machine should be able to cope quickly with variations ranging from design changes for an existing product to the introduction of an entirely new product line. The required flexibility and autonomous operation translate into a need for a higher degree of intelligence in the supporting devices. This will require proper integration of such devices as sensors, actuators and controllers, which themselves may have to be ‘intelligent’ and, furthermore, appropriately distributed throughout the system. Design, development, production and operation of intelligent machines, which integrate technologies of sensing, actuation and intelligent control, are possible today through ongoing research and development in the field of intelligent systems and control. Several such projects have been undertaken in the Industrial Automation Laboratory (de Silva 1992) at the University of British Columbia.
Smart machines will exhibit an increased presence and significance in a wide variety of applications. The fish-processing industry has a significant potential for using intelligent machines, incorporating advanced sensor technology and intelligent control. Tasks involved may include handling, cleaning, cutting, inspection, repair and packaging. In industrial plants, many of these tasks are still not automated and rely on human labour. In particular, the machines used for cutting and canning of fish are rather outdated. This has made the maintenance and repair of these machines quite difficult and costly, and the operations somewhat slow and inefficient. In view of the present restrictions in the fishery, competition for the harvest and dwindling stocks, optimal recovery during processing has become an important issue. Furthermore, as the product cost increases, the consumer tends to demand a better-quality product. Advanced sensing and control play a crucial role in realizing the objectives of improved accuracy and product quality in fish processing.
The Industrial Automation Laboratory has developed a machine prototype for the automated cutting of fish (de Silva 1992; de Silva & Wickramarachchi 1997). This machine, shown in figure 12, consists of the following primary components:
(i) an intermittent-motion conveyor system to move the fish through the cutting zone;
(ii) a guillotine-type cutting blade that moves in the vertical direction, which is operated by a pneumatic actuator;
(iii) a horizontal X–Y table, which carries the cutting-blade assembly and is driven by means of two hydraulic actuators;
(iv) a primary (low-level) charge-coupled device (CCD) camera, which captures the image of each fish as it enters the cutting zone, in order to determine the cutting position;
(v) a secondary (high-level) CCD camera, which is used to determine the cutting quality.
The components of the machine are sketched in figure 13. The conveying mechanism has several parallel arms that are equally spaced along the belt. The fish are pushed forward intermittently by means of these arms. There is also a matrix of retaining pins, which point vertically upwards. These pins can fold in one direction only, thereby restraining any backward motion of the fish. The conveyor pushes a fish forward every 1.24 s, which corresponds to the period of the cyclic motion of the mechanism. Accordingly, the rate of processing is ca. 48 fish min⁻¹. During the first half-period, the conveyor pushes the fish forward and in the next half-period, while the fish are stationary, the imaging of one fish and the cutting of the previous fish are carried out. Also during this half-period, the conveying arms move backward in order to get ready to push a fish forward in the next cycle. There is an optimum position at which the blade should be placed with respect to a fish so that the meat wastage is a minimum. This optimum position is determined by using a computer-vision system. An image of the fish is taken by the primary CCD camera while the fish is stationary, and the corresponding digital information is acquired by a frame-grabber board of the vision computer. Image-processing routines automatically locate the gill of the fish and accordingly compute the optimum position of the cutter. This position is used as the reference input (set point) for the hydraulic controller that moves the cutting-blade assembly. After the cutter assembly is moved to the desired position, the pneumatic actuator is activated, which releases the cutter blade vertically. The operations of the system are synchronized such that, by the time a fish arrives at the cutting zone, the blade has already been positioned at the desired location.
The period of the cyclic motion of the conveyor is 1.24 s. The image capture and processing of a fish and the cutting of the previous fish are carried out during the return half cycle of 620 ms of the conveyor, when the fish are stationary. The desired position for the cutting-blade assembly has to be determined after the image processing, but the positioning command should not reach the actuators before the previous fish is cut. The positioning needs to be completed during the forward half cycle of 620 ms. This means that the hydraulic positioning system has to reach steady state by the end of the forward half cycle; otherwise, the blade would be released while the cutting-blade unit is still moving over a stationary fish. This may lead to such problems as poor-quality cuts or even complete entanglement of a fish with the cutter. With the available hardware, the time required for image processing is 170 ms, which is well within the available time of 620 ms for image capture, processing and generation of the set point for the cutter position. Note that the cutter cannot be positioned for the next fish before the present fish is cut. The cutting time for the pneumatic actuator is limited to less than 240 ms, during which imaging and set-point computation for the next fish should be completed as well. A time period of ca. 1 s is therefore available for positioning the cutter. Accordingly, the following specifications are used for the positioning controller:
(i) time to reach the desired position in steady state is a maximum of 1 s;
(ii) the allowed offset for the cutter position (cutting accuracy) is 5 mm.
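The timing figures quoted above can be checked with simple arithmetic. The short sketch below merely encodes the values stated in the text (cycle period, image-processing time and cutting time) and confirms that the imaging and positioning budgets are met; it is an illustration, not part of the machine's software.

# Timing budget of the fish-cutting cycle, using the figures quoted in the text
# (all times in milliseconds).

cycle_period_ms     = 1240                  # full conveyor cycle (1.24 s)
half_cycle_ms       = cycle_period_ms // 2  # 620 ms stationary / 620 ms moving
image_processing_ms = 170                   # locate the gill, compute the set point
cutting_ms          = 240                   # pneumatic cutter stroke (upper bound)

positioning_budget_ms = cycle_period_ms - cutting_ms  # ~1000 ms to position the cutter

assert image_processing_ms < half_cycle_ms  # imaging fits within the stationary half cycle
assert positioning_budget_ms >= 1000        # consistent with the 1 s positioning specification
print(f"processing rate = {60_000 / cycle_period_ms:.0f} fish per minute")  # approx. 48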
The machine operates under the influence of an intelligent supervisory control system, which has a hierarchical structure of the form given in figure 10. Human knowledge and expertise, acquired by experienced operators of the fish-processing industry, are implemented throughout this system, in a distributed manner. In particular, the cutter-positioning system is monitored for its stability, accuracy and speed, and appropriate corrective actions are taken using a fuzzy knowledge base and a decision-making module. The corrective actions are determined by ‘fuzzy’ analysis of a step response. For example, consider the step response shown in figure 14. The solid line gives the actual response of the cutter and the broken line gives the desired response based on a specified reference model. The response is separated into several zones, and the conditions of error and change in error are identified as given in table 4. Based on the error states, it is easy to determine the necessary corrective actions, in a qualitative (fuzzy) manner. For example, possible corrective actions in terms of the reference signal are given in table 5. These fuzzy rules are translated into tuning actions of the position controller. The tuning knowledge base can be learned and evolved in this manner. The performance of a cutter with a controller that incorporates a tuning scheme based on soft computing of this type (Tang et al. 2001) is shown in figure 15. Again, the solid line gives the actual response of the cutter and the broken line gives the desired response based on a specified reference model. It can be seen that the ‘intelligent’ tuner has very effectively improved the cutter response in this example.
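The flavour of such error/change-in-error rules can be conveyed by the crude sketch below; the linguistic labels, thresholds and corrective actions are invented for illustration and do not reproduce tables 4 and 5.

# Crude sketch of rule-based corrective action from error and change in error.
# Labels, thresholds and actions are illustrative only.

def label(x, small=1.0, large=5.0):
    """Map a signed quantity to a coarse linguistic (sign, size) label."""
    mag = abs(x)
    size = "zero" if mag < small else ("small" if mag < large else "large")
    sign = "positive" if x >= 0 else "negative"
    return sign, size

def corrective_action(error_mm, change_in_error_mm):
    e_sign, e_size = label(error_mm)
    de_sign, de_size = label(change_in_error_mm)
    if e_size == "zero" and de_size == "zero":
        return "no change to the reference"
    if e_size == "large" and e_sign == de_sign:
        return "large correction to the reference (response diverging)"
    return "small correction to the reference"

print(corrective_action(error_mm=6.0, change_in_error_mm=2.5))  # -> large correction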
6. Conclusion
This paper examined the concepts of intelligence and approximation as applied to intelligent machines, and explored several important characteristics and capabilities of machines that exhibit intelligent behaviour. Soft computing is an important branch of computational intelligence, where fuzzy logic, probability theory, NNs and genetic algorithms are synergistically used to mimic the reasoning and decision making of a human. The paper introduced soft computing and related approaches, which are useful in the development of an intelligent machine. A general structure was presented for an intelligent machine, giving the primary components such as sensors, actuators, controllers and the communication backbone, and their interaction. The main steps in the development of an intelligent machine for practical use were outlined. An industrial machine, which employs the concepts of soft computing in its operation, was presented, and one aspect of intelligent tuning, which has been incorporated into the machine, was illustrated.
This work has been funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada.
References
Anderson, T. W. 1984 An introduction to multivariate statistical analysis. Wiley.
Davis, L. 1991 Handbook on genetic algorithms. New York: Van Nostrand Reinhold.
de Silva, C. W. 1992 Research laboratory for fish processing automation. Robot. Comput. Integr. Manuf. 9, 49–60.
de Silva, C. W. 1995 Intelligent control: fuzzy logic applications. Boca Raton, FL: CRC Press.
de Silva, C. W. 1997 Intelligent control of robotic systems with application in industrial processes. Robot. Auton. Syst. 21, 221–237.
de Silva, C. W. (ed.) 2000 Intelligent machines: myths and realities. Boca Raton, FL: CRC Press.
de Silva, C. W. 2003 Mechatronics: an integrated approach. Boca Raton, FL: CRC Press.
de Silva, C. W. & Wickramarachchi, N. 1997 An innovative machine for automated cutting of fish. IEEE/ASME Trans. Mechatron. 2, 86–98.
Dubois, D. & Prade, H. 1980 Fuzzy sets and systems: theory and applications. Academic.
Filippidis, A., Jain, L. C. & de Silva, C. W. 1999 Intelligent control techniques. In Intelligent adaptive control (ed. L. C. Jain & C. W. de Silva). Boca Raton, FL: CRC Press.
Forsyth, R. (ed.) 1984 Expert systems. New York: Chapman & Hall.
Gupta, M. M. & Rao, H. 1994 Neural control theory and applications. Piscataway, NJ: IEEE Press.
Hart, A. 1986 Knowledge acquisition for expert systems. McGraw-Hill.
Jain, L. C. & de Silva, C. W. (ed.) 1999 Intelligent adaptive control: industrial applications. Boca Raton, FL: CRC Press.
Jang, J. S. R., Sun, C. T. S. & Mizutani, E. 1997 Neuro-fuzzy and soft computing. Upper Saddle River, NJ: Prentice Hall.
Klir, G. J. & Folger, T. A. 1988 Fuzzy sets, uncertainty, and information. Englewood Cliffs, NJ: Prentice Hall.
Pao, Y. H. 1989 Adaptive pattern recognition and neural networks. Reading, MA: Addison-Wesley.
Tang, P. L., Poo, A. N. & de Silva, C. W. 2001 Knowledge-based extension of model-referenced adaptive control with application to an industrial process. J. Intell. Fuzzy Syst. 10, 159–183.
Vonk, E., Jain, L. C. & Johnson, R. P. 1997 Automatic generation of neural network architecture using evolutionary computing. World Scientific.
Zadeh, L. A. 1984 Making computers think like people. IEEE Spectrum 21, 26–32.
Understanding New Developments in Data Acquisition, Measurement, and Control
A Practical Guide to High Performance Test and Measurement
1st Edition
Section 1
Data Acquisition and Measurement Overview
1.1 Definition
1.2 Data Acquisition and Control Hardware

Section 2
Computer Buses, Protocols, and Hardware
2.1 Computer Hardware Overview
2.2 Processors
2.3 Bus Architecture
2.4 Industrial Computer Expansion
2.5 Connectivity/Data Communications
2.6 Networked Instrument Basics

Section 3
Software Overview
3.1 Introduction
3.2 Development Environments
3.3 Source Code and Source Code Management
3.4 Reusable Components
3.5 System Architecture
3.6 General Program Design
3.7 Making Measurements

Section 4
Basic Component Theory
4.1 Introduction
4.2 Passive Components
4.3 Op Amp Theory
4.4 Filters
4.5 Digital I/O

Section 5
Basic Analog and Digital I/O
5.1 A/D Conversion
5.2 D/A Conversion
5.3 Interfacing Digital I/O to Applications
5.4 Isolation
5.5 Ground Loops

Section 6
Temperature Measurement
6.1 Temperature
6.2 Thermocouples
6.3 Resistive Temperature Detectors
6.4 Thermistors
6.5 Semiconductor Linear Temperature Sensors
6.6 Thermal Shunting

Section 7
Strain Measurement
7.1 Strain
7.2 Poisson's Strain
7.3 Strain Gauges
7.4 Gauge Factor
7.5 Sources of Error
7.6 Operation
7.7 Strain Gauge Signal Conditioning
7.8 Shunt Calibration
7.9 Load Cells, Pressure Sensors, and Flow Sensors
7.10 Acceleration, Shock, and Vibration

Section 8
Related Topics of Interest
8.1 Current Measurements
8.2 Connection Theory

Section 9
Application Examples
9.1 Introduction
9.2 OEM/Factory Automation and Data Acquisition
9.3 Magnetic Field Monitoring in a Synchrotron Facility
9.4 Tensile Test Stand Application
9.5 Burn-In and Stress Testing of Electronic Devices
9.6 Performance Characterization of Shock Absorbers
9.7 Instrument-Grade, Low Cost Analog Output Control

Appendix A
Selector Guides for Plug-In Boards, USB Modules, PXI Systems, and External Data Acquisition Instruments

Appendix B
Glossary

Appendix C
Safety Considerations
Section 1
Data Acquisition and Measurement Overview
1.1 Definition
Although concepts like data acquisition, test, and measurement can be surprisingly difficult to define completely, most computer users, engineers, and scientists agree there are several common elements in systems used today for these functions:
• A personal computer (PC) is used to program and control the test and measurement equipment, and to store or manipulate data subsequently. The term “PC” is used in a general sense to include any computer running any operating system and software that supports the desired result. The PC may also be used for supporting functions, such as real-time graphing or report generation. The PC may not necessarily be in constant control of the data acquisition and measurement equipment, or even remain connected to some pieces of equipment at all times.
• A test or measurement system can consist of data acquisition plug-in boards for PCs, external board chassis, discrete instruments, or a combination of all these. External chassis and discrete instruments typically can be connected to a PC using either standard communication ports or a proprietary interface board in the PC.
• The system can perform one or more measurement and control processes using various combinations of analog input, analog output, digital I/O, or other specialized functions. When external instruments are used, most or all of these functions could reside within the instruments. Therefore, measurement and control can be distributed between PCs, stand-alone instruments, and other external data acquisition systems.
The difficulty involved in differentiating between terms such as “data acquisition,” “test and measurement,” and “measurement and control” stems from the blurred boundaries that separate the different types of instrumentation in terms of operation, features, and performance. For example, some stand-alone instruments now contain card slots and embedded microprocessors, use operating system software, and operate more like computers than like traditional instruments. Such instruments now make it possible to construct test systems with high channel counts that gather data and log it to a controlling computer at high throughput rates, with high measurement accuracy. Accuracy aside, plug-in boards can transform computers into multi-range digital multimeters, oscilloscopes, or other instruments, complete with user-friendly, on-screen virtual front panels.
For the sake of simplicity, this handbook uses the term “data acquisition and control” broadly to refer to a variety of hardware and software solutions capable of making measurements and controlling external processes. The term “computer” is also defined rather broadly. Generally in this handbook, a PC refers to any personal computer running Microsoft Windows 98 or a later Windows version. When external instruments are combined with data acquisition boards, the result is often referred to as a hybrid system. Hybrid systems are becoming more common as engineers seek the optimum combination of throughput and accuracy for their test and measurement applications.
1.2 Data Acquisition and Control Hardware
Data acquisition and control hardware is available in a number of forms, which offer varying levels of functionality, channel count, speed, resolution, accuracy, and cost. This section summarizes the features and benefits generally associated with the various categories, based on a broad cross-section of products, including external instruments. Refer to Appendix A for a more detailed comparison of plug-in boards and external instruments.
1.2.1 Plug-In Data Acquisition Boards
Like display adapters, modems, and other types of expansion boards, plug-in data acquisition boards are designed for mounting in board slots on a computer motherboard. Today, most data acquisition boards are designed for the current PCI (Peripheral Component Interconnect) or earlier ISA (Industry Standard Architecture) buses. ISA and the associated data acquisition boards are rapidly becoming “legacy” products and have largely been replaced by PCI. Data acquisition plug-in boards and interfaces have been developed for other buses (EISA, IBM Micro Channel, and various Apple buses), but these are no longer considered mainstream products. In addition, a wide range of USB data acquisition modules are now available for plugging into a PC's USB ports, or USB hubs attached to those ports. As discussed in Section 2, newer PCs may also have PCI Express slots, often in combination with PCI slots, particularly in computers built expressly for industrial applications.
Table 1-1. Features of plug-in data acquisition boards
Inexpensive method of computerized measurement and control.
High speed available (100kHz to 1GHz and higher).
Available in multi-function versions that combine A/D, D/A, digital I/O, counting, timing, and specialized functions.
Good for tasks involving low-to-moderate channel counts.
Performance adequate to excellent for most tasks, but electrical noise inside the PC can limit ability to perform sensitive measurements.
Input voltage range is limited to approximately ±10V.
Use of PC expansion slots and internal resources can limit expansion potential and consume PC resources.
Making or changing connections to the board's I/O terminals can be inconvenient.
As a category, plug-in boards offer a variety of test functions, high channel counts, high speed, and adequate sensitivity to measure moderately low signal levels, at relatively low cost. Table 1-1 lists additional attributes of typical boards.
1.2.2 External Data Acquisition Systems
The original implementation of an external data acquisition system was a self-powered system that communicated with a computer through a standard or proprietary interface. As a boxed alternative to plug-in boards, this type of system usually offered more I/O channels, a quieter electrical environment, and greater versatility and speed in adapting to different applications. Another alternative today is a system based on external USB data acquisition modules.
1.2.2.1 USB Data Acquisition Modules
USB data acquisition modules offer many advantages over plug-in boards, including plug-and-play capability, better noise immunity, and portability. USB modules do not require opening the PC, because they connect directly to the USB ports via standard cables. The PC identifies the presence of a new device, then prompts the user for the location of the necessary driver software for installation. Table 1-2 lists features of typical USB modules.
Table 1-2. Features of USB data acquisition modules
True plug and play means a simple connection between the PC and USB module.
Because USB modules are external to the PC, they offer performance benefits for noise-sensitive measurements.
USB 1.1 supports data acquisition rates up to 400kHz. USB 2.0 supports data acquisition rates up to 500kHz in each direction, similar to PCI boards.
Most USB modules handle a large array of I/O connections.
USB modules are compact and portable.
With USB hubs, additional modules can scale up measurement capacity.
USB modules can be installed or removed while the PC is running (hot swapping).
USB modules have simple power connections, either directly through the bus or through an external power source.
1.2.2.2 External Box/Rack Systems
Today, external data acquisition systems often take the form of a stand-alone test and measurement solution oriented toward industrial applications. The applications for which they are used typically demand more than a system based on a PC with plug-in boards can provide, or this type of architecture is simply inappropriate for the application. Modern external data acquisition systems are well suited to:
• Applications involving many types of sensors, high channel counts, or the need for stand-alone operation.
• Measurements demanding high sensitivity to low-level voltage signals, i.e., approximately 1mV or lower.
• Applications requiring tight, real-time process control.
Like the plug-in board based system, these external systems require the use of a computer for operation and data storage. However, the computer can be built up on boards, just as the instruments are, and incorporated into the board rack. There are several architectures for external industrial data acquisition systems, including VME, VXI, MXI, LXI, CompactPCI, and PXI. These systems use mechanically robust, standardized board racks and plug-in instrument modules that offer a full range of test and measurement functions. Some external system designs include microprocessor modules that support all the standard PC user-interface elements, including keyboard, monitor, mouse, and standard communication ports. Frequently, these systems can also run Microsoft Windows and other PC applications. In this case, a conventional PC may only be needed to develop programs or off-load data for manipulation or analysis. Other features of external board-based systems are listed in Table 1-3.
Table 1-3. Features of external data acquisition chassis
Multiple slots permit mixing-and-matching boards to support specialized acquisition and control tasks and higher channel counts.
Chassis offers an electrically quieter environment than a PC, allowing for more sensitive measurements.
Use of standard interfaces (IEEE-488, RS-232, USB, FireWire, Ethernet) can facilitate daisy chaining, networking, long distance acquisition, and use with non-PC computers.
Dedicated processor and memory can support critical “real-time” control applications or stand-alone acquisition independent of a PC.
Standardized modular architectures are mechanically robust, easy to configure, and provide for a variety of measurement and control functions. Required chassis, modules, and accessories are cost-effective for high channel counts.
1.2.2.2.1 Real-Time Data Acquisition and Control
Critical real-time control is an important issue in data acquisition and control systems. Applications that demand real-time control are typically better suited to the type of external system described previously than to systems based on PC plug-in boards. Although Microsoft Windows has become the standard operating system for PC applications, it is a non-deterministic operating system that cannot provide predictable response times in critical measurement and control applications. Therefore, the solution is to link the PC to a system that can operate autonomously and provide rapid, predictable responses to external stimuli. This is accomplished with external systems that have their own dedicated real-time processor and real-time deterministic operating system.
1.2.2.3 Discrete (Bench/Rack) Instruments
Originally, discrete electronic test instruments consisted mostly of single-channel meters, sources, and related instrumentation intended for general-purpose test applications. Over the years, the addition of more channels, communication interfaces and advances in instrument design, manufacturing, and measurement technology have extended the range and functionality of these
instruments. New products such as scanners, multiplexers, SourceMeter® instruments, counter/timers, nanovoltmeters, micro-ohmmeters, and other specialized instrumentation have made it possible to create computer-controlled test and measurement systems with a large number of channels that offer exceptional sensitivity, resolution, and throughput. Even low channel count instruments can be combined with switch matrices and multiplexers to lower the cost per channel by allowing one set of instruments to service many test inputs while preserving signal integrity. These instruments can also be combined with computers that contain plug-in data acquisition boards to form a hybrid test system. Table 1-4 lists other characteristics of these systems.
Table 1-4. Features of discrete instruments for data acquisition
Support measurement ranges and sensitivities generally beyond the limits of standard plug-in boards and external data acquisition systems.
Use standard interfaces (e.g., IEEE-488, RS-232, FireWire, USB, and Ethernet) that support long-distance acquisition, compatibility with non-IBM-compatible computers, or use with computers without available expansion slots.
Most suitable for measurement of voltage, current, resistance, capacitance, inductance, temperature, etc. May not be effective solutions for some types of specialized sensors or signal conditioning requirements.
Generally slower than plug-in boards or external data acquisition systems.
More expensive than standard data acquisition systems on a per-channel basis.
1.2.2.4 Hybrid Data Acquisition Systems
Hybrid systems are a relatively recent development in external data acquisition systems. A typical hybrid system combines PC plug-in boards, or an external data acquisition system, with discrete instruments to perform the desired test and measurement functions. However, some bench and rack-mounted instruments have features that include standard data acquisition functions and expansion capabilities in a compact, instrument-like package, making them stand-alone hybrid systems. These instruments have a user interface much like that of a typical DMM. Hybrid systems that combine plug-in boards, external systems, and discrete instruments may utilize a graphical user interface (GUI) that appears as a virtual instrument on the PC screen. In any case, typical hybrid test-system functions include AC and DC voltage and current measurements, temperature and frequency measurements, event counting, timing, triggering, and process control. They allow the user to get the best combination of speed and accuracy for the application.
Figure 1-1. Keithley PXI-based hybrid test system
Keithley's Series KPXI (Figure 1-1) is designed for high speed automated production testing as part of a hybrid test system using precision instruments. This series consists of simultaneous data acquisition boards, multi-function analog I/O boards, high speed analog output boards, a 130MS/s digitizer module, digital I/O modules, PXI chassis, embedded PC controllers, MXI bridges (for remote PC control), and GPIB interface cards. KPXI products are designed for optimal integration with precision instrumentation, such as Keithley's Series 2600 System SourceMeter® multi-channel I-V test solutions, which feature TSP® and TSP-Link® capabilities that let users take advantage of distributed programming and concurrent execution to perform high speed, automated test sequences across multiple channels, independent of a PC operating system and its associated communication delays. Table 1-5 summarizes the features of hybrid test systems.
Table 1-5. Features of a hybrid data acquisition and control system
Delivers accuracy (typically 18- to 22-bit A/D), range, and sensitivity of benchtop instruments (superior to standard data acquisition equipment).
Allows optimization of throughput speed by distributing measurement and control functions to the appropriate hardware.
May have a DMM front end with a digital display and front panel controls, or a PC GUI (often a virtual instrument panel), or both.
External instruments used in these systems often have built-in data and program storage memory for stand-alone data logging and process control.
Uses standard interfaces (IEEE-488, LXI, PXI, Ethernet, etc.) that support long-distance acquisition, and compatibility with non-PC computers.
Cost-effective on a per-channel basis.
Easily expanded or modified as test requirements change.