Introduction to Intelligent Control

Chapter

Abstract

The term “intelligent control” may be loosely used to denote a control technique that can be carried out using the “intelligence” of a human who is knowledgeable in the particular domain of control. In this definition, constraints pertaining to limitations of sensory and actuation capabilities and information processing speeds of humans are not considered. It follows that if a human in the control loop can properly control a plant, then that system would be a good candidate for intelligent control.

Information abstraction and knowledge-based decision making that incorporates abstracted information are considered important in intelligent control. Unlike conventional control, intelligent control techniques possess capabilities of effectively dealing with incomplete information concerning the plant and its environment, and unexpected or unfamiliar conditions. The term “adaptive control” is used to denote a class of control techniques where the parameters of the controller are changed (adapted) during control, utilizing observations on the plant (i.e., with sensory feedback), to compensate for parameter changes, other disturbances, and unknown factors of the plant. Combining these two terms, one may view “intelligent adaptive control” as those techniques that rely on intelligent control for proper operation of a plant, particularly in the presence of parameter changes and unknown disturbances.

There are several artificial intelligence techniques that can serve as a basis for the development of intelligent systems, namely expert control, fuzzy logic, neural networks, and intelligent search algorithms.

In this class, we will study some fundamental techniques and some application examples of expert control, fuzzy logic, neural networks, and intelligent search algorithms. The main focus here will be their use in intelligent control.

These artificial intelligence techniques should be integrated with modern control theory to develop intelligent control systems.

In this class, we study intelligent control in four parts: expert control, fuzzy logic and control, neural networks and control, and genetic algorithms.

1.1 Expert Control

Expert control is a control strategy that makes use of expert knowledge and experience. It derives from expert systems and was proposed by K.J. Astrom in 1986 [1]; its main idea is to design control strategies based on the knowledge and experience of domain experts.
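
The idea can be illustrated with a highly simplified sketch. The rules below are invented for illustration and are not Astrom's actual rule base: a supervisor encodes operator experience as if-then rules that adjust a controller gain based on observed closed-loop behavior.

```python
# Hypothetical expert-control supervisor: if-then rules adjust a gain.

def expert_supervisor(overshoot, settling_slow, gain):
    # Rule 1: IF the response overshoots THEN reduce the gain.
    if overshoot:
        return gain * 0.8
    # Rule 2: IF settling is too slow THEN increase the gain.
    if settling_slow:
        return gain * 1.2
    # Otherwise: keep the current gain.
    return gain

print(expert_supervisor(True, False, 1.0))
```

In a real expert controller the rule base would be far richer and would operate on features extracted from the measured response, but the structure (symbolic rules acting as a supervisor over a conventional loop) is the same.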

1.2 Fuzzy Logic Control

Fuzzy logic is useful in representing human knowledge in a specific domain of application, and in reasoning with that knowledge to make useful inferences or actions.

In particular, fuzzy logic may be employed to represent, as a set of “fuzzy rules,” the knowledge of a human controlling a plant. This is the process of knowledge representation. Then, a rule of inference in fuzzy logic may be used according to this “fuzzy” knowledge base, to make control decisions for a given set of plant observations. This task concerns “knowledge processing.” In this sense, fuzzy logic in intelligent control serves to represent and process the control knowledge of a human in a given plant.
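
The representation and processing steps above can be made concrete with a minimal sketch. The membership functions, the three-rule base, and the numeric ranges below are invented for illustration: a scalar error is fuzzified, matched against the rules, and defuzzified into a control action by a weighted average.

```python
# Minimal fuzzy rule-based controller (hypothetical rules and ranges).

def tri(x, a, b, c):
    """Triangular membership function with peak at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(error):
    # Fuzzification: degrees of membership in "negative", "zero", "positive".
    mu_neg = tri(error, -2.0, -1.0, 0.0)
    mu_zero = tri(error, -1.0, 0.0, 1.0)
    mu_pos = tri(error, 0.0, 1.0, 2.0)
    # Rule base: IF error is negative THEN u = -1; zero -> 0; positive -> +1.
    # Defuzzification: weighted average of the rule consequents.
    num = mu_neg * (-1.0) + mu_zero * 0.0 + mu_pos * 1.0
    den = mu_neg + mu_zero + mu_pos
    return num / den if den > 0 else 0.0

print(fuzzy_control(0.5))
```

Each rule contributes its consequent in proportion to how strongly its antecedent is satisfied, so the controller interpolates smoothly between the discrete actions a human operator might describe.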

There are two important ideas in fuzzy systems theory:
  • The real world is too complicated for precise descriptions to be obtained; therefore, approximation (or fuzziness) must be introduced in order to obtain a reasonable model.

  • As we move into the information era, human knowledge becomes increasingly important. We need a theory to formulate human knowledge in a systematic manner and put it into engineering systems, together with other information like mathematical models and sensory measurements.

By the fuzzy universal approximation theorem [2], a fuzzy system can approximate any continuous nonlinear function to arbitrary accuracy, a property that can be used to design adaptive fuzzy controllers. By adjusting a set of weighting parameters of a fuzzy system, one may approximate an arbitrary nonlinear function to a required degree of accuracy.
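
A toy sketch of this parameter-adjustment idea follows. The target function g(x) = x², the grid of rule centers, and the triangular memberships are all arbitrary choices for illustration: a zero-order Takagi-Sugeno fuzzy system whose consequent weights are set to the target values at the rule centers already approximates g closely.

```python
# Fuzzy system as a function approximator (illustrative target g(x) = x**2).

def tri(x, a, b, c):
    """Triangular membership with peak at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def make_fuzzy_approximator(g, centers, width):
    # One rule per center; each consequent weight is g at that center.
    weights = [g(c) for c in centers]
    def f(x):
        mus = [tri(x, c - width, c, c + width) for c in centers]
        den = sum(mus)
        return sum(w * m for w, m in zip(weights, mus)) / den if den else 0.0
    return f

def g(x):
    return x * x

centers = [i * 0.25 for i in range(-8, 9)]      # rule centers on [-2, 2]
f = make_fuzzy_approximator(g, centers, 0.25)

err = max(abs(f(k / 100) - g(k / 100)) for k in range(-200, 201))
print("max approximation error on [-2, 2]:", err)
```

With overlapping triangular memberships whose width equals the grid spacing, the rule set forms a partition of unity, so refining the grid (more rules, i.e., more adjustable weights) drives the approximation error down as the theorem promises.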

1.3 Neural Network and Control

Artificial neural networks are massively connected networks that can be trained to represent complex nonlinear functions at a high level of accuracy. They are analogous to the neuron structure in a human brain.

It is well known that biological systems can perform complex tasks without recourse to explicit quantitative operations. In particular, biological organisms are capable of learning gradually over time. This learning capability reflects the ability of biological neurons to learn through exposure to external stimuli and to generalize. Such properties of nervous systems make them attractive as computation models that can be designed to process complex data. For example, the learning capability of biological organisms from examples suggests possibilities for machine learning.

Neural networks, or more specifically, artificial neural networks, are mathematical models inspired by our understanding of biological nervous systems.

They are attractive as computation devices that can accept a large number of inputs and learn solely from training samples. As mathematical models for biological nervous systems, artificial neural networks are useful in establishing relationships between inputs and outputs of any kind of system. Roughly speaking, a neural network is a collection of artificial neurons. An artificial neuron is a mathematical model of a biological neuron in its simplest form. From our understanding, biological neurons are viewed as elementary units for information processing in any nervous system. Without claiming its neurobiological validity, the mathematical model of an artificial neuron is based on the following theses:
  1. Neurons are the elementary units in a nervous system at which information processing occurs.

  2. Incoming information is in the form of signals that are passed between neurons through connection links.

  3. Each connection link has a proper weight that multiplies the signal transmitted.

  4. Each neuron has an internal action, depending on a bias or firing threshold, resulting in an activation function being applied to the weighted sum of the input signals to produce an output signal.

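The four theses above can be sketched as a single artificial neuron. The particular weights, bias, and logistic activation below are illustrative choices, not part of the general model.

```python
import math

# A single artificial neuron: weighted sum of inputs plus bias (theses
# 2-4), passed through a logistic activation function.

def neuron(inputs, weights, bias):
    # Each connection weight multiplies its incoming signal; the neuron
    # applies an activation to the weighted sum plus the bias.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))       # logistic (sigmoid) activation

out = neuron([1.0, 0.5], [0.8, -0.4], bias=0.1)
print(out)
```

A network is then a collection of such units, with the output signals of one layer serving as the input signals of the next.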
Since the idea of the computational abilities of networks composed of simple models of neurons was introduced in the 1940s, neural network techniques have undergone great developments and have been successfully applied in many fields such as learning, pattern recognition, signal processing, modeling, and system control. Their major advantages of highly parallel structure, learning ability, nonlinear function approximation, fault tolerance, and efficient analog VLSI implementation for real-time applications greatly motivate the usage of neural networks in nonlinear system identification and control.

In many real-world applications there are nonlinearities, unmodeled dynamics, unmeasurable noise, and multiloop structures, all of which pose problems for engineers implementing control strategies.

A BP (backpropagation) or RBF (radial basis function) neural network can approximate any continuous nonlinear function [3], a property that can be used to design adaptive neural network controllers. By adjusting a set of weighting parameters of a neural network, one may approximate an arbitrary nonlinear function to a required degree of accuracy.
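
As a toy sketch of this weight-adjustment idea, the output weights of a small RBF network below are tuned by gradient descent so that the network approximates a nonlinearity; the target sin(x), the Gaussian centers, and the learning rate are all arbitrary illustrative choices.

```python
import math

# RBF network with fixed Gaussian centers; only the output weights are
# adapted, by LMS-style gradient descent on the approximation error.

centers = [-3.0 + 0.5 * i for i in range(13)]   # fixed centers on [-3, 3]
width = 0.5
w = [0.0] * len(centers)                        # adjustable output weights

def phi(x):
    # Gaussian radial basis functions.
    return [math.exp(-((x - c) / width) ** 2) for c in centers]

def net(x):
    return sum(wi * pi for wi, pi in zip(w, phi(x)))

lr = 0.1
for epoch in range(200):                        # sweep a training grid
    for k in range(-30, 31):
        x = k / 10.0
        p = phi(x)
        e = sum(wi * pi for wi, pi in zip(w, p)) - math.sin(x)
        for i in range(len(w)):                 # w <- w - lr * e * phi(x)
            w[i] -= lr * e * p[i]

err = max(abs(net(k / 10.0) - math.sin(k / 10.0)) for k in range(-30, 31))
print("max training-grid error:", err)
```

In adaptive neural control the same update runs online, driven by the tracking error, so the network gradually cancels the unknown plant nonlinearity.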

1.4 Intelligent Search Algorithm

There are several intelligent search algorithms; the classical ones include the genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE).

Genetic algorithms (GA) are commonly used to generate high-quality solutions to optimization and search problems by relying on bio-inspired operators such as mutation, crossover, and selection. The basic principle of GA was first laid down by Holland in 1962 [4]. GA simulates those processes in natural populations that are essential to evolution. Genetic algorithms belong to the area of evolutionary computing: they represent an optimization approach in which a population of candidate solutions is “evolved,” retaining the “most fit” members through a procedure analogous to biological natural selection, crossover, and mutation. It follows that GAs are applicable in intelligent control, particularly when optimization is an objective.
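
The three operators can be seen in a bare-bones real-coded GA. Everything below (the objective f(x) = -(x - 3)², population size, mutation rate) is an illustrative assumption, not a specific published variant.

```python
import random

# Minimal real-coded GA: tournament selection, blend crossover, Gaussian
# mutation, evolving toward the maximum of f(x) = -(x - 3)**2 at x = 3.

random.seed(1)

def fitness(x):
    return -(x - 3.0) ** 2

pop = [random.uniform(-10.0, 10.0) for _ in range(30)]
for gen in range(100):
    new_pop = []
    for _ in range(len(pop)):
        # Selection: the fitter of two random individuals becomes a parent.
        p1 = max(random.sample(pop, 2), key=fitness)
        p2 = max(random.sample(pop, 2), key=fitness)
        a = random.random()
        child = a * p1 + (1 - a) * p2           # blend crossover
        if random.random() < 0.1:               # occasional mutation
            child += random.gauss(0.0, 0.5)
        new_pop.append(child)
    pop = new_pop

best = max(pop, key=fitness)
print("best solution:", best)
```

In a control setting, each individual would instead encode controller parameters (e.g., PID gains), and fitness would be a performance index computed from a simulation of the closed loop.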

Particle swarm optimization (PSO) is originally attributed to Kennedy and Eberhart [5] and was first intended for simulating social behavior. PSO is an evolutionary computation technique whose basic idea is to find the optimal solution through collaboration and information sharing among individuals in a swarm. The advantages of PSO are simplicity, ease of implementation, and few parameters to adjust. At present, it has been widely used in function optimization, neural network training, fuzzy system control, etc.
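
The collaboration-and-sharing mechanism is visible in a minimal sketch of the standard inertia-weight form of PSO; the objective f(x) = (x - 2)², swarm size, and constants below are illustrative assumptions.

```python
import random

# Minimal PSO: each particle is pulled toward its own best position and
# the swarm's shared best position while searching for the minimum of f.

random.seed(2)

def f(x):
    return (x - 2.0) ** 2

n = 20
pos = [random.uniform(-10.0, 10.0) for _ in range(n)]
vel = [0.0] * n
pbest = pos[:]                                  # each particle's best so far
gbest = min(pos, key=f)                         # swarm's best (shared info)

w, c1, c2 = 0.7, 1.5, 1.5                       # inertia, cognitive, social
for _ in range(100):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        # Velocity update: inertia + pull to personal and global bests.
        vel[i] = (w * vel[i]
                  + c1 * r1 * (pbest[i] - pos[i])
                  + c2 * r2 * (gbest - pos[i]))
        pos[i] += vel[i]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i]
            if f(pos[i]) < f(gbest):
                gbest = pos[i]

print("best solution:", gbest)
```

The only information shared between particles is gbest, which is what makes the method so simple to implement and parallelize.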

Differential evolution (DE) is originally due to Storn and Price [6]. In evolutionary computation, DE is a method that optimizes a problem by iteratively trying to improve a candidate solution with respect to a given measure of quality. DE operates on multidimensional real-valued functions but does not use the gradient of the problem being optimized, which means DE does not require the optimization problem to be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods. DE can therefore also be used on optimization problems that are not even continuous, are noisy, change over time, etc.

DE optimizes a problem by maintaining a population of candidate solutions and creating new candidate solutions by combining existing ones according to its simple formulae, and then keeping whichever candidate solution has the best score or fitness on the optimization problem at hand. In this way, the optimization problem is treated as a black box that merely provides a measure of quality given a candidate solution and the gradient is therefore not needed. DE has been applied in parallel computing, multiobjective optimization, constrained optimization, etc.
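
The combine-and-keep-the-best scheme described above can be sketched as a minimal DE/rand/1/bin loop; the sphere objective, population size, and the constants F and CR below are illustrative assumptions.

```python
import random

# Minimal DE/rand/1/bin: trial vectors are built from scaled differences
# of existing candidates, then the better of parent and trial survives.
# The objective is treated as a black box (no gradients needed).

random.seed(3)

def f(x):
    return sum(v * v for v in x)                # 2-D sphere, minimum at 0

dim, pop_size, F, CR = 2, 15, 0.8, 0.9
pop = [[random.uniform(-5.0, 5.0) for _ in range(dim)]
       for _ in range(pop_size)]

for _ in range(200):
    for i in range(pop_size):
        # Pick three distinct individuals other than the parent i.
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        jrand = random.randrange(dim)           # force at least one new gene
        trial = []
        for j in range(dim):
            if random.random() < CR or j == jrand:
                trial.append(a[j] + F * (b[j] - c[j]))  # differential step
            else:
                trial.append(pop[i][j])                 # keep parent gene
        if f(trial) <= f(pop[i]):               # greedy one-to-one selection
            pop[i] = trial

best = min(pop, key=f)
print("best value:", f(best))
```

Because selection compares each trial only against its own parent, the population never gets worse, and only function values (never derivatives) are ever consulted.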

Summarizing, the biological analogies of fuzzy, neural, and intelligent search algorithms can be described as follows: Fuzzy techniques attempt to approximate human knowledge and the associated reasoning process; neural networks are a simplified representation of the neuron structure of a human brain; and intelligent search algorithms follow procedures that are crudely similar to the process of evolution in biological species.

Modern industrial plants and technological products are often required to perform complex tasks with high accuracy under ill-defined conditions. Conventional control techniques may not be effective for such systems, whereas intelligent control has tremendous potential. The emphasis of this class is on practical applications of intelligent control, primarily using fuzzy logic, neural network, and intelligent search algorithm techniques. The remainder of the class gives an introduction to some fundamental techniques of fuzzy logic, neural networks, and intelligent search algorithms.

References

  1. K.J. Astrom, J.J. Anton, K.E. Arzen, Expert control. Automatica 22(3), 277–286 (1986)
  2. L.X. Wang, Fuzzy systems are universal approximators, in Proceedings of IEEE Conference on Fuzzy Systems (1992), pp. 1163–1170
  3. K. Hornik, M. Stinchcombe, H. White, Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
  4. F. Jin, W. Chen, The father of the genetic algorithms—Holland and his scientific work. J. Dialect. Nat. (2007)
  5. J. Kennedy, R. Eberhart, Particle swarm optimization, in Proceedings of IEEE International Conference on Neural Networks (1995), pp. 1942–1948
  6. R. Storn, K. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J. Global Optim. 11, 341–359 (1997)

Copyright information

© Tsinghua University Press, Beijing and Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Beihang University, Beijing, China
