The intended readership includes undergraduates majoring in Information and Computing Science, Mathematics and Applied Mathematics, Systems Science, Mechanics, and Management, as well as undergraduates and graduate students taking control theory as a related course.
Chapter 0
Background
0.1 Development of Control Theory
The history of automatic control technology can be traced back thousands of years. However, it was not until the middle of the 20th century that automatic control theory was formed and developed as a separate discipline. In the 1930s-1940s, H. Nyquist, H. W. Bode, N. Wiener and many others made outstanding contributions to the formation of automatic control theory. After World War II, through the efforts of many scholars, a more complete frequency-domain theory was developed, building on practical experience and on knowledge of feedback and frequency response. In 1948, the root-locus method was introduced, completing the first stage of automatic control theory. This theory, based on the frequency-response and root-locus methods, is often called classical control theory.
Classical control theory takes the Laplace transform as its mathematical tool and single-input single-output (SISO) linear time-invariant systems as its main research object. It transforms the differential or difference equations describing physical systems into the complex domain, and uses transfer functions to analyze and design systems and to determine the structure and parameters of controllers in the frequency domain. This design approach suffers from certain drawbacks: it is restricted to SISO systems, and it is difficult to reveal the internal behavior of a system.
In the 1960s, the development of the aeronautics and aerospace industry stimulated the field of feedback control, and significant progress was made. R. Bellman proposed the dynamic programming method for optimal control. L. S. Pontryagin proved the maximum principle and further developed optimal control theory. R. E. Kalman systematically introduced the state-space method, including the concepts of controllability and observability, and the filtering theory. These works, which used ordinary differential equations (ODEs) as models for control systems, laid the foundations of modern control theory. This approach relying on ODEs is now often called modern control theory, to distinguish it from classical control theory, which uses the complex-variable methods of Bode and others.
In contrast to the frequency-domain analysis of classical control theory, modern control theory relies on first-order ordinary differential equations and uses the time-domain state-space representation. To abstract away from the number of inputs, outputs and states, the variables are expressed as vectors, and the differential and algebraic equations are written in matrix form (the latter being possible only when the dynamical system is linear). The state-space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs; otherwise, given the inputs and outputs, we would have to write down Laplace transforms to encode all the information about a system. Unlike the frequency-domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables; the state of the system can be represented as a vector within that space.
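The state-space idea above can be sketched in a few lines of code. The following is a minimal illustration, not taken from the text: a hypothetical two-state, one-input, one-output system in the form ẋ = Ax + Bu, y = Cx + Du, integrated with a forward-Euler step.

```python
import numpy as np

# Illustrative matrices for a hypothetical stable 2nd-order system
# (eigenvalues of A are -1 and -2); these are not from the text.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def simulate(x0, u, dt, steps):
    """Integrate x' = A x + B u with forward Euler; return the output history."""
    x = x0.astype(float)
    ys = []
    for _ in range(steps):
        ys.append((C @ x + D @ u).item())
        x = x + dt * (A @ x + B @ u)
    return ys

# Free response from the initial state x(0) = [1, 0]^T (zero input):
ys = simulate(np.array([[1.0], [0.0]]), np.array([[0.0]]), 0.01, 1000)
```

Because the state, input and output are vectors, exactly the same code would handle a system with many inputs and outputs; only the shapes of A, B, C, D change.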
In the late 1970s, control theory entered a period of diversified development. Large-scale system theory and intelligent control theory were established. Afterwards, new ideas and theories, such as the multivariable frequency-domain theory of H. H. Rosenbrock and the fuzzy control theory of L. A. Zadeh, formed new control concepts.
In recent years, with the rapid development of the economy, science and technology, automatic control theory and its applications have continued to deepen and expand. An enormous impulse has been given to the field, and new problems, ideas and methods are being proposed to meet the needs of practical engineering problems.
0.2 Main Contents of Modern Control Theory
In summary, the field of modern control theory mainly comprises the following branches.
1. Linear system theory
Linear system theory is the basis of modern control theory. Focusing on linear systems, it studies the laws of motion of the system states, together with the possibilities and methods for changing them, and it establishes and explains the system structure, parameters and behaviors, as well as the relationships between them. Linear system theory includes not only controllability, observability and stability analysis, but also state feedback, state estimation, compensator theory and design methods.
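As a small taste of linear system theory, the following sketch applies the Kalman rank test for controllability, rank[B, AB, ..., A^(n-1)B] = n, to an illustrative double-integrator system (the matrices are assumptions, not from the text).

```python
import numpy as np

# Hypothetical double-integrator example: two states, one input.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

n = A.shape[0]
# Controllability matrix [B, AB, ..., A^(n-1) B]
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
controllable = np.linalg.matrix_rank(ctrb) == n
```

For this example the controllability matrix is the 2x2 identity up to a column swap, so the system is controllable.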
2. Optimal filtering theory
The research object is stochastic systems described by stochastic difference or differential equations. The focus is on recovering the desired signals by applying suitable criteria to measured data that have been contaminated by stochastic noise.
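A toy sketch of the filtering idea: estimating a constant signal from noise-contaminated measurements with the recursive running average, which is the least-squares estimate for this simple case. (This only illustrates the idea; Kalman filtering handles the general dynamic case. The numbers below are illustrative.)

```python
import numpy as np

# Constant signal contaminated by zero-mean stochastic noise.
rng = np.random.default_rng(1)
signal = 3.0
measurements = signal + 0.5 * rng.standard_normal(5000)

estimate = 0.0
for k, z in enumerate(measurements, start=1):
    # Recursive sample mean: new estimate = old estimate + gain * innovation,
    # with gain 1/k; this is the simplest "filter" for a constant signal.
    estimate += (z - estimate) / k
```

The estimate converges to the true signal as more data are processed, even though each individual measurement is noisy.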
3. System identification
To study control systems, mathematical models must first be established. However, due to the complexity of systems, it is sometimes difficult to find a description of a system directly by analytical methods. System identification, relying on experimental input and output data, determines from a given set of models an equivalent model that has the same essential characteristics as the system.
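One simple identification scheme (among many; the first-order model and values below are illustrative assumptions) is least squares: postulate the model y[k+1] = a·y[k] + b·u[k] and fit a, b to input/output records.

```python
import numpy as np

# Generate noise-free "experimental" data from a hypothetical true system.
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5

u = rng.standard_normal(200)          # input record
y = np.zeros(201)                     # output record
for k in range(200):
    y[k + 1] = a_true * y[k] + b_true * u[k]

# Stack the regression y[k+1] = [y[k], u[k]] @ [a, b] and solve it in the
# least-squares sense to recover the model parameters from the data.
Phi = np.column_stack([y[:-1], u])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
```

With noise-free data the parameters are recovered essentially exactly; with noisy data the same least-squares machinery gives the best fit in the chosen model set, which is exactly the identification viewpoint described above.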
4. Optimal control
Optimal control seeks a control law for a given control system that optimizes a specific performance index in some sense. The constraints on the control reflect limitations of the physical system, and the performance index is a chosen criterion for evaluating the system. The maximum principle proposed by Pontryagin and the dynamic programming method of R. Bellman are two important methods for solving optimal control problems.
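Bellman's dynamic-programming idea can be sketched on a scalar discrete-time linear-quadratic problem (the model and weights below are illustrative, not from the text): minimize the sum of q·x² + r·u² subject to x[k+1] = a·x[k] + b·u[k], by running the Riccati recursion for the cost-to-go weight P backwards in time.

```python
# Scalar discrete-time LQR via backward dynamic programming.
# Hypothetical problem data: plant x[k+1] = a x[k] + b u[k],
# stage cost q x^2 + r u^2.
a, b, q, r = 1.0, 1.0, 1.0, 1.0
N = 50

P = q  # terminal cost-to-go weight
for _ in range(N):
    # Optimal feedback gain for this stage: u = -K x
    K = (b * P * a) / (r + b * P * b)
    # Riccati update of the cost-to-go weight
    P = q + a * P * a - a * P * b * K
```

For these numbers the recursion converges to P = (1 + sqrt(5))/2 and K = (sqrt(5) - 1)/2, the steady-state LQR solution; the backward sweep is exactly Bellman's principle of optimality applied stage by stage.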
5. Adaptive control
Adaptive control is a control law that can guarantee desired system behavior regardless of changes in the dynamics of the plant and the presence of disturbances. The basic objective of an adaptive controller is to maintain consistent system performance in the presence of uncertainties in the plant parameters, which may arise from nonlinear actuators, changes in the operating conditions of the plant, and disturbances acting on the plant. In general, there are two principal approaches to designing adaptive controllers, namely model-reference adaptive control (MRAC) and self-tuning regulators (STR).
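The MRAC idea can be illustrated with a toy gradient ("MIT rule") scheme on a hypothetical static plant y = kp·u (everything below is an assumption for illustration): an adjustable gain theta in the control u = theta·r is tuned so that the plant output tracks the reference-model output ym = km·r.

```python
# Toy MIT-rule adaptation of a single feedforward gain.
# Plant y = kp * u with kp treated as unknown by the controller;
# here kp appears in the update only for simplicity (in practice the
# adaptation rate gamma absorbs the unknown gain, whose sign must be known).
kp, km = 2.0, 1.0        # plant gain, reference-model gain
gamma, dt = 0.5, 0.01    # adaptation rate, integration step
theta = 0.0              # adjustable controller gain

for _ in range(5000):
    r = 1.0               # constant reference input
    y = kp * (theta * r)  # plant output under control u = theta * r
    ym = km * r           # reference-model output
    e = y - ym            # tracking error
    # MIT rule: descend the gradient of e^2 / 2 with respect to theta
    theta -= gamma * e * (kp * r) * dt
```

The gain converges to theta = km/kp, after which the plant reproduces the reference model's behavior, which is the basic goal of MRAC.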
6. Nonlinear system theory
Its main objective is to investigate nonlinear systems. In many cases, nonlinear system problems can be reduced to linear ones by linearization.
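Linearization can be sketched concretely. Using a pendulum as a hypothetical example (not from the text), with states x1 = angle and x2 = angular rate, the nonlinear dynamics are x1' = x2, x2' = -(g/l)·sin(x1); near the equilibrium x = 0, the Jacobian of the right-hand side gives the linear model x' = Ax.

```python
import numpy as np

g, l = 9.81, 1.0  # illustrative gravity and pendulum length

def f(x):
    """Nonlinear pendulum dynamics: x1' = x2, x2' = -(g/l) sin(x1)."""
    return np.array([x[1], -(g / l) * np.sin(x[0])])

def jacobian(f, x0, eps=1e-6):
    """Central-difference approximation of A = df/dx at the point x0."""
    n = len(x0)
    A = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        A[:, j] = (f(x0 + dx) - f(x0 - dx)) / (2 * eps)
    return A

# Linearize about the downward equilibrium x = 0;
# analytically A = [[0, 1], [-g/l, 0]] since sin(x1) ~ x1 near 0.
A = jacobian(f, np.zeros(2))
```

The resulting A matrix matches the hand linearization, and the linear model x' = Ax then describes small motions about the equilibrium.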
Chapter 1
Mathematical Description of Systems
In classical control theory, the transfer function description of physical systems allows us to use block diagrams to interconnect subsystems. However, it has certain basic limitations. This is due to the fact that it is an external description of a control system based on the input-output relation, and it is only applicable to linear, time-invariant SISO systems.
Nowadays, the time-domain method based on the state-space description is more popular. It is a powerful technique for the analysis and design of linear and nonlinear, time-invariant and time-varying systems, and it can easily be extended to MIMO systems. Furthermore, using this approach, a system can be designed for optimal performance with respect to given performance indices. Despite these benefits, the state-variable approach cannot completely replace the classical approach; in practice, we usually use both to overcome their respective weaknesses. In what follows, we discuss the basics of modern control theory.
To begin, the following example is given to explain the modeling process of control systems.
1.1 Example
A mass-spring-friction system is shown in Fig. 1.1. In the figure, k1 is the friction coefficient, k2 is the elastic coefficient, and f is the force applied to the body. The output of the system is the displacement x(t) of the mass M. Find the state equation of the system.
Solution  The friction force of the system is k1 ẋ, and the force of the spring is k2 x. According to Newton's second law,

M ẍ = f − k1 ẋ − k2 x.

Let x1 = x, x2 = ẋ; then we have the following equation set:

ẋ1 = x2,
ẋ2 = −(k2/M) x1 − (k1/M) x2 + (1/M) f.

The equation set above can be written in the following matrix form:

[ẋ1]   [   0        1   ] [x1]   [  0  ]
[ẋ2] = [ −k2/M   −k1/M ] [x2] + [ 1/M ] f,

y = [1  0] [x1  x2]ᵀ.
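The mass-spring-friction state equations can be checked numerically. The following sketch builds the A and B matrices from M, k1 and k2 (the numerical values are illustrative, since the text does not specify them) and simulates the response to a constant force.

```python
import numpy as np

# Illustrative parameter values for the mass-spring-friction system.
M, k1, k2 = 1.0, 0.5, 2.0

A = np.array([[0.0, 1.0],
              [-k2 / M, -k1 / M]])
B = np.array([[0.0],
              [1.0 / M]])
C = np.array([[1.0, 0.0]])  # the output is the displacement x1

def step(x, f, dt):
    """One forward-Euler step of x' = A x + B f."""
    return x + dt * (A @ x + B * f)

# Apply a constant force f = 1 from rest; physically, the displacement
# should settle at the static deflection f / k2 = 0.5.
x = np.zeros((2, 1))
for _ in range(20000):
    x = step(x, 1.0, 0.005)
```

The simulated displacement converging to f/k2 confirms that the matrix form above reproduces the expected static behavior of the spring.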
1.2 Basic Definitions
The basis of modern control theory is the concepts of state and state variables. In the following, we present some basic definitions of control theory.
Definition 1.1  A system is the behavior of something observed, something in process, a physical unit, etc.

Definition 1.2  The past, present and future circumstances of a system are called the state of the system.

Definition 1.3  A smallest set of variables that can fully determine the state of a system is called a set of state variables.

Definition 1.4  The input of a system is a set of variables that controls the system.

Definition 1.5  The output of a system is a measurable signal from the system.
A simple example of the state-variable description of a dynamic system is the RLC network shown in Fig. 1.2.
Suppose that the voltage u is the input to the RLC network. It follows from Kirchhoff's current and voltage laws that the current iL through the inductor L and the voltage uc across the capacitor C satisfy the following differential equations:

L (diL/dt) = u − R iL − uc,
C (duc/dt) = iL.

Let x1 = iL, x2 = uc; then we have the following equation set:

ẋ1 = −(R/L) x1 − (1/L) x2 + (1/L) u,
ẋ2 = (1/C) x1.

The equation set above can be written in the following matrix form:

[ẋ1]   [ −R/L   −1/L ] [x1]   [ 1/L ]
[ẋ2] = [  1/C     0  ] [x2] + [  0  ] u.
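The RLC state matrices can also be examined numerically. The sketch below uses illustrative element values (the text does not specify R, L and C) and checks a physical property: for positive R, L and C the circuit dissipates energy, so both eigenvalues of A should have negative real parts.

```python
import numpy as np

# Illustrative element values for the RLC network; x1 = iL, x2 = uc.
R, L, C = 1.0, 0.5, 0.25

A = np.array([[-R / L, -1.0 / L],
              [1.0 / C, 0.0]])
B = np.array([[1.0 / L],
              [0.0]])

# Eigenvalues of A determine the natural modes of the circuit.
eigs = np.linalg.eigvals(A)
```

For these values the eigenvalues are a complex-conjugate pair with negative real part, i.e. a damped oscillation, which matches the behavior expected of a lossy RLC circuit.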
For a specific system, the number of state variables is fixed and equal to the order of the system. The number of state variables can be determined from the number of initial conditions needed to solve the differential equation, or from the number of first-order differential equations needed to define the system.
1.3 System Descriptions
Consider a linear system described by
ẋ(t) = A(t)x(t) + B(t)u(t)   (1-3-1a)
y(t) = C(t)x(t) + D(t)u(t)   (1-3-1b)
where x(t) ∈ Rn is the state vector, u(t) ∈ Rm is the input vector, and y(t) ∈ Rl is the output vector. A(t) ∈ Rn×n is the coefficient matrix, B(t) ∈ Rn×m is the control matrix, C(t) ∈ Rl×n is the output matrix, and D(t) ∈ Rl×m is the direct transmission matrix.
System (1-3-1) is called a linear system; equation (1-3-1a) is the state equation, and equation (1-3-1b) is the output equation. If A(t), B(t), C(t) and D(t) are constant matrices, system (1-3-1) is called a linear time-invariant system. For convenience, the linear time-invariant system (1-3-1) can be denoted by (A, B, C, D).
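For a time-invariant (A, B, C, D), the connection back to the classical description is the transfer function G(s) = C(sI − A)⁻¹B + D. The sketch below evaluates it numerically for an illustrative system (the matrices are assumptions, not from the text).

```python
import numpy as np

# Illustrative time-invariant system (A, B, C, D); by hand one can check
# that its transfer function is G(s) = 1 / (s^2 + 3s + 2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def transfer_function(s):
    """Evaluate G(s) = C (sI - A)^{-1} B + D at a given frequency s."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

g0 = transfer_function(0.0).item()  # DC gain, G(0) = 1/2 for this system
g1 = transfer_function(1.0).item()  # G(1) = 1/6 for this system
```

This shows concretely that the state-space quadruple contains the same input-output information as the classical transfer function, while additionally exposing the internal state.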