Science Fair Project Encyclopedia
Optimal control theory is a mathematical field concerned with control policies that can be deduced using optimization algorithms.
Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. A simple example should clarify the issue: consider a car travelling on a straight line through a hilly road. The question is: how should the driver press the accelerator pedal in order to minimize the total travelling time? In this example, the term control law refers specifically to the way in which the driver presses the accelerator and shifts the gears. The system consists of both the car and the hilly road, and the optimality criterion is the minimization of the total travelling time.
In a more general framework, given a system with input u(t), output y(t) and state x(t), one can define what is called a cost functional, which is simply a measure of performance that the control designer seeks to minimize. In the previous example, a proper cost functional would be a mathematical expression giving the travelling time as a function of the speed, geometrical considerations, and initial conditions of the system. One common cost functional used by control engineers when designing control systems is

J = ∫₀^∞ [ xᵀ(t) Q x(t) + uᵀ(t) R u(t) ] dt,
where the matrices Q and R are positive-semidefinite and positive-definite, respectively. Note that this cost functional is thought of in terms of penalizing the control energy (measured as a quadratic form) and the time it takes the system to reach the zero state. This functional might seem rather limited, since it assumes that the operator is driving the system to the zero state, and hence driving the output of the system to zero. This is indeed the case; however, the problem of driving the output to a desired nonzero level can be solved after the zero-output one is. In fact, it can be proved that this secondary problem can be solved in a very straightforward manner. The optimal control problem defined with the previous functional is usually called the State Regulator Problem, and its solution is the Linear Quadratic Regulator (LQR), which is no more than a feedback matrix gain of the form

u(t) = −K x(t),
where K is a properly dimensioned matrix obtained from the solution of the continuous-time Riccati equation. This problem was elegantly solved by R. Kalman (1960).
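In the scalar case the Riccati equation reduces to a quadratic in one unknown, so the optimal gain can be computed in closed form. The following sketch illustrates this for a hypothetical scalar plant ẋ = a·x + b·u with cost weights q and r; the function name lqr_gain_scalar is illustrative, and matrix-valued problems would instead require a numerical Riccati solver:

```python
import math

def lqr_gain_scalar(a, b, q, r):
    """Solve the scalar continuous-time algebraic Riccati equation
    2*a*p - (b**2 / r) * p**2 + q = 0 for its positive root p,
    then return the optimal feedback gain K = b * p / r."""
    p = (a + math.sqrt(a * a + (b * b) * q / r)) * r / (b * b)
    return b * p / r

# Example: unstable scalar plant xdot = x + u, with Q = R = 1.
K = lqr_gain_scalar(a=1.0, b=1.0, q=1.0, r=1.0)
print(K)              # optimal gain: 1 + sqrt(2)
print(1.0 - 1.0 * K)  # closed-loop pole a - b*K is negative: stable
```

Note that the feedback u = −K·x moves the closed-loop pole from a = 1 (unstable) to a − bK < 0, which is exactly the stabilizing behaviour the state regulator is designed to deliver.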
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.