
# Optimal control

Optimal control theory is a branch of mathematics concerned with finding control policies for dynamical systems by means of optimization.

The control which minimizes a certain cost functional is called the optimal control. It can be derived using Pontryagin's minimum principle, or by solving the Hamilton-Jacobi-Bellman equation.

Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. A simple example should clarify the issue: consider a car travelling in a straight line over a hilly road. The question is, how should the driver press the accelerator pedal in order to minimize the total travelling time? In this example, the term control law refers specifically to the way in which the driver presses the accelerator and shifts the gears. The system consists of both the car and the hilly road, and the optimality criterion is the minimization of the total travelling time.

In a more general framework, given a system with input u(t), output y(t) and state x(t), one can define a cost functional: a measure of performance that the control designer seeks to minimize. In the previous example, a suitable cost functional would be a mathematical expression giving the travelling time as a function of the speed, the geometry of the road, and the initial conditions of the system. One cost functional commonly used by control engineers is:

$J=\int_0^\infty ( x^T(t)Qx(t) + u^T(t)Ru(t) )\,dt.$

where the matrices Q and R are positive-semidefinite and positive-definite, respectively. This cost functional is thought of as penalizing both the control energy (measured as a quadratic form) and the time it takes the system to reach the zero state. The functional may seem of limited use, since it assumes the operator is driving the system to the zero state, and hence driving the output to zero. This is indeed the case; however, the problem of driving the output to an arbitrary desired level can be solved once the zero-output problem is, and it can be shown that this secondary problem has a very straightforward solution. The optimal control problem defined by the previous functional is usually called the state regulator problem, and its solution is the linear quadratic regulator (LQR), which is simply a feedback gain matrix of the form

$u(t)=-K(t)\cdot x(t).$

where K is a properly dimensioned matrix obtained by solving the continuous-time Riccati equation. This problem was elegantly solved by R. Kalman (1960).
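As a sketch of how the LQR gain can be computed in practice, the snippet below uses SciPy's `solve_continuous_are` to solve the algebraic Riccati equation for the infinite-horizon case (where K is constant) and then forms K = R⁻¹BᵀP. The double-integrator plant and identity weights are illustrative choices, not taken from the text above.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative plant: a double integrator (x1 = position, x2 = velocity)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Weights from the cost functional J = ∫ (xᵀQx + uᵀRu) dt
Q = np.eye(2)          # positive-semidefinite state penalty
R = np.array([[1.0]])  # positive-definite control penalty

# Solve the continuous-time algebraic Riccati equation
#   AᵀP + PA − PBR⁻¹BᵀP + Q = 0  for P
P = solve_continuous_are(A, B, Q, R)

# Optimal feedback gain for u(t) = −K x(t):  K = R⁻¹BᵀP
K = np.linalg.solve(R, B.T @ P)
print(K)  # for this plant and weights, K = [[1, √3]]
```

With this gain, the closed-loop matrix A − BK has eigenvalues in the left half-plane, so the state is driven to zero as the cost functional demands.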
