# All Science Fair Projects


# Control theory

In engineering and mathematics, control theory deals with the behaviour of dynamical systems over time. The desired output of a system is called the reference. When one or more output variables of a system need to follow a certain behaviour over time, a controller manipulates the inputs of the system to obtain the desired effect at its output.


## An example

As an example, consider cruise control. In this case, the system is a car. The goal of cruise control is to keep the car at a constant speed. Here, the output variable of the system is the speed of the car. The primary means to control the speed of the car is the amount of gas being fed into the engine.

A simple way to implement cruise control is to lock the position of the throttle at the moment the driver engages cruise control. This works well if the car is driving on perfectly flat terrain. On hilly terrain, however, the car will slow down when going uphill and accelerate when going downhill, behaviour the driver is likely to find highly undesirable.
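The uphill problem can be seen in a minimal simulation. The toy longitudinal car model below (engine gain, drag coefficient, and road grade are all illustrative assumptions, not real car data) freezes the throttle at the value that holds the set speed on flat ground, then applies the same throttle on a climb:

```python
# Toy open-loop cruise control: the throttle is locked at the value that
# holds 25 m/s on flat ground, then the same throttle is used on a hill.
# Model and constants are illustrative assumptions, not real car data.

def simulate_open_loop(grade, throttle=0.5, v0=25.0, steps=200, dt=0.1):
    """Euler-integrate dv/dt = k*throttle - drag*v - g*grade."""
    k, drag, g = 2.0, 0.04, 9.81   # assumed engine gain, drag, gravity
    v = v0
    for _ in range(steps):
        v += dt * (k * throttle - drag * v - g * grade)
    return v

flat = simulate_open_loop(grade=0.0)    # stays at the set speed
hill = simulate_open_loop(grade=0.05)   # same throttle, car slows down
print(f"flat: {flat:.1f} m/s, uphill: {hill:.1f} m/s")
```

With the throttle frozen, the car has no way to react: the same input that balances drag on flat ground cannot also balance drag plus gravity on a grade.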

This type of controller is called an open-loop controller because there is no direct connection between the output of the system and its input. One of the main disadvantages of this type of controller is the lack of sensitivity to the dynamics of the system under control.

## Classical control theory

To avoid the problems of the open-loop controller, control theory introduces feedback. The output of the system y is fed back and compared to the reference value r. The controller C then uses the difference between the reference and the output, the error e, to adjust the input u to the system under control P. This is shown in the figure. This kind of controller is called a closed-loop controller or feedback controller.

A simple feedback control loop
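The loop in the figure can be sketched numerically. Here C is a simple proportional gain (an assumed choice; the figure does not fix a particular controller), driving the same kind of toy car model as before, with all constants illustrative:

```python
# Closed-loop cruise control: each step, the controller measures the
# error e = r - y and sets the input u = Kp * e (proportional control,
# an illustrative choice). Car model constants are assumptions.

def simulate_closed_loop(grade, r=25.0, Kp=2.0, steps=600, dt=0.1):
    k, drag, g = 2.0, 0.04, 9.81   # assumed engine gain, drag, gravity
    y = r                          # start at the set speed
    for _ in range(steps):
        e = r - y                  # error fed to the controller C
        u = Kp * e                 # controller output drives the plant P
        y += dt * (k * u - drag * y - g * grade)
    return y

v_hill = simulate_closed_loop(grade=0.05)
print(f"uphill with feedback: {v_hill:.1f} m/s")
```

Unlike the locked throttle, the feedback loop pushes the throttle up as soon as the hill slows the car, so the speed stays close to the reference (a small steady-state error remains with pure proportional control).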

If we assume the controller C and the plant P are linear, time-invariant, and single-input single-output, we can analyze the system above by applying the Laplace transform to the variables. This gives us the following relations:

$Y(s) = P(s) U(s)\,\!$
$U(s) = C(s) E(s)\,\!$
$E(s) = R(s) - Y(s)\,\!$

Solving for Y(s) in terms of R(s), we obtain:

$Y(s) = \left( \frac{PC}{1 + PC} \right) R(s)$

The expression PC/(1 + PC) is referred to as the closed-loop transfer function of the system. If we can ensure $|PC| \gg 1$, then Y(s) is approximately equal to R(s). This means we control the output simply by setting the reference.
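The high-gain effect can be checked numerically. Take an assumed first-order plant P(s) = 1/(s + 1) and a pure-gain controller C(s) = K (both illustrative choices), and evaluate the closed-loop gain PC/(1 + PC) at s = 0 as K grows:

```python
# Closed-loop DC gain PC/(1+PC) at s = 0 for P(s) = 1/(s+1), C(s) = K.
# (Plant and controller are illustrative assumptions.)
def dc_gain(K, s=0.0):
    P = 1.0 / (s + 1.0)
    C = K
    return (P * C) / (1.0 + P * C)

for K in (1, 10, 100, 1000):
    print(K, dc_gain(K))   # approaches 1 as K grows, so Y ≈ R
```

At K = 1 the output only reaches half the reference; by K = 1000 the DC gain is within 0.1% of one, illustrating why PC ≫ 1 makes Y track R.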

## Stability

Stability in control theory usually means bounded-input, bounded-output stability: for any bounded input applied over any amount of time, the output also remains bounded. Mathematically, a linear time-invariant system is stable when all the poles of its transfer function lie in the open left half of the complex plane. More simply put, the real part of every complex number that makes the transfer function become infinite has to be negative for the whole system to be stable. Systems whose response neither decays nor grows over time are referred to as marginally stable; their poles lie on the imaginary axis.
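The pole condition is easy to check for a concrete case. For the made-up transfer function 1/(s² + 3s + 2), the poles are the roots of the denominator, which the quadratic formula gives directly:

```python
import cmath

# Poles of the transfer function 1/(s^2 + 3s + 2): the roots of the
# denominator s^2 + 3s + 2 (an illustrative example system).
a, b, c = 1.0, 3.0, 2.0
disc = cmath.sqrt(b * b - 4 * a * c)
poles = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]

# Stable if and only if every pole has a negative real part.
stable = all(p.real < 0 for p in poles)
print("poles:", poles, "stable:", stable)
```

Both poles, s = −1 and s = −2, lie in the left half-plane, so this system is stable; moving either pole to a positive real part would make the response grow without bound.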

## Controllability and observability

See controllability and observability.

## Appendix A

Derivation of transfer function:

$Y(s) = P(s) U(s)\,\!$ (1)

$U(s) = C(s) E(s)\,\!$ (2)

$E(s) = R(s) - Y(s)\,\!$ (3)

Substituting (2) into (1): $Y = P C E\,\!$ (4)

Substituting (3) into (4): $Y = P C ( R - Y )\,\!$

Expanding out $(R - Y)$: $Y = P C R - P C Y\,\!$

Moving $P C Y$ to the left-hand side: $Y + P C Y = P C R\,\!$

Consolidating the common term $Y$: $Y ( 1 + P C ) = P C R\,\!$

Isolating $Y$: $Y = \frac{P C R}{1 + P C}$, that is, $Y = \frac{P C}{1 + P C} R$ (5)
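The algebra can be spot-checked numerically: for scalar gains P and C (arbitrary illustrative values), the closed form (5) should satisfy all three loop equations at once.

```python
# Numeric spot-check of (5): plug the closed-form Y back through the
# loop equations E = R - Y, U = C*E, Y = P*U and confirm consistency.
# P, C, R values are arbitrary illustrative choices.
P, C, R = 3.0, 7.0, 2.0

Y = (P * C) / (1.0 + P * C) * R   # closed form (5)
E = R - Y                         # equation (3)
U = C * E                         # equation (2)

assert abs(Y - P * U) < 1e-12     # equation (1) holds
print("Y =", Y)
```

Since Y computed from (5) reproduces equation (1) exactly, the derivation is consistent for these values.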

03-10-2013 05:06:04