Optimal Control: Definition and 14 Threads

Optimal control theory is a branch of mathematical optimization that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering, and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the Moon with minimum fuel expenditure. Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy. A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory.

Optimal control is an extension of the calculus of variations and is a mathematical optimization method for deriving control policies. The method is largely due to the work of Lev Pontryagin and Richard Bellman in the 1950s, after contributions to the calculus of variations by Edward J. McShane. Optimal control can be seen as a control strategy in control theory.

View More On Wikipedia.org
  1. J

    The use of Riccati equations in optimal control theory

    I know that for a linear system in the form ##\dot{x}=Ax+Bu##, ##y=Cx+Du##, the optimal control problem can be put in the form of a matrix Riccati equation. But is there really an advantage to doing so?
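    One concrete advantage: for the infinite-horizon linear-quadratic problem, solving the algebraic Riccati equation once yields the optimal state-feedback gain directly, with no trajectory optimization needed. Below is a minimal sketch using SciPy's continuous-time Riccati solver; the double-integrator system and the identity weights are my own illustrative choices, not from the post.

    ```python
    # Minimal sketch: LQR gain from the continuous algebraic Riccati equation (CARE).
    # System and weights are illustrative assumptions (double integrator, Q = I, R = 1).
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])   # double-integrator dynamics
    B = np.array([[0.0],
                  [1.0]])
    Q = np.eye(2)                # state cost weight
    R = np.array([[1.0]])        # control cost weight

    # Solves A'P + PA - P B R^{-1} B' P + Q = 0 for the stabilizing P
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)   # optimal feedback gain, u = -K x

    # The closed-loop matrix A - BK should be Hurwitz (all eigenvalues stable)
    eigs = np.linalg.eigvals(A - B @ K)
    print(np.all(eigs.real < 0))
    ```

    For this particular system the gain works out to K = [1, sqrt(3)], which can be checked against the textbook double-integrator LQR solution.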
  2. TammyTsang

    Engineering Scaling of units for equations of motions

    In the picture, there is a problem where t is in units of ##\sqrt{l/g}## and V is in units of ##\sqrt{gl}##. I am wondering: 1. What does it mean when time is in units other than time? Does it mean that when solving I have to take time/##\sqrt{l/g}##? 2. How did they get ##\sqrt{l/g}##? Thank you...
  3. L

    Under what topic can I ask about optimal control

    Under what topic can I ask about optimal control
  4. L

    Exploring Energy Storage Combinations & Optimal Control Schemes

    I'm currently working on combining energy storage systems like flywheels and batteries to balance consumption/regeneration. I've been looking at using an optimal control scheme so that a cost function can be tailored to our wishes. I'm curious about what other people in this field have been...
  5. J

    Deriving Differential Equations from the Riccati Equation for Optimal Control

    Homework Statement: I was wondering if I can get some help on a linear regulator problem for an optimal control problem. Given a state equation and a performance measure, I am trying to solve using the Riccati equation in MATLAB. This is a sample example I got from the book Optimal Control, Donald...
  6. P

    Automotive Optimize Fuel Consumption, Efficiency and Slip Condition

    Hi, I'm working on a wheel loader task and my mission is to optimize the fuel consumption and control the slip using an appropriate optimal control method. All data is from the tires, and I have to by some method tell the motor how much it has to give the machine to drive. Anyone suggest a...
  7. Chung

    Deriving adjoint equation of an Optimal Control Problem

    Dear all, I am investigating a Transient Optimal Heating Problem with distributed control and Dirichlet condition. The following are the mathematical expression of the problem: Where Ω is the domain, Γ is the boundary, y is the temperature distribution, u...
  8. K

    MHB Optimal Control Parameter Values: Converge/Fail?

    Motivation: I am working with a code that minimizes the objective functional value in an optimal control problem. It takes $A_1,A_2,A_3,A_4$ (the balancing factors for various components of the objective functional) as inputs, and then outputs the values of the state variables, control...
  9. M

    Optimal Control of Linear-Affine System w/ Constraints

    Although this could fall under engineering, I thought the Diff Eq forum was the most relevant. Let me know if I should post elsewhere. I have a fairly basic system for which I'm trying to find a minimum-time optimal control policy. I know there are many ways to do this numerically, but since...
  10. D

    Optimal control: non-zero target control

    Dear all, I am building an arm with 2 joints (elbow, shoulder) and want to optimally control it to a particular position (wrist). The examples I saw so far (e.g. acrobot) have a target control signal u(t_f) which becomes zero when reaching the target x_{target} in the final time step t_f. In my...
  11. I

    Optimal control, Fourier transform, operating system, multimedia and w

    I have a lot of questions; if you know something about one or more of them, I will be glad if you can write a reply. I am searching for research or other work connecting optimal control and autonomous vehicles; it can be things like how to calculate the shortest way, the rapid way...
  12. M

    Reproducing Optimal Cost in Linear Quadratic Regulator Problem

    I'm trying to pick up optimal control by self-study. At the moment I'm working on the linear quadratic regulator and trying to reproduce the result published in this paper: Curtis and Beard, Successive collocation: An approximation to optimal nonlinear control, Proceedings of the American Control...
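    A useful sanity check when reproducing LQR results: for the infinite-horizon problem with cost ##\int_0^\infty (x^TQx + u^TRu)\,dt##, the optimal cost from initial state x0 is exactly ##x_0^T P x_0##, where P solves the Riccati equation. A minimal sketch on a scalar example of my own choosing (not the system from the cited paper):

    ```python
    # Sketch: optimal LQR cost-to-go equals x0' P x0.
    # Scalar unstable plant x' = x + u with Q = R = 1 (illustrative, not from the paper).
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[1.0]])
    B = np.array([[1.0]])
    Q = np.array([[1.0]])
    R = np.array([[1.0]])

    # Scalar CARE reduces to P^2 - 2P - 1 = 0, stabilizing root P = 1 + sqrt(2)
    P = solve_continuous_are(A, B, Q, R)

    x0 = np.array([[2.0]])
    J_opt = float(x0.T @ P @ x0)   # optimal cost starting from x0
    print(P[0, 0], J_opt)
    ```

    Comparing a numerically integrated closed-loop cost against this closed-form value is a quick way to verify an approximate method (such as successive collocation) before trusting it on harder nonlinear problems.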
  13. P

    How to Prove a Verification Theorem for an Optimal Control Problem?

    Hi guys, I could use some help on the proof of a verification theorem for the following optimal control problem: J_{M}(x;u)\equiv\mathbb{E}^{x}\left[\int_{0}^{\tau_{C}}\left(\int_{0}^{t}e^{-rs}\pi_{M}(x_{s})ds\right)\lambda...
  14. A

    How can I implement the 4th-order Runge-Kutta method for an optimal control problem?

    Hi all, I have an optimal control problem. To solve it, after starting with the initial control 'u', I have to integrate the state equation x'=f(x(t),u(t),t) forward in time, then integrate the adjoint equation PSI'=G(x(t),u(t),PSI(t),t) backward in time. I want to implement all of that by 4th...
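    The same classical RK4 step works for both sweeps: integrate forward with a positive step h, and integrate the adjoint backward simply by using a negative step starting from the terminal condition. A minimal sketch, where the dynamics f and adjoint G below are toy placeholders of my own (not the poster's actual equations):

    ```python
    # Sketch of the forward/backward sweep with one generic RK4 step routine.
    # f and G are toy stand-ins; replace with the problem's actual right-hand sides.
    import math

    def rk4_step(func, y, t, h, *args):
        """One classical 4th-order Runge-Kutta step for y' = func(y, t, ...).
        A negative h integrates backward in time."""
        k1 = func(y, t, *args)
        k2 = func(y + 0.5 * h * k1, t + 0.5 * h, *args)
        k3 = func(y + 0.5 * h * k2, t + 0.5 * h, *args)
        k4 = func(y + h * k3, t + h, *args)
        return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

    def f(x, t, u):
        return -x + u          # toy state equation x' = -x + u

    def G(psi, t):
        return psi             # toy adjoint equation psi' = psi

    T, n = 1.0, 100
    h = T / n

    # forward sweep: x(0) = 1, with the control held at a constant guess u = 0.5
    x = 1.0
    for i in range(n):
        x = rk4_step(f, x, i * h, h, 0.5)

    # backward sweep: terminal condition psi(T) = 1, negative step
    psi = 1.0
    for i in range(n):
        psi = rk4_step(G, psi, T - i * h, -h)

    print(x, psi)
    ```

    In the full forward-backward sweep method one would then update u from the optimality condition and repeat forward/backward passes until the control converges; the exact solutions of the toy equations (x(1) = 0.5 + 0.5e^{-1}, psi(0) = e^{-1}) make it easy to check the integrator's accuracy.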