Fixed end points optimal control problem

Engineering Asked on April 15, 2021

Given two points $(t_0,x(t_0)=x^{0})$ and $(t_1,x(t_1)=x^{1})$ in the $(t,x)$ plane, the objective is to find an optimal trajectory $x^{*}(t)$ such that the cost function

\begin{equation}\label{eq:1}
J(x) = \int_{t_0}^{t_1} g(x,\dot{x},t)\, dt \tag{1}
\end{equation}

has a relative extremum, with $g$ a function having continuous first and second partial derivatives with respect to all its arguments. In this case, where the end points are fixed, the necessary condition for minimizing the functional $J(x)$ is obtained by means of the Euler-Lagrange equation

\begin{equation}\label{eq:2}
\dfrac{\partial g(x^{*},\dot{x}^{*},t)}{\partial x} - \dfrac{d}{dt}\dfrac{\partial g(x^{*},\dot{x}^{*},t)}{\partial \dot{x}} = 0.\tag{2}
\end{equation}
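As a quick sanity check (an illustrative example of my own, not from Gopal's text), take $g(x,\dot{x},t)=\dot{x}^{2}$. Then $\partial g/\partial x = 0$ and $\partial g/\partial \dot{x} = 2\dot{x}$, so Eq. (2) reduces to

\begin{equation*}
-\dfrac{d}{dt}\bigl(2\dot{x}^{*}\bigr) = 0 \quad\Longrightarrow\quad \ddot{x}^{*} = 0,
\end{equation*}

and the extremals are straight lines $x^{*}(t) = a + bt$, with $a$ and $b$ fixed by the two end conditions $x^{*}(t_0)=x^{0}$ and $x^{*}(t_1)=x^{1}$.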

Next, as I was studying the same problem (from Modern Control System Theory by M. Gopal) with a modified constraint (terminal time $t_1$ fixed but $x(t_1)$ free), I had some difficulty following the flow of the argument. It goes as follows.

Let $x$ be any curve in the admissible class $\Omega$, and let $\delta x$ denote the variation in $x$, defined as an infinitesimal arbitrary change in $x$ for a fixed value of the variable $t$.

The first variation in $J$ is given as

\begin{equation}\label{eq:3}
\delta J(x,\delta x) = \dfrac{\partial g(x,\dot{x},t)}{\partial \dot{x}}\,\delta x(t)\Bigg|_{t_0}^{t_1} + \int_{t_0}^{t_1}\Bigg\{\dfrac{\partial g(x,\dot{x},t)}{\partial x} - \dfrac{d}{dt}\dfrac{\partial g(x,\dot{x},t)}{\partial \dot{x}}\Bigg\}\,\delta x\, dt. \tag{3}
\end{equation}
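For completeness, Eq. (3) comes from expanding the increment of $J$ to first order and integrating the $\delta\dot{x}$ term by parts, using $\delta\dot{x} = \frac{d}{dt}\delta x$:

\begin{equation*}
\delta J = \int_{t_0}^{t_1}\Bigg(\dfrac{\partial g}{\partial x}\,\delta x + \dfrac{\partial g}{\partial \dot{x}}\,\delta\dot{x}\Bigg) dt = \dfrac{\partial g}{\partial \dot{x}}\,\delta x\Bigg|_{t_0}^{t_1} + \int_{t_0}^{t_1}\Bigg(\dfrac{\partial g}{\partial x} - \dfrac{d}{dt}\dfrac{\partial g}{\partial \dot{x}}\Bigg)\delta x\, dt.
\end{equation*}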

For an extremal (minimum or maximum) $x^{*}(t)$, we know that $\delta J(x^{*},\delta x)$ must be zero. In the following we show that the integral term in \eqref{eq:3} must be zero on an extremal.

Suppose that a curve $x^{*}(t)$ is an extremal for the problem under consideration: $t_1$ specified and $x(t_1)$ free. Let the value of $x^{*}(t)$ at $t_1$ be $x^{*}(t_1)=x^{1}$. Now consider the fixed end-point problem with the functional $J$ in \eqref{eq:1} and the end points $(t_0,x^{0})$ and $(t_1,x^{*}(t_1))$. The curve $x^{*}(t)$ must be an extremal for this fixed end-point problem, and therefore must be a solution of the Euler-Lagrange equation \eqref{eq:2}. Thus the integral term in \eqref{eq:3} must be zero on an extremal, and

\begin{equation}\label{eq:4}
\dfrac{\partial g(x^{*},\dot{x}^{*},t)}{\partial \dot{x}}\Bigg|_{t_1}\delta x(t_1) = 0.\tag{4}
\end{equation}

Since $x(t_1)$ is free, $\delta x(t_1)$ is arbitrary; therefore it is necessary that

\begin{equation}\label{eq:5}
\dfrac{\partial g(x^{*},\dot{x}^{*},t)}{\partial \dot{x}}\Bigg|_{t_1} = 0. \tag{5}
\end{equation}

Equation \eqref{eq:5} provides the second boundary condition required for solving the second-order Euler-Lagrange equation.
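To see condition (5) at work numerically, here is a small sketch (my own example, not from Gopal's book) using SciPy's `solve_bvp`. I take $g(x,\dot{x},t)=\tfrac{1}{2}(\dot{x}^{2}+x^{2})$, for which the Euler-Lagrange equation is $\ddot{x}=x$; the fixed condition is $x(t_0)=x_0$, and the natural boundary condition (5) becomes $\dot{x}(t_1)=0$:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Assumed illustrative data: g = (xdot^2 + x^2)/2 on [0, 1] with x(0) = 1,
# x(1) free.  Euler-Lagrange: x'' = x; natural BC (5): xdot(1) = 0.
t0, t1, x0 = 0.0, 1.0, 1.0

def ode(t, y):
    # y[0] = x, y[1] = xdot; the Euler-Lagrange equation x'' = x
    return np.vstack([y[1], y[0]])

def bc(ya, yb):
    # fixed left end x(t0) = x0; natural condition xdot(t1) = 0
    return np.array([ya[0] - x0, yb[1]])

t = np.linspace(t0, t1, 50)
y_guess = np.zeros((2, t.size))
y_guess[0] = x0
sol = solve_bvp(ode, bc, t, y_guess)

# closed-form extremal for comparison: x(t) = x0 cosh(t1 - t) / cosh(t1)
x_exact = x0 * np.cosh(t1 - sol.x) / np.cosh(t1)
print("max error vs exact:", np.max(np.abs(sol.y[0] - x_exact)))
print("xdot at t1:", sol.y[1][-1])
```

The solver imposes (5) directly as the right-hand boundary condition, and the recovered trajectory matches the closed-form extremal of this particular $g$.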


Now my question is: why should one consider a fixed end-point problem in this case? I have been stuck on this for quite some time, so I would sincerely appreciate any thoughts or suggestions.

One Answer

I think I have found an answer; please correct me if there are reasons other than the following justification.

Since $x^{*}(t)$ is optimal among all admissible trajectories, i.e. $J(x^{*}) \le J(x)$ for all $x \in \Omega$ on $t \in [t_0,t_1]$ with $x(t_0) = x_0$ and $x(t_1)$ free, it is in particular optimal among the subset of those trajectories that also satisfy $x(t_1) = x^{*}(t_1)$; that is, it is optimal for the fixed end-point problem with end points $(t_0,x_0)$ and $(t_1,x^{*}(t_1))$ as well. Therefore the necessary conditions that hold for $x^{*}(t)$ in the fixed end-point case must also hold here. Since the Euler-Lagrange condition is necessary for optimality in the fixed end-point case, Eq. (2) holds in this case too, and substituting this result into Eq. (3) yields the remaining condition, Eq. (5).

Correct answer by jbgujgu on April 15, 2021
