End-to-end deep learning of control policies has gathered much attention in recent times. Its attractiveness stems from the fact that, in principle, such methods can work with very minimal assumptions about the problem structure (robot dynamics, environment model, etc.). At the same time, control policies learned through end-to-end approaches have a certain opaqueness: if a policy does not work, it is difficult to pinpoint the exact reason.
On the other hand, conventional control-theoretic motion planning and control comes with certain guarantees and explainability. For example, we can answer questions such as whether the robot state converges from a given set of initial conditions under a given feedback control policy. However, classical approaches require more information and stronger assumptions about the problem structure.
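As a minimal sketch of the kind of convergence question mentioned above: for a linear system under state feedback, a Lyapunov equation certifies convergence from any initial condition. The dynamics matrices and the gain below are illustrative values, not taken from any particular system in our work.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative double-integrator dynamics: x_dot = A x + B u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
# Hypothetical feedback gain, u = -K x
K = np.array([[1.0, 2.0]])

Acl = A - B @ K  # closed-loop dynamics matrix

# Solve the Lyapunov equation Acl^T P + P Acl = -Q with Q = I.
# A symmetric positive-definite solution P certifies global
# asymptotic stability, i.e. convergence from every initial state.
Q = np.eye(2)
P = solve_continuous_lyapunov(Acl.T, -Q)

print("P positive definite:", bool(np.all(np.linalg.eigvalsh(P) > 0)))
```

This is the sort of yes/no answer classical analysis provides and end-to-end learned policies generally do not.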
In our research group, we are developing ways to optimally integrate deep-learning-based approaches with classical control-theoretic algorithms for safety-critical applications such as autonomous driving and human-robot collaborative manufacturing. The key focus is on understanding which parameters of control-theoretic algorithms need to be learned in order to make them reliable in real-world settings. The explainability of our approach stems from the fact that the learned parameters have a physical meaning, so performance shortcomings can be clearly explained and analyzed.
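To illustrate what learning a physically meaningful parameter can look like (a toy sketch, not our actual method): below, a single damping coefficient c in the model v_dot = -c v is fit from noisy synthetic trajectory data by least squares. Unlike a black-box policy, the learned quantity can be inspected directly and compared against physical expectations.

```python
import numpy as np

# Synthetic trajectory from v_dot = -c_true * v, with small noise.
rng = np.random.default_rng(0)
c_true = 0.8
dt = 0.01
v = [1.0]
for _ in range(500):
    v.append(v[-1] + dt * (-c_true * v[-1]) + 0.001 * rng.standard_normal())
v = np.array(v)

# Finite-difference estimate of v_dot, then linear least squares:
# v_dot ~ -c v  =>  c = argmin_c || v_dot + c v ||^2
v_dot = np.diff(v) / dt
c_hat = -np.linalg.lstsq(v[:-1, None], v_dot, rcond=None)[0][0]
print(f"learned damping coefficient: {c_hat:.2f}")  # should be near c_true
```

If the fitted c came out negative or implausibly large, the failure would point to a concrete, interpretable cause (bad data, wrong model structure) rather than an opaque network weight.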