Journal Papers

Authors Title Year Journal
G. Chou, N. Ozay, D. Berenson Learning Temporal Logic Formulas from Suboptimal Demonstrations: Theory and Experiments
[Abstract] [Cite]
2021 Autonomous Robots (AURO) (accepted)
Abstract: We present a method for learning multi-stage tasks from demonstrations by learning the logical structure and atomic propositions of a consistent linear temporal logic (LTL) formula. The learner is given successful but potentially suboptimal demonstrations, where the demonstrator is optimizing a cost function while satisfying the LTL formula, and the cost function is uncertain to the learner. Our algorithm uses the Karush-Kuhn-Tucker (KKT) optimality conditions of the demonstrations together with a counterexample-guided falsification strategy to learn the atomic proposition parameters and logical structure of the LTL formula, respectively. We provide theoretical guarantees on the conservativeness of the recovered atomic proposition sets, as well as completeness in the search for finding an LTL formula consistent with the demonstrations. We evaluate our method on high-dimensional nonlinear systems by learning LTL formulas explaining multi-stage tasks on a simulated 7-DOF arm and a quadrotor, and show that it outperforms competing methods for learning LTL formulas from positive examples. Finally, we demonstrate that our approach can learn a real-world multi-stage tabletop manipulation task on a physical 7-DOF Kuka iiwa arm.
G. Chou, D. Berenson, N. Ozay Learning Constraints from Demonstrations with Grid and Parametric Representations
[Abstract] [Cite]
2021 International Journal of Robotics Research (IJRR) (accepted)
Abstract: We extend the learning from demonstration paradigm by providing a method for learning unknown constraints shared across tasks, using demonstrations of the tasks, their cost functions, and knowledge of the system dynamics and control constraints. Given safe demonstrations, our method uses hit-and-run sampling to obtain lower cost, and thus unsafe, trajectories. Both safe and unsafe trajectories are used to obtain a consistent representation of the unsafe set via solving an integer program. Our method generalizes across system dynamics and learns a guaranteed subset of the constraint. Additionally, by leveraging a known parameterization of the constraint, we modify our method to learn parametric constraints in high dimensions. We also provide theoretical analysis on what subset of the constraint and safe set can be learnable from safe demonstrations. We demonstrate our method on linear and nonlinear system dynamics, show that it can be modified to work with suboptimal demonstrations, and that it can also be used to learn constraints in a feature space.
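Illustrative Python sketch: the hit-and-run step above in a minimal, self-contained form (not the paper's implementation; the box-shaped perturbation set and the smoothness cost are placeholder assumptions). The sampler walks inside a set of trajectory perturbations around a demonstration; under the paper's optimality assumption, any lower-cost sample must violate the unknown constraint.
import numpy as np

def hit_and_run(x0, lo, hi, n_samples, rng=np.random.default_rng(0)):
    """Asymptotically uniform samples inside the box [lo, hi], starting from x0."""
    x = x0.copy()
    samples = []
    for _ in range(n_samples):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)                       # random direction
        # Intersect the line x + t*d with the box to get the feasible segment [t_min, t_max].
        with np.errstate(divide="ignore", invalid="ignore"):
            t_a = np.where(d != 0, (lo - x) / d, -np.inf)
            t_b = np.where(d != 0, (hi - x) / d, np.inf)
        t_min = np.max(np.minimum(t_a, t_b))
        t_max = np.min(np.maximum(t_a, t_b))
        x = x + rng.uniform(t_min, t_max) * d        # jump to a random point on the segment
        samples.append(x.copy())
    return np.array(samples)

# Toy example: perturb a 1D demonstration (10 waypoints) inside a box, then keep
# the lower-cost samples; under the optimality assumption these must be unsafe.
demo = np.linspace(0.0, 1.0, 10)
cost = lambda traj: float(np.sum(np.diff(traj) ** 2))    # placeholder smoothness cost
perturbed = hit_and_run(demo, demo - 0.5, demo + 0.5, 200)
unsafe_candidates = [p for p in perturbed if cost(p) < cost(demo)]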
C. Knuth*, G. Chou*, N. Ozay, D. Berenson Planning with Learned Dynamics: Probabilistic Guarantees on Safety and Reachability via Lipschitz Constants
[Abstract] [arXiv] [Cite]
2021 IEEE Robotics and Automation Letters (RA-L), with presentation at ICRA 2021
Abstract: We present a method for feedback motion planning of systems with unknown dynamics which provides probabilistic guarantees on safety, reachability, and goal stability. To find a domain in which a learned control-affine approximation of the true dynamics can be trusted, we estimate the Lipschitz constant of the difference between the true and learned dynamics, and ensure the estimate is valid with a given probability. Provided the system has at least as many controls as states, we also derive existence conditions for a one-step feedback law which can keep the real system within a small bound of a nominal trajectory planned with the learned dynamics. Our method imposes the feedback law existence as a constraint in a sampling-based planner, which returns a feedback policy around a nominal plan ensuring that, if the Lipschitz constant estimate is valid, the true system is safe during plan execution, reaches the goal, and is ultimately invariant in a small set about the goal. We demonstrate our approach by planning using learned models of a 6D quadrotor and a 7DOF Kuka arm. We show that a baseline which plans using the same learned dynamics without considering the error bound or the existence of the feedback law can fail to stabilize around the plan and become unsafe.
BibTeX:
@article{Knuth-RAL-21,
  author  = {Craig Knuth* and Glen Chou* and Necmiye Ozay and Dmitry Berenson},
  title   = {Planning with Learned Dynamics: Probabilistic Guarantees on Safety and Reachability via Lipschitz Constants},
  journal = {IEEE Robotics and Automation Letters (RA-L)},
  year    = {2021}
}
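Illustrative Python sketch: the key quantity above is a Lipschitz constant of the error between the true and learned dynamics. The sketch below shows only a basic empirical slope estimate on a toy scalar system (the dynamics, learned model, and sample counts are assumptions); the paper additionally makes such an estimate probabilistically valid, which is omitted here.
import numpy as np

def estimate_error_lipschitz(xs, us, f_true, f_learned):
    """Largest observed slope of e(x, u) = f_true(x, u) - f_learned(x, u) over sample pairs."""
    zs = np.hstack([xs, us])                                    # treat (x, u) jointly
    errs = np.array([f_true(x, u) - f_learned(x, u) for x, u in zip(xs, us)])
    L = 0.0
    for i in range(len(zs)):
        for j in range(i + 1, len(zs)):
            dz = np.linalg.norm(zs[i] - zs[j])
            if dz > 1e-9:
                L = max(L, float(np.linalg.norm(errs[i] - errs[j]) / dz))
    return L

# Toy scalar system: the learned model misses a small sinusoidal term.
f_true = lambda x, u: np.array([0.9 * x[0] + 0.1 * np.sin(x[0]) + u[0]])
f_hat = lambda x, u: np.array([0.9 * x[0] + u[0]])
rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, size=(200, 1))
us = rng.uniform(-1.0, 1.0, size=(200, 1))
print(estimate_error_lipschitz(xs, us, f_true, f_hat))          # finite-sample under-estimate of 0.1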
G. Chou, N. Ozay, D. Berenson Learning Constraints from Locally-Optimal Demonstrations under Cost Function Uncertainty
[Abstract] [arXiv] [Cite]
2020 IEEE Robotics and Automation Letters (RA-L), with presentation at ICRA 2020
Abstract: We present an algorithm for learning parametric constraints from locally-optimal demonstrations, where the cost function being optimized is uncertain to the learner. Our method uses the Karush-Kuhn-Tucker (KKT) optimality conditions of the demonstrations within a mixed integer linear program (MILP) to learn constraints which are consistent with the local optimality of the demonstrations, by either using a known constraint parameterization or by incrementally growing a parameterization that is consistent with the demonstrations. We provide theoretical guarantees on the conservativeness of the recovered safe/unsafe sets and analyze the limits of constraint learnability when using locally-optimal demonstrations. We evaluate our method on high-dimensional constraints and systems by learning constraints for 7-DOF arm and quadrotor examples, show that it outperforms competing constraint-learning approaches, and can be effectively used to plan new constraint-satisfying trajectories in the environment.
BibTeX:
@article{Chou-RAL-20,
  author  = {Glen Chou and Necmiye Ozay and Dmitry Berenson},
  title   = {Learning Constraints from Locally-Optimal Demonstrations under Cost Function Uncertainty},
  journal = {IEEE Robotics and Automation Letters (RA-L)},
  year    = {2020}
}
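Illustrative Python sketch: the role of the KKT conditions above can be previewed with a toy stationarity check. At a locally-optimal demonstration, the cost gradient must be balanced by nonnegative multipliers on the gradients of the active constraints; the sketch scores a candidate constraint by the smallest achievable KKT residual via nonnegative least squares. The paper instead embeds these conditions in a mixed integer linear program, and the toy cost and constraint below are assumptions.
import numpy as np
from scipy.optimize import nnls

def kkt_residual(grad_cost, active_constraint_grads):
    """Residual of grad_cost + sum_i lam_i * grad_g_i = 0 over multipliers lam_i >= 0."""
    if len(active_constraint_grads) == 0:
        return float(np.linalg.norm(grad_cost))
    A = np.stack(active_constraint_grads, axis=1)    # columns are active constraint gradients
    _, residual = nnls(A, -grad_cost)                # best nonnegative multipliers
    return residual

# Toy demonstration: x* = (1, 0) minimizes ||x||^2 subject to g(x) = 1 - x1 <= 0.
grad_cost = np.array([2.0, 0.0])                     # gradient of ||x||^2 at x*
grad_g = [np.array([-1.0, 0.0])]                     # gradient of g at x*
print(kkt_residual(grad_cost, grad_g))               # ~0: this constraint explains the demonstration
print(kkt_residual(grad_cost, []))                   # 2.0: unconstrained optimality does not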
G. Chou*, Y. E. Sahin*, L. Yang*, K. J. Rutledge, P. Nilsson, and N. Ozay Using control synthesis to generate corner cases: A case study on autonomous driving
[Abstract] [arXiv] [Cite] [Notes]
2018 IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (ESWEEK-TCAD special issue)
Abstract: This paper employs correct-by-construction control synthesis, in particular controlled invariant set computations, for falsification. Our hypothesis is that if it is possible to compute a "large enough" controlled invariant set either for the actual system model or some simplification of the system model, interesting corner cases for other control designs can be generated by sampling initial conditions from the boundary of this controlled invariant set. Moreover, if falsifying trajectories for a given control design can be found through such sampling, then the controlled invariant set can be used as a supervisor to ensure safe operation of the control design under consideration. In addition to interesting initial conditions, which are mostly related to safety violations in transients, we use solutions from a dual game, a reachability game for the safety specification, to find falsifying inputs. We also propose optimization-based heuristics for input generation for cases when the state is outside the winning set of the dual game. To demonstrate the proposed ideas, we consider case studies from basic autonomous driving functionality, in particular, adaptive cruise control and lane keeping. We show how the proposed technique can be used to find interesting falsifying trajectories for classical control designs like proportional controllers, proportional integral controllers and model predictive controllers, as well as an open source real-world autonomous driving package.
BibTeX:
@article{Chou-et-al-Journal-18,
  author  = {Glen Chou and Yunus E. Sahin and Liren Yang and Kwesi J. Rutledge and Petter Nilsson and Necmiye Ozay},
  title   = {Using control synthesis to generate corner cases: A case study on autonomous driving},
  journal = {IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (ESWEEK-TCAD special issue)},
  year    = {2018}
}
Notes: Also presented at 2018 University of Michigan Engineering Graduate Symposium; won Emerging Research Social Impact award.
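Illustrative Python sketch: the boundary-sampling idea above reproduced on a toy double integrator (not the paper's tool or case studies; the dynamics, PD controller, and gains are assumptions). For x'' = u with |u| <= 1 and the safety requirement x <= 0, the maximal controlled invariant set is {(x, v) : x <= 0 and x + max(v, 0)^2 / 2 <= 0}; sampling initial conditions on its boundary quickly falsifies a controller designed without the safety constraint in mind.
import numpy as np

def simulate(x, v, dt=0.01, T=5.0):
    """Roll out a candidate PD controller (regulating toward x = 0) and check safety."""
    for _ in range(int(T / dt)):
        u = np.clip(-2.0 * x - 1.0 * v, -1.0, 1.0)    # controller under test (assumed gains)
        x, v = x + dt * v, v + dt * u
        if x > 0.0:
            return False                               # safety requirement x <= 0 violated
    return True

# Initial conditions on the invariant set boundary: x = -v^2/2 with v >= 0.
for v0 in np.linspace(0.0, 2.0, 9):
    x0 = -0.5 * v0 ** 2
    print(f"v0={v0:.2f}, x0={x0:.2f}, safe={simulate(x0, v0)}")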


Peer-Reviewed Conference Papers

Authors Title Year Conference
G. Chou, N. Ozay, D. Berenson Model Error Propagation via Learned Contraction Metrics for Safe Feedback Motion Planning of Unknown Systems
[Abstract] [arXiv] [Cite]
2021 Proceedings of the 60th IEEE Conference on Decision and Control (CDC)
Abstract: We present a method for contraction-based feedback motion planning of locally incrementally exponentially stabilizable systems with unknown dynamics that provides probabilistic safety and reachability guarantees. Given a dynamics dataset, our method learns a deep control-affine approximation of the dynamics. To find a trusted domain where this model can be used for planning, we obtain an estimate of the Lipschitz constant of the model error, which is valid with a given probability, in a region around the training data, providing a local, spatially-varying model error bound. We derive a trajectory tracking error bound for a contraction-based controller that is subjected to this model error, and then learn a controller that optimizes this tracking bound. With a given probability, we verify the correctness of the controller and tracking error bound in the trusted domain. We then use the trajectory error bound together with the trusted domain to guide a sampling-based planner to return trajectories that can be robustly tracked in execution. We show results on a 4D car, a 6D quadrotor, and a 22D deformable object manipulation task, showing our method plans safely with learned models of high-dimensional underactuated systems, while baselines that plan without considering the tracking error bound or the trusted domain can fail to stabilize the system and become unsafe.
BibTeX:
@inproceedings{Chou-CDC-21,
  author    = {Glen Chou and Necmiye Ozay and Dmitry Berenson},
  title     = {Model Error Propagation via Learned Contraction Metrics for Safe Feedback Motion Planning of Unknown Systems},
  booktitle = {Proceedings of the 60th IEEE Conference on Decision and Control (CDC)},
  year      = {2021}
}
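Illustrative Python sketch: a small numerical check of the contraction condition underlying the tracking bound above (the constant metric, linear closed-loop Jacobian, and numbers are assumptions; the paper learns the metric and controller for nonlinear dynamics). Contraction at rate lambda under metric M requires A_cl^T M + M A_cl + 2*lambda*M to be negative semidefinite, and a disturbance (model error) bound d_bar then gives a tracking tube of radius roughly sqrt(cond(M)) * d_bar / lambda.
import numpy as np

def is_contracting(A_cl, M, rate):
    S = A_cl.T @ M + M @ A_cl + 2.0 * rate * M
    return bool(np.all(np.linalg.eigvalsh(S) <= 1e-9))

A_cl = np.array([[-2.0, 1.0],
                 [0.0, -3.0]])           # toy stable closed-loop Jacobian
M = np.eye(2)                            # candidate (constant) metric
rate = 0.5
print(is_contracting(A_cl, M, rate))     # True for this toy example

d_bar = 0.1                              # assumed bound on the model-error disturbance
print("approx. tracking tube radius:", np.sqrt(np.linalg.cond(M)) * d_bar / rate)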
K. Rutledge*, G. Chou*, N. Ozay Compositional Safety Rules for Inter-Triggering Hybrid Automata
[Abstract] [PDF] [Cite]
2021 Proceedings of the 24th International Conference on Hybrid Systems: Computation and Control (HSCC)
Abstract: In this paper, we present a compositional condition for ensuring safety of a collection of interacting systems modeled by inter-triggering hybrid automata (ITHA). ITHA is a modeling formalism for representing multi-agent systems in which each agent is governed by individual dynamics but can also interact with other agents through triggering actions. These triggering actions result in a jump/reset in the state of other agents according to a global resolution function. A sufficient condition for safety of the collection, inspired by responsibility-sensitive safety, is developed in two parts: self-safety relating to the individual dynamics, and responsibility relating to the triggering actions. The condition relies on having an over-approximation method for the resolution function. We further show how such over-approximations can be obtained and improved via communication. We use two examples, a job scheduling task on parallel processors and a highway driving example, throughout the paper to illustrate the concepts. Finally, we provide a comprehensive evaluation on how the proposed condition can be leveraged for several multi-agent control and supervision examples.
BibTeX:
@inproceedings{Rutledge-HSCC-21,
  author    = {Kwesi Rutledge* and Glen Chou* and Necmiye Ozay},
  title     = {Compositional Safety Rules for Inter-Triggering Hybrid Automata},
  booktitle = {Proceedings of the 24th International Conference on Hybrid Systems: Computation and Control (HSCC)},
  year      = {2021}
}
G. Chou, N. Ozay, D. Berenson Uncertainty-Aware Constraint Learning for Adaptive Safe Motion Planning from Demonstrations
[Abstract] [arXiv] [Cite]
2020 Proceedings of the 4th Conference on Robot Learning (CoRL)
Abstract: We present a method for learning to satisfy uncertain constraints from demonstrations. Our method uses robust optimization to obtain a belief over the potentially infinite set of possible constraints consistent with the demonstrations, and then uses this belief to plan trajectories that trade off performance with satisfying the possible constraints. We use these trajectories in a closed-loop policy that executes and replans using belief updates, which incorporate data gathered during execution. We derive guarantees on the accuracy of our constraint belief and probabilistic guarantees on plan safety. We present results on a 7-DOF arm and 12D quadrotor, showing our method can learn to satisfy high-dimensional (up to 30D) uncertain constraints, and outperforms baselines in safety and efficiency.
BibTeX:
@inproceedings{Chou-CoRL-20,
  author    = {Glen Chou and Necmiye Ozay and Dmitry Berenson},
  title     = {Uncertainty-Aware Constraint Learning for Adaptive Safe Motion Planning from Demonstrations},
  booktitle = {Proceedings of the 4th Conference on Robot Learning (CoRL)},
  year      = {2020}
}
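Illustrative Python sketch: a toy version of the planning trade-off described above (the obstacle, sampled radii, and candidate trajectories are assumptions, not the paper's belief-space planner). Given samples from a belief over constraints consistent with the demonstrations, pick the lowest-cost candidate plan that satisfies every sampled constraint.
import numpy as np

rng = np.random.default_rng(1)
radius_belief = rng.uniform(0.4, 0.7, size=50)          # sampled obstacle radii (constraint belief)
obstacle = np.array([0.5, 0.0])

def satisfies_all(traj, radii):
    d = float(np.min(np.linalg.norm(traj - obstacle, axis=1)))
    return bool(np.all(d >= radii))                      # clears every sampled constraint

def cost(traj):
    return float(np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1)))    # path length

def candidate(margin, n=30):
    """Straight line from (0, -1) to (1, 1), bowed outward by the given margin."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    straight = (1 - t) * np.array([0.0, -1.0]) + t * np.array([1.0, 1.0])
    return straight + margin * np.sin(np.pi * t) * np.array([-1.0, 1.0]) / np.sqrt(2)

feasible = []
for m in np.linspace(0.0, 1.0, 11):
    traj = candidate(m)
    if satisfies_all(traj, radius_belief):
        feasible.append((cost(traj), m))
print(min(feasible))     # lowest-cost detour that clears the worst-case sampled radius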
G. Chou, N. Ozay, D. Berenson Explaining Multi-stage Tasks by Learning Temporal Logic Formulas from Suboptimal Demonstrations
[Abstract] [arXiv] [Cite] [Notes]
2020 Proceedings of Robotics: Science and Systems (RSS) XVI
Abstract: We present a method for learning multi-stage tasks from demonstrations by learning the logical structure and atomic propositions of a consistent linear temporal logic (LTL) formula. The learner is given successful but potentially suboptimal demonstrations, where the demonstrator is optimizing a cost function while satisfying the LTL formula, and the cost function is uncertain to the learner. Our algorithm uses the Karush-Kuhn-Tucker (KKT) optimality conditions of the demonstrations together with a counterexample-guided falsification strategy to learn the atomic proposition parameters and logical structure of the LTL formula, respectively. We provide theoretical guarantees on the conservativeness of the recovered atomic proposition sets, as well as completeness in the search for finding an LTL formula consistent with the demonstrations. We evaluate our method on high-dimensional nonlinear systems by learning LTL formulas explaining multi-stage tasks on 7-DOF arm and quadrotor systems and show that it outperforms competing methods for learning LTL formulas from positive examples.
BibTeX:
@inproceedings{Chou-RSS-20,
  author    = {Glen Chou and Necmiye Ozay and Dmitry Berenson},
  title     = {Explaining Multi-stage Tasks by Learning Temporal Logic Formulas from Suboptimal Demonstrations},
  booktitle = {Proceedings of Robotics: Science and Systems (RSS) XVI},
  year      = {2020}
}
Notes: Invited to AURO special issue.
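Illustrative Python sketch: a self-contained toy version of the consistency check inside the counterexample-guided search above (not the authors' code; the "eventually reach a region" property, threshold predicate, and traces are assumptions). The atomic proposition parameter must be chosen so that every demonstration satisfies the candidate property while the lower-cost counterexamples do not; if no parameter works, the outer loop would revise the formula structure.
import numpy as np

def eventually(trace, theta):
    """F(x >= theta) evaluated on a finite trace."""
    return bool(np.any(trace >= theta))

def consistent_threshold(demos, counterexamples, candidates):
    """A theta for which all demos satisfy F(x >= theta) but no counterexample does."""
    for theta in candidates:
        if all(eventually(d, theta) for d in demos) and \
           not any(eventually(c, theta) for c in counterexamples):
            return theta
    return None     # no consistent parameter: revise the candidate formula structure

demos = [np.linspace(0.0, 1.0, 20), np.linspace(0.0, 1.2, 20)]    # both reach x >= 1.0
counterexamples = [np.linspace(0.0, 0.7, 20)]                      # cheaper, but stops short
print(consistent_threshold(demos, counterexamples, np.linspace(0.1, 1.5, 15)))    # ~0.8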
C. Knuth, G. Chou, N. Ozay, D. Berenson Inferring Obstacles and Path Validity from Visibility-Constrained Demonstrations
[Abstract] [arXiv] [Cite]
2020 Proceedings of the 14th International Workshop on the Algorithmic Foundations of Robotics (WAFR)
Abstract: Many methods in learning from demonstration assume that the demonstrator has knowledge of the full environment. However, in many scenarios, a demonstrator only sees part of the environment and they continuously replan as they gather information. To plan new paths or to reconstruct the environment, we must consider the visibility constraints and replanning process of the demonstrator, which, to our knowledge, has not been done in previous work. We consider the problem of inferring obstacle configurations in a 2D environment from demonstrated paths for a point robot that is capable of seeing in any direction but not through obstacles. Given a set of "survey points", which describe where the demonstrator obtains new information, and a candidate path, we construct a Constraint Satisfaction Problem (CSP) on a cell decomposition of the environment. We parameterize a set of obstacles corresponding to an assignment from the CSP and sample from the set to find valid environments. We show that there is a probabilistically-complete, yet not entirely tractable, algorithm that can guarantee novel paths in the space are unsafe or possibly safe. We also present an incomplete, but empirically-successful, heuristic-guided algorithm that we apply in our experiments to 1) planning novel paths and 2) recovering a probabilistic representation of the environment.
BibTeX:
@inproceedings{Knuth-WAFR-20,
  author    = {Craig Knuth and Glen Chou and Necmiye Ozay and Dmitry Berenson},
  title     = {Inferring Obstacles and Path Validity from Visibility-Constrained Demonstrations},
  booktitle = {Proceedings of the 14th International Workshop on the Algorithmic Foundations of Robotics (WAFR)},
  year      = {2020}
}
G. Chou, N. Ozay, D. Berenson Learning Parametric Constraints in High Dimensions from Demonstrations
[Abstract] [arXiv] [Cite]
2019 Proceedings of the 3rd Conference on Robot Learning (CoRL)
Abstract: We present a scalable algorithm for learning parametric constraints in high dimensions from safe expert demonstrations. To reduce the ill-posedness of the constraint recovery problem, our method uses hit-and-run sampling to generate lower cost, and thus unsafe, trajectories. Both safe and unsafe trajectories are used to obtain a representation of the unsafe set that is compatible with the data by solving an integer program in that representation's parameter space. Our method can either leverage a known parameterization or incrementally grow a parameterization while remaining consistent with the data, and we provide theoretical guarantees on the conservativeness of the recovered unsafe set. We evaluate our method on high-dimensional constraints for high-dimensional systems by learning constraints for 7-DOF arm, quadrotor, and planar pushing examples, and show that our method outperforms baseline approaches.
BibTeX:
@inproceedings{Chou-CoRL-19,
  author    = {Glen Chou and Necmiye Ozay and Dmitry Berenson},
  title     = {Learning Parametric Constraints in High Dimensions from Demonstrations},
  booktitle = {Proceedings of the 3rd Conference on Robot Learning (CoRL)},
  year      = {2019}
}
G. Chou, D. Berenson, N. Ozay Learning Constraints from Demonstrations
[Abstract] [arXiv] [Cite] [Notes]
2018 Proceedings of the 13th International Workshop on the Algorithmic Foundations of Robotics (WAFR)
Abstract: We extend the learning from demonstration paradigm by providing a method for learning unknown constraints shared across tasks, using demonstrations of the tasks, their cost functions, and knowledge of the system dynamics and control constraints. Given safe demonstrations, our method uses hit-and-run sampling to obtain lower cost, and thus unsafe, trajectories. Both safe and unsafe trajectories are used to obtain a consistent representation of the unsafe set via solving an integer program. Our method generalizes across system dynamics and learns a guaranteed subset of the constraint. We also provide theoretical analysis on what subset of the constraint can be learnable from safe demonstrations. We demonstrate our method on linear and nonlinear system dynamics, show that it can be modified to work with suboptimal demonstrations, and that it can also be used to solve a transfer learning task.
BibTeX:
@inproceedings{Chou-WAFR-18,
  author    = {Glen Chou and Dmitry Berenson and Necmiye Ozay},
  title     = {Learning Constraints from Demonstrations},
  booktitle = {Proceedings of the 13th International Workshop on the Algorithmic Foundations of Robotics (WAFR)},
  year      = {2018}
}
Notes: Invited to IJRR special issue.
G. Chou*, Y. E. Sahin*, L. Yang*, K. J. Rutledge, P. Nilsson, and N. Ozay Using control synthesis to generate corner cases: A case study on autonomous driving
[Abstract] [arXiv] [Cite] [Notes]
2018 Proceedings of the ACM SIGBED International Conference on Embedded Software (EMSOFT)
Abstract: This paper employs correct-by-construction control synthesis, in particular controlled invariant set computations, for falsification. Our hypothesis is that if it is possible to compute a "large enough" controlled invariant set either for the actual system model or some simplification of the system model, interesting corner cases for other control designs can be generated by sampling initial conditions from the boundary of this controlled invariant set. Moreover, if falsifying trajectories for a given control design can be found through such sampling, then the controlled invariant set can be used as a supervisor to ensure safe operation of the control design under consideration. In addition to interesting initial conditions, which are mostly related to safety violations in transients, we use solutions from a dual game, a reachability game for the safety specification, to find falsifying inputs. We also propose optimization-based heuristics for input generation for cases when the state is outside the winning set of the dual game. To demonstrate the proposed ideas, we consider case studies from basic autonomous driving functionality, in particular, adaptive cruise control and lane keeping. We show how the proposed technique can be used to find interesting falsifying trajectories for classical control designs like proportional controllers, proportional integral controllers and model predictive controllers, as well as an open source real-world autonomous driving package.
BibTeX:
@inproceedings{Chou-et-al-EMSOFT-18,
  author    = {Glen Chou and Yunus E. Sahin and Liren Yang and Kwesi J. Rutledge and Petter Nilsson and Necmiye Ozay},
  title     = {Using control synthesis to generate corner cases: A case study on autonomous driving},
  booktitle = {Proceedings of the ACM SIGBED International Conference on Embedded Software (EMSOFT)},
  year      = {2018}
}
Notes: Also presented at 2018 University of Michigan Engineering Graduate Symposium; won Emerging Research Social Impact award.
G. Chou, N. Ozay, D. Berenson Incremental Segmentation of ARX Models
[Abstract] [PDF] [Cite]
2018 Proceedings of the 18th IFAC Symposium on System Identification (SYSID)
Abstract: We consider the problem of incrementally segmenting auto-regressive models with exogenous inputs (ARX models) when the data is received sequentially at run-time. In particular, we extend a recently proposed dynamic programming based polynomial-time algorithm for offline (batch) ARX model segmentation to the incremental setting. The new algorithm enables sequential updating of the models, eliminating repeated computation, while remaining optimal. We also show how certain noise bounds can be used to detect switches automatically at run-time. The efficiency of the approach compared to the batch method is illustrated on synthetic and real data.
BibTeX:
@inproceedings{Chou-SYSID-18,
  author    = {Glen Chou and Necmiye Ozay and Dmitry Berenson},
  title     = {Incremental Segmentation of ARX Models},
  booktitle = {Proceedings of the 18th IFAC Symposium on System Identification (SYSID)},
  year      = {2018}
}
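Illustrative Python sketch: a small batch-mode version of the segmentation problem above (the paper's contribution is making this dynamic program incremental, which this sketch does not do; the ARX(1) structure, penalty, and synthetic data are assumptions). Each segment is fit by least squares, and a dynamic program chooses breakpoints minimizing residual plus a per-segment penalty.
import numpy as np

def seg_cost(y, u, i, j):
    """Least-squares cost of fitting y_t = a*y_{t-1} + b*u_t on samples i..j-1."""
    Y = y[i + 1:j]
    Phi = np.column_stack([y[i:j - 1], u[i + 1:j]])
    _, res, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return float(res[0]) if res.size else 0.0

def segment(y, u, penalty=0.5):
    """Optimal breakpoints for a piecewise-ARX(1) fit via dynamic programming."""
    n = len(y)
    best = np.full(n + 1, np.inf)
    best[0] = 0.0
    prev = np.zeros(n + 1, dtype=int)
    for j in range(2, n + 1):
        for i in range(0, j - 1):
            c = best[i] + seg_cost(y, u, i, j) + penalty
            if c < best[j]:
                best[j], prev[j] = c, i
    cuts, j = [], n
    while j > 0:
        cuts.append(int(j))
        j = prev[j]
    return sorted(cuts)

# Synthetic data with a switch at t = 30: a = 0.9, then a = -0.5.
rng = np.random.default_rng(0)
u = rng.normal(size=60)
y = np.zeros(60)
for t in range(1, 60):
    a = 0.9 if t < 30 else -0.5
    y[t] = a * y[t - 1] + u[t] + 0.01 * rng.normal()
print(segment(y, u))     # segment end indices; a cut near 30 recovers the switch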
A. Dhinakaran*, M. Chen*, G. Chou, J. C. Shih, C. J. Tomlin A Hybrid Framework for Multi-Vehicle Collision Avoidance
[Abstract] [arXiv] [Cite]
2017 Proceedings of the 56th IEEE Conference on Decision and Control (CDC)
Abstract: With the recent surge of interest in UAVs for civilian services, the importance of developing tractable multi-agent analysis techniques that provide safety and performance guarantees has drastically increased. Hamilton-Jacobi (HJ) reachability has successfully provided these guarantees to small-scale systems and is flexible in terms of system dynamics. However, the exponential complexity scaling of HJ reachability with respect to system dimension prevents its direct application to larger-scale problems where the number of vehicles is greater than two. In this paper, we propose a collision avoidance algorithm using a hybrid framework for N+1 vehicles through higher-level control logic given any N-vehicle collision avoidance algorithm. Our algorithm conservatively approximates a guaranteed-safe region in the joint state space of the N+1 vehicles and produces a safety-preserving controller. In addition, our algorithm does not incur significant additional computation cost. We demonstrate our proposed method in simulation.
BibTeX:
@inproceedings{Dhinakaran-et-al-CDC-17,
  	author    = {Aparna Dhinakaran and
               Mo Chen and
               Glen Chou and
               Jennifer C. Shih and
               Claire J. Tomlin},
  title     = {A hybrid framework for multi-vehicle collision avoidance},
  booktitle = {56th {IEEE} Annual Conference on Decision and Control, {CDC} 2017,
               Melbourne, Australia, December 12-15, 2017},
  pages     = {2979--2984},
  year      = {2017},
}


Workshop Papers/Technical Reports

Authors Title Year Venue
H. Wang*, G. Chou*, D. Berenson Gaussian Process Constraint Learning for Scalable Safe Motion Planning from Demonstrations
[Abstract] [PDF] [Cite]
2021 Robotics: Science and Systems, Workshop on Integrating Planning and Learning
Abstract: We propose a method for learning constraints represented as Gaussian processes (GPs) from locally-optimal demonstrations. Our approach uses the Karush-Kuhn-Tucker (KKT) optimality conditions of the demonstrations to determine the location and shape of the constraints, and uses these to train a GP which is consistent with this information. We demonstrate our method on a 12D quadrotor constraint learning problem, showing that the learned constraint is accurate and can be used within a kinodynamic RRT to plan probabilistically-safe trajectories.
BibTeX:
@inproceedings{Wang-RSSWS-21,
  author    = {Hao Wang and Glen Chou and Dmitry Berenson},
  title     = {Gaussian Process Constraint Learning for Scalable Safe Motion Planning from Demonstrations},
  booktitle = {Robotics: Science and Systems, Workshop on Integrating Planning and Learning},
  year      = {2021}
}
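Illustrative Python sketch: a toy scikit-learn version of the construction above (the circular constraint, point labels, kernel, and hyperparameters are assumptions, and the KKT step that identifies boundary points is not shown). Boundary points get value 0, demonstrated safe points a negative value, and the GP posterior mean approximates the constraint function g(x) <= 0, with the posterior standard deviation supplying a cautious margin.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical training data for a circular constraint ||x|| <= 1.
angles = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
boundary = np.column_stack([np.cos(angles), np.sin(angles)])    # KKT-identified boundary points
safe = np.array([[0.0, 0.0], [0.3, 0.2], [-0.2, 0.4]])          # states visited by demonstrations
X = np.vstack([boundary, safe])
y = np.concatenate([np.zeros(len(boundary)), np.full(len(safe), -0.5)])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.7), alpha=1e-3, optimizer=None).fit(X, y)

# Cautious safety check: posterior mean plus a standard-deviation margin must be <= 0.
mean, std = gp.predict(np.array([[0.2, 0.1], [1.3, 0.0]]), return_std=True)
print(mean + 2.0 * std <= 0.0)    # expected: [True, False] for this toy setup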
G. Chou, N. Ozay, D. Berenson Learning Parametric Constraints in High Dimensions from Demonstrations
[Abstract] [PDF] [Cite] [Notes]
2019 Robotics: Science and Systems, Workshop on Robust Autonomy
Abstract: We extend the learning from demonstration paradigm by providing a method for learning unknown constraints shared across tasks, using demonstrations of the tasks, their cost functions, and knowledge of the system dynamics and control constraints. Given safe demonstrations, our method uses hit-and-run sampling to obtain lower cost, and thus unsafe, trajectories. Both safe and unsafe trajectories are used to obtain a consistent representation of the unsafe set via solving a mixed integer program. Additionally, by leveraging a known parameterization of the constraint, we modify our method to learn parametric constraints in high dimensions. We show that our method can learn a six-dimensional pose constraint for a 7-DOF robot arm.
BibTeX:
@inproceedings{Chou-RSSWS-19,
  author    = {Glen Chou and Dmitry Berenson and Necmiye Ozay},
  title     = {Learning Parametric Constraints in High Dimensions from Demonstrations},
  booktitle = {Robotics: Science and Systems, Workshop on Robust Autonomy},
  year      = {2019}
}
Notes: Selected for long contributed talk.
F. Jiang*, G. Chou*, M. Chen, C. J. Tomlin Using neural networks to compute approximate and guaranteed feasible Hamilton-Jacobi-Bellman PDE solutions
[Abstract] [arXiv] [Cite]
2016 arXiv
Abstract: To sidestep the curse of dimensionality when computing solutions to Hamilton-Jacobi-Bellman partial differential equations (HJB PDE), we propose an algorithm that leverages a neural network to approximate the value function. We show that our final approximation of the value function generates near optimal controls which are guaranteed to successfully drive the system to a target state. Our framework is not dependent on state space discretization, leading to a significant reduction in computation time and space complexity in comparison with dynamic programming-based approaches. Using this grid-free approach also enables us to plan over longer time horizons with relatively little additional computation overhead. Unlike many previous neural network HJB PDE approximating formulations, our approximation is strictly conservative and hence any trajectories we generate will be strictly feasible. For demonstration, we specialize our new general framework to the Dubins car model and discuss how the framework can be applied to other models with higher-dimensional state spaces.
BibTeX:
@article{Jiang-et-al-16,
  author  = {Frank J. Jiang and Glen Chou and Mo Chen and Claire J. Tomlin},
  title   = {Using neural networks to compute approximate and guaranteed feasible Hamilton-Jacobi-Bellman PDE solutions},
  journal = {CoRR},
  volume  = {abs/1611.03158},
  year    = {2016}
}
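Illustrative Python sketch: a compact PyTorch version of the grid-free idea above (not the report's conservative, feasibility-guaranteed formulation; the toy problem, network size, and training schedule are assumptions). A small network is regressed onto one-step Bellman backups for the minimum-time-to-reach value function of a 1D single integrator x' = u with |u| <= 1 and target |x| <= 0.1, whose true value is |x| - 0.1 outside the target.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
dt = 0.05
controls = torch.tensor([-1.0, 0.0, 1.0])

for _ in range(2000):
    x = (torch.rand(256, 1) - 0.5) * 2.0                         # sample states in [-1, 1]
    with torch.no_grad():
        nxt = x + dt * controls.view(1, -1)                       # next states under each control
        v_next = net(nxt.reshape(-1, 1)).reshape(256, -1).min(dim=1).values.unsqueeze(1)
        target = dt + v_next                                      # one-step Bellman backup
        target = torch.where(x.abs() <= 0.1, torch.zeros_like(target), target)   # target set condition
    loss = torch.mean((net(x) - target) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()

x_test = torch.tensor([[0.0], [0.5], [1.0]])
print(net(x_test).detach().squeeze())     # should approach |x| - 0.1, i.e. roughly 0.0, 0.4, 0.9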