Thursday, November 20, 2025 10:00AM

Master's Thesis Proposal

 

Lander Wolfgang Schillinger Arana

(Faculty Advisor: Sarah Li)

 

"A Markov Decision Process Framework for Multi-Agent Satellite Collision Avoidance"

 

Thursday, November 20

10:00 a.m.

Montgomery Knight Bldg., Room 325

 

Abstract: 

We extend a Markov decision process (MDP) framework that autonomously makes guidance decisions for satellite collision avoidance maneuvers (CAMs) (F. Ferrara et al. 2025). In this framework, a reinforcement learning policy gradient (RL-PG) algorithm directly optimizes the guidance policy using historical CAM data. In addition to maintaining acceptable collision risk, this approach seeks to minimize the average propellant consumption of CAMs by making early maneuver decisions. Propellant consumption is computed using a high-thrust impulsive phasing maneuver. The CAM is modeled as a continuous-state, discrete-action, finite-horizon MDP, where the critical decision is when to initiate the maneuver. Decision rewards are modeled using analytical models of collision risk, propellant consumption, and transit orbit geometry. By deciding to maneuver earlier than conventional methods, the Markov policy favors CAMs that achieve comparable collision risk reduction while consuming less propellant. In this proposal, we verify the framework on historical data of tracked conjunction events and conduct an extensive parameter-sensitivity study. When evaluated on synthetic conjunction events, the trained policy consumes significantly less propellant, both overall and per maneuver, than a conventional cutoff policy that initiates maneuvers 24 hours before the time of closest approach (TCA). On historical conjunction events, the trained policy consumes more propellant overall but less propellant per maneuver. For both historical and synthetic conjunction events, the trained policy is slightly more conservative than cutoff policies in identifying conjunction events that warrant CAMs. Furthermore, we propose a constrained propellant-consumption optimization that determines the optimal number of phasing revolutions, refining the current MDP model and improving its performance.
Additionally, we propose an alternative low-thrust maneuver with a continuous-thrust model, plan to integrate it into the MDP framework, and will compare its performance to the high-thrust case. Finally, we propose a multi-agent extension of this single-agent MDP: by formulating the problem as a stochastic (Markov) game, we develop a method to compute a set of Nash equilibria (NE) under a variety of communication, coordination, and fairness-metric settings, and select the best-performing NE from this set.
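As a toy illustration of the maneuver-timing decision described in the abstract, the sketch below solves a finite-horizon "maneuver now vs. wait" MDP by backward induction. The horizon, cost functions, and penalty values here are hypothetical placeholders chosen for illustration only; they are not the analytical collision-risk or propellant models used in the actual framework.

```python
# Toy finite-horizon MDP for maneuver timing, solved by backward induction.
# All models below (propellant cost, collision penalty) are illustrative
# stand-ins, not the analytical models from the thesis framework.

WAIT, MANEUVER = 0, 1
HORIZON = 72  # hypothetical: one decision epoch per hour before TCA

def propellant_cost(hours_to_tca):
    # Hypothetical model: maneuvering earlier allows more phasing
    # revolutions, so the required delta-v (and propellant) is smaller.
    return 1.0 / max(hours_to_tca, 1)

def collision_penalty(hours_to_tca):
    # Hypothetical penalty for reaching TCA without having maneuvered.
    return 5.0 if hours_to_tca == 0 else 0.0

def solve():
    # value[t]: best achievable reward with t hours left and no maneuver yet
    value = [0.0] * (HORIZON + 1)
    policy = [WAIT] * (HORIZON + 1)
    value[0] = -collision_penalty(0)        # terminal state: unmitigated TCA
    for t in range(1, HORIZON + 1):
        maneuver_now = -propellant_cost(t)  # maneuvering ends the episode
        wait = value[t - 1]                 # defer the decision one epoch
        if maneuver_now > wait:
            value[t], policy[t] = maneuver_now, MANEUVER
        else:
            value[t], policy[t] = wait, WAIT
    return value, policy

value, policy = solve()
```

Under this toy cost model the optimal policy maneuvers as early as possible, mirroring the abstract's observation that earlier maneuvers trade a small, cheaper burn for the same risk reduction.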

Committee:

Prof. Sarah Li (advisor), School of Aerospace Engineering
Prof. Lakshmi Sankar, School of Aerospace Engineering
Prof. Glenn Lightsey, School of Aerospace Engineering
Prof. Brian C. Gunter, School of Aerospace Engineering