A common task in many fields of science is to optimize a function defined as an expected value. In Lew et al. (2023), the authors introduced ADEV: an extension to forward-mode automatic differentiation (AD) that derives unbiased gradient estimators for loss functions defined as expected values, over an expressive class of probabilistic computations. Forward-mode AD has a known scaling limitation when the input dimension is large relative to the output: each forward pass yields only a single directional derivative, so recovering the full gradient of $f : \mathbb{R}^n \to \mathbb{R}$ requires $n$ passes, one per input direction. This precludes the use of ADEV in settings that combine large neural networks with probabilistic computation. In this work, we investigate automatically deriving a reverse-mode gradient estimator from a given forward-mode gradient estimator by extending the reverse-mode algorithm of Radul et al. (2022).
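
To make the scaling gap concrete, the sketch below (a toy example in JAX, not ADEV itself; the Gaussian family, the loss $f$, and the score-function estimator are illustrative assumptions) builds an unbiased single-sample surrogate for $\nabla_\theta \, \mathbb{E}_{x \sim \mathcal{N}(\theta, I)}[f(x)]$ and differentiates it both ways: forward mode needs one JVP per input dimension, while reverse mode recovers the full gradient in a single backward pass.

```python
import jax
import jax.numpy as jnp

# Toy downstream computation inside the expectation (an assumption for
# illustration; ADEV handles a much broader class of probabilistic programs).
def f(x):
    return jnp.sum(jnp.tanh(x))

# Score-function (REINFORCE) surrogate: differentiating it w.r.t. theta
# yields the unbiased single-sample estimator f(x) * (x - theta).
def surrogate(theta, eps):
    x = jax.lax.stop_gradient(theta + eps)    # one sample x ~ N(theta, I), held fixed
    log_p = -0.5 * jnp.sum((x - theta) ** 2)  # log N(x; theta, I) up to a constant
    return f(x) * log_p

n = 100  # input dimension
theta = jnp.zeros(n)
eps = jax.random.normal(jax.random.PRNGKey(0), (n,))
g = lambda t: surrogate(t, eps)

# Forward mode: one JVP per basis direction, so n passes for the full gradient.
fwd = jax.vmap(lambda e: jax.jvp(g, (theta,), (e,))[1])(jnp.eye(n))

# Reverse mode: the full gradient in a single backward pass.
rev = jax.grad(g)(theta)

print(jnp.max(jnp.abs(fwd - rev)))  # the two agree up to floating-point error
```

The `stop_gradient` marks the sample as fixed data, so both AD modes differentiate only the density term; the estimates coincide, but the forward-mode cost grows linearly in $n$, which is exactly the regime where a reverse-mode estimator is needed.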