Language model probabilistic programming extends standard probabilistic programming languages (PPLs) with new primitives for sampling from and conditioning on the outputs of large language models (LLMs). In principle, language model probabilistic programs can encode many distributions that would be difficult to elicit via prompting alone. This abstract advocates sequential Monte Carlo (SMC) as an approach to efficient inference in language model probabilistic programs. We first briefly describe our LLaMPPL library for language model probabilistic programming, which makes it easy to rapidly explore a large space of sound SMC algorithms for a given language modeling task and automates the efficient implementation of SMC, including automatic batching of LLM calls. We then offer our perspective, informed by preliminary experiments with LLaMPPL, on two key design challenges faced by users of SMC (choosing the intermediate targets and the proposal distributions), through the lens of three example models that outperform state-of-the-art LLMs and constrained-generation techniques on several tasks.
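
To make the programming model and the two design challenges concrete, the sketch below shows token-level SMC for a constrained-generation task. It is an illustrative toy, not the LLaMPPL API: toy_next_token_probs, constraint, and smc are hypothetical stand-ins, and a real program would query a neural language model (with the library batching those queries). The sketch illustrates one simple answer to the design challenges mentioned above: the intermediate targets restrict the model's prefix distribution to the constraint, the proposal samples each token from the constrained, renormalized next-token distribution, and the probability mass the constraint retains becomes the incremental importance weight.

```python
# Toy token-level SMC under a hard constraint. All names are hypothetical
# stand-ins for illustration, not the LLaMPPL API.
import math
import random

VOCAB = ["the", "a", "quick", "cat", "jumped", "sat", "."]

def toy_next_token_probs(context):
    """Stand-in for an LLM's next-token distribution (uniform here)."""
    return {tok: 1.0 / len(VOCAB) for tok in VOCAB}

def constraint(token):
    """Hard constraint: only tokens of at most 3 characters."""
    return len(token) <= 3

def smc(num_particles=20, num_tokens=5):
    """SMC: propose from the constrained model, weight, resample."""
    particles = [([], 0.0) for _ in range(num_particles)]  # (tokens, log-weight)
    for _ in range(num_tokens):
        step = []
        for seq, logw in particles:
            probs = toy_next_token_probs(seq)
            # Proposal: the model's next-token distribution restricted to the
            # constraint and renormalized; the incremental importance weight
            # is the probability mass the constraint retains. With a real,
            # context-dependent model this mass differs across particles.
            allowed = {t: p for t, p in probs.items() if constraint(t)}
            mass = sum(allowed.values())
            tok = random.choices(list(allowed), weights=list(allowed.values()))[0]
            step.append((seq + [tok], logw + math.log(mass)))
        # Multinomial resampling: clone high-weight particles, reset weights.
        m = max(lw for _, lw in step)
        ws = [math.exp(lw - m) for _, lw in step]
        step = random.choices(step, weights=ws, k=num_particles)
        particles = [(seq, 0.0) for seq, _ in step]
    return particles

if __name__ == "__main__":
    for seq, _ in smc(num_particles=5, num_tokens=4):
        print(" ".join(seq))
```

With the uniform toy distribution every particle receives the same weight, so resampling is uninformative; with a real, context-dependent model the retained mass varies across particles, and resampling reallocates computation toward continuations that can still satisfy the constraint.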