We present an initial study of off-policy evaluation (OPE), a problem prerequisite to real-world reinforcement learning (RL), in the context of building control. OPE is the problem of estimating a policy's performance without running it on the actual system, using historical data from the existing controller. It enables control engineers to verify that a new, pretrained policy satisfies the performance requirements and safety constraints of a real-world system prior to interacting with it. While many methods have been developed for OPE, no study has evaluated which ones are suitable for building operational data, which are generated by deterministic policies and have limited coverage of the state-action space. After reviewing existing works and their assumptions, we adopted the approximate model (AM) method. Furthermore, we used bootstrapping to quantify uncertainty and correct for bias. In a simulation study, we evaluated the proposed approach on 10 policies pretrained with imitation learning. On average, the AM method estimated the energy and comfort costs with 1.84% and 14.1% error, respectively.
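The abstract gives no implementation details, so the following is only a minimal sketch of how an AM estimator with bootstrapped uncertainty and bias correction might look: fit a dynamics and cost model on the logged transitions, roll the target policy out in that model, and resample the dataset with replacement to obtain a confidence interval and a bias-corrected estimate. The random-forest model, the per-transition resampling, and all names (`fit_model`, `rollout_cost`, `am_bootstrap_estimate`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def fit_model(states, actions, next_states, costs):
    """Fit an approximate model: dynamics s' ~ f(s, a) and cost c(s, a)."""
    X = np.hstack([states, actions])
    dyn = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, next_states)
    cst = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, costs)
    return dyn, cst

def rollout_cost(dyn, cst, policy, s0, horizon):
    """Roll the target policy out in the fitted model; return cumulative cost."""
    s, total = np.asarray(s0, dtype=float), 0.0
    for _ in range(horizon):
        a = policy(s)
        x = np.hstack([s, a]).reshape(1, -1)
        total += float(cst.predict(x)[0])
        s = dyn.predict(x)[0]
    return total

def am_bootstrap_estimate(dataset, policy, s0, horizon, n_boot=20):
    """AM plug-in estimate plus bootstrap 95% CI and bias correction."""
    states, actions, next_states, costs = dataset
    dyn, cst = fit_model(states, actions, next_states, costs)
    plug_in = rollout_cost(dyn, cst, policy, s0, horizon)
    n, boot = len(states), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample transitions with replacement
        d, c = fit_model(states[idx], actions[idx], next_states[idx], costs[idx])
        boot.append(rollout_cost(d, c, policy, s0, horizon))
    boot = np.array(boot)
    # standard bootstrap bias correction: subtract (mean(boot) - plug_in)
    bias_corrected = 2 * plug_in - boot.mean()
    ci = np.percentile(boot, [2.5, 97.5])
    return bias_corrected, ci

# Toy usage on synthetic logged data (purely hypothetical, not the paper's setup):
n, ds, da = 500, 2, 1
states = rng.normal(size=(n, ds))
actions = rng.normal(size=(n, da))
next_states = 0.9 * states + 0.1 * np.hstack([actions, actions])
costs = (states ** 2).sum(axis=1) + (actions ** 2).sum(axis=1)

def policy(s):
    return -0.5 * s[:1]  # hypothetical deterministic target policy

est, ci = am_bootstrap_estimate((states, actions, next_states, costs),
                                policy, states[0], horizon=20)
print(f"estimated cost {est:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

Resampling whole trajectories rather than individual transitions would respect temporal correlation in real building logs; the per-transition variant above is chosen only to keep the sketch short.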