A large body of literature documents the central importance of implementation fidelity to the internal validity of a research design and to judgments of research quality. The What Works Clearinghouse (WWC) provides web-based summary ratings of educational innovations and is the only rating group officially sponsored by the U.S. Department of Education. Yet correspondence with the organization indicates that it disregards information about implementation fidelity in its summary ratings, relying instead on “replicated findings” and suggesting that any fidelity issues that “may have arisen are averaged.” This paper demonstrates the fallacy in that logic. Simulations show that the policy understates both the positive impact of highly effective programs and the negative impact of highly ineffective programs. Implications are discussed.
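The attenuation problem the abstract describes can be sketched with a toy simulation. This is not the paper's actual simulation design; it assumes a simple hypothetical model in which each study's observed effect is the true effect scaled by that study's implementation fidelity, so that pooling ("averaging") across replications with weak implementation pulls the summary estimate toward zero in both directions.

```python
import random

random.seed(0)

def simulate_mean_effect(true_effect, fidelities, n_per_study=500):
    """Pooled effect estimate across replications, where each study
    delivers only a fidelity-scaled dose of the true program effect.
    (Hypothetical model: observed effect = fidelity * true_effect + noise.)"""
    estimates = []
    for f in fidelities:
        # each participant's outcome: diluted treatment effect plus noise
        outcomes = [f * true_effect + random.gauss(0, 1)
                    for _ in range(n_per_study)]
        estimates.append(sum(outcomes) / n_per_study)
    # averaging across studies, as in the WWC policy described above
    return sum(estimates) / len(estimates)

high_fidelity = [1.0, 1.0, 1.0]
mixed_fidelity = [1.0, 0.5, 0.2]   # replications with weak implementation

print(simulate_mean_effect(0.5, high_fidelity))    # close to the true +0.5
print(simulate_mean_effect(0.5, mixed_fidelity))   # pulled toward zero
print(simulate_mean_effect(-0.5, mixed_fidelity))  # harm is also understated
```

Under this toy model the pooled estimate for the mixed-fidelity replications shrinks toward the average fidelity times the true effect, so a highly effective program looks modest and a highly harmful one looks benign, which is the asymmetry-free attenuation the paper's simulations target.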