Model misspecification is a potential problem for any parametric-model-based analysis. However, the measurement and consequences of model misspecification have not been well formalized in the context of causal inference. A measure of model misspecification is proposed, and the consequences of model misspecification for non-experimental causal inference methods are investigated. The metric is then used to explore which estimators are more sensitive to misspecification of the outcome and/or treatment assignment model. Three frequently used estimators of the treatment effect are considered, all of which rely on the propensity score: (1) full matching, (2) 1:1 nearest neighbor matching, and (3) weighting. The performance of these estimators is evaluated under two different sampling designs: (1) simple random sampling (SRS) and (2) a two-stage stratified survey. As the degree of misspecification of either the propensity score or outcome model increases, so do the bias and the root mean square error, while the coverage decreases. Results are similar under the simple random sample and the complex survey design.
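To make the third class of estimators concrete, the following is a minimal sketch of inverse-probability weighting with an estimated propensity score. The simulated data, logistic propensity model, and true treatment effect of 2 are all illustrative assumptions, not the paper's actual design; the paper's simulations and the matching estimators are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                      # a single confounder (illustrative)
p_true = 1 / (1 + np.exp(-0.5 * x))         # true propensity score
t = rng.binomial(1, p_true)                 # treatment assignment
y = 2.0 * t + x + rng.normal(size=n)        # outcome model; true effect = 2

# Fit a logistic propensity score model (intercept + slope) by Newton-Raphson.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (t - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hess, grad)
ps = 1 / (1 + np.exp(-X @ beta))            # estimated propensity scores

# Inverse-probability-weighted estimate of the average treatment effect.
ate = np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))
print(round(ate, 2))
```

Misspecifying either model in this sketch (e.g., generating treatment from a nonlinear function of `x` while still fitting the linear-logistic propensity model) is the kind of departure whose bias, RMSE, and coverage consequences the paper quantifies.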