(365f) When Experimental and Predicted Data Are in Conflict, What Should We Trust? | AIChE


Authors 

Bazyleva, A. - Presenter, National Institute of Standards and Technology
Paulechka, E., National Institute of Standards and Technology, Applied Chemicals and Materials Division
Diky, V., National Institute of Standards and Technology
Magee, J., National Institute of Standards and Technology, Applied Chemicals and Materials Division
Kroenlein, K., National Institute of Standards and Technology
Process simulation has become an essential part of development, optimization, and risk assessment in chemical engineering. To build adequate computer models of chemical, separation, and transport processes, engineers need reliable property data. Process simulations are usually developed on the basis of experimental results, with predicted values used where direct experimental data are missing.

Discrepancies between experimental and predicted values commonly exceed the claimed uncertainties. It has long been taken as axiomatic that experimental measurements are fundamentally more trustworthy and that experimental errors lie within the boundaries of the typical uncertainties for a given method and sample. However, statistical analysis of modern publications shows a high probability of large deviations from expected behavior (e.g., in comparison with literature data or trends for similar compounds, or in self-consistency within the reported data) or of questionable experimental methods. This makes many published data suspect, and their adoption can lead engineers to draw misleading or even dangerous conclusions from process simulations. In general, chemical engineers should realize that a published experimental value cannot be blindly trusted; further comparison frequently reveals that even smooth data in self-consistent reports may be erroneous.

On the other hand, rigorously constructed prediction schemes are usually developed to follow patterns within chemical series. While each prediction method carries a bias arising from its underlying physical model and parameterization, and should only be applied within the chemical domain for which it was developed, it represents the aggregate of a range of experimental results and theoretical expectations. Predictions are therefore often the more conservative choice and are robust against gross errors. It is also much easier to verify that a prediction procedure was followed correctly and to repeat the analysis.
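The consistency criterion implied above — an experimental value is suspect when its deviation from an independent estimate exceeds the combined claimed uncertainties — can be sketched as a simple check. This is an illustrative sketch only, not a TRC procedure; the function name, coverage factor, and the boiling-point numbers are hypothetical:

```python
import math

def consistent(x_exp, u_exp, x_pred, u_pred, k=2.0):
    """Return True if an experimental value agrees with a predicted value
    within the combined expanded uncertainty (coverage factor k).

    u_exp and u_pred are standard uncertainties of the experimental and
    predicted values; they are combined in quadrature, assuming independence.
    """
    return abs(x_exp - x_pred) <= k * math.sqrt(u_exp**2 + u_pred**2)

# Hypothetical normal-boiling-point comparison (values in K):
# experimental 391.2 with u = 0.5, predicted 389.0 with u = 1.0
print(consistent(391.2, 0.5, 389.0, 1.0))  # deviation 2.2 K vs. limit ~2.24 K
```

A value failing such a check does not automatically mean the experiment is wrong — the prediction or the claimed uncertainties may be at fault — which is exactly why the abstract argues for mutual validation rather than trusting either side alone.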
However, most prediction methods are ultimately based on experimental data and depend on their quality. Consequently, neither experimental nor predicted data should be considered in a vacuum; only through mutual validation of experimental and predicted data can insight into the true values be gained.

The activities of the Thermodynamics Research Center (TRC) at NIST include collecting and evaluating published experimental property data as well as developing and validating property prediction methods. Quality assessment of newly measured data, as implemented at TRC, is a complex process involving analysis of experimental details (sample, method, instrumentation, etc.), validation against independent additional measurements (preferably made with a different method), checks of consistency between different properties, and comparison with theoretical calculations or correlations for similar compounds. This detailed analysis requires a comprehensive property database and a variety of computational and correlation methods specific to each property. Comprehensive coverage of the entire scope of properties required by chemical engineers is a challenging goal, but with TRC constantly expanding its data archive and continuing to develop new validation procedures, that goal grows more achievable each day. Typical data scenarios, the current coverage of TRC procedures, future plans, and some suggestions for practicing engineers will be discussed.