Every chemical engineer is familiar with scale-up procedures. First, scale the process up to a pilot plant and measure conversion and selectivity at several conditions. Then, fit the data to a model and use that model to plan the scale-up to a commercial unit. The procedure works, but a pilot-plant experimental campaign is expensive, so engineers must often build models from very little data. Typically, most or all of the data are used to train the model, leaving some of the model parameters poorly determined.
Because the experimental data are too sparse to determine many rate parameters, engineers are usually forced to use a simplistic model with just a few parameters. It is not uncommon to lament the paucity of data when working with the model after the experimental campaign has ended. Such a model probably interpolates well between the datapoints, but it most likely does not extrapolate well to untested conditions, which raises the question of which model predictions can be trusted.
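The interpolation-versus-extrapolation issue can be illustrated with a minimal sketch (not from the article): fitting a one-parameter first-order conversion model, X = 1 − exp(−k·τ), to a handful of hypothetical pilot-plant points. The data values and the choice of model here are assumptions for illustration only.

```python
# Minimal sketch: fit a one-parameter first-order conversion model
# to sparse, hypothetical pilot-plant data. All numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def conversion(tau, k):
    """First-order conversion vs. residence time tau: X = 1 - exp(-k*tau)."""
    return 1.0 - np.exp(-k * tau)

# Hypothetical pilot-plant measurements (residence time in min, conversion).
tau_data = np.array([1.0, 2.0, 4.0])
x_data = np.array([0.18, 0.33, 0.55])

k_fit, _ = curve_fit(conversion, tau_data, x_data, p0=[0.1])
print(f"fitted k = {k_fit[0]:.3f} 1/min")

# Interpolating at tau = 3 min (inside the tested range) is reasonable;
# extrapolating to tau = 20 min is far outside it, so that prediction
# deserves much less trust even though the model returns a number.
print(f"interpolated X at tau = 3 min:  {conversion(3.0, k_fit[0]):.2f}")
print(f"extrapolated X at tau = 20 min: {conversion(20.0, k_fit[0]):.2f}")
```

The fit itself looks fine at the measured points, but nothing in the least-squares procedure warns the user that the 20-minute prediction rests on an untested mechanistic assumption.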
In the AIChE Journal Perspective article “Moving from Postdictive to Predictive Kinetics in Reaction Engineering,” author...