(287a) Machine Learning for Process Applications in Cement Industries

Authors 

Gentimis, T. - Presenter, Symbiolabs Circular Intelligence P.C.
Dalamagas, T. - Presenter, SYMBIOLABS Circular Intelligence PC
Chatzilenas, C., SYMBIOLABS Circular Intelligence LC
Kokosis, A., National Technical University of Athens
1. Introduction

Cement grinding is a key process in the cement industry. During this phase, gypsum and mineral or artificial raw materials (e.g. pozzolan, ash, limestone) are added to the clinker (the basic raw material for cement production) and then ground in special mills until a very small grain size is achieved. During the grinding process, corrective actions are constantly taken, both in the proportions of the ingredients of the recipe and in the operating parameters of the mill, in order to achieve the appropriate fineness of the mixture. A large amount of electrical energy is consumed by mill operation. Therefore, it is essential for a cement industry to estimate the energy consumption of its cement mills. This estimation can support a number of processes, such as compliance with the requirements of ISO 50001 for energy efficiency, detection of changes in the operating parameters of the mill to save energy, and execution of scenarios for the energy consumption footprint of new mixtures. To this end, we propose a model that estimates the energy consumption of a cement mill. We followed a data-driven approach, utilizing historical data recorded during the grinding process, and we designed and implemented machine learning models for energy prediction. Our models have been applied in the TITAN SA plant in Kamari, Viotia. Results show that we improved the accuracy of energy consumption prediction by one order of magnitude compared to the baseline method.

2. Background

TITAN uses a proprietary methodology for creating a model that estimates energy consumption in the company’s cement mills. This methodology takes into consideration a number of variables, such as the cement mix composition and the product’s blaine. At the core of this modelling approach is a linear function with weighted fractions of clinker, gypsum, slag, fly ash, pozzolan, limestone and other components in the product. The weights assigned to each component of the mix are estimated for each of TITAN’s cement mills using linear regression on past data. The methodology also adds corrections to the model for overgrinding of the softer components, and takes into consideration the operational parameters of each type of cement mill (e.g. mill type, separator type, etc.). The model created by this methodology calculates an estimate of power consumption for a standard blaine value and then, using an adjustment formula, corrects it to the final estimate based on the product’s blaine. In our approach we were granted full access to the proprietary methodology adopted by TITAN (not to be disclosed in this abstract), in order to use it as a baseline method and compare it against the results of our methodology.
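The exact formula is proprietary, but a generic weighted-fraction linear baseline of this kind can be sketched as follows. The component weights, the reference blaine value and the adjustment exponent below are hypothetical placeholders chosen only for illustration, not TITAN's values.

    import numpy as np

    # Hypothetical per-component weights (kWh/t per unit mass fraction),
    # of the kind fitted per mill by linear regression on past data.
    # These numbers are placeholders, NOT TITAN's proprietary coefficients.
    WEIGHTS = {"clinker": 32.0, "gypsum": 10.0, "slag": 45.0,
               "fly_ash": 12.0, "pozzolan": 28.0, "limestone": 8.0}
    BLAINE_REF = 3500.0   # assumed standard blaine value (cm^2/g)
    BLAINE_EXP = 1.3      # assumed exponent for the blaine adjustment

    def baseline_energy(fractions, blaine):
        """Estimate specific power consumption (kWh/t) for a cement mix.

        fractions: dict of component mass fractions (should sum to ~1.0)
        blaine:    product fineness (cm^2/g)
        """
        # Weighted linear combination of the mix components at standard blaine
        e_std = sum(WEIGHTS[c] * fractions.get(c, 0.0) for c in WEIGHTS)
        # Correct the standard-blaine estimate to the product's actual blaine
        return e_std * (blaine / BLAINE_REF) ** BLAINE_EXP

    print(baseline_energy({"clinker": 0.80, "gypsum": 0.05, "limestone": 0.15}, 3800.0))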

3. Methodology

3.1 Data collection

We collected hourly operations data from the TITAN Kamari cement plant, near Athens, from January 2018 to August 2019. Processing of the data resulted in a dataset with 15,592 records and 43 features. The data comprise features for the cement mix composition (e.g. the percentages of clinker, gypsum, fly ash, etc.) and operational parameters (e.g. separator speed, fan speed, etc.). Furthermore, measurements from sensors (temperature at various stages of the mill process, etc.), quantities of additives and qualitative characteristics of the mix (blaine, etc.) were included as extra sets of features.
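A minimal sketch of how such a dataset can be assembled and split is given below; the file name and column names are hypothetical, chosen only to mirror the feature groups described above.

    import pandas as pd
    from sklearn.model_selection import train_test_split

    # Hypothetical file and column names; the actual dataset has 15,592 hourly
    # records and 43 features (mix composition, operational parameters,
    # sensor measurements, additives and quality characteristics).
    df = pd.read_csv("kamari_mill_hourly.csv")

    target = "specific_energy_kwh_t"   # assumed name of the energy target column
    X = df.drop(columns=[target])
    y = df[target]

    # 80% of the records for training, 20% held out for testing
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, shuffle=True, random_state=42)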

3.2 Prediction Methods based on Machine Learning

We implemented a group of models using 7 prediction methods, grouped in 3 major categories: (a) Multiple Linear Regression, (b) Ensemble Methods, and (c) Hyperplanes. Multiple Linear Regression includes the classic form of linear regression, which calculates the coefficients of a linear function by minimizing the mean squared error between the predicted and the actual values. In the Ensemble Methods category, AdaBoost, XGBoost and Random Forests perform target estimation by combining the estimates of many individual prediction models based on decision trees. Finally, the Hyperplanes category includes Support Vector Regression (SVR), which is based on the support vector machine classification model.
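For illustration, the named methods map onto standard scikit-learn and XGBoost estimators as in the sketch below; only the methods named above are instantiated (the abstract counts 7 methods in total), and the hyperparameters shown are assumptions, not the tuned values used in the study.

    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import AdaBoostRegressor, RandomForestRegressor
    from sklearn.svm import SVR
    from xgboost import XGBRegressor

    models = {
        # (a) Multiple Linear Regression: minimises mean squared error
        "linear": LinearRegression(),
        # (b) Ensemble methods: combine many decision-tree estimators
        "adaboost": AdaBoostRegressor(n_estimators=200, random_state=42),
        "random_forest": RandomForestRegressor(n_estimators=300, random_state=42),
        "xgboost": XGBRegressor(n_estimators=300, learning_rate=0.1, random_state=42),
        # (c) Hyperplanes: support vector regression
        "svr": SVR(kernel="rbf", C=10.0),
    }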

Feature selection and cross-validation. We performed feature ranking through recursive feature elimination and cross-validated selection of the best number of features. More specifically, starting from the list of all features, the method recursively removes one feature in every iteration. In this way, we came up with the list of features that returned the best results during the evaluation of the model on the validation set. For the implementation of our methods, 80% of the whole dataset is used as the training set and the rest as the testing set, which is used for the evaluation of the model. Model training was performed with cross-validation; specifically, we applied K-Fold cross-validation. This method uses different parts of the data to fit and test the model. In particular, the method consists of the following steps:

  1. Data is split into K equal-sized folds.
  2. For the k-th fold, the model is fitted to the other K-1 folds.
  3. Prediction error of the fitted model is calculated when predicting the k-th fold.
  4. The second and third steps are repeated for all folds and the K estimates of prediction error are combined.

In the K-Fold cross-validation that we performed, we split the training set into 4 folds with shuffling, as sketched below.
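The following is a minimal sketch of recursive feature elimination with cross-validated selection of the number of features, combined with the 4-fold shuffled split described above, using scikit-learn. It assumes the X_train/y_train split from the earlier data-collection sketch, and the choice of estimator and scoring function here are assumptions.

    from sklearn.feature_selection import RFECV
    from sklearn.model_selection import KFold, cross_val_score
    from sklearn.ensemble import RandomForestRegressor

    # 4 folds with shuffling, as described above
    cv = KFold(n_splits=4, shuffle=True, random_state=42)

    # Recursive feature elimination: drop one feature per iteration and keep
    # the number of features that scores best under cross-validation.
    selector = RFECV(
        estimator=RandomForestRegressor(n_estimators=100, random_state=42),
        step=1, cv=cv, scoring="neg_mean_squared_error")
    selector.fit(X_train, y_train)
    X_train_sel = selector.transform(X_train)

    # Cross-validation scores on the reduced feature set
    scores = cross_val_score(
        RandomForestRegressor(n_estimators=100, random_state=42),
        X_train_sel, y_train, cv=cv, scoring="r2")
    print(selector.n_features_, scores.mean(), scores.std())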

Evaluation metrics. For every dataset used in our methodology (training, validation and testing), we calculate evaluation metrics to understand the model’s behavior and its predictive ability. A simple metric is the Mean Absolute Error (MAE), which is the average of the absolute differences between the predicted and actual values. Another metric used is the Mean Squared Error (MSE), which is similar to the MAE but squares the differences between actual and predicted values before averaging them. The Root Mean Squared Error (RMSE) is the square root of the MSE. We also used R^2 (R-squared), which quantifies the quality of fit of a set of predicted output values to the actual output values.
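In standard notation, with $y_i$ the actual values, $\hat{y}_i$ the predictions and $\bar{y}$ the mean of the actual values over $n$ samples, these metrics are defined as:

    \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\lvert y_i - \hat{y}_i\rvert, \qquad
    \mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2, \qquad
    \mathrm{RMSE} = \sqrt{\mathrm{MSE}}, \qquad
    R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}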

4. Results

As mentioned above, for K-Fold cross-validation we split the training set into 4 folds with shuffling. Initially, we examined the evaluation metrics on the validation set. For the Multiple Linear Regression methods, the R^2 and MSE in every fold of the validation set had a high standard deviation, indicating poor performance of these methods. Hence, it was not worthwhile to check their evaluation metrics on the testing set. The same behavior was observed for Support Vector Regression. In contrast, Ensemble methods performed very well, so we continued by exploring the evaluation metrics for those methods on the testing set.

The baseline method had poor performance, indicated by a negative R^2 and high values of MAE, MSE and RMSE. Among the Ensemble methods, AdaBoost had the worst performance, with an R^2 in the testing set not exceeding 0.6 and an MAE of almost 1.8. Concerning Random Forest, even though the results on the training set seemed promising, its performance on the validation and testing sets was not as good as expected. Finally, XGBoost Regression delivered the best results among all methods with respect to all evaluation metrics, since the R^2 in the testing set fluctuated near 0.65, the MAE was reduced even further and the MSE was the lowest observed among the three methods. Comparing the results from XGBoost to those from the baseline method, we observe a decrease in both MAE and MSE. In general, Ensemble methods delivered improved results. As the results demonstrate, the machine learning methods used in this study outperformed the baseline method.

5. Conclusions

To summarise, following data science practices and adopting machine learning (ML) technologies, we developed energy consumption prediction models for a cement mill of the TITAN SA plant in Kamari, Viotia. The models exploit historical sensor measurements and operational data and deliver energy consumption predictions whose accuracy improves on the baseline method by one order of magnitude. Specifically, the top three models reached an MAE of less than 1.77 (the existing baseline method has an MAE > 4.00) and an R^2 of more than 0.57 (the existing baseline method has an R^2 < 0.20).