(356i) AI Based Analysis for Graphene Synthesis - from Size Measurement to SEM Image Generation | AIChE

Hwang, S. - Presenter, Inha University
Hwang, G. - Presenter, Inha University
Graphene is a single atomic layer of π-bonded carbon atoms. Owing to its high mechanical strength, electrical conductivity, and carrier mobility, it is an excellent material for electronic and photonic components such as flexible transparent electrodes and field-effect transistors [1]. In this study, to synthesize high-quality graphene, chemical vapor deposition (CVD) experiments were conducted using CH4 as the precursor on a Cu substrate at high temperature [2]. The grain size, coverage, domain density, and aspect ratio of the graphene, which vary with synthesis conditions such as temperature, annealing time, growth time, and hydrogen supply, were measured and analyzed. Because graphene grains vary in shape, their size is usually measured manually, a method that is not only laborious but also inefficient and resource-intensive. We therefore developed a faster, more efficient, inexpensive, and standardized measurement method based on a Region-proposal Convolutional Neural Network (R-CNN), an object detection algorithm [3]. In this approach, the graphene grains were assumed to be hexagonal, so the model also estimated the aspect ratio. SEM images of imprinted graphene, converted to monochrome by k-means clustering, were used as inputs for R-CNN training [4]. The R-CNN-predicted size and aspect ratio fell within an error margin of about 0-10% when validated against experimental data. Subsequently, an Artificial Neural Network (ANN) was modelled to relate the process variables to the size, coverage, and domain density [5]. A Support Vector Machine (SVM) was further developed to investigate the effect of the process conditions on graphene shape, using the aspect ratio as the target variable [6].
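The two image-measurement steps above can be sketched in simplified form. The snippet below is a minimal illustration, not the authors' implementation: a 1-D, two-cluster k-means that binarizes SEM pixel intensities into grain versus substrate, and a hypothetical helper that converts a detected bounding box into a size and aspect ratio under the paper's regular-hexagon assumption (both function names and the choice of r = w/2 are assumptions for illustration).

```python
import math


def kmeans_binarize(pixels, iters=20):
    """Two-cluster (k=2) k-means on grayscale intensities.

    Returns a mask with 1 for pixels in the darker cluster (grain)
    and 0 for the brighter cluster (substrate).
    """
    c0, c1 = min(pixels), max(pixels)  # initialize centroids at the extremes
    for _ in range(iters):
        g0 = [p for p in pixels if abs(p - c0) <= abs(p - c1)]
        g1 = [p for p in pixels if abs(p - c0) > abs(p - c1)]
        if g0:
            c0 = sum(g0) / len(g0)  # update centroid as cluster mean
        if g1:
            c1 = sum(g1) / len(g1)
    dark, bright = min(c0, c1), max(c0, c1)
    return [1 if abs(p - dark) <= abs(p - bright) else 0 for p in pixels]


def hexagon_metrics(box_w, box_h):
    """Estimate grain area and aspect ratio from an R-CNN bounding box,
    assuming the grain is a regular hexagon of circumradius r = box_w / 2
    (an illustrative assumption; area = 3*sqrt(3)/2 * r**2)."""
    r = box_w / 2.0
    area = 3.0 * math.sqrt(3.0) / 2.0 * r * r
    return area, box_w / box_h
```

In practice k-means would run on a full 2-D image array and the bounding boxes would come from the trained R-CNN; this sketch only shows the arithmetic behind each step.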
Finally, a Generative Adversarial Network (GAN) [7] was developed to generate hexagonal graphene grains as black-and-white monochrome images with the same domain density as the experimental data. All of the developed algorithms (R-CNN, ANN, SVM, and GAN) were combined into a complete network for generating graphene images. To reproduce the color and appearance of the experimental SEM images, a Pix2Pix model was developed to translate the monochrome images into realistic SEM images [8]. On comparison, the predicted images showed high agreement with the experimentally obtained ones. As a result, the newly developed model allowed us to find optimal experimental conditions efficiently, gain deeper insight into graphene CVD synthesis, and predict results in the form of images.
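The combined network described above is essentially a composition of the four trained models. The sketch below shows one plausible wiring only; every function name and the dictionary keys are hypothetical placeholders standing in for the trained ANN, SVM, GAN, and Pix2Pix networks, which are passed in as callables.

```python
def run_pipeline(conditions, predict_ann, classify_svm,
                 generate_gan, colorize_pix2pix):
    """Hypothetical end-to-end wiring of the four trained models:
    process conditions -> grain statistics (ANN) and shape class (SVM)
    -> monochrome grain map at the predicted domain density (GAN)
    -> SEM-style colored image (Pix2Pix)."""
    stats = predict_ann(conditions)           # size, coverage, domain density
    shape = classify_svm(conditions)          # e.g. shape class from aspect ratio
    mono = generate_gan(stats["density"], shape)  # black-and-white grain map
    return colorize_pix2pix(mono), stats      # realistic SEM image + statistics
```

Because each stage is injected as a callable, the pipeline can be exercised with stand-in functions before the real networks are attached:

```python
image, stats = run_pipeline(
    {"T": 1000, "growth_min": 30},
    lambda cond: {"size": 4.0, "coverage": 0.8, "density": 12.0},
    lambda cond: "hexagonal",
    lambda density, shape: f"mono(density={density},shape={shape})",
    lambda mono: f"sem({mono})",
)
```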

Literature cited:

[1] Phaedon Avouris and Fengnian Xia, “Graphene applications in electronics and photonics”, MRS Bulletin, vol. 37, pp. 1225-1234, 2012.

[2] Xuesong Li, Weiwei Cai, Jinho An, Dongxing Yang, Richard Piner, Aruna Velamakanni, Inhwa Jung, Rodney S. Ruoff, “Large-Area Synthesis of High-Quality and Uniform Graphene Films on Copper Foils”, Science, vol. 324, pp. 1312-1314, 2009.

[3] Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik, “Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 580-587, 2014.

[4] Siddheswar Ray and R. H. Turi, “Determination of number of clusters in k-means clustering and application in colour image segmentation”, Proceedings of the 4th International Conference on Advances in Pattern Recognition and Digital Techniques, pp. 137-143, 1999.

[5] Kenji Suzuki (Ed.), “Artificial Neural Networks - Methodological Advances and Biomedical Applications”, ISBN 978-953-307-243-2, pp. 2-14, 2011.

[6] C.-W. Hsu, C.-C. Chang, and C.-J. Lin, “A practical guide to support vector classification”, Technical report, Department of Computer Science, National Taiwan University, 2003.

[7] Ian J. Goodfellow, Jean Pouget-Abadie, et al., “Generative adversarial nets”, Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

[8] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros, “Image-to-Image Translation with Conditional Adversarial Networks”, Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5967-5976, 2017.