Data, data everywhere
First, you must have data to work with.
Suppose it's something you can look up: a heat capacity, a binding constant, or the equation for a Gaussian distribution function. Maybe you pull a book off the shelf, assuming you know which book to retrieve. More likely, you now open Google or Yahoo or Bing or some other search engine. Scott Fogler famously quoted G.J. Quarderer of Dow Chemical as saying, "Four to six weeks in the lab can save you an hour in the library." It may not be the library anymore, but the idea is the same.
Then you need the uncertainty. How reliable is the source? Do different sources agree? Absolute accuracy is usually important, but for process development, accuracy of the trend may be all you need. Precision is another matter, and it often isn't listed. We rely on the author's number of significant digits, but are they overreported? What if you can't find the number? You can compute some things, but a lot of things have to be measured.
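When a source reports only significant digits, a common rule of thumb is to take the implied uncertainty as half a unit in the last reported digit. Here's a minimal sketch of that rule; the function name and the heat-capacity value are my own illustration, not from any particular handbook:

```python
from decimal import Decimal

def implied_uncertainty(reported: str) -> float:
    """Half a unit in the last reported digit -- a rough stand-in
    when a source gives no explicit error bar."""
    exponent = Decimal(reported).as_tuple().exponent  # e.g. -2 for "4.18"
    return 0.5 * 10 ** exponent

# A value reported as 4.18 (say, J/(g*K)) implies roughly +/- 0.005.
unc = implied_uncertainty("4.18")
```

Of course, this assumes the author rounded honestly; if the digits are overreported, even this estimate is too optimistic.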
Analytics: Getting knowledge from big data
What you're aiming for is data-driven knowledge discovery, usually to aid decision-making. That includes identifying the objective, forming the question, and deciding what format of answer will be most useful.

What if it's a lot of data? The simplest analytics approach is averaging and finding a standard deviation. One step up might be a linear regression or other curve fit. The book and movie Moneyball gave a human face to analytics for the public: most simply, the approach was to identify the statistical measures that correlated best with baseball success, examining alternatives beyond traditional statistics like overall batting average. A February 2012 New York Times article, "The Age of Big Data," then heralded this approach to the public as a vital business practice. Today, business analytics receive particular emphasis; one type is risk analytics, which examines key vulnerabilities in the supply chain and product-delivery chain. Meanwhile, scientific and engineering analytics are advancing rapidly.

What if the data aren't numbers at all? You might have a video stream or a text-based document. Once it is stored in digital form, you can identify a significant event or the most commonly used words. "Word clouds" visually show the results of text mining, like the one here that is formed from this blog post.
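The steps above — averaging, a least-squares line, and word counting — can all be sketched in a few lines of Python. The yields and temperatures here are made-up numbers purely for illustration:

```python
import statistics
from collections import Counter

# Step one: summarize a column of measurements
# (made-up batch yields, in percent).
yields = [78.2, 81.5, 79.9, 80.4, 82.1, 79.0]
mean = statistics.mean(yields)
spread = statistics.stdev(yields)

# One step up: a least-squares line through (temperature, yield) pairs,
# written out directly so it runs on any Python version.
temps = [300, 310, 320, 330, 340, 350]   # K
tbar, ybar = statistics.mean(temps), mean
slope = (sum((t - tbar) * (y - ybar) for t, y in zip(temps, yields))
         / sum((t - tbar) ** 2 for t in temps))
intercept = ybar - slope * tbar

# Simplest text mining: count the most common words --
# the raw material of a word cloud.
text = "smart manufacturing uses data and data analytics"
top_words = Counter(text.split()).most_common(2)
```

Nothing here is sophisticated, and that's the point: the bulk of everyday analytics is exactly this kind of summarizing and fitting, applied carefully to a well-posed question.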
Leading the creation of "Smart Manufacturing"
ChEs are helping lead the use of data to advance "Smart Manufacturing," which was featured recently in Chemical Engineering Progress. Its vision is using cyberinfrastructure to transform manufacturing into a seamless ecosystem of design, operation, and business. Safety is an overarching goal: zero risk and zero emissions.
Key aspects are widespread deployment of physical and chemical sensors, collection and analytics of supply-chain data, system-scale control and planning of process units, integrated product and process design, rapid prototyping or simulation, and safety monitoring.
Three ChE leaders are Jim Davis, vice provost for Information Technology at UCLA; Tom Edgar, past VP of Computing at UTexas and widely known for his process-control and optimization texts; and Jim Porter, former DuPont chief engineer and vice president, Engineering and Operations. Working with diverse industries including energy, food, pharmaceuticals, machining, and materials, they have helped create a Smart Manufacturing Leadership Coalition that is developing both cyberinfrastructure and commercial test beds.
Chemical engineers to the fore
These advances echo my earlier thoughts that manufacturing is turning to the aspects where ChEs have long-time strengths: processing, property-dominated products, and integrated cyberinfrastructure. At the same time, the challenge for ChEs is to master the new cyberinfrastructure concepts and tools we need, from analytics to massively parallel computing. In my next post, I'll write about how the breadth of the profession is one of our greatest strengths as we move into our new Golden Age of Chemical Engineering.