Predictive Analytics in Operations

In this special guest feature, Michelle Odajima, Data Science Program Leader at OSIsoft, talks about applying predictive analytics to the operational space and provides three basic rules for doing so: first, come up with an objective; second, get good data; third, build a team with data science skills. Michelle has a broad background in science, technology, and predictive analytics/data science. She has worked in numerous industries, including automotive, chemicals, semiconductors, endocrinology and pathology, non-profits, hydrology, IT consulting, and energy and utilities, on both the IT and OT sides of those businesses. With more than 15 years of experience using science, technology, and math to solve problems, she brings deep industrial knowledge of growing data science teams, supporting operations, big data and analytics, and enterprise data strategies.

There was a mystery that scientists at one of our leading national labs couldn’t solve.

Researchers at Lawrence Livermore National Laboratory (LLNL) couldn't pinpoint why the lab's supercomputing center experienced rapid variations in electrical load. The swings were substantial, with power dropping from 9 megawatts to a few hundred kilowatts, but weirdly predictable.

When researchers cross-checked different data streams, the source of the problem became apparent: the drop coincided with scheduled maintenance at the chilling plant. By smoothing out that ramp, LLNL was able to help out its local utility.
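
To make that concrete, the cross-check itself is straightforward once the data streams sit side by side. The sketch below is purely illustrative (the file and column names are invented, not LLNL's), but it shows the basic move: line up a load time series against a maintenance schedule and see how often the anomalies coincide.

    # Hypothetical sketch: lining up a facility-load time series with a
    # maintenance schedule to see whether load drops coincide with scheduled
    # chiller work. File and column names are illustrative, not LLNL's.
    import pandas as pd

    # Load measurements (timestamp, megawatts) and maintenance windows (start, end).
    load = pd.read_csv("facility_load.csv", parse_dates=["timestamp"])
    maint = pd.read_csv("chiller_maintenance.csv", parse_dates=["start", "end"])

    # Flag readings where load falls below a threshold, e.g. under 1 MW.
    low_load = load[load["megawatts"] < 1.0]

    # Count how many of those low-load readings fall inside a maintenance window.
    def in_maintenance(ts):
        return ((maint["start"] <= ts) & (ts <= maint["end"])).any()

    overlap = low_load["timestamp"].apply(in_maintenance).mean()
    print(f"{overlap:.0%} of low-load readings occur during scheduled chiller maintenance")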

The incident speaks volumes about the state of predictive analytics and big data, and the potential impact they can have on operational environments. On one hand, companies face tremendous, potentially debilitating problems that only data analysis can solve. On the other hand, it can sometimes be easy to overlook the obvious.

When applying predictive analytics to the operational space, I generally try to follow three basic rules. First, come up with an objective. Second, get good data. Third, build a team with data science skills.

1. Come Up With an Objective

Predictive maintenance is like the gateway drug of big data. Industry advocacy groups have estimated that unplanned outages cost U.S. manufacturers alone $20 billion every year and that close to 80% of those outages can be prevented. The other great thing about predictive maintenance is that you can often achieve tangible results early.

Traditionally, companies have relied on scheduled maintenance checks to prevent failures. Vince Palsoni, who runs big data initiatives at East Coast utility company Alectra, noted that approximately 89 percent of equipment failures are not related to run times and thus may not be detected in maintenance rounds.

They replaced the time-based “make the rounds” program with a condition-based maintenance program. Companies such as Alectra have saved $3.5 billion in repair costs and downtime by switching to condition-based programs built around data signals.
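
The mechanics of a condition-based trigger can be surprisingly simple. Below is a minimal sketch with made-up sensor fields and thresholds; real programs tune their limits per asset class and often learn them from history rather than hard-coding them. The point is that the work order opens on the asset's condition, not on the calendar.

    # Minimal condition-based maintenance trigger. The sensor fields
    # (vibration_mm_s, bearing_temp_c) and the limits are assumed for
    # illustration, not taken from any utility's actual program.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        asset_id: str
        vibration_mm_s: float
        bearing_temp_c: float

    def needs_inspection(r: Reading) -> bool:
        # Flag the asset based on its measured condition, not elapsed time.
        return r.vibration_mm_s > 7.1 or r.bearing_temp_c > 85.0

    readings = [
        Reading("pump-14", 3.2, 61.0),
        Reading("pump-17", 8.4, 72.0),  # high vibration, gets flagged
    ]
    for r in readings:
        if needs_inspection(r):
            print(f"open work order for {r.asset_id}")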

2. Get Good Data

Once you have an objective, translate it into a solvable question. As one of the largest wind power providers in North America, Xcel Energy wanted to increase its use of wind power and decrease its dependency on coal. However, wind power remains unpredictable, and sudden changes could result in wind turbines “clutching” so that they failed to produce power. When a turbine clutched, Xcel had to quickly ramp up coal generation.

Xcel Energy took the broader problem of unreliable wind power and translated it into a solvable problem – how do you accurately forecast weather so you can optimize turbine performance, avoid clutching, and reduce dependency on coal power? Xcel tracked 49 data points on thousands of turbines and fed that data into a system it designed with NCAR (the National Center for Atmospheric Research) and NREL (the National Renewable Energy Laboratory).
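
As a toy illustration of what “translate it into a solvable question” can look like, the sketch below fits a simple least-squares relationship between a couple of weather features and turbine output. The features and numbers are invented, and Xcel's actual system built with NCAR and NREL is far more sophisticated, but the shape of the question is the same: given a weather forecast, how much output should we expect?

    # Toy example only: forecast turbine output from weather features with a
    # least-squares fit. All values are made up for illustration; this is not
    # Xcel's model.
    import numpy as np

    # Hypothetical history: wind speed (m/s), air temperature (C), output (MW).
    wind_speed = np.array([4.0, 6.5, 8.0, 10.2, 12.1, 13.5])
    air_temp_c = np.array([15.0, 12.0, 9.0, 7.5, 5.0, 4.0])
    output_mw = np.array([0.3, 0.9, 1.4, 2.0, 2.4, 2.6])

    # Fit output = a*wind_speed + b*air_temp + c by least squares.
    X = np.column_stack([wind_speed, air_temp_c, np.ones_like(wind_speed)])
    coef, *_ = np.linalg.lstsq(X, output_mw, rcond=None)

    # Apply the fit to tomorrow's weather forecast; comparing predicted output
    # to actual output over time is how forecast error gets tracked.
    forecast = np.array([11.0, 6.0, 1.0]) @ coef
    print(f"expected output: {forecast:.2f} MW")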

Ultimately, Xcel reduced forecast error by 38% and saved $46 million over six years, producing more wind power from the same number of turbines while reducing coal dependency and other operational costs.

3. Build from There

The mining company Syncrude was experiencing engine meltdowns in its trucks along their routes in the tundra. To mitigate the situation, it collected 6,600 data points from 131 trucks, processing as many as 1,716 values per second, diagnosed the issue, and then implemented a system of predictive monitoring – saving approximately $20 million per year.
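
At that volume, the monitoring is typically a streaming check per truck rather than an after-the-fact report. The sketch below shows one common pattern, a rolling statistical check on a single hypothetical signal (engine temperature). Syncrude's actual signals and pipeline aren't described here, so treat this as an illustration of the approach rather than their implementation.

    # Rolling anomaly check on a hypothetical per-truck engine-temperature feed.
    # The window size, warm-up count, and 3-sigma rule are illustrative choices.
    from collections import defaultdict, deque
    import statistics

    WINDOW = 120  # keep the most recent 120 readings per truck
    history = defaultdict(lambda: deque(maxlen=WINDOW))

    def check_reading(truck_id: str, engine_temp_c: float) -> bool:
        """Return True if the reading looks anomalous versus the truck's recent history."""
        window = history[truck_id]
        anomalous = False
        if len(window) >= 30:  # wait for enough history before judging
            mean = statistics.fmean(window)
            stdev = statistics.pstdev(window)
            if stdev > 0 and abs(engine_temp_c - mean) > 3 * stdev:
                anomalous = True  # escalate before the engine overheats
        window.append(engine_temp_c)
        return anomalous

    # Example: a stable baseline followed by a sudden spike.
    for temp in [88, 89, 90, 90, 91, 90, 89, 90, 91, 90] * 4 + [118]:
        if check_reading("truck-007", float(temp)):
            print("alert: truck-007 engine temperature anomaly")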

But there’s more to this than a happy predictive maintenance story: the monitoring data also revealed that some employees were not following safety procedures correctly. With that added operational visibility, Syncrude found the problem and was able to reduce the risk of particular types of injuries by 85 percent.

It’s the kind of lesson you see elsewhere. Water utilities that have begun to use analytics to pinpoint leaks have discovered they can share this data with the public and give customers better, more up-to-date information about system deficiencies. As a result, customer satisfaction rises.

How will you know that a question is solvable with analytics?

Ultimately, it comes down to three things:

  • Do you know what data is needed to perform the analysis?
  • Is that data available?
  • Is that data trustworthy?

Data science can be dirty work. If you are going to spend the time to shape data to run the analytics, you want to make sure the problem you are answering is worth that time. Another important aspect is the people: do they trust the results enough to act on them, especially when the decision involves millions of dollars in assets? That level of trust requires impeccable data management and then iterating on your model. Both will build accuracy and trust over time.
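
Those three questions can be turned into routine checks that run before anyone trusts a model built on the data. Here is a small sketch of the idea using pandas; the column names and limits are assumptions for illustration, but completeness, range, and freshness checks like these are the unglamorous part of that dirty work.

    # Basic data trust checks: completeness, plausible range, and freshness.
    # Column names, limits, and the gap threshold are assumptions for illustration.
    import pandas as pd

    def assess(df: pd.DataFrame, value_col: str, ts_col: str,
               lo: float, hi: float, max_gap_minutes: float = 15.0) -> dict:
        ts = df[ts_col].sort_values()
        gaps = ts.diff().dt.total_seconds().div(60).fillna(0)
        return {
            "rows": len(df),
            "missing_values": int(df[value_col].isna().sum()),
            "out_of_range": int(((df[value_col] < lo) | (df[value_col] > hi)).sum()),
            "largest_gap_minutes": float(gaps.max()),
            "stale": bool(gaps.max() > max_gap_minutes),
        }

    # Example usage with a hypothetical sensor history file:
    # df = pd.read_csv("sensor_history.csv", parse_dates=["timestamp"])
    # print(assess(df, value_col="pressure_kpa", ts_col="timestamp", lo=0, hi=600))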

It’s all the same data: companies are just discovering different ways to use it.

