
Thursday, September 12, 2013

[-18000] Have not heard from this server (OSI PI)

You're reading this because:
  • You are an automation engineer with (OSI) PI administration duties.
  • You support a PI High-Availability (PI HA) system.
  • Your PI collective keeps going out of sync.
This is what you see when you launch the PI Collective Manager: the secondary member reports the error from this post's title, [-18000] Have not heard from this server.

This is a sporadic issue with many possible causes, but one culprit is this tuning parameter:

Replication_SyncTimeoutPeriod

If the time it takes to read configuration changes exceeds the Replication_SyncTimeoutPeriod, the operation is aborted and the Secondary may get out of sync with the Primary. In this case, no amount of retrying by the Secondary will be successful because the configuration changes are dequeued after the initial request. Replication to that Secondary will halt until it gets re-initialized.
The default value for Replication_SyncTimeoutPeriod is 300 seconds (i.e., 5 minutes). So if, for some reason, your secondary server loses contact with the primary for more than 5 minutes, you have to re-initialize it.

Re-initializing the secondary server is essentially a "copy-paste" of the primary PI server onto the secondary; since it starts with a full backup of the primary, it can take several hours. If you want to avoid babysitting a secondary re-initialization, you ought to change this tuning parameter. Keep in mind that it can take up to 20 minutes for a server to reboot, so choose wisely.
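
For reference, PI tuning parameters like this one live in the PITimeout table and can be changed with OSIsoft's piconfig utility. Here is a minimal sketch, wrapped in Python so it can be scripted. It assumes piconfig is on the PATH of the PI server node, and the 1200-second (20-minute) value is my illustration based on the reboot time above, not an OSIsoft recommendation:

    # Sketch: raise Replication_SyncTimeoutPeriod via piconfig.
    # Assumes piconfig is on the PATH of the PI server node;
    # 1200 s (20 min) is an illustrative value, not an OSIsoft number.
    import subprocess

    PICONFIG_SCRIPT = """\
    @table pitimeout
    @mode create,t
    @istr name,value
    Replication_SyncTimeoutPeriod,1200
    @quit
    """

    # piconfig reads its directives from stdin
    result = subprocess.run(
        ["piconfig"],
        input=PICONFIG_SCRIPT,
        text=True,
        capture_output=True,
    )
    print(result.stdout)

The change should also show up in System Management Tools under the tuning parameters view, which is a reasonable place to verify it took effect.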

p.s. - Thanks, Joy Wang for the support.

Monday, September 9, 2013

Automation Software: Pick OPC to be more Futureproof


When scoping out automation projects, we commonly run into the tension of cost versus extensibility.

A system that is "futureproof" is one that can withstand changes to requirements or specifications that are determined at some later time. As with a lot of decisions in life, choosing a short-term gain often comes at the expense of the long-term.

Such is the case when deciding to go with OPC at a greater initial cost. OPC stands for "OLE for Process Control," where "OLE" is Microsoft terminology for Object Linking and Embedding. (I guess that would make OPC short for "Object Linking and Embedding for Process Control.")

As discussed here and here, OPC is the standard for communications between automation systems. OPC is maintained by the OPC Foundation, an industry consortium, and is essentially what makes vendor agnosticism possible.
Vendor Agnosticism
Not having to believe in or commit to or be beholden to a single vendor.
For example, suppose you want to go with a Rockwell system. Rockwell has an embedded historian called "FactoryTalk." Under the hood, FactoryTalk is OSIsoft PI. Side by side, they share the same folder structure, the same commands, the same services.

So the logical thing to do when setting up a corporate PI system is to buy a PItoPI interface and call it a day, right?

Not so fast.

Suppose the facility has a Building Automation System (BAS), and while it is out of scope for this project, the data from the BAS is expected to go into this corporate PI system. What happens then?

One option is to find that specific BAS vendor and see if OSIsoft sells a PI interface for this BAS vendor.

A second option is to see if the BAS vendor sells an OPC server for their data; if so, purchase the OPC server, purchase the OPCtoPI interface, and connect the two via OPC.
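
To make the payoff concrete: once the BAS data sits behind an OPC server, any OPC client can read it, no matter who built the underlying controls. Here is a minimal sketch using the open-source python-opcua package (note it speaks OPC UA rather than classic OPC DA, and the endpoint URL and node id below are hypothetical):

    # Sketch: read one value from a vendor's OPC UA server.
    # Requires the open-source "opcua" package (pip install opcua).
    # The endpoint URL and node id below are hypothetical.
    from opcua import Client

    client = Client("opc.tcp://bas-server.example.com:4840")
    client.connect()
    try:
        # A node the BAS vendor might publish, e.g. a supply-air temperature
        node = client.get_node("ns=2;s=AHU01.SupplyAirTemp")
        print("Current value:", node.get_value())
    finally:
        client.disconnect()

The point isn't these dozen lines; it's that the same dozen lines work no matter which vendor is on the other end of the wire. That is the vendor agnosticism you're paying for.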

What happens if another control system enters the picture (suppose a Finesse/DeltaV system is purchased and commissioned)? The same questions have to get asked (and answered) all over again.

Futureproofing in this case means choosing OPC and running every new system through this communication standard, even at a greater cost to the initial system. In the short term, there is a larger budget to justify; in the long term, you eliminate future work (design reviews, meetings, revisiting past decisions…).


Wednesday, July 10, 2013

OSI PI Historian Software Is Not Only for Compliance

In October 1999, I joined a soon-to-be-licensed biotech facility as an Associate (Fermentation) Engineer. They had just finished solving some tough large-scale cell culture start-up problems and were on their way to FDA licensure (which happened in April 2000).

As the Manufacturing Sciences crew hadn't yet bulked up to support full-time commercial operations, there were 4 individuals from Process Sciences supporting the inoculum and production stages.

My job was to take over for these 4 individuals so they could resume their Process Sciences duties. And it's safe to say that taking over for 4 individuals would not have been possible were it not for the PI Historian.

The control system had an embedded PI system with diminished functionality: its primary goal in life was to serve trend data to HMIs. And because this was a GMP facility, and because this embedded PI was an element of the validated system, the more access restrictions you could place on the embedded PI, the better for GMP and compliance.

Restricting access to process trends is good for GMP, but very bad for immediate-term process troubleshooting and long-term process understanding. Thus, Automation created corporate PI: a full-featured PI server on the corporate network that would handle data requests from the general cGMP citizen without impacting the control system.

Back in the early 2000s, this corporate PI system was not validated... and it didn't need to be, as it was not used to make GMP forward-processing decisions.

If you think about it: PI is a historian. In addition to capturing real-time data, it primarily serves up historical data from the PI Archives. Making process decisions involves real-time data, which was available from the validated embedded PI system viewed from the HMI.

Nonetheless, the powers that be moved towards validating the corporate PI system, which appears to be the standard as of the late 2000s.

Today, the success of PI system installations in the biotech/pharma sector is measured by how flawlessly the IQ and OQ documents were executed. Little real consideration is given to the usability of the system in terms of solving process issues or Manufacturing Sciences efficiency, until bioreactor sterility issues come knocking and executive heads start rolling over microbial contamination.

Most PI installations I run into try to solve a compliance problem, not a manufacturing problem, and I think this is largely the case because automation engineers have been sucked into the CYA focus rather than the value focus of this process information:
  • Trends are created with "whatever" pen colors.
  • Tags are named the same as the instrumenttag that came from the control system.
  • Tag descriptors don't follow a nomenclature.
  • Data compression settings do not reflect reality (see the audit sketch after this list).
  • PI Batch/EventFrames is not deployed.
  • PI ModuleDB/AF is minimally configured.
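
To make the compression bullet concrete, here is a hedged sketch that dumps each tag's compression settings with piconfig so they can be audited against real instrument spans. It assumes piconfig is on the PATH, and it treats a tag still sitting at the factory-default CompDev of 2 as a hint that nobody tuned compression:

    # Sketch: audit PI point compression settings via piconfig.
    # Assumes piconfig is on the PATH of a PI server or PI API node.
    import subprocess

    LIST_SCRIPT = """\
    @table pipoint
    @mode list
    @ostr tag,descriptor,compdev,excdev
    @select tag=*
    @ends
    @quit
    """

    out = subprocess.run(["piconfig"], input=LIST_SCRIPT,
                         text=True, capture_output=True).stdout

    for line in out.splitlines():
        fields = [f.strip() for f in line.split(",")]
        if len(fields) != 4:
            continue  # skip piconfig banner lines; descriptors containing
                      # commas would need sturdier parsing than this sketch
        tag, descriptor, compdev, excdev = fields
        if compdev == "2":  # factory default -- likely never tuned
            print(tag, "-", descriptor)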
The key to efficiencies that allow 1 Associate Engineer to take over the process monitoring and troubleshooting duties of 4 seasoned PD scientists/engineers lies precisely in having a lot of freedom to use and improve the PI Historian.

If said freedom is not palatable to the QA folks (despite the fact that hundreds of lots were compliantly released when manufacturing plants allowed the use of unvalidated PI data for non-GMP decisions), the answer is to bring process troubleshooters and data scientists in at the system specification phase of your automation implementation.

If your process troubleshooters don't know what to ask for upfront, there are seasoned consultants with years of experience that you can bring onto your team to help.

Let's be clear: I'm not downplaying the value of a validated PI system; I'm saying to get user input on system design upfront.

Saturday, May 18, 2013

James T. Kirk, Plant Manager

If you think about it, the starship Enterprise is a plant (i.e. factory).

It's a plant that manufactures light-years.


Construction of the USS Enterprise NCC-1701 as depicted in Star Trek 2009

Kirk is the Plant Manager.

Spock is the Director of Technology.

McCoy is in charge of EH&S.

Scotty is Director of Production (i.e. running the warp drive that produces all those light years).

And the SCADA (supervisory control and data acquisition) system is what they call the Enterprise's "Computer."

Nothing illustrates this better than this one scene from J.J. Abrams' 2009 reboot of Star Trek.

SPOILER ahead... If you haven't seen it, you should stop reading this post and go rent it on Amazon.

Then, you can go look up movie times and get tickets to the sequel (out this week).






Anyway, at some point in the movie, Kirk and Scotty get beamed aboard the Enterprise, but end up in utilities. Scotty gets beamed into the piping so Kirk has to go free him.


Where does he go?



Looks like an HMI (human machine interface) to me....



What's he doing? Oh, manually overriding a valve.



It's hardly recognizable as an HMI with those sexy lights across the top and snazzy faceplate graphics.



I guess they covered basic SCADA operation in Starfleet ensign training.

But make no mistake. That looks like either a PLC (programmable logic controller) or a DCS (distributed control system).

And this is just for the utilities. The control system for the entirety of the Enterprise would be far more sophisticated.

I keep reading about how long we have to wait before we get Star Trek technologies... or how long before we have hoverboards...

But the fact of the matter is this: so long as we are minting CS and ChemE grads whose purpose in life is to get internet users to click on ads (as opposed to creating and deploying SCADA software), it's going to be a long, long time.

At Zymergi, we're doing our part, furthering the deployment of these technologies by helping install, validate, and use these SCADA systems to manufacture biologics.

How about you?


All screenshots are from the Star Trek 2009 movie from Paramount Pictures, Spyglass Entertainment and Bad Robot Productions.