Showing posts with label PAT. Show all posts

Thursday, January 23, 2014

Multivariate Analysis: Pick Actionable Factors Redux

When performing multivariate analysis, say multiple linear regression, there's typically an objective (like "higher yields" or "troubleshoot campaign titers"). And there's typically a finite set of parameters that are within control of the production group (a.k.a. operators/supervisors/front-line managers).

This finite parameter set is what I call, "actionable factors," or "process knobs." For biologics manufacturing, parameters like

  • Inoculation density
  • pH/temperature setpoint
  • Timing of shifts
  • Timing of feeds
  • Everything your process flow diagram says is important
are actionable factors.

Examples of non-actionable parameters include:
  • Peak cell density
  • Peak lactate concentration
  • Final ammonium
  • etc.
In essence, non-actionable parameters are generally measured and cannot be changed during the course of the process.

Why does this matter to multivariate analysis? I pick on this one study I saw where someone built a model against a commercial CHO process and proved that final NH4+ levels inversely correlate with final titer.

What are we to do now?  Reach into the bioreactor with our ammonium-sponge and sop up the extra NH4+ ion?

With the output of this model, I can do absolutely nothing to fix the lagging production campaign. Since NH4+ is evolved as a byproduct of glutamine metabolism, this curious finding may lead you down the path of further examining CHO metabolism and perhaps some media experiments, but there's no immediate or medium-term action I can take.

On the other hand, had I discovered that initial cell density of the culture correlates with capacity-based volumetric productivity, I could radio into either the seed train group or scheduling and make higher inoc densities happen.
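
To make the distinction concrete, here's a minimal sketch (in Python) of filtering a fitted model's terms down to significant, actionable knobs before making recommendations to the floor. The factor names, coefficients, and p-values below are all invented for illustration:

```python
# Hypothetical model output: factor -> (coefficient, p_value).
# Only factors the production floor can actually turn are "actionable."
ACTIONABLE = {"inoc_density", "ph_setpoint", "temp_shift_time", "feed_time"}

model_terms = {
    "inoc_density":    (0.42, 0.01),
    "final_nh4":       (-0.65, 0.002),   # significant, but not actionable
    "temp_shift_time": (0.18, 0.04),
    "peak_lactate":    (-0.30, 0.20),    # neither significant nor actionable
}

# Keep only significant process knobs for the recommendation list.
recommendations = {
    factor: stats
    for factor, stats in model_terms.items()
    if factor in ACTIONABLE and stats[1] < 0.05
}
print(recommendations)
```

The point isn't the code; it's that a non-actionable term like final NH4+ never makes it into the recommendation, no matter how significant the correlation.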


Wednesday, July 10, 2013

OSI PI Historian Software is Not Only for Compliance

In October 1999, I joined a soon-to-be-licensed biotech facility as an Associate (Fermentation) Engineer. They had just finished solving some tough large-scale cell culture start-up problems and were on their way to FDA licensure (which happened in April 2000).

As the Manufacturing Sciences crew hadn't yet bulked up to support full-time commercial operations, there were 4 individuals from Process Sciences supporting the inoculum and production stages.

My job was to take over for these 4 individuals so they could resume their Process Science duties. And it's safe to say that taking over for 4 individuals would've not been possible were it not for the PI Historian.

The control system had an embedded PI system with diminished functionality: its primary goal in life was to serve trend data to HMIs. And because this was a GMP facility and the embedded PI was an element of the validated system, the more access restrictions you could place on the embedded PI, the better for GMP compliance.

Restricting access to process trends is good for GMP, but very bad for immediate-term process troubleshooting and long-term process understanding, thus Automation had created corporate PI: a full-functioned PI server on the corporate network that would handle data requests from the general cGMP citizen without impacting the control system.

Back in the early-2000's, this corporate PI system was not validated... and it didn't need to be as it was not used to make GMP forward-processing decisions.

If you think about it: PI is a historian. In addition to capturing real-time data, it primarily serves up historical data from the PI Archives. Making process decisions involves real-time data, which was available from the validated embedded PI system viewed from the HMI.

Nonetheless, the powers that be moved toward validating the corporate PI system, which appears to be the standard as of the late-2000s.

Today, the success of PI system installations in the biotech/pharma sector is measured by how flawlessly the IQ and OQ documents were executed. Little consideration is given to the usability of the system for solving process issues or for Manufacturing Sciences efficiency until bioreactor sterility issues come knocking and executive heads start rolling over microbial contamination.

Most PI installations I run into try to solve a compliance problem, not a manufacturing problem, and I think this is largely the case because automation engineers have been sucked into the CYA focus rather than the value focus of this process information:
  • Trends are created with "whatever" pen colors.
  • Tags are named the same as the instrumenttag that came from the control system.
  • Tag descriptors don't follow a nomenclature.
  • Data compression settings do not reflect reality.
  • PI Batch/EventFrames is not deployed.
  • PI ModuleDB/AF is minimally configured.
The key to efficiencies that allow 1 Associate Engineer to take over the process monitoring and troubleshooting duties of 4 seasoned PD scientists/engineers lies precisely in having a lot of freedom to use and improve the PI Historian.

If said freedom is not palatable to the QA folks (despite the fact that hundreds of lots were compliantly released when manufacturing plants allowed the use of unvalidated PI data for non-GMP decisions), the answer is to bring process troubleshooters and data scientists in at the system specification phase of your automation implementation.

If your process troubleshooters don't know what to ask for upfront, there are seasoned consultants with years of experience that you can bring onto your team to help.

Let's be clear: I'm not downplaying the value of a validated PI system; I'm saying to get user input on system design upfront.

Wednesday, September 5, 2012

Cell Culture Database - Batch Information

You work in biopharma. Maybe you're a fermentation guru... or a cell culture hot shot. Whatever the case... this is your process.
We muggles don't have the luxury of waving our wands and having proteins fold themselves mid-air. There's usually a container where the process happens: processes happen in a unit. That's the where.
A time-window (starttime to endtime) is when the process happens.

Operators execute process instructions; these procedures are how the process happens.
The execution of process instructions results in an output. The output of the process step is the product and constitutes the what.
Lastly, the process (step) is given a name - the batchid - describing who the batch is.
It stands to reason that the who, what, how, when, and where of a batch are characterized by:
  • batchid
  • product
  • procedure
  • starttime - endtime
  • unit
and fully describe batch information for cell cultures and fermentation.
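
Those five fields (six, counting both ends of the time window) map naturally onto a record type. Here's a minimal sketch in Python; the field types and example values are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime

# A minimal batch record mirroring the bullet list above.
@dataclass
class Batch:
    batchid: str         # who
    product: str         # what
    procedure: str       # how
    starttime: datetime  # when (start)
    endtime: datetime    # when (end)
    unit: str            # where

b = Batch("LOT1234", "mAb-X", "PROD-FERM-01",
          datetime(2012, 9, 1, 6, 0), datetime(2012, 9, 12, 6, 0),
          "T-500")
print(b.unit)  # the bioreactor where this batch ran
```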

Organize Your Cell Culture Data

Friday, July 27, 2012

Manufacturing Sciences: Campaign monitoring v. Process Improvement

Manufacturing Sciences is the name of the department responsible for stable, predictable performance of the large-scale biologics process.

Manufacturing Sciences also describes the activities of supporting for-market, large-scale, GMP campaigns. The three main functions of Manufacturing Sciences are:
  1. Campaign monitoring
  2. Long-term process improvement
  3. Technology Transfer
Within the department are:
  1. Data analysis resources - responsible for campaign monitoring
  2. Lab resources - responsible for process improvement
Figure 1: Flow of information within Manufacturing Sciences

The data group is responsible for monitoring the campaign and handing off hypotheses to the lab group. The lab group is responsible for studying the process under controlled conditions and handing plant trials back to the data group.

Campaign Monitoring

When a cGMP campaign is running, we want eyeballs watching each batch. There are automated systems in place to prevent simple excursions, but on a macro level, we still want human eyeballs. Eyeballs from the plant floor are the best. Eyeballs from the Manufacturing Sciences department are next best because they come with statistical process control (SPC) tools that help distinguish common-cause from special-cause variation.

Ultimately, all this statistical process control enables data-based, defensible decisions for the plant floor and for production management, much of which involves the right calls for decreasing process variability and increasing process capability and reliability.

Long-term Process Improvement

The holy grail of manufacturing is reliability/predictability. Every time we turn the crank, we know what we're going to get: a product that meets the exact specifications and can be produced with a known quantity of inputs within a known or expected duration.

Long-term process improvement can involve figuring out how to make more product with the same inputs. Or figuring out how to reduce cycle time. Or figuring out how to make the process more reliable (which means reducing waste and variability).

This is where we transition from statistical process control to actual statistics. We graduate from uni- and bivariate analysis into multivariate analysis because biologics processes have multiple variables that impact yield and product quality. To understand where there are opportunities for process improvement, we must understand the system rather than simple relationships between the parts, and that requires a good handle on the large-scale data.
Note: in order to have a shot at process improvement, you need variable data from large-scale. Meanwhile, if you succeed at statistical process control, you will have eradicated variability from your system.

This is why a manufacturing sciences lab is the cornerstone of large-scale, commercial process improvement: you can pursue process improvement without sacrificing the low process variability won by your statistical process control initiatives.

Outsource Manufacturing Sciences

Monday, July 16, 2012

OSI PI is Process Omniscience

Employee #2 at OSIsoft is this guy named Don Smith. The first or second time I met him, I found out that his aquarium is hooked up to an OSI PI server and that he uses ProcessBook to trend things like pH, dO2, and temperature for his fish.

I've never seen the setup, but as he was telling me this story, it occurred to me that if his fish were theists, that Don would be their god. Not in the sense that he was their creator, but in the sense that at all times, Don knows every important thing that needs to be known about their "earth".

At the time, I was a fermentation engineer for Genentech, so if Don was the omniscient presence for his fish, then I was the omniscient presence for these mammalian cells that were flying around these bioreactors.

Being the omniscient with respect to CHO cells or fish with the aid of a PI system is one thing. Knowing everything there is to know about a process is another.
There was this one time I got a call from the Instrumentation & Electrical (I&E) department asking me to see if I could tell if a probe got calibrated sometime between 1:00 and 1:15pm. I pulled up PI ProcessBook and was looking for the typical calibration characteristic of zeroing and spanning of the probe signal. I looked at the squiggly flat line and told those guys that there didn't seem to be any activity on the probe at all.

A week later, I found out that they fired an instrument technician on account of falsifying a work order. The trespass? Claiming that he executed the calibration when, according to PI data, nothing was going on at the time.

I&E must have had their suspicions, but when they confirmed with the all-knowing process guru (and had me print out a screenshot), they had enough to let the guy go.

When it comes to operating in a cGMP environment, it really pays to have process omniscience... like an OSIsoft PI system recording every last detail about your process.

Thursday, March 29, 2012

Multiple Linear Regression (Multivariate Analysis)

Here's your process:

[Figure: generic black-box process]
It's a black box. All you know is that you have multiple process inputs (X) and at least one process output (Y) that you care about. Multivariate analysis is the method by which you analyze how Y varies with your multiple inputs (x1, x2, ... xn). There are a lot of ways to go about figuring out how Y relates to your inputs.

One way to go is to turn that black box into a transparent box where you try to understand the fundamentals from first principles. Say you identify x1 as cell growth and believe your cells grow exponentially; you can try to apply an equation like Y = Y0e^(µx1).

But this is large-scale manufacturing. You don't have time for that. You have to supply management with an immediate solution followed by a medium-term solution. What you can do is assume that each parameter varies with Y linearly.

Y = mX + b
Just like we learned in 8th grade. How can we just say that Y relates to X linearly? Well, for one, I can say whatever I want (it's a free country). Secondly, all curves (exponential, polynomial, logarithmic, asymptotic...) are linear over small ranges... you know, like the proven acceptable range in which you ought to be controlling your manufacturing process.

Assuming everything is linear keeps things simple and happens to be rooted in manufacturing reality. What next?

Y = m1x1 + m2x2 + b
Next you start adding more inputs to your equation... applying a different coefficient for each new input. And if you think that a few of your inputs may interact, you can add their interactions like this:

Y = m1x1 + m2x2 + m12x1x2 + b
You achieve interactions by multiplying the inputs and giving that product its own coefficient. So now you - the big nerd - have this humongous equation that needs solving. You don't know:
  • Which inputs (x's) to put in the equation
  • Which interactions (x1 * x2) to put in the equation
  • Which coefficients (m's) to keep

What you're doing with multiple linear regression is picking the right inputs and interactions so that your statistical software package can brute-force the coefficients (m's) and fit an equation that gives you the least error against the data you have.

Here's the thing: The fewer rows you have in your data table, the fewer inputs you get to throw into your equation. If you have 10 samples, but 92 inputs, you're going to have to be very selective with what you try in your model.

It's a tough job, but someone's got to do it. And when you finally do (i.e. explain the relationship between, say, cell culture titer and your cell culture process inputs), millions of dollars can literally roll into your company's coffers.
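
For the curious, here's a toy sketch of that brute-forcing in Python/NumPy: two invented inputs, one interaction term, and ordinary least squares recovering the coefficients. All numbers are made up for illustration:

```python
import numpy as np

# Toy multiple linear regression with one interaction:
# Y = m1*x1 + m2*x2 + m12*(x1*x2) + b, solved by least squares.
rng = np.random.default_rng(0)
n = 10                      # 10 "batches" (rows)
x1 = rng.uniform(0, 1, n)   # e.g. scaled inoculation density (invented)
x2 = rng.uniform(0, 1, n)   # e.g. scaled pH offset (invented)
y = 2.0 * x1 + 1.0 * x2 + 0.5 * x1 * x2 + 3.0   # noiseless "titer"

# Design matrix: one column per input, one for the interaction, one for b.
X = np.column_stack([x1, x2, x1 * x2, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # recovers ~[2.0, 1.0, 0.5, 3.0]
```

With real plant data you'd have noise and far more candidate inputs than rows, which is exactly why input selection (the previous paragraph) is the hard part.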

Your alternative is to hire Zymergi and skip that learning curve.


Friday, March 23, 2012

How Manufacturing Sciences Works

The Manufacturing Sciences laboratory and data groups interact like this:

[Figure: Zymergi Manufacturing Sciences business process flow]
Favorable special cause signals at large scale give us opportunities for finding significant factors and interactions that produced these special causes. With a significant correlation (for cell culture: adjusted R² > 0.65 and p < 0.05), we are able to justify expending lab resources to test our hypothesis.
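
That screening gate is cheap to automate. A sketch in Python (the data below are fabricated; scipy's linregress supplies the correlation and p-value):

```python
import numpy as np
from scipy.stats import linregress

# Only hand a hypothesis to the lab if the large-scale correlation
# clears R^2 > 0.65 and p < 0.05. Data are invented for illustration.
factor = np.array([1.2, 1.5, 1.1, 1.8, 2.0, 1.4, 1.7, 1.9])  # e.g. inoc density
titer  = np.array([2.1, 2.6, 2.0, 3.0, 3.3, 2.4, 2.9, 3.2])  # g/L

fit = linregress(factor, titer)
r_squared = fit.rvalue ** 2
worth_lab_time = (r_squared > 0.65) and (fit.pvalue < 0.05)
print(worth_lab_time)
```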

Significant actionable factors from the multivariate analysis of large-scale data become the basis for a DOE. Once the experiment design is vetted, documents can be drafted and experiment prepped to test those conditions.

There are a lot of reasons we go to the lab first. Here are a few:
  1. You have more N (data samples)
  2. You can test beyond the licensed limits
  3. You get to isolate variables
  4. You get the scientific basis for changing your process.

Should your small-scale experiments confirm your hypothesis, your post-experiment memo becomes the justification for plant trials. Depending on how your organization views setpoint changes within the acceptable limits or license limits, you will run into varying degrees of justification to "fix what isn't broken." Usually, the summary of findings attached to the change order is sufficient for within-license changes to process setpoints. If your outside-of-license-limits findings can produce a significant (20 to 50%) increase in yields (or improvements in product quality), you may have to go to the big guns (Process Sciences) to get more N and involve the nice folks in Regulatory Affairs.

From a plant trial perspective, I've seen large-scale process changes run under QA-approved planned deviations for big changes. I've seen on-the-floor, production-supervision-approved changes for within-acceptable-range changes. I've seen managers so panicked by a potentially failing campaign that they shoot first and ask questions later (i.e. initiate the QA discrepancies, address the cGMP concerns later).

Whatever the case, the flow of hypotheses from the plant to the lab is how companies gain process knowledge and process understanding. The flow of plant trials from the lab back to the plant is how we realize continuous improvement.


Credit goes to Jesse Bergevin for inculcating this model under adverse conditions.

Tuesday, March 20, 2012

Manufacturing Sciences - Local Lab

The other wing of the Manufacturing Sciences group was a lab group.

[Figure: Manufacturing Sciences lab cycle]
Basically, you enter the virtuous cycle thusly:
  1. Design an experiment
  2. Execute the experiment
  3. Analyze the data for clues
  4. Go to Step 1.

You're thinking, "Gosh, that looks a lot like Process Sciences (aka Process R&D)." And you'd be right. That's exactly what they do; they run experiments at small scale to figure out something about the process.

Territorial disputes are common when local Manufacturing Sciences groups have local labs. From Process Sciences' perspective, you have these other groups that may be duplicating work, operating outside of your system, basically doing things out of your control. From Manufacturing Sciences' perspective, you need a local resource that works on the timetable of commercial campaigns to address very specific and targeted issues: people who can sit at a table and update the local plant on findings.

If your cashflow can support it, I recommend developing a local lab and here's why:

The lab counterpart of the Manufacturing Sciences group ran an experiment that definitively proved a physical bioreactor part was the true root cause of poor cell growth... poor cell growth that had delayed licensing of the 400+ million dollar plant by 10 months. The hypothesis was unpopular with the Process Sciences department at corporate HQ, and there was much resistance to testing it. In the end, it was the local lab group that ended the political wrangling and provided the data to put the plant back on track towards FDA licensure.

I do have to say that not everything is adversarial. We received quite a bit of help from Process Sciences when starting up the plant and a lot of our folks hailed from Process Sciences (after all, where do you think we got the know-how?). When new products came to our plant, we liaised with Process Science folk.

My point is: in more cases than not, a local manufacturing sciences group with laboratory capability is crucial to the process support mission.

Monday, March 19, 2012

Manufacturing Sciences - Local Data

My second job out of college was to be the fermentation engineer at what was then the largest cell culture plant (by volume) in the United States. As it turns out, being "large" isn't the point; but this was 1999 and we didn't know that yet. We were trying to achieve the lowest per-gram cost of bulk product; but I digress.

I was hired into a group called Manufacturing Sciences, which reported into the local technology department that reported to the plant manager. My job was to observe the large-scale cell culture process and analyze the data.

Our paramount concern was quantifying process variability and trying to reduce it. The reason, of course, is to make the process stable so that manufacturing is predictable. Should special cause variability show up, the job was to look for clues to improve volumetric productivity.

The circle of life (with respect to data) looks like this:

[Figure: data flow for manufacturing support]

Data and observations come from the large-scale process. We applied statistical process control (SPC) and statistical analyses like control charts and ANOVA. From our analysis, we were able to implement within-license changes to make the process more predictable. And should special cause signals arise, we stood ready with more statistical methods to increase volumetric productivity.
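
As a flavor of what the control-charting looks like in code, here's a minimal individuals-chart sketch in Python. The titer numbers are invented; the sigma estimate uses the standard moving-range method (d2 = 1.128 for a moving range of 2):

```python
import numpy as np

# Individuals (I) chart: estimate sigma from the average moving range,
# then flag points beyond mean +/- 3 sigma as special-cause candidates.
titers = np.array([2.1, 2.3, 2.2, 2.0, 2.4, 2.2, 2.1, 2.3, 4.0, 2.2])

moving_range = np.abs(np.diff(titers))
sigma = moving_range.mean() / 1.128          # d2 constant for n=2
center = titers.mean()
ucl, lcl = center + 3 * sigma, center - 3 * sigma

special_cause = [(i, t) for i, t in enumerate(titers) if t > ucl or t < lcl]
print(special_cause)  # the 4.0 g/L batch stands out
```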

Get Contract Plant Support!

Sunday, March 18, 2012

Variability Reduction is a core objective

Reducing process variability is a core objective for process improvement initiatives because low variability helps you identify small changes in the process.

Here's a short example to illustrate this concept. Suppose you are measuring osmolality in your buffer solution and the values for the last 10 batches are as follows:

293, 295, 299, 297, 291, 299, 298, 292, 293, 296.

Then the osmolality of the 11th batch of buffer comes back at 301 mOsm/kg. Is this 301 result "anomalous" or "significantly different"?

It's hard to tell, right? I mean, it's the first value greater than 300, so that's something. But it is only 2 mOsm/kg greater than the highest previously observed value, while the measurements range from 291 to 299, an 8 mOsm/kg spread.

Let's try another series of measurements - this time, only 7 measurements:

295, 295, 295, 295, 295, 295, 295.

Then the measurement of the eighth batch is 297 mOsm/kg. Is this result anomalous or significantly different? The answer is yes. Here's why:

The process demonstrates no variability (within measurement error), and all of a sudden there is a measurable difference. The 297 mOsm/kg is a distance of 2 mOsm/kg from the highest measured value, but the range is 0 (all values measured 295). Relative to that zero range, the difference is infinitely large.

There are far more rigorous data analysis methods to better quantify the statistics comparing differences that will be discussed in the future, but you can see how variability reduction helps you detect differences sooner.
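
One simple way to quantify "how different" is to measure the distance from the historical mean in units of the historical standard deviation. A sketch in Python using the two series above:

```python
import numpy as np

# How far is the new point from the historical mean, in units of the
# historical standard deviation?
history1 = np.array([293, 295, 299, 297, 291, 299, 298, 292, 293, 296])
history2 = np.array([295, 295, 295, 295, 295, 295, 295])

def sigmas_away(history, new):
    sd = history.std(ddof=1)
    if sd == 0:
        return float("inf")   # any difference is "infinitely" large
    return abs(new - history.mean()) / sd

print(round(sigmas_away(history1, 301), 2))  # under 2 sigma: hard to call anomalous
print(sigmas_away(history2, 297))            # flat history: infinitely different
```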

Also, remember that variability (a.k.a. standard deviation) is the denominator of the capability equation:

Cp = (USL - LSL) / 6σ

where USL and LSL are the upper and lower specification limits and σ is the process standard deviation.

Reducing process variability increases process capability.

To summarize: reducing process variability helps in 2 ways:

  1. Deviations (or differences) in the process can be detected sooner.
  2. Capability of the process (a.k.a. robustness) increases.

Hitting the aforementioned two birds with the proverbial one stone (variability reduction) is a core objective of any continuous process improvement initiative. Applying the statistical tools to quantify process variability ought to be a weapon in every process engineer's arsenal.

    Wednesday, October 26, 2011

    How to Compute Production Culture KPI

    Production culture/fermentation is the process step where the active pharmaceutical ingredient (API) is produced. The KPI for production cultures relates to how much product gets produced.

    The product concentration (called "titer") is what is typically measured. Operators submit a sterile sample to QC, who have validated tests that produce a result in dimensions of mass/volume; for biologics processes, this is typically grams/liter.

    Suppose you measured the titer periodically from the point of inoculation; it would look something like this:

    [Figure: typical production-culture titer curve]
    The curve for the majority of cell cultures is "S"-shaped, also called "sigmoidal." The reason for this shape was described by Malthus in 1798: population growth is geometric (i.e. exponential) while growth in agricultural production is arithmetic (i.e. linear); at some point, the food supply is incapable of carrying the population, and the population crashes.

    In the early stages of the production culture, there is a surplus of nutrients and cells - unlimited by nutrient supply - grow exponentially. Unlike humans and agriculture, however, a production fermentation does not have an ever-increasing supply of nutrients: the nutrient levels are fixed. Some production culture processes are fed-batch, meaning at some point during the culture, you send in more nutrients. Regardless, at some point the nutrients run low and the cell population is unable to continue growing. Hence the growth curve flattens and basically heads east.

    In many cases, the titer curve looks similar to the biomass curve. In fact, it is the integral (the area under that biomass curve) that the titer curve typically mimics.
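
Here's an illustrative sketch of that idea in Python: a logistic ("S"-shaped) biomass curve, and a titer curve built as its running integral. The growth parameters and specific productivity are invented:

```python
import numpy as np

# Logistic biomass growth; titer mimics the running integral of biomass.
t = np.linspace(0, 14, 141)      # culture time, days
K, X0, mu = 20.0, 0.5, 0.9       # carrying capacity, seed density, growth rate
biomass = K / (1 + (K / X0 - 1) * np.exp(-mu * t))   # logistic curve

# Titer ~ integral of biomass (trapezoid rule), scaled by a made-up
# specific productivity qp.
qp = 0.01
dt = t[1] - t[0]
titer = qp * np.cumsum((biomass[:-1] + biomass[1:]) / 2) * dt
print(round(titer[-1], 2))       # final "titer" of this toy culture
```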

    The reason this titer curve is so important is because the slope of the line drawn from the origin (0,0) to the last point on the curve is the volumetric productivity.

    Volumetric Productivity
    Titer/culture duration (g/L culture/day)

    The steeper this slope, the greater the volumetric productivity. Assuming your bioreactors are filled to capacity and that you want to supply the market with as much product as fast as possible, maximizing volumetric productivity ought to be your goal.

    Counter-intuitively, maximizing your rate of production means shortening your culture duration. Due to the Malthusian principles described above, your titer curve flattens out as your cell population stagnates from lack of nutrients. Maximizing your volumetric productivity means stopping your culture when the cells are just beginning to stagnate. End the culture too early and you forgo product you could still have made; end it too late and you've wasted valuable bioreactor time on dying cells.

    The good news is that maximizing your plant's productivity is a scheduling function:

    1. Get non-routine samples going to measure the in-process titer to get the curve.
    2. Study this curve and draw a line from the origin tangent to this curve.
    3. From the tangent point, draw a straight line down to find the culture duration that maximizes volumetric productivity.
    4. Call the Scheduling Department and tell them the new culture duration.
    5. Tell your Manufacturing Sciences department to control chart this KPI to reduce variability.
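
Steps 2 and 3 amount to maximizing titer/t over the measured points (the tangent from the origin touches the curve exactly where titer/t peaks). A numerical sketch in Python, with invented titer measurements:

```python
import numpy as np

# Pick the harvest day that maximizes volumetric productivity (titer / time).
days  = np.array([2, 4, 6, 8, 10, 12, 14])
titer = np.array([0.1, 0.5, 1.2, 1.8, 2.1, 2.2, 2.25])  # g/L, invented

vol_prod = titer / days            # g/L/day at each candidate harvest day
best = days[np.argmax(vol_prod)]
print(best, round(vol_prod.max(), 3))   # day 8, 0.225 g/L/day
```

Note that the optimum here is day 8, not day 14: letting the flat tail of the curve run costs bioreactor time without buying much titer.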

    There's actually more to this story for Production Culture KPI, which we'll cover next.

    Tuesday, October 25, 2011

    How to Compute Seed Fermentation KPI

    So, if you agree that the purpose of seed fermentation (a.k.a. inoculum culture) is to scale up biomass, then the correct key performance indicator is the final specific growth rate.

    To visualize final specific growth rate, plot biomass against time:

    The cell density increases exponentially, which means on a log-scale, the curve becomes linear. The specific growth rate (μ) is the slope of the line. The final specific growth rate (μF) is the slope of all the points recorded in the last 24-hours prior to the end of the culture.

    To compute the final specific growth rate, simply put datetime or culture duration in the first column, biomass in the second column, and the natural log of biomass in the third column:

    [Figure: tabular inoculum culture KPI layout]
    In Excel, use the SLOPE function to compute the slope of the natural log of biomass against time, e.g. =SLOPE(C2:C10, A2:A10) (cell ranges here are illustrative).

    Alternatively, if you don't want to bother with the third column, use an array formula such as =SLOPE(LN(B2:B10), A2:A10).

    This number has engineering units of inverse time (day-1). While this measure is somewhat hard to understand physically, we can use ln(2) = 0.693 as a guide: if a culture has a specific growth rate of ~0.70 day-1, then its cell population is doubling about once per day.
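
The same calculation is a one-liner outside Excel, too. A sketch in Python/NumPy with invented biomass readings (an exact 0.7/day exponential, so the fitted slope lands on 0.70):

```python
import numpy as np

# Final specific growth rate: slope of ln(biomass) vs time over the
# last 24 hours of the seed culture.
hours   = np.array([0, 12, 24, 36, 48, 60, 72]) / 24.0   # time in days
biomass = 0.5 * np.exp(0.7 * hours)   # invented: ~0.7/day, doubling daily

last24 = hours >= hours[-1] - 1.0     # points in the final 24 h
mu_f = np.polyfit(hours[last24], np.log(biomass[last24]), 1)[0]
print(round(mu_f, 2))  # ~0.70 day-1: population doubles about once a day
```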

    Computing this KPI for seed fermentation and then control charting this KPI is the best start you can make towards monitoring and controlling your process variability.

    Monday, October 24, 2011

    KPIs for Cell Culture/Fermentation

    Control charting each process step of your biologics process is a core activity for manufacturing managers that are serious about reducing process variability.

    Sure, there's long-term process understanding gained from the folks in manufacturing sciences, but that work will be applied several campaigns from now.

    What are the key performance indicators (KPIs) for my cell culture process today?

    To answer this question, start with the purpose(s) of cell culture:
    1. Grow cells (increase cell population)
    2. Make product (secrete the active pharmaceutical ingredient)

    Seed Fermentation (Grow Cells)

    There are plenty of words that describe cell cultures whose purpose is to scale up biomass; to wit: seed fermentation, inoculum culture, inoc train, etc. Whatever your terminology, the one measurement of seed fermentation success is the growth rate (μ), which is the constant in the exponent of the exponential growth equation:

    X = X0eμΔt

    • X = current cell density
    • X0 = initial cell density
    • μ = specific growth rate
    • Δt = elapsed time since inoculation

    For seed fermentation, the correct KPI is the final specific growth rate; which is the growth rate in the final 24-hours prior to transfer. The reason the final specific growth rate is the KPI is because the way seed fermentation ends is more important than how it starts.

    Production Fermentation (Make Product)

    The output of the production fermentor is drug substance; the more and the faster, the better. This is why the logical KPI for Production Fermentation is Capacity-Based Volumetric Productivity.

    A lot of folks look at culture titer as their performance metric. Mainly because it's easy. You ship those samples off to QC and after they run their validated tests, you get a number back.

    Culture Titer
    Mass of product per volume of culture (g/L culture)

    The problem with using culture titer is that it does not take into account the rate of production. After all, if culture A takes ten days to make 2 g/L and culture B takes 12 days to make the same 2 g/L, then according to titer they are equivalent, even though A was better. This is why we use volumetric productivity:

    Volumetric Productivity
    Titer/culture duration (g/L culture/day)

    Culture volumetric productivity takes the rate of production into account pretty well; in our example, culture A's performance is 0.20 g/L/day while culture B's is 0.17 g/L/day. But what about the differences in the actual amount of product manufactured? I can run a 2L miniferm and get 0.40 g/L/day, but that isn't enough to supply the market. This is why bioreactor capacity must be included in the true KPI for production cultures.

    Capacity-based Volumetric Productivity
    Volumetric Productivity * L culture / L capacity (g/L capacity/day)

    Capacity-based Volumetric Productivity is the Culture Volumetric Productivity multiplied by the percent of fermentor capacity-used, such that a filled fermentor scores higher than a half-full fermentor.
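
Putting the definitions together in code makes the ranking obvious. A sketch in Python, reusing the culture A/B example; the fermentor volumes are invented for illustration:

```python
# Capacity-based volumetric productivity:
# (titer / duration) scaled by the fraction of fermentor capacity used.
def capacity_based_vp(titer, days, culture_L, capacity_L):
    """g/L-capacity/day: volumetric productivity times fill fraction."""
    return (titer / days) * (culture_L / capacity_L)

# Culture A: 2 g/L in 10 days, full 12,000 L fermentor.
# Culture B: 2 g/L in 12 days, same fermentor filled to only 10,000 L.
a = capacity_based_vp(2.0, 10, 12000, 12000)   # 0.200 g/L-capacity/day
b = capacity_based_vp(2.0, 12, 10000, 12000)   # ~0.139 g/L-capacity/day
print(round(a, 3), round(b, 3))
```

A half-empty tank drags the KPI down even at identical titer, which is exactly the behavior the metric is designed to capture.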

    KPIs are generally not product-specific; instead, they are process class specific. For instance, all seed fermentation for CHO processes ought to have the same KPI.

    Generally, KPIs are simple calculations derived from easily measured parameters such that the cost of producing the calculation is insignificant relative to the value it provides.

    KPIs deliver significant value when they can be used to identify anomalous performance and actionable decisions made by Production/Manufacturing in order to amend the special cause variability observed.

    Monday, October 17, 2011

    PIModuleDB: "It's What Makes PI Batch Possible!"

    In addition to correlating unit/alias to tags, the PI Module Database is the foundation for PI Batch; in fact, it is a requirement.

    You see, there's a special type of module called a "PIUnit." The main difference between a PIUnit and a regular module is that a PIUnit can keep track of start/end times (a.k.a. PIUnitBatches, or UnitProcedures as defined by S88).

    If you go to your Module Database, you can tell modules from PIUnits by their icons. The module looks like a red/yellow/green cube with an "M" in the center. The PIUnit looks like a half-filled tank of water with pipes in and out.

    Edit PIUnit PIModuleDB

    When you right-click on the PIUnit and select Edit, the following form will present itself:

    Edit View PIModule Attributes

    Pay particular attention to the Unique ID attribute of the PIUnit. The key here is that when you create a PIUnit, the PI server will create a PIPoint (a tag) for the purpose of storing PIUnitBatches.

    You can prove it to yourself by doing a tag search on that gibberish text. In my case, I went straight to PI SMT > Data > Archive Editor.

    PIUnit uniqueID is tag

    What's more, these events correspond to the unitbatches stored in PI Batch. The batchid, product, procedure, and endtime are stored at the starttime of the batch.

    uniqueID is tagname that stores batches
    You see, PI Batch is rationally a simple table... one with 7 columns and as many rows as you have batches. But if you are OSIsoft and alls you have is PI (hammer), everything starts looking like a tag (nail).

    This is why PI Batch Database... while seemingly tabular... is actually a data structure that is a hybrid of the hierarchical structure presented by the PI Module Database and PI tags. What makes PI Batch possible is that uniqueID of a PIUnit in PIModuleDB is the name of the tag that archives unitbatch information.
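A toy Python sketch of that mechanism may help (this is NOT PI SDK code; the unit name, UniqueID string, and dictionaries are all hypothetical illustrations):

```python
from datetime import datetime

# Toy model: a PIUnit's UniqueID names a hidden tag, and each unitbatch is
# archived as one event stored at its starttime on that tag.
module_db = {"Bioreactor-01": {"unique_id": "1a2b3c4d5e6f"}}  # hypothetical ID
tag_archive = {}  # tagname -> {starttime: unitbatch event}

def archive_unitbatch(unit, starttime, endtime, batchid, product, procedure):
    # Look up the "gibberish" tag name from the unit's UniqueID, then store
    # the rest of the unitbatch record at the batch's starttime.
    tagname = module_db[unit]["unique_id"]
    tag_archive.setdefault(tagname, {})[starttime] = {
        "endtime": endtime, "batchid": batchid,
        "product": product, "procedure": procedure,
    }

archive_unitbatch("Bioreactor-01",
                  datetime(2011, 10, 1, 8, 0), datetime(2011, 10, 12, 8, 0),
                  "LOT-1234", "mAb-X", "Production")
```

The hierarchical ModuleDB supplies the unit and its UniqueID; plain tags supply the archive; the hybrid of the two is PI Batch.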

    Wednesday, September 14, 2011

    OSI PI Batch Database (BatchDB) for biologics lab and plant - part 1

    Biologics manufacturing is a batch process, which means that process steps have a defined starttime and endtime.

    CIPs start and end. SIPs start and end. Equipment preparations start and end. Fermentation, Harvest, Chromatography, Filtration, Filling are all process steps that start and end.

    Even lab experiments are executed in a batch manner, with defined starts and ends.

    Like the ModuleDB, OSIsoft provides a data structure within PI that describes batches; it is called the PI Batch Database (PI Batch). While it comes free, it does cost at least one tag per unit (PIUnit) to use.

    The most important table is the UnitBatch table. The UnitBatch table contains the following fields:
    • starttime - when the batch started
    • endtime - when the batch ended
    • unit - where the batch happened (on which equipment)
    • batchid - who (the name of the batch)
    • product - what was produced
    • procedure - how it was produced

    In essence, the UnitBatch table describes everything there is to know about a process step that happens on a unit. Remember: units are defined in the PI ModuleDB, which means the PI BatchDB depends on a configured PI ModuleDB.
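One row of the UnitBatch table can be sketched as a small record type (a hedged illustration using the six fields listed above; the values are made up):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UnitBatch:
    """One row of the UnitBatch table."""
    starttime: datetime   # when the step started
    endtime: datetime     # when the step ended
    unit: str             # where (which equipment, defined in PI ModuleDB)
    batchid: str          # who (the name of the batch)
    product: str          # what was produced
    procedure: str        # how it was produced

ub = UnitBatch(datetime(2011, 9, 1, 6, 0), datetime(2011, 9, 11, 6, 0),
               "Fermentor-02", "LOT-0042", "mAb-X", "Production")

# Having start and end times as first-class fields makes duration a
# one-liner -- exactly the time-window a ProcessBook trend needs.
duration = ub.endtime - ub.starttime
```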

    So why bother configuring yet another part of your PI server? The main reason is to increase the productivity of your PI users. In our experience, up to 50% of the time spent in PI ProcessBook goes to typing timestamps into the trend dialog. Configuring PI Batch lets your users change time-windows in ProcessBook with a single click.

    We have seen power-users put eyeballs on more trends in less time than without PI Batch; and the more trends your team sees, the more process experience they gain.

    In this dismal economic environment, simply configuring PI Batch on your PI server can make your team up to 400% more productive. This particular modification takes less than a day to accomplish.

    Friday, September 9, 2011

    Multivariate Analysis in Biologics Manufacturing

    All these tools for data acquisition and trend visualization and search are nice. But at the end of the day, what we really want is process understanding and control of our fermentations, cell cultures and chromatographies.

    Whether a process step performs poorly, well, or within expectations, we want to know why.

    For biological systems, the factors that impact process performance are many, and there are often interactions between factors even for simple systems such as viral inactivation of media.

    On one occasion, transferring media from the prep tank to the bioreactor left filters clogged with white residue. Several times, this clogging put the transfer on hold and stopped production.

    After studying the data, we found that pH and temperature were the two main effects that significantly impacted clogging. If the pH was high AND the temperature was high, solids would precipitate from the media. But if either the pH or the temperature during viral inactivation was low, the media would transfer without issue.

    After identifying the multiple variables and their interactions, we were able to change the process to eliminate clogging as well as simplify the process.
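A two-factor model with an interaction term is what captures an "A high AND B high" effect. Here is a minimal sketch of such a fit (the numbers are invented for illustration; the original study's data are not reproduced here):

```python
import numpy as np

# Made-up data: clogging severity is high only when BOTH pH and temperature
# are high during viral inactivation.
pH   = np.array([6.8, 6.8, 7.4, 7.4, 6.8, 7.4, 7.1, 7.1])
temp = np.array([30., 45., 30., 45., 37., 37., 30., 45.])
clog = np.array([0.1, 0.2, 0.2, 3.0, 0.1, 1.5, 0.1, 1.2])

# Design matrix: intercept, main effects, and the pH*temp interaction term.
X = np.column_stack([np.ones_like(pH), pH, temp, pH * temp])
coef, *_ = np.linalg.lstsq(X, clog, rcond=None)
b0, b_pH, b_T, b_int = coef
```

A positive interaction coefficient (`b_int`) means the effect of temperature grows with pH, matching the finding that clogging required high pH and high temperature together; a main-effects-only model would miss that.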

    For even more complex systems, like production fermentation, multivariate analysis produces results. In 2007, I co-published a paper with Rob Johnson describing how multivariate data analysis can save production campaigns. The regression pictured below is from that article.

    Multiple Linear Regression

    You can see that it isn't even that great a fit; statisticians shrug at R-squared values below 0.90 all the time. But from this simple model, we were able to turn around a lagging production campaign and achieve 104% Adherence To Plan (ATP).

    The point is not to run into trouble and then use these tools and know-how to fix the problem. Ideally, we understand the process ahead of time by designing in process capability and then fine-tuning it at large scale; in the real world, we are less fortunate.

    My point in all this: if you are buying tools and assembling a team without process understanding and control, you won't know which tools are right or what training is best. Keeping your eye on the process-understanding/multivariate-analysis prize will put you in control of your bioprocesses and out of the spotlight of QA or the FDA.