Showing posts with label process troubleshooting. Show all posts

Friday, June 20, 2014

How to Scale X-Axis for ZOOMS Trends

In the last post, we talked about scaling the Y-Axis of ZOOMS trends.  What about the X-Axis?

Well, the X-axis (or as pedantic mathematicians would call it, "abscissa") for ZOOMS is always the time-axis.  As with any chart, there is an "x-min" for the minimum value of x as well as an "x-max."

In the world of time-series data, there are more colloquial names:
  • x-min is starttime
  • x-max is endtime
Unfortunately, there are also hundreds of ways of representing time:
  • 04-Jul-1776
  • 12/8/1941
  • back to the future time

Type Time Range into Search Box

Typing the starttime and endtime into the search box is one way to set the time range.

ZOOMS is able to interpret a lot of date inputs, but not all of them.
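If you want a cheat sheet, the shorthand below comes from OSIsoft's PI time syntax; whether ZOOMS accepts every one of these forms is an assumption:

```
*            right now
t            today at midnight
y            yesterday at midnight
*-8h         eight hours ago
01-jan-14    an absolute date
```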

Use the Trend Buttons

What happens when you've got a trend and you want to go from there?
There are 3 buttons at the bottom of each trend.
  1. Back Arrow is an arrow pointing to the left that will take the trend one time-range into the past.
  2. Forward Arrow is an arrow pointing to the right that will take the trend one time-range into the future.
  3. Refresh/Revert will restore the time-range to the original as-loaded trend.

Highlight Area On Trend

You can magnify a time range by highlighting it with the mouse: click and hold the mouse button at one edge of the time range, then drag the mouse to the other edge:
When you release the mouse button, the trend will zoom to the selected time range.

Summary

In summary, there are at least 3 ways to scale the X-Axis of a ZOOMS trend:

  1. Type the time range into the search box.
  2. Use the "back" and "forward" buttons.
  3. Highlight an area on a trend.


Thursday, January 23, 2014

Multivariate Analysis: Pick Actionable Factors Redux

When performing multivariate analysis, say multiple linear regression, there's typically an objective (like "higher yields" or "troubleshoot campaign titers"). And there's typically a finite set of parameters that are within control of the production group (a.k.a. operators/supervisors/front-line managers).

This finite parameter set is what I call, "actionable factors," or "process knobs." For biologics manufacturing, parameters like

  • Inoculation density
  • pH/temperature setpoint
  • Timing of shifts
  • Timing of feeds
  • Everything your process flow diagram says is important
are actionable factors.

Examples of non-actionable parameters include:
  • Peak cell density
  • Peak lactate concentration
  • Final ammonium
  • etc.
In essence, non-actionable parameters can only be measured; they cannot be changed during the course of the process.

Why does this matter to multivariate analysis? I pick on one study I saw where someone built a model against a commercial CHO process and proved that final NH4+ levels inversely correlate with final titer.



What are we to do now?  Reach into the bioreactor with our ammonium-sponge and sop up the extra NH4+ ion?

With the output of this model, I can do absolutely nothing to fix the lagging production campaign. Since NH4+ is evolved as a byproduct of glutamine metabolism, this curious finding may lead you down the path of further examining CHO metabolism and perhaps some media experiments, but there's no immediate action nor medium-term action I can take.

On the other hand, had I discovered that initial cell density of the culture correlates with capacity-based volumetric productivity, I could radio into either the seed train group or scheduling and make higher inoc densities happen.
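That inoc-density finding is the kind of relationship a simple regression surfaces. Here is a minimal ordinary-least-squares sketch; the helper and the data are made up for illustration, not the study's actual model:

```python
# Sketch: regress a KPI on an *actionable* factor (e.g., inoculation density)
# using ordinary least squares. Data below are hypothetical.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx  # (slope, intercept)

# hypothetical: inoc density (1e5 cells/mL) vs volumetric productivity (g/L/day)
slope, intercept = fit_line([2.0, 3.0, 4.0, 5.0], [0.40, 0.48, 0.56, 0.64])
print(slope)  # ≈ 0.08
```

A positive slope on an actionable factor is something you can radio in to the seed train group; a slope on final NH4+ is not.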


Monday, November 11, 2013

Contamination Control of Cell Culture Bioreactors

"Contamination Control"

A misnomer. I can see how they got that name... from "Pest Control," but I still hate it.

"Bioreactor control" makes sense as cell culture manufacturers try to direct the behavior of pH, dissolved oxygen, temperature, agitation, pressure...

But "contamination control"? No one is trying to direct the behavior of bioreactor contamination: Everyone tries to abolish bioreactor contaminations.

The abolition of bioreactor contamination in a large-scale setting is generally a team sport. It can take just one person to solve the contamination. But usually, the person who figures out what went wrong (the brains) is unlikely the same as the person who implements the fix (the hands). And in a GMP environment, the change implementation is a coordinated process involving many minds, personalities, and motivations. With all those people come an inordinate amount of politics for a goal that everyone seems to want to reach: no contams.

Immediate-/Medium-term Fixes

The first thing to realize is that operations management is usually the customer when it comes to solving bioreactor contaminations. Everyone's butt is on the line, but no group burns more resources responding to bioreactor contaminations than them. And in my experience, there is no "short-term" vs. "long-term" solution.

To operations management, there is no long-term solution; there is just the immediate solution and the medium-term solution.
  • Immediate solution :: what are you going to do for me today?
  • Medium-term solution :: what lasting solution are you going to implement after the immediate solution?

Science... if it fits

The second thing to realize is that there's no room for science. The prime objective is to stop the contaminations. The secondary objective is to find the root cause. If identifying the root cause helps stop the contamination, that's a bonus; but root cause or not, you still must stop the contaminations.

For example, if your contamination investigation team thinks that there are five contamination risks, the directive from management will be to implement all CAPAs necessary to address all five risks. If the fixes work, "Great! You met the objective." Do you know what the true root cause was? Not a clue (it was one of those five, but you'll never know which one).

Political

The third thing to realize is that contamination response is as much political as it is technical.
  • You can have the right solution, but present it the wrong way - and it's the wrong solution.
  • You can formulate the right solution, but if it is not immediately actionable, no one wants to hear about it.
  • You can irrefutably identify the true root cause (thereby shining light on GMP deficiencies), and run up against resistance.
Being right is different than being effective. And "bioreactor contamination control" at large-scale requires effectiveness. For in-house resources, it requires a keen understanding of interpersonal dynamics. For organizations that are at either a technical or political impasse, there are external bioreactor consultants who understand how to effectively troubleshoot and abolish bioreactor contaminations.


Abolish Bioreactor Contaminations

Wednesday, July 10, 2013

OSI PI Historian Software is Not only for Compliance

In October 1999, I joined a soon-to-be licensed biotech facility as Associate (Fermentation) Engineer. They had just got done solving some tough large-scale cell culture start-up problems and were on their way to getting FDA licensure (which happened in April 2000).

As the Manufacturing Sciences crew hadn't yet bulked up to support full-time commercial operations, there were 4 individuals from Process Sciences supporting the inoculum and production stages.

My job was to take over for these 4 individuals so they could resume their Process Sciences duties. And it's safe to say that taking over for 4 individuals would not have been possible were it not for the PI Historian.

The control system had an embedded PI system with diminished functionality: its primary goal in life was to serve trend data to HMIs. And because this was a GMP facility and the embedded PI was an element of the validated system, the more access restrictions you could place on the embedded PI, the better for GMP and compliance.

Restricting access to process trends is good for GMP, but very bad for immediate-term process troubleshooting and long-term process understanding. Thus Automation had created corporate PI: a full-functioned PI server on the corporate network that would handle data requests from the general cGMP citizen without impacting the control system.

Back in the early-2000's, this corporate PI system was not validated... and it didn't need to be as it was not used to make GMP forward-processing decisions.

If you think about it, PI is a historian: in addition to capturing real-time data, it primarily serves up historical data from the PI Archives. Making process decisions involves real-time data, which was available from the validated embedded PI system viewed from the HMI.

Nonetheless, the powers that be moved toward validating the corporate PI system, which appears to be the standard as of the late-2000s.

Today, the success of PI system installations in the biotech/pharma sector is measured by how flawlessly the IQ and OQ documents were executed. Little consideration is given to the usability of the system in terms of solving process issues or Manufacturing Sciences efficiency until bioreactor sterility issues come knocking and executive heads start rolling over microbial contamination.

Most PI installations I run into try to solve the compliance problem, not a manufacturing problem, and I think this is largely the case because automation engineers have been sucked into the CYA-focus rather than the value-focus of this process information:
  • Trends are created with "whatever" pen colors.
  • Tags are named the same as the instrumenttag that came from the control system.
  • Tag descriptors don't follow a nomenclature.
  • Data compression settings do not reflect reality.
  • PI Batch/EventFrames is not deployed.
  • PI ModuleDB/AF is minimally configured.
The key to the efficiencies that allowed 1 Associate Engineer to take over the process monitoring and troubleshooting duties of 4 seasoned PD scientists/engineers lies precisely in having a lot of freedom to use and improve the PI Historian.

If said freedom is not palatable to the QA folks (despite the fact that hundreds of lots were compliantly released when manufacturing plants allowed the use of unvalidated PI data for non-GMP decisions), the answer is to bring process troubleshooters and data scientists in at the system specification phase of your automation implementation.

If your process troubleshooters don't know what to ask for upfront, there are seasoned consultants with years of experience that you can bring onto your team to help.

Let's be clear: I'm not downplaying the value of a validated PI system; I'm saying to get user input on system design upfront.

Wednesday, June 26, 2013

If Kryptonite is Root Cause, What's the CAPA?

Superman is out there beating up and capturing your above-average-IQ'ed criminals. What do you suppose his success rate is?

100%, right?

I mean, how do you go up against a guy who can fly, has heat-ray vision, X-ray vision and can withstand all applications of the Second Amendment?

Superman succumbing to kryptonite
Answer: You can't.

But suppose Superman pits himself against an evil genius,  say Lex Luthor, who discovered that Superman loses his powers upon exposure to kryptonite.

Superman's success rate just plummeted to 0%.

Were he to perform an analysis while floating around helplessly in a pool of water with kryptonite chained around his neck to identify the root cause of his downfall, what do you suppose the most probable cause (MPC) would be?

  • Would it be the gullibility of honest/small-town upbringing?
    If so, this wouldn't explain his high success against other criminals.

  • Would it be Lex Luthor's cunning?
    If so, this root cause could not explain the supervillain Brainiac of similarly high intellect.

  • Would it be the kryptonite?
Most people would stop here and say the true root cause of his new low-success rate would be the kryptonite itself. Take away the kryptonite and Superman is back to 100% success. Bring back the kryptonite:  0%.

But if the Son of Jor-El assigned blame to kryptonite, what's the CAPA?

 A federal regulation banning the possession of kryptonite?  Federally-licensed kryptonite dealers?  Universal-background check on kryptonite purchases?

See, I say the true root cause is Superman's pre-existing condition that makes him vulnerable to kryptonite. If you (somehow) take away this vulnerability, Superman would have 100% success all the time.

If he hired me to increase his success rates, I'd craft a super-suit made of lead. The extra bulk would not encumber him given his strength nor would he be susceptible to lead poisoning. Maybe throw in lead face-paint, lead gloves and lead boots to deflect the radiation from the kryptonite.

The key to a permanent increase in success rates may be to challenge conventional thinking and to address pre-existing vulnerabilities in your process--no matter how much success your process delivered in the past.


Friday, May 10, 2013

Automation Engineer's Take on Wall-E

I was talking with a buddy from my Cornell ChemE days (who now works in social media) about the odd trajectory of his career. Having had a successful career in biopharma and hospital administration, he's now a social entrepreneur. And it puzzled me that he is fulfilled "not using his degree" in social media.

From his side, he was puzzled that I liked running an automation business helping people get and interpret machine data so their factories operate more efficiently.

As an MBA, he explained, "Business is about people and relationships. I want to operate in a world where people matter, and that's what 'social' is."

I have no disagreements with that statement. I did add:
Business is about making money...creating wealth. A world where everyone is wealthy is one where no one has to work; in that world, we have machines at our beck and call. Automation is the means to that world.


Screenshot from Disney Pixar's WALL-E: humans have fled Earth on a galactic cruise ship where no one has to work because their lives are 100% automated.

Pixar's writers pose the question: What does the world look like when no one has to work?

Don't let Pixar's distinctly American interpretation (out-of-shape, chair-loungers watching TV while robots get us our beverage) distract from the world where everyone gets to enjoy leisure and no one has to work.

Some will jump in and say, "See, employment and working is good for man, else we'll end up all fat and lazy." It's true that some will choose this path, but the vast majority of others would do something else with all that time.

No truer words were spoken when man first uttered the phrase, "Time is Money."

Having vast wealth is synonymous with having vast amounts of time to do what you want; this time to do whatever we choose is called, "leisure." And the purpose of an economy is to lift as many of us from the bonds of employment as efficiently as possible.

As an aside, it's rather hilarious that our politicians run around trying to decrease unemployment. The world where everyone has the luxury of 100% leisure is a world where unemployment is 100%.

And all this leisure can only be possible because we created the machines to automate the tasks that would otherwise be manual.

But back to my buddy: he's also right. Ultimately, business is handled with strong personal relationships. And even after we've automated ourselves into a world where no one has to work, we'd probably spend all that leisure time socializing anyway.

More general commentary:

Tuesday, December 4, 2012

How the @OSIsoftPI System Tracks Time

Here's a primer on how PI tracks time.

GMT

The first thing you need to know is that everything is archived in Greenwich Mean Time, so it doesn't matter what timezone the server is in, and it doesn't matter what timezone you're in: everything gets archived in GMT.


When you create a PI point, you have to tell the PI server what data type it ought to store via the pointtype attribute. This is important because some data types can be converted to others while others cannot. Date/times happen to be a type that can be tracked in at least two ways:

Time as Integer

One convention that PI uses to track time is integers, where the numerical value refers to the number of seconds since 31-Dec-1969 16:00 US Pacific time. The significance of this date is that it is 1-Jan-1970 00:00 GMT, a.k.a. the Unix epoch, rendered in Pacific time. So on a California machine, 0 refers to 12/31/1969 @ 4pm.

To get from there to Jan 1, 1970 @ midnight (Pacific) takes 8 hours. 8 hours is 480 minutes... which is 28,800 seconds. So if you wanted to write the date 1/1/1970 to PI using integers, you'd send the numerical value 28800 in the timestamp field.
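The same arithmetic in a few lines of Python; this is a sketch of the convention, assuming a fixed PST offset (DST ignored for simplicity):

```python
from datetime import datetime, timezone, timedelta

# Sketch of the integer convention: seconds since the Unix epoch,
# 1-Jan-1970 00:00 GMT, which a US Pacific clock displays as
# 31-Dec-1969 4pm. Fixed -8h offset assumed; DST ignored.
UNIX_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
PACIFIC = timezone(timedelta(hours=-8))

def pi_int_timestamp(local_dt):
    """Epoch seconds for a naive datetime interpreted as Pacific time."""
    return int((local_dt.replace(tzinfo=PACIFIC) - UNIX_EPOCH).total_seconds())

print(pi_int_timestamp(datetime(1970, 1, 1)))  # 28800
```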

Time as Float32

Another convention that PI follows to track time is using floating points where the numerical value of the floating point refers to the number of days since 1/1/1900. Incidentally, this is the same as the Excel convention.

As an example, Marty McFly goes back on 5-Nov-55 (11/5/1955). To figure out what this is in the Float32 format, you simply do this:

1955 is 55 years after 1900... so 55 years * 365 days per year = 20,075 days, plus 13 leap days (1904 through 1952; 1900 was not a leap year) = 20,088 days to reach 1-Jan-1955.



Credit ©1985 Universal Pictures

November 5th is the 309th day of the year, i.e. 308 days after January 1st, so we're at 20,088 + 308 = 20,396 elapsed days. The Excel serial convention adds 2 (1-Jan-1900 is serial 1, and Excel counts a phantom 29-Feb-1900), giving 20,398.
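Here is that calculation in code; the +2 offset is the standard Excel-compatibility fudge (serial base of 1 plus the phantom leap day), though treating PI's Float32 as exactly Excel-equivalent is the post's assumption:

```python
from datetime import date

# Days-since-1900 in the Excel serial sense: true elapsed days from
# 1-Jan-1900, plus 2 (the serial starts at 1, and Excel also counts
# a phantom 29-Feb-1900).
def excel_serial(d):
    return (d - date(1900, 1, 1)).days + 2

print(excel_serial(date(1955, 11, 5)))  # 20398
```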

Handling Local Time

PI uses the local time of the PI server or the local time of the PI client (ProcessBook, as it were) to figure out how to display the time.  No data is deleted, because all of it is archived against GMT.

So take the silly American ritual of Daylight Savings where during the summer hours, we adjust our clocks forward and then in the winter, we adjust the clock back.

On 11/4, when we got to 2am, we set the clocks back to 1am and repeat the time from 1am to 2am.  This is what it looks like on PI:

OSI PI daylight savings
You can see that in the 1-day period between 11/4 and 11/5, 1.04 days are shown: the fall-back day is 25 hours long, and 25/24 ≈ 1.04.

You can also see in the sinusoid trend (which is based on local clock time apparently), the trend repeats the hour between 1am and 2am.

In summary:

  • PI works off of GMT, and the translation to local time depends on your computer's or the PI server's local time.
  • PI can store timestamps as integers representing seconds since 12/31/1969 @ 4pm Pacific (the Unix epoch)
          - or -
  • PI can store timestamps as float32 representing days since 1/1/1900

Tuesday, September 18, 2012

How to Use ZOOMS (for OSIsoft PI System)

This was where our OSI PI Search Engine was as of 2008



In a single textbox, you could type in any set of words... just like Google.

And after you typed in a few concepts like:

  • Reactor1
  • Temperature
  • Concentration
  • Volume
ZOOMS could figure out that Reactor1 was a unit while Temperature, Concentration and Volume were process parameters.

You didn't need to, but if you felt like typing in a time-window (in OSIsoft PI-ese), ZOOMS would simply show you the trend you meant to see.

Perfect for people who don't have PI ProcessBook installed... which is basically management, scheduling, QA, instrumentation, process engineering... you know, everyone.

Get The ZOOMS Brochure

Wednesday, September 5, 2012

Cell Culture Database - Batch Information

You work in biopharma. Maybe you're a fermentation guru... or a cell culture hot shot. Whatever the case... This is your process.
We muggles don't have the luxury of waving our wands and having proteins fold themselves in mid-air. There's usually a container where the process happens: processes happen in a unit. The unit is the where.
A time-window (starttime to endtime) is the when of the process.

Operators execute process instructions; these procedures are the how of the process.
The execution of the process instructions results in an output. The output of the process step is the product and constitutes the what.
Lastly, the process (step) is given a name, the batchid, describing who the batch is.
It stands to reason that the who, what, how, when, where of a batch is characterized by:
  • batchid
  • product
  • procedure
  • starttime - endtime
  • unit
These five fields fully describe batch information for cell culture and fermentation.
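Those five fields map naturally onto a record type. A minimal sketch follows; the class name and example values are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime

# A minimal sketch of a batch record; field names follow the post,
# everything else is illustrative.
@dataclass
class Batch:
    batchid: str         # who   - the name given to the batch
    product: str         # what  - the output of the process step
    procedure: str       # how   - the process instruction executed
    starttime: datetime  # when  - start of the time-window
    endtime: datetime    # when  - end of the time-window
    unit: str            # where - the container the process happens in

b = Batch("B-1234", "mAb-X", "PROD-CULT-01",
          datetime(2012, 8, 1, 6, 0), datetime(2012, 8, 15, 6, 0), "Reactor1")
print(b.unit)  # Reactor1
```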

Organize Your Cell Culture Data

Wednesday, August 22, 2012

Bioreactor Contamination Failure Modes

When it comes to bioreactor contamination, there are exactly two failure modes:
  1. Failure to Kill (the bugs)
  2. Failure to Keep (the bugs) Out
The way it works is this:

For the volume of space (say a bioreactor) you want sterile, you start with a clean tank free of residue.  You do this by executing a Clean-In-Place (CIP).

Once your reactor is clean, you seal off the orifices of your envelope.  Once this envelope is sealed, you sterilize it with steam (SIP), creating a sterile envelope.  Once the interior of this envelope is bug-free, you try to keep it that way.

Failure To Kill

"Failure to Kill" is a deficiency in creating a sterile boundary.... the failure to reach sterilizing temperatures, thereby leaving viable microbes inside of the envelope.  Steam-In-Place procedures can fail for a variety of reasons, including:
  • Mis-calibrated temperature probes
  • Failure to maintain sterilizing temperature
  • Failure to reach sterilizing temperature
  • Dirty process surfaces
  • Inaccessible process surfaces

For validated GMP systems, the sterilization procedure is usually automated with several critical manual steps.  The faithful execution of the sterilization procedure will rarely result in the "Failure To Kill."

Some microbial contaminants are harder to kill than others.  Gram-positive bacteria, which have a hardy outer layer of peptidoglycan, can withstand higher temperatures for longer than gram-negative bacteria.

Also, gram-positive bacteria can form spores, reducing themselves to a viable but dormant form.  The spores lie dormant until more favorable environmental conditions emerge.

Most sterilizing procedures are validated to inactivate spores, but if your bioreactor contaminations are predominantly gram-positive, hard-to-kill spore-formers, then look at "Failure To Kill" as one component of the bioreactor contamination.

Failure To Keep Out

"Failure to Keep Out" is a deficiency in maintaining the integrity of the sterile boundary.  The non-moving-parts of a sterile boundary includes the physical vessel, elastomers, valve diaphragms, and sterile filters.

During contamination responses, people often visually inspect elastomers and valve diaphragms for nicks and cuts, as these materials wear over the course of use.

Filters are often suspected and post-use integrity tests are done to ensure that the filter integrity was not breached.

When more media or feed is needed, the sterile boundary may be extended.  This is when a second envelope is created near the bioreactor, sterilized and then opened next to the bioreactor.

Maintaining the integrity of the sterile boundary often requires positive pressure.  Failure to maintain an outward flow can result in a loss of that integrity.

In fact, when steam is rapidly cooled, the pressure drops suddenly, often creating a vacuum that can suck "bugs" into the envelope.

Summary

There are exactly two failure modes that cause bioreactor contamination.  While failures can be a combination of both, it is important for the bioreactor contamination response to simplify and clarify these two modes so that the true root cause is easily enunciated and therefore found.

Consult A Bioreactor Sterility Expert

Friday, July 27, 2012

Manufacturing Sciences: Campaign monitoring v. Process Improvement

Manufacturing Sciences is the name of the department responsible for stable, predictable performance of the large-scale biologics process.

Manufacturing Sciences also describes the activities of supporting for-market, large-scale, GMP campaigns. The three main functions of Manufacturing Sciences are:
  1. Campaign monitoring
  2. Long-term process improvement
  3. Technology Transfer
Within the department are:
  1. Data analysis resources - responsible for campaign monitoring
  2. Lab resources - responsible for process improvement
manufacturing sciences flow
Figure 1: Flow of information within Manufacturing Sciences

The data group is responsible for monitoring the campaign and handing off hypotheses to the lab group.  The lab group is responsible for studying the process under controlled conditions and handing plant trials back to the data group.

Campaign Monitoring

When a cGMP campaign is running, we want eyeballs watching each batch. There are automated systems in place to prevent simple excursions, but on a macro level, we still want human eyeballs. Eyeballs from the plant floor are the best. Eyeballs from the Manufacturing Sciences department are next best because they come with statistical process control (SPC) tools that help identify common and special cause.

Ultimately, all this statistical process control enables data-based, defensible decisions for the plant floor and for production management, much of which will involve the right decisions for decreasing process variability and increasing process capability and reliability.

Long-term Process Improvement

The holy-grail of manufacturing is reliability/predictability. Every time we turn the crank, we know what we're going to get: a product that meets the exact specifications that can be produced with a known quantity of inputs within a known or expected duration.

Long-term process improvement can involve figuring out how to make more product with the same inputs. Or figuring out how to reduce cycle time. Or figuring out how to make the process more reliable (which means reducing waste or variability).

This is where we transition from statistical process control to actual statistics. We graduate from uni- and bivariate analysis into multivariate analysis because biologics processes have multiple variables that impact yield and product quality. To understand where there are opportunities for process improvement, we must understand the system rather than simple relationships between the parts. To get this understanding, we need a good handle on the variability in our large-scale data.
Note: in order to have a shot at process improvement, you need variable data from large-scale. Meanwhile, if you succeed at statistical process control, you will have eradicated variability from your system.

This is why a manufacturing sciences lab is the cornerstone of large-scale, commercial process improvement: it lets you pursue process improvement without reintroducing variability into the plant and undoing the results of your statistical process control initiatives.

Outsource Manufacturing Sciences

Monday, February 27, 2012

Process Troubleshooting using Patterns


The variability in your process output is caused by variability from your process inputs.

This means that patterns you observe in your process output (as measured by your key performance indicators, or KPIs) are caused by patterns in your process inputs.

Recognizing which pattern you're dealing with can, hopefully, lead you quickly to the source of variability so you can eliminate it.

Stable

Boring processes that do the same thing day in and day out are stable processes. Every day you show up for work, the process is doing exactly what you expected. Control charts of your KPIs look like this:

control chart stable process
Boring is good: it is predictable, you can count on it (like Maytag products) so you can plan around it. Well-defined, well-understood, well-controlled processes often take this form. The only thing you really have to worry about is job security (like the Maytag repairman).
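What a control chart like that computes can be sketched in a few lines. This assumes an individuals (I-MR) chart with 3-sigma limits estimated from the average moving range, the textbook approach, not any particular department's actual SPC software:

```python
import statistics

# Sketch of an individuals control chart: sigma is estimated from the
# average moving range divided by the d2 constant 1.128, then points
# outside mean ± 3 sigma are flagged as special-cause signals.
def special_causes(kpis):
    mean = statistics.mean(kpis)
    moving_ranges = [abs(b - a) for a, b in zip(kpis, kpis[1:])]
    sigma = statistics.mean(moving_ranges) / 1.128
    lcl, ucl = mean - 3 * sigma, mean + 3 * sigma
    return [i for i, x in enumerate(kpis) if x < lcl or x > ucl]

# 20 boring batches, one wild one, then boring again: only index 20 flags
print(special_causes([10.0] * 20 + [30.0] + [10.0] * 5))  # [20]
```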

Periodic


Processes where special-cause signals show up at a fixed interval exhibit a "periodic" pattern.

periodic process
This pattern is extremely common because in reality, many things in life are periodic:


  • Every day is a cycle.
  • Manufacturing shift structures repeat every 7 days.
  • The rotation of equipment in use is cyclical.
  • Plant run-rates rise and fall cyclically.
On one occasion, we had a rash of production bioreactor contaminations. By the end of it all, we had five contaminations over the course of seven weeks and they all happened late Sunday/early Monday. On Fridays going into the weekend, people would bet whether or not we'd see something by Monday of the following week. Here, the frequency is once-per-week and ultimately, the root cause was found to be related to manufacturing shifts, which cycle once-per-week.
Naturally occurring cycles happen at all sorts of intervals, and the key to solving the periodic pattern is identifying the periodic process input that cycles at the same frequency as your output.
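That frequency can be hunted programmatically. A crude autocorrelation scan over a daily KPI series is sketched below, with hypothetical data; it is an illustration, not the method used in the story above:

```python
# Crude periodicity scan: for a daily series, find the lag (in days)
# with the strongest autocovariance. Hypothetical data for illustration.
def best_lag(series, max_lag):
    n = len(series)
    mean = sum(series) / n
    dev = [x - mean for x in series]

    def autocov(k):
        # sum of products of deviations k days apart
        return sum(dev[i] * dev[i + k] for i in range(n - k))

    return max(range(1, max_lag + 1), key=autocov)

# a contamination-like indicator that spikes once per week for 8 weeks
weekly = [1, 0, 0, 0, 0, 0, 0] * 8
print(best_lag(weekly, 10))  # 7
```

A best lag of 7 days points you at the process inputs that also cycle weekly, like shift structures.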

Step-change


A step-change pattern is when, one day, your process output changes and doesn't go back to the way it was... not exactly "irreversible", but at least "difficult to go back."

control chart step change
Step patterns are also commonly observed in manufacturing because many manufacturing activities, "can't be taken back." For example:
  • After a plant shutdown when projects get implemented.
  • After equipment maintenance.
  • When the current lot of material is depleted and a new lot is used.

One time coming out of a shutdown, we had a rash of contaminations: every single 500L* bioreactor came down contaminated. It turned out that a project securing the media filter, executed during the changeover for safety reasons, changed the system mechanics and caused the media filter to be shaken loose. Filter stability was restored with another project so that the safety modifications could remain.

The step pattern is harder to troubleshoot than the periodic pattern because the irreversibility makes the system untestable. The key to solving a step pattern is to focus on "irreversible changes" to process inputs that happened just prior to the observed step change.

Sporadic


A sporadic pattern is basically a random pattern.

control chart sporadic
Sporadic patterns are unpredictable and difficult to troubleshoot because, at their core, the special causes in process outputs are often two or more process inputs coming together. When two or more process inputs combine to cause a different result than either input alone, this is called an interaction.

A good example is the Ford Explorer/Firestone tires debacle that happened in the early 2000s. At the time, a higher frequency of Ford Explorer SUVs were rolling over than other SUVs. Upon further investigation, the rolled-over Ford Explorers mainly had tires made by Firestone. Ford Explorer owners using other tires weren't rolling over. Other SUV drivers using Firestone tires weren't rolling over. It was only the combination of Firestone tires AND Ford Explorers that caused the failures.
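A tiny sketch with hypothetical data shows why interactions hide from one-factor-at-a-time analysis: each factor alone shows a diluted failure rate, but the combination fails every time:

```python
# Hypothetical data illustrating an interaction: the failure shows up only
# when two inputs occur together, so neither input alone explains it.
def failure_rate(records, **conditions):
    hits = [r for r in records if all(r[k] == v for k, v in conditions.items())]
    return sum(r["failed"] for r in hits) / len(hits)

records = [
    {"suv": "Explorer", "tire": "Firestone", "failed": 1},
    {"suv": "Explorer", "tire": "Other",     "failed": 0},
    {"suv": "Other",    "tire": "Firestone", "failed": 0},
    {"suv": "Other",    "tire": "Other",     "failed": 0},
]

print(failure_rate(records, suv="Explorer"))                    # 0.5
print(failure_rate(records, tire="Firestone"))                  # 0.5
print(failure_rate(records, suv="Explorer", tire="Firestone"))  # 1.0
```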

To be blunt, troubleshooting sporadic patterns basically sucks. The best thing about a sporadic pattern is that it tells you to look for more complex interactions among your process inputs.

Summary


Because the categories of patterns are not well defined (i.e. "I know it when I see it"), identifying the pattern is subject to debate. But know that the true root cause of the pattern must - itself - have the same pattern.


Wednesday, October 26, 2011

How to Compute Production Culture KPI


Production culture/fermentation is the process step where the active pharmaceutical ingredient (API) is produced. The KPI for production cultures relates to how much product gets produced.

The product concentration (called "titer") is what is typically measured. Operators submit a sterile sample to QC, who run validated tests that produce a result in dimensions of mass/volume; for biologics processes, this is typically grams/liter.

Suppose you measured the titer periodically from the point of inoculation; it would look something like this:

titer curve typical production culture
The curve for the majority of cell cultures is an "S"-shaped, also called "sigmoidal," curve. The reason for this "S"-shape was described by Malthus in 1798: population growth is geometric (i.e. exponential) while increases in agricultural production are arithmetic (i.e. linear); at some point, the food supply is incapable of carrying the population and the population crashes.

In the early stages of the production culture, there is a surplus of nutrients, and the cells - unlimited by nutrient supply - grow exponentially. Unlike humans and agriculture, however, a production fermentation does not have an ever-increasing supply of nutrients; the nutrient levels are fixed. Some production culture processes are fed-batch, meaning at some point during the culture you send in more nutrients. Regardless, at some point the nutrients run low and the cell population is unable to continue growing. Hence the growth curve flattens and basically heads east.

In many cases, the titer curve looks similar to the biomass curve. In fact, it is the integral (area under the biomass curve) that the titer curve typically mimics.
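
Both shapes can be sketched numerically. Below is a minimal, hypothetical simulation (all parameter values are made up for illustration) of logistic biomass growth, where titer accumulates in proportion to biomass, so the titer curve tracks the area under the biomass curve:

```python
# Minimal sketch (hypothetical parameters): logistic biomass growth with
# a titer that accumulates in proportion to biomass.

def simulate_culture(mu=1.2, x0=0.1, x_max=10.0, q_p=0.02, days=14, dt=0.01):
    """Euler integration: mu = growth rate (1/day), x_max = carrying
    capacity, q_p = specific productivity (product per unit biomass/day)."""
    x, titer, t = x0, 0.0, 0.0
    times, biomass, titers = [], [], []
    while t <= days:
        times.append(t)
        biomass.append(x)
        titers.append(titer)
        x += mu * x * (1.0 - x / x_max) * dt  # growth slows near capacity
        titer += q_p * x * dt                 # titer ~ integral of biomass
        t += dt
    return times, biomass, titers

times, biomass, titers = simulate_culture()
```

Plot `titers` against `times` and you get the S-shape: exponential at first, then flattening and "heading east" once biomass stalls at the carrying capacity.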

The reason this titer curve is so important is because the slope of the line drawn from the origin (0,0) to the last point on the curve is the volumetric productivity.

Volumetric Productivity
Titer/culture duration (g/L culture/day)

The steeper this slope, the greater the volumetric productivity. Assuming your bioreactors are filled to capacity and you want to supply the market with as much product as fast as possible, maximizing volumetric productivity ought to be your goal.


Counter-intuitively, maximizing your rate of production means shortening your culture duration. Due to the Malthusian principles described above, your titer curve flattens out as your cell population stagnates from lack of nutrients. Maximizing your volumetric productivity means stopping your culture when the cells are just beginning to stagnate. End the culture too early and you lose the opportunity to produce more product; end it too late and you've wasted valuable bioreactor time on dying cells.

The good news is that maximizing your plant's productivity is a scheduling function:


  1. Get non-routine samples going to measure the in-process titer to get the curve.
  2. Study this curve and draw a line from the origin tangent to this curve.
  3. Draw a straight line down to find the culture duration that maximizes volumetric productivity.
  4. Call the Scheduling Department and tell them the new culture duration.
  5. Tell your Manufacturing Sciences department to control chart this KPI to reduce variability.
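
Steps 2 and 3 above amount to picking the sample point that maximizes titer divided by duration. A minimal sketch, with hypothetical sample data:

```python
def best_harvest(samples):
    """samples: (day, titer g/L) pairs from the non-routine sampling.
    Returns the point that maximizes titer/duration, i.e. where a line
    from the origin is tangent to the titer curve."""
    return max(samples, key=lambda day_titer: day_titer[1] / day_titer[0])

# Hypothetical in-process titer curve
curve = [(4, 0.6), (6, 1.2), (8, 1.8), (10, 2.0), (12, 2.1)]
day, titer = best_harvest(curve)
print(day, round(titer / day, 3))  # → 8 0.225 (harvest day, g/L/day)
```

In this made-up curve, harvesting on day 8 beats waiting for the slightly higher day-12 titer, because the extra four days buy almost no additional product.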

There's actually more to this story for Production Culture KPI, which we'll cover next.


Tuesday, October 25, 2011

How to Compute Seed Fermentation KPI


So, if you agree that the purpose of seed fermentation (a.k.a. inoculum culture) is to scale up biomass, then the correct key performance indicator is the final specific growth rate.

To visualize final specific growth rate, plot biomass against time:


The cell density increases exponentially, which means that on a log scale the curve becomes linear. The specific growth rate (μ) is the slope of the line. The final specific growth rate (μF) is the slope fit to all the points recorded in the last 24 hours of the culture.

To compute the final specific growth rate, simply put datetime or culture duration in the first column, biomass in the second column, and the natural log of biomass in the third column:

tabular inoc culture kpi
In Excel, use the SLOPE function to compute the slope of the natural log of biomass:

=SLOPE(C5:C7,A5:A7)
Alternatively, if you don't want to bother with the third column (note that taking LN of a range requires entering this as an array formula, Ctrl+Shift+Enter, in older versions of Excel):

=SLOPE(LN(B5:B7),A5:A7)
This number has engineering units of inverse time (day⁻¹). While this measure is somewhat hard to grasp physically, ln(2) ≈ 0.693 serves as a guide: if a culture has a specific growth rate of ~0.70 day⁻¹, then its cell population is doubling once per day.
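
For those working outside Excel, here is a minimal Python sketch of the same calculation (the sample times and cell densities are hypothetical): a least-squares fit of ln(biomass) against time over the final 24 hours, whose slope is μF.

```python
import math

def final_growth_rate(times, biomass):
    """Least-squares slope of ln(biomass) vs. time -- the same number
    Excel's =SLOPE(LN(...), ...) returns. times in days."""
    ys = [math.log(x) for x in biomass]
    n = len(times)
    t_bar = sum(times) / n
    y_bar = sum(ys) / n
    num = sum((t - t_bar) * (y - y_bar) for t, y in zip(times, ys))
    den = sum((t - t_bar) ** 2 for t in times)
    return num / den

# Hypothetical samples from the final 24 hours: day vs. viable cell density
mu_f = final_growth_rate([2.0, 2.5, 3.0], [1.0e6, 1.4e6, 2.0e6])
print(round(mu_f, 2), "per day; doubling time:",
      round(math.log(2) / mu_f, 2), "days")  # → 0.69 per day; doubling time: 1.0 days
```

This made-up culture doubled from 1.0e6 to 2.0e6 cells over the last day, so μF lands right at the ln(2) ≈ 0.693 benchmark.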

Computing this KPI for seed fermentation and then control charting this KPI is the best start you can make towards monitoring and controlling your process variability.


Monday, October 24, 2011

KPIs for Cell Culture/Fermentation


Control charting each process step of your biologics process is a core activity for manufacturing managers that are serious about reducing process variability.

Sure, there's long-term process understanding gained from the folks in manufacturing sciences, but that work will be applied several campaigns from now.

What are the key performance indicators (KPIs) for my cell culture process today?

To answer this question, start with the purpose(s) of cell culture:
  1. Grow cells (increase cell population)
  2. Make product (secrete the active pharmaceutical ingredient)

Seed Fermentation (Grow Cells)


There are plenty of words that describe cell cultures whose purpose is to scale up biomass; to wit: seed fermentation, inoculum culture, inoc train, etc. Whatever your terminology, the one measurement of seed fermentation success is the specific growth rate (μ), the constant in the exponent of the exponential growth equation:

X = X0eμΔt

Where:
  • X = current cell density
  • X0 = initial cell density
  • Δt = elapsed time since inoculation

For seed fermentation, the correct KPI is the final specific growth rate; which is the growth rate in the final 24-hours prior to transfer. The reason the final specific growth rate is the KPI is because the way seed fermentation ends is more important than how it starts.

Production Fermentation (Make Product)


The output of the Production Fermentor is drug substance; the more and the faster, the better. This is why the logical KPI for Production Fermentation is Capacity-Based Volumetric Productivity.

A lot of folks look at culture titer as their performance metric. Mainly because it's easy. You ship those samples off to QC and after they run their validated tests, you get a number back.

Culture Titer
Mass of product per volume of culture (g/L culture)

The problem with using culture titer is that it does not take into account the rate of production. After all, if culture A takes ten days to make 2 g/L and culture B takes 12 days to make the same 2 g/L, then according to titer they are equivalent, even though A performed better. This is why we use volumetric productivity:

Volumetric Productivity
Titer/culture duration (g/L culture/day)

Culture volumetric productivity takes the rate of production into account pretty well; in our example, culture A's performance is 0.20 g/L/day while culture B's is 0.17 g/L/day. But what about the actual amount of product manufactured? I can run a 2 L miniferm and get 0.40 g/L/day, but that isn't enough to supply the market. This is why bioreactor capacity must be included in the true KPI for production cultures.

Capacity-based Volumetric Productivity
Volumetric Productivity * L culture / L capacity (g/L capacity/day)

Capacity-based Volumetric Productivity is the Culture Volumetric Productivity multiplied by the percent of fermentor capacity-used, such that a filled fermentor scores higher than a half-full fermentor.
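
Tying the metrics together, a minimal sketch using cultures A and B from above (the fermentor volumes are made up for illustration):

```python
def volumetric_productivity(titer, duration_days):
    """g/L culture/day."""
    return titer / duration_days

def capacity_based_vp(titer, duration_days, culture_l, capacity_l):
    """g/L capacity/day: scales by the fraction of fermentor capacity used."""
    return volumetric_productivity(titer, duration_days) * culture_l / capacity_l

# Cultures A and B: same 2 g/L titer, different durations
print(round(volumetric_productivity(2.0, 10), 2))  # → 0.2
print(round(volumetric_productivity(2.0, 12), 2))  # → 0.17
# The same culture run in a full vs. half-full 10,000 L fermentor
print(round(capacity_based_vp(2.0, 10, 10000, 10000), 2))  # → 0.2
print(round(capacity_based_vp(2.0, 10, 5000, 10000), 2))   # → 0.1
```

The half-full fermentor scores half the capacity-based KPI even though its titer and duration are identical, which is exactly the penalty the metric is designed to apply.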

KPIs are generally not product-specific; instead, they are process class specific. For instance, all seed fermentation for CHO processes ought to have the same KPI.

Generally, KPIs are simple calculations derived from easily measured parameters such that the cost of producing the calculation is insignificant relative to the value it provides.

KPIs deliver significant value when they can be used to identify anomalous performance and actionable decisions made by Production/Manufacturing in order to amend the special cause variability observed.


Friday, September 9, 2011

Multivariate Analysis in Biologics Manufacturing


All these tools for data acquisition and trend visualization and search are nice. But at the end of the day, what we really want is process understanding and control of our fermentations, cell cultures and chromatographies.

Whether a process step performs poorly, well or within expectations, put simply, we want to know why. 

For biological systems, the factors that impact process performance are many and there are often interactions between factors for even simple systems such as viral inactivation of media.

One time, transferring media from the prep tank to the bioreactor clogged filters with a white residue. On several occasions, this clogging put the transfer on hold and stopped production.

After studying the data, we found that pH and temperature were the two main effects that significantly impacted clogging. If the pH was high AND the temperature was high, solids would precipitate from the media. But if either the pH or the temperature during viral inactivation was low, the media would transfer without incident.
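
This kind of pH × temperature interaction can be estimated with a classic 2×2 factorial calculation. A minimal sketch, with made-up clogging scores (higher = more precipitate) standing in for the real data:

```python
def factorial_effects(y_ll, y_hl, y_lh, y_hh):
    """Main effects and interaction from a 2x2 factorial.
    y_ab: response with pH at level a and temperature at level b
    (l = low, h = high)."""
    ph_effect = ((y_hl + y_hh) - (y_ll + y_lh)) / 2
    temp_effect = ((y_lh + y_hh) - (y_ll + y_hl)) / 2
    interaction = ((y_ll + y_hh) - (y_hl + y_lh)) / 2
    return ph_effect, temp_effect, interaction

# Hypothetical clogging scores: only high pH AND high temp precipitates
print(factorial_effects(y_ll=1, y_hl=2, y_lh=2, y_hh=9))  # → (4.0, 4.0, 3.0)
```

A large interaction term relative to the main effects is the statistical signature of "it only fails when both inputs are high."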

After identifying the multiple variables and their interactions, we were able to change the process to eliminate clogging as well as simplify the process.

For even more complex systems like production fermentation, multivariate analysis produces results. In 2007, I co-published a paper with Rob Johnson describing how multivariate data analysis can save production campaigns. Pictured below is the regression from the article.

Multiple Linear Regression

You can see that it isn't even that great a fit; statisticians shrug all the time at R-squares less than 0.90. But from this simple model, we were able to turn around a lagging production campaign and achieve 104% Adherence To Plan (ATP).

The point is not to run into trouble and then use these tools and know-how to fix the problem. Ideally, we would understand the process ahead of time by designing in process capability and then fine-tuning it at large scale; in the real world, we are less fortunate.

My point in all this is if you are buying tools and assembling a team without process understanding and control,  then you won't know which are the right tools or what is the best training. Keeping your eye on the process understanding/multivariate analysis prize will put you in control of your bioprocesses and out of the spotlight of QA or the FDA.


Thursday, September 8, 2011

Process Capability (CpK)


From a manufacturing perspective, a capable process is one that can tolerate a lot of input variability. Said another way, a capable process produces the same end result despite large changes in material, controlled parameters or methods.

As the cornerstone of "planned, predictable performance," a robust/capable process lets manufacturing VPs sleep at night. Inversely, if your processes do not tolerate small changes in materials, parameters or methods, you will not make consistent product and ultimately end up making scrap.

To nerd out for a bit, the capability (Cp) of a process parameter is computed by subtracting the lower specification limit (LSL) from the upper specification limit (USL) and dividing this by six times the standard deviation measured from your at-scale process:

Cp = (USL - LSL) / (6σ)

The greater the Cp, the more capable your process. There are many other measures of capability, but all involve specifications in the numerator and standard deviation in the denominator, and values of 1 or greater mean "capable."

A closer look at this metric shows why robust processes are rarely found in industry:

  • Development sets the specifications (USL/LSL)
  • Manufacturing controls the at-scale variables that determine standard deviation.

And most of the time, development is rewarded for specifications that produce high yields rather than wide specifications that increase process robustness.

Let's visualize a capable process:

control chart capable process

Here, we have a product quality attribute whose specifications are 60 to 90 with 1 stdev = 3. So Cp is (90 - 60)/(6 × 3) = 30/18 ≈ 1.67. The process has no problems meeting this specification and, as you can see, the distribution is well within the limits.

Let's visualize an incapable process:

control chart incapable process

Again, USL = 90, LSL = 60. But this time, the standard deviation of the process measurements is 11 with a mean of 87.

Cp = (90 - 60)/(6 × 11) = 30/66 = 0.45. With the specification window spanning fewer than three standard deviations, we can expect this process to fail the specification often.

Closer examination shows that the process is also not centered and vastly overshoots the target; even if variability reduction initiatives succeeded, the process would still fail often because it is not centered.
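
Both charts can be checked with a short calculation. A minimal sketch of Cp, plus Cpk, which also penalizes the off-center mean; the centered mean of 75 for the capable process is an assumption for illustration:

```python
def cp(usl, lsl, sigma):
    """Process capability: spec width over six standard deviations."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mu, sigma):
    """Like Cp, but penalizes a process mean that is off-center."""
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Capable process: specs 60-90, stdev 3, assumed centered at 75
print(round(cp(90, 60, 3), 2), round(cpk(90, 60, 75, 3), 2))    # → 1.67 1.67
# Incapable process: stdev 11, mean 87 -- off-center AND too variable
print(round(cp(90, 60, 11), 2), round(cpk(90, 60, 87, 11), 2))  # → 0.45 0.09
```

The gap between Cp (0.45) and Cpk (0.09) for the second process quantifies the centering problem: even the variability-reduction win would leave the mean parked next to the upper limit.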

If you are having problems with your process reliably meeting their specifications, apply capability studies to assess your situation. If you are not having problems with your process, apply capability studies to see if you are at risk of failing.

The take-away is that process robustness is a joint manufacturing/development effort, and manufacturing managers must credibly communicate process capability to development in order to improve process robustness.

Get a Proven Biotech SPC Consultant

Wednesday, September 7, 2011

PI ProcessBook Is A Trend Visualization Tool, Not An Analysis Tool


ProcessBook is the trend visualization tool written by OSIsoft for their PI system. It is what is called a rich-client, which basically means that it is installed on your local computer and uses your computer's CPU to give the users a rich set of features. Because PI ProcessBook is how users interact with PI, this program is often confused for the PI system itself.

Our customers really like PI (the server) and ProcessBook (the client) - so do we - and sometimes fall into the trap of thinking that PI should be everything to everyone. And why shouldn't they?

ProcessBook provides everything you need for real-time monitoring. One time, I was watching the oxygen flow control valve to my bioreactor flicker on and off. I verified this was abnormal behavior by checking the O2 flow control valve tag against history. I called the plant floor, met up with the lead technician in the utilities space to walk down the line, and found that oxygen was actually leaking from the valve. There were contractors welding in that space at the time, and though the risks were low, we got them to stop until we fixed the problem.

Another time using ProcessBook, we saw a fermentor demanding base (alkali) solution prior to inoculation... something that ought not happen, since there were no cells producing carbonic acid that required pH control. We called the floor to turn off pH control and stop more base from going in, confirmed the failed probe, and switched to the secondary. $24,000 of raw material costs were saved by looking at PI ProcessBook to see what the trends were saying.

The reason you don't put everything into PI (and hence ProcessBook) is that ProcessBook is not an analysis tool. Analysis requires quantification. Good analysis applies statistics to tell you whether the differences you are measuring are significant. ProcessBook does not do that. It is there to help you put eyeballs on trends.

Spending funds to make PI ProcessBook into an analysis tool has a diminishing ROI. Your money is better spent elsewhere.

Get Expert OSI PI Pharma Consulting