
Monday, August 4, 2014

Process Data for the Rest of Us

I first started using OSIsoft's PI in 1999 when I got hired as a fermentation engineer at the world's largest cell culture plant (at the time, 144,000-liters).

I remember the first week vividly.  Me - not knowing anything about how to read cell culture trends - watching my boss run his cursor over PI trends, nodding his head and then running off to meetings telling the operations staff what's happening in the bioreactors.

Over the years, I've put my eyeballs on thousands of cell culture runs and became an expert on the matter.  Yet, no matter how many trainings I gave to tech(nician)s, supe(rvisor)s, and managers to increase the population of process experts at the company, I got the sense that ultimately it was still my team's (i.e. manufacturing sciences') job to know what's happening to the cell culture runs in-progress.

OR, maybe not...

The VP of the plant had the PI administrator write a script to open up PI ProcessBook, snapshot current trends as images and put them on a website (remember, this was back in 1999).  Clearly management recognized the value of these trends, but there was just too much activation energy to get PI data to "The People."

So when I left my awesome company (and awesome job), I set out to do one thing:

To bring PI to the PI-less

And by PI, I mean "process data."

Google had already pioneered the one text-box, one click format for accessing website data.   So why is it that cGMP citizens can find information on the internet faster than they can find critical process data from within the enterprise?

This is why I bothered creating ZOOMS and this is why I think that there's a place for ZOOMS in the competitive space of trend visualization on the web.

It actually wasn't until the Jared Spool lecture at this year's OSIsoft PI User's Conference that I learned how to better enunciate this creation.  

magic escalator of acquired knowledge
A quick recap:

The bottom of the escalator is where you are if you know nothing about an app (new users live here).  The top of the escalator is if you know everything about the app (developers live here).

Current knowledge is where the user is today; and target knowledge is where a user needs to be to know enough about the application to perform his duties.

Mr. Spool tells us that intuitive design happens when the knowledge gap (the difference between target knowledge and current knowledge) is zero.

A key observation is that the more powerful and feature-rich the tool, the higher up the target knowledge is...and the harder the knowledge gap is to close.

The success of Google (which ZOOMS aims to replicate in the process trend visualization context) comes from a modest number of features that lower the target knowledge... and thus diminish the knowledge gap and achieve intuitive design.

There are plenty of feature rich PI trend visualization tools for process experts.  ZOOMS is process trends for the rest of us; in other words: PI for the "non-user."

At the end of the day, it's People, Process, and Technology... in that order.  You can buy awesome technology, but if only a small minority of your people use it, you're neither capitalizing on their potential, nor your process.

Thursday, January 23, 2014

Multivariate Analysis: Pick Actionable Factors Redux

When performing multivariate analysis, say multiple linear regression, there's typically an objective (like "higher yields" or "troubleshoot campaign titers"). And there's typically a finite set of parameters that are within control of the production group (a.k.a. operators/supervisors/front-line managers).

This finite parameter set is what I call, "actionable factors," or "process knobs." For biologics manufacturing, parameters like

  • Inoculation density
  • pH/temperature setpoint
  • Timing of shifts
  • Timing of feeds
  • Everything your process flow diagram says is important
are actionable factors.

Examples of non-actionable parameters include:
  • Peak cell density
  • Peak lactate concentration
  • Final ammonium
  • etc.
In essence, non-actionable parameters are generally measured and cannot be changed during the course of the process.

Why does this matter to multivariate analysis? I pick on this one study I saw where someone built a model against a commercial CHO process and proved that final NH4+ levels inversely correlate with final titer.



What are we to do now?  Reach into the bioreactor with our ammonium-sponge and sop up the extra NH4+ ion?

With the output of this model, I can do absolutely nothing to fix the lagging production campaign. Since NH4+ is evolved as a byproduct of glutamine metabolism, this curious finding may lead you down the path of further examining CHO metabolism and perhaps some media experiments, but there's no immediate action nor medium-term action I can take.

On the other hand, had I discovered that initial cell density of the culture correlates with capacity-based volumetric productivity, I could radio into either the seed train group or scheduling and make higher inoc densities happen.


Thursday, August 1, 2013

Every MSAT's Response to Process Development



Reducing variability is the only thing the Manufacturing team can control.  Ways to do this involve getting more accurate probes, improving control algorithms, upgrading procedures, etc.

But there are limits. Probes are only so precise. Transmitters may discretize the signal and add error to the measurement. The cell culture may have intrinsic variability.

What makes for releasable lots are cell cultures executed within process specifications.  And measuring a process parameter's variability in relation to the process specification is the SPC metric: capability.

Cp = (USL - LSL) / (6 × standard deviation)

Process specifications are created by Process Development (PD). And at the lab-scale, it's their job to run DOE and explore the process space and select process specifications narrow enough to produce the right product, but wide enough that any facility can manufacture it.

It's tempting to select the ranges that produce the highest culture volumetric productivity.  But that would be a mistake if those specifications were too narrow relative to the process variability.  You may get 100% more productivity, but at large scale only be able to hit those specifications 50% of the time, resulting in a net 0% improvement.

The key is to pick specification limits (USL and LSL) that are wide so that the large-scale process is easy to execute.  And at large-scale, let the MSAT guys find the sweet-spot.

Monday, January 7, 2013

Moneyball for Manufacturing

I'm quite behind the times when it comes to watching movies. The last movie I saw was The Dark Knight Rises...

at a matinee...

so I don't get shot.

A few nights ago, I finally sat down and watched Moneyball, the movie with Brad Pitt and six Oscar nods. It is a "based on a true story" account of how the perennially under-budgeted Oakland A's baseball club builds a near-championship team only to lose not only playoff games, but also their best players to big-money baseball clubs when the players' contracts expire.

The Oakland A's general manager, Billy Beane, realizes his underfunded system will continue to produce good-enough results that will never win the championship. And to continue running his system the same way is insanity:
Doing the same thing over and over again and expecting different results. - Albert Einstein
To win, Beane decides to do something different, and that something different is focusing on the key performance indicators (KPIs) of winning and getting players that contribute positively to those KPIs... applying statistics and math to baseball is what they call, "Moneyball."

How many of us are in the same boat as this Oakland A's GM?
  • How many of us are getting by with under-funded budgets?
  • How many of us are managing our systems the same way they've been managed for years?
  • How many of us can improve our systems by applying data-driven statistics?
Moneyball is to baseball what Manufacturing Sciences is to manufacturing:
Biotech and pharma manufacturing is in a period of static or diminishing budgets. Do more with the same or make do with less is the general mantra as the dollars go towards R&D or to acquisitions. To make matters worse, biosimilars are coming on-line to drive revenues even further down.

Questions I'm getting these days are:

What systems do I need to collect the right data?

What KPIs should I be monitoring?

What routine and non-routine analysis capabilities should I have?

Let's Play

p.s. - Watch the movie if you haven't seen it.  It's as good a movie as it is a good business case study.

Wednesday, August 15, 2012

PI ProcessBook - Best Biopharm Practices 2

The thing about visualizing process trends is that at some point, you're going to have to make a decision with it.  And when you make a decision with it in the cGMP environment, you're going to need someone to review the decision.  Perhaps this decision is big enough that you're going to need to document it.

Or perhaps, this decision needs a little CYA and you're going to have to present it to the plant management.  Whatever the case, there comes a time when you're going to have to take what you're seeing out of PI ProcessBook and put it in a Microsoft Word document or Powerpoint slide.

This is where the core philosophy of my ex-boss "Jesse B." comes in handy.  Jess is a big believer in one-stop or "few-stop" shopping for data.  You go to one or two places for data and spend the rest of your time analyzing.  OSIsoft PI server is the ideal stop for plant information.

The corollary to "One-Stop Shopping" is "No Additional Editing."  This means having the PI display formatted correctly where "correctly" means no additional work has to be done.

Face it: when you're staring down a big decision and you've spent all your bandwidth analyzing the data, the last thing you want to do is mouse around trying to get the background color right, or put lipstick on the proverbial PI display pig.

Here are a few things to help you out:
  • Use a white or transparent background for your PI ProcessBook displays.
    All Word documents and most Powerpoint slides have white backgrounds
  • Use dark colors for primary tags
    The tags that need the most attention should have dark to contrast against the white background
  • Use lighter colors for noisier tags
    Noisy tags like jacket temperature or controller output will do well with lighter colors
Have a look at this display and you can tell immediately that this can be copied and pasted directly into a memo with no additional editing:


You're shaving 5 to 10 minutes off every presentation you make.  Over the course of a week, we're easily talking an hour or two of your life you're never going to have to spend on making your memo look pretty.


Friday, July 27, 2012

Manufacturing Sciences: Campaign monitoring v. Process Improvement

Manufacturing Sciences is the name of the department responsible for stable, predictable performance of the large-scale biologics process.

Manufacturing Sciences also describes the activities of supporting for-market, large-scale, GMP campaigns. The three main functions of Manufacturing Sciences are:
  1. Campaign monitoring
  2. Long-term process improvement
  3. Technology Transfer
Within the department are:
  1. Data analysis resources - responsible for campaign monitoring
  2. Lab resources - responsible for process improvement
manufacturing sciences flow
Figure 1: Flow of information within Manufacturing Sciences

The data group is responsible for monitoring the campaign and handing off hypotheses to the lab group.  The lab group is responsible for studying the process under controlled conditions and handing off plant trials back to the data group.

Campaign Monitoring

When a cGMP campaign is running, we want eyeballs watching each batch. There are automated systems in place to prevent simple excursions, but on a macro level, we still want human eyeballs. Eyeballs from the plant floor are the best. Eyeballs from the Manufacturing Sciences department are next best because they come with statistical process control (SPC) tools that help identify common and special cause.

Activities here involve:
Ultimately, all this statistical process control enables data-based, defensible decisions for the plant floor and production management, much of which will involve the right decisions for decreasing process variability and increasing process capability and reliability.

Long-term Process Improvement

The holy-grail of manufacturing is reliability/predictability. Every time we turn the crank, we know what we're going to get: a product that meets the exact specifications that can be produced with a known quantity of inputs within a known or expected duration.

Long-term process improvement can involve figuring out how to make more product with the same inputs. Or figuring out how to reduce cycle time. Or figuring out how to make the process more reliable (which means reducing waste and variability).

This is where we transition from statistical process control to actual statistics. We graduate from uni- and bivariate analysis into multivariate analysis because biologics processes have multiple variables that impact yield and product quality. To understand where there are opportunities for process improvement, we must understand the system rather than simple relationships between the parts. To get this understanding, we need to have a good handle on:
Note: in order to have a shot at process improvement, you need variable data from large-scale. Meanwhile if you succeed at statistical process control, you will have eradicated variability from your system.

This is why a manufacturing sciences lab is the cornerstone of large-scale, commercial process improvement - so that you can pursue process improvement without reintroducing process variability and sacrificing the results of your statistical process control initiatives.

Outsource Manufacturing Sciences

Friday, June 22, 2012

Toyota visits Zymergi(.com)


This is pretty flattering. Someone from Toyota - the creators of Toyota Production System... which is the precursor to Lean... looked up Process Capability (CpK) on Google and read my blog post:

toyota zymergi
I'm pretty excited. When you study manufacturing and look for the words of truth that apply across industries, Toyota is the leader in quality. Competing with limited resources and applying the principles that Deming taught them, they went from a post-World War II loom maker to the largest automobile company on planet Earth.


Sunday, April 22, 2012

Continuous Process Improvement and SPC


Buzzwords are aplenty in this line of work: Lean manufacturing, lean six sigma, value stream mapping, business process management, Class A. But at the end of the day, we're talking about exactly one thing: continuous process improvement:

How to get your manufacturing processes (as well as your business process) to be better each day.

And to that, I say: "Pick your weapon and let's get to work." For me, I prefer statistical process control because SPC was invented in the days before continuous process improvement collided with information technology.

Back in those days, things had to be done by hand, and concepts had to be boiled down into simple terms: special cause vs. common cause variability could simplify what was going on and clarify decision making. And having just Excel spreadsheets is a vast technological improvement over paper and pencil. In those days, there was no time for complexity of words and thought.

If we say words from the slower days of yesteryear, but use tools from today, we can solve a lot of problems and make a lot of good decisions.

Companies like Zymergi are third-party consultants who can help develop your in-house continuous process improvement strategy, especially for cell culture and fermentation companies. We focus on applying statistical process control as well as knowledge management to reduce process variability and increase reliability.

The technology is there to institutionalize the tribal knowledge so that when people leave - and your high-paid consultants leave - the continuous process improvement know-how stays.

We use SPC and statistical analysis because it has been proven by others and it is proven by us. Data-driven decisions deliver real results.

7 Tools of SPC

  1. Control Charts
  2. Histograms
  3. Correlations
  4. Ishikawa Diagrams
  5. Run Charts
  6. Process Flow Diagrams
  7. Pareto Charts

Monday, April 9, 2012

SPC - Cause/Effect Diagrams and Run Charts


The next two tools were used constantly for large-scale manufacturing sciences support of cell culture: Cause Effect Diagram and the Run Chart.


Cause/Effect (Ishikawa) Diagram


The cause/effect diagram (a.k.a. Ishikawa diagram) is essentially a taxonomy for failure modes. You break down the causes of a failure (the effect) into 4 categories:


  1. Man
  2. Machine
  3. Method
  4. Materials

It's used as a brainstorming tool to put it all out there and to help visualize how an event can cause the effect. This was particularly helpful in contamination investigations. In fact, there's a "politically correct" Ishikawa diagram in my FREE case study on large-scale bioreactor contamination.

Get Contamination Cause/Effect Diagram

The cause/effect diagram helps clarify thinking and keeps the team on-task.

Run Chart


The Run Chart is basically what a chart-recorder spits out. In this day and age, it's what we call OSIsoft PI. You plot a parameter against time (called a trend), and when you do this, you get to see what's happening in sequential order. When you plot a lot of parameters on top of one another, you begin to understand sequence. Things that happen later cannot cause events that happened earlier. Say your online dissolved oxygen readings spiked below 5% for 10 seconds, yet your pO2 remained steady and the following viability measurement showed no drop-off in cell viability; you can basically say that the dO2 spike was measurement error.

Here's an example of the modern-day run chart, it's called, "PI":


Run charts (i.e. PI) are crucial for solving immediate problems. A drifting pH probe can dump excess CO2 into a media-batched fermentor. Being able to see real-time data from your instruments and have the experience to figure out what is going on is key to troubleshooting large-scale cell culture and fixing the problem real-time so that the defect is not sent downstream.

Get #1 Biotech/Pharma PI Systems Integrator

As you can see, SPC concepts are timelessly applied today to cell culture and fermentation... albeit with new technology.


Friday, April 6, 2012

SPC - Process Flow Diagram/Pareto Charts


So that little SPC book goes into 7 tools to use; the next pages go into Process Flow Diagrams and Pareto charts.


Process Flow Diagram


The first tool appears to be the Process Flow Diagram[tm], where one is supposed to draw out the inputs and outputs of each process step. I suppose in the "Lean" world, this is the equivalent of value-stream mapping.

The text of the booklet calls it a

Pictorial display of the movement through a process. It simply shows the various process stages in sequential order.

Normally, I see this on a Powerpoint slide somewhere. And frankly, I've rarely seen it used in practice. More often, we show this to consultants to get them up to speed.

Pareto Chart


The pareto chart is essentially a pie chart in bar-format. The key difference is that pie charts are for the USA Today readership while pareto charts are for real engineers -- this is to say that if you're putting pie charts in Powerpoint and you're an engineer, you're doing it wrong.

Pareto charts are super useful because they help figure out your most pressing issue. For example, say you create a table of your fermentation failures:


So you have counted the number of observed failures alongside a weight of how devastating the failure is. Well, in JMP, you can simply create a pareto chart:


and out pops a pareto chart.


What this pareto chart shows you is the most important things to focus your efforts on. If you solve the top 2 items on this pareto chart, you will have solved 80% of your problems - on a weighted scale.
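
If you would rather see the arithmetic than the JMP dialog, here is a minimal sketch of the weighted Pareto logic in Python; the failure modes, counts, and severity weights are made up for illustration:

# Weighted Pareto logic similar to what JMP automates.
# Failure modes, observed counts, and severity weights are hypothetical.
failures = [
    ("Contamination",   4, 10),   # (failure mode, observed count, severity weight)
    ("pH probe drift",  9,  3),
    ("Feed pump fault", 2,  8),
    ("Sampling error", 12,  1),
    ("DO spike",        6,  2),
]

# Weighted impact per failure mode, sorted worst-first.
weighted = sorted(((mode, count * weight) for mode, count, weight in failures),
                  key=lambda item: item[1], reverse=True)

total = sum(impact for _, impact in weighted)
running = 0
for mode, impact in weighted:
    running += impact
    print(f"{mode:<16} impact={impact:>3}  cumulative={running / total:6.1%}")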

The pareto is a great tool for metering out extremely limited resources and has been proven extremely effective in commercial cell culture/fermentation applications.


Thursday, March 29, 2012

Multiple Linear Regression (Multivariate Analysis)

Here's your process:

generic blackbox process
It's a black box. All you know is that you have multiple process inputs (X) and at least one process output (Y) that you care about. Multivariate analysis is the method by which you analyze how Y varies with your multiple inputs (x1, x2,... xn). There are a lot of ways to go about figuring out how Y relates to those inputs.

One way to go is to turn that black box into a transparent box where you try to understand the fundamentals from first principles. Say you identify x1 as cell growth and you believe that your cells grow exponentially; you can try to apply an equation like Y = Y0·e^(µ·x1).

But this is large-scale manufacturing. You don't have time for that. You have to supply management with an immediate solution followed by a medium-term solution. What you can do is assume that each parameter varies with Y linearly.

y = mx + b
Just like we learned in 8th grade. How can we just say that Y relates to X linearly? Well, for one, I can say whatever I want (it's a free country). Secondly, all curves (exponential, polynomial, logarithmic, asymptotic...) are linear over small ranges... you know, like the proven acceptable range in which you ought to be controlling your manufacturing process.

Assuming everything is linear keeps things simple and happens to be rooted in manufacturing reality. What next?

y = m1x1 + m2x2 + b
Next you start adding more inputs to your equation... applying a different coefficient for each new input. And if you think that a few of your inputs may interact, you can add their interactions like this:

y = m1x1 + m2x2 + m12(x1·x2) + b
You achieve interactions by multiplying the inputs and giving that product its own coefficient. So now you - the big nerd - have this humongous equation that needs solving. You don't know:
  • Which inputs (x's) to put in the equation
  • What interactions (x1 * x2) to put in the equation
  • What coefficients (m's) to use

What you're doing with multiple linear regression is picking the right inputs and interactions so that your statistical software package can brute-force the coefficients (m's) and fit an equation to your data with the least error.

Here's the thing: The fewer rows you have in your data table, the fewer inputs you get to throw into your equation. If you have 10 samples, but 92 inputs, you're going to have to be very selective with what you try in your model.
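
If you want to see the brute-forcing for yourself, here is a minimal sketch using NumPy's least squares; the two inputs, the single interaction, and the data are hypothetical:

import numpy as np

# Hypothetical batch data: two actionable inputs and one output.
x1 = np.array([0.5, 0.7, 0.6, 0.9, 0.8, 1.0, 0.4, 0.75])    # e.g. inoculation density
x2 = np.array([6.9, 7.0, 7.1, 6.95, 7.05, 7.0, 6.9, 7.1])   # e.g. pH setpoint
y  = np.array([1.8, 2.1, 2.0, 2.4, 2.3, 2.5, 1.7, 2.2])     # e.g. final titer (g/L)

# Design matrix: intercept (b), main effects (m1, m2), one interaction (m12).
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])

# Least squares brute-forces the coefficients that minimize the error.
coeffs, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
b, m1, m2, m12 = coeffs
print(f"y = {m1:.3f}*x1 + {m2:.3f}*x2 + {m12:.3f}*x1*x2 + {b:.3f}")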

It's a tough job, but someone's got to do it. And when you finally do (i.e. explain the relationship between, say, cell culture titer and your cell culture process inputs), millions of dollars can literally roll into your company's coffers.

Your alternative is to hire Zymergi and skip that learning curve.



Sunday, March 18, 2012

Variability Reduction is a core objective


Reducing process variability is a core objective for process improvement initiatives because low variability helps you identify small changes in the process.

Here's a short example to illustrate this concept. Suppose you are measuring osmolality in your buffer solution and the values for the last 10 batches are as follows:

293, 295, 299, 297, 291, 299, 298, 292, 293, 296.

Then the osmolality of the 11th batch of buffer comes back at 301 mOsm/kg. Is this 301 result "anomalous" or "significantly different"?

It's hard to tell, right? I mean, it's the first value greater than 300, so that's something. But it is only 2 mOsm/kg greater than the highest previously observed while the measurement ranges from 291 to 299, an 8 mOsm/kg difference.

Let's try another series of measurements - this time, only 7 measurements:

295, 295, 295, 295, 295, 295, 295.

Then the measurement of the eighth batch is 297 mOsm/kg. Is this result anomalous or significantly different? The answer is yes. Here's why:

The process demonstrates no variability (within measurement error) and all of a sudden, there is a measurable difference. The 297 mOsm/kg is a distance of 2 mOsm/kg from the highest measured value. But the range is 0 (with all values measuring 295). The difference is infinitely greater than the range.

There are far more rigorous data analysis methods to better quantify the statistics comparing differences that will be discussed in the future, but you can see how variability reduction helps you detect differences sooner.
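
For the curious, one simple version of such a method is to flag any new measurement that falls outside the historical mean plus-or-minus three standard deviations. Here is a quick sketch using the two osmolality series above:

import statistics

def flag_new_point(history, new_value, k=3):
    """Flag new_value if it falls outside mean +/- k standard deviations of history."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)       # sample standard deviation
    if sd == 0:
        return new_value != mean         # any difference stands out when there is no variability
    return abs(new_value - mean) > k * sd

series_1 = [293, 295, 299, 297, 291, 299, 298, 292, 293, 296]
series_2 = [295] * 7

print(flag_new_point(series_1, 301))   # False: 301 is within the noise of the first series
print(flag_new_point(series_2, 297))   # True: any shift stands out when variability is zero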

Also, remember that variability (a.k.a. standard deviation) is the denominator of the capability equation:

Cp = (USL - LSL) / (6 × standard deviation)

Reducing process variability increases process capability.

To summarize: reducing process variability helps in 2 ways:


  1. Deviations (or differences) in the process can be detected sooner.
  2. Capability of the process (a.k.a. robustness) increases.

Hitting the aforementioned two birds with the proverbial one stone (variability reduction) is a core objective of any continuous process improvement initiative. Applying the statistical tools to quantify process variability ought to be a weapon in every process engineer's arsenal.



    Monday, February 27, 2012

    Process Troubleshooting using Patterns


    The variability in your process output is caused by variability from your process inputs.

    This means that patterns you observe in your process output (as measured by your key performance indicators, or KPIs) are caused by patterns in your process inputs.

    Recognizing which pattern you're dealing with can, hopefully, lead you quickly to the source of variability so you can eliminate it.

    Stable

    Boring processes that do the same thing day in and day out are stable processes. Every day you show up for work and the process is doing exactly what you expected. Control charts of your KPIs look like this:

    control chart stable process
    Boring is good: it is predictable, you can count on it (like Maytag products) so you can plan around it. Well-defined, well-understood, well-controlled processes often take this form. The only thing you really have to worry about is job security (like the Maytag repairman).

    Periodic


    Processes where special-cause signals show up at a fixed interval exhibit a "periodic" pattern.

    periodic process
    This pattern is extremely common because in reality, many things in life are periodic:


    • Every day is a cycle.
    • Manufacturing shift structures repeat every 7-days.
    • The rotation of equipment being used is cyclical
    • Plant run-rates
    On one occasion, we had a rash of production bioreactor contaminations. By the end of it all, we had five contaminations over the course of seven weeks and they all happened late Sunday/early Monday. On Fridays going into the weekend, people would bet whether or not we'd see something by Monday of the following week. Here, the frequency is once-per-week and ultimately, the root cause was found to be related to manufacturing shifts, which cycle once-per-week.
    All these cycles occur naturally at varying intervals, and the key to solving the periodic pattern is identifying the periodic process input that cycles at the same frequency.
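
    One quick way to hunt for the cycle is to bucket your special-cause events by day of the week (or by shift, equipment unit, media lot, etc.) and look for a peak. A minimal sketch, with made-up event dates:

from collections import Counter
from datetime import date

# Hypothetical timestamps of special-cause events (e.g., contaminations).
events = [date(2011, 5, 2), date(2011, 5, 9), date(2011, 5, 23),
          date(2011, 5, 30), date(2011, 6, 13)]

# Bucket events by day of week; a strong peak hints at a weekly (shift-related) cycle.
by_weekday = Counter(d.strftime("%A") for d in events)
print(by_weekday.most_common())   # here, every event lands on a Monday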

    Step-change


    A step-change pattern is when, one day, your process output changes and doesn't go back to the way it was... not exactly "irreversible", but at least "difficult to go back."

    control chart step change
    Step patterns are also commonly observed in manufacturing because many manufacturing activities "can't be taken back." For example:
    • After a plant shutdown when projects get implemented.
    • After equipment maintenance.
    • When the current lot of material is depleted and a new lot is used.

    One time coming out of shutdown, we had a rash of contamination: every single 500L* bioreactor came down contaminated. It turned out that a project to secure the media filter - executed during changeover for safety reasons - changed the system mechanics and caused the media filter to be shaken loose. Filter stability was restored with another project so that the safety modifications could remain.

    Step pattern is harder to troubleshoot than the periodic pattern because the irreversibility makes the system untestable. The key to solving a step pattern is to focus on "irreversible changes" of process inputs that happen prior to the observed step change.

    Sporadic


    A sporadic pattern is basically a random pattern.

    control chart sporadic
    Sporadic patterns are unpredictable and difficult to troubleshoot because, at their core, the special causes in process outputs often come from two or more process inputs coming together. When two or more process inputs combine to cause a different result than either input would alone, this is called an interaction.

    A good example is the Ford Explorer/Firestone tires debacle that happened in the early 2000's. At the time, they observed a higher frequency of Ford Explorer SUVs rolling over than other SUVs. After further investigation, the rolled-over Ford Explorers had tires mainly made by Firestone. Ford Explorer owners using other tires weren't rolling over. Other SUV drivers using Firestone tires weren't rolling over. It was only the combination of Firestone tires AND Ford Explorers that caused the failures.

    To be blunt, troubleshooting sporadic patterns basically sucks. The best thing about a sporadic pattern is that it tells you to look for more complex interactions within your process inputs.

    Summary


    Because the categories of patterns are not well defined (i.e. "I know it when I see it"), identifying the pattern is subject to debate. But know that the true root cause of the pattern must - itself - have the same pattern.


    Tuesday, February 14, 2012

    Pick Actionable Factors for Multivariate Analysis


    Here's you:

    • You collect a ton of data from your large-scale cell culture/fermentation process.
    • You're going blind alt-tabbing between Excel and JMP.
    • You spend waaayyyyyyy too much time pushing around data and not getting answers.
    And when you finally have the data the way you want it, your multivariate analysis tells you something like,

    Final NH4+ (mmol) and Peak Lactate (g/L) correlate with Volumetric Productivity (mg/L/day).

    Scientific curiosities are great for long-term process understanding, but when you're in the middle of a flagging campaign, manufacturing managers want to hear about immediate and short-term actions they can take to meet the campaign goals.

    The key to avoiding this career blunder (of presenting irrelevant work to your customers) is to select only actionable parameters for your main effects and interactions when building your multivariate analysis. In JMP, it looks something like:

    How to build multivariate analysis JMP

    In the above example, we can control inoculation density (Ini VCD) by extending the previous culture's duration. Likewise, a biologics license application may allow a window for executing pH shifts (VCD at pH Shift) as well as for when to feed (Cult Dur at Batch Feed). Actions that manufacturing can take through simple scheduling changes are ideal inputs for a multivariate analysis that delivers immediate solutions.

    Constructing the main effects of your model by selecting actionable parameters is best for solving REAL manufacturing problems as well as for advancing your career as the person who finds the way to meet campaign goals.


    Tuesday, January 3, 2012

    What Lean says about the Stanford Kicker


    Mark Graban has an excellent post on the Lean Blog about the blaming of the Stanford kicker for their 41-38 loss to Oklahoma State in the Fiesta Bowl. Consistent with the dogma of lean, the failure of a system (a football team failing to win the game) rarely comes down to a single root cause (the kicker).

    Mark's post is as entertaining as it is informative. Here is yet another great application of lean thinking:

    http://www.leanblog.org/2012/01/blame-the-stanford-kicker-blame-the-kicker/


    Friday, October 28, 2011

    Example of Production Culture KPI: Volumetric Productivity


    Say you are running a 2g/L product from a ten-day process at your 1 x 6000L plant, with strict orders from management to minimize downtime. This product is selling like gangbusters, which means every gram you make gets sold, which means you've got to make the most of the 80-day campaign allotted for this product.

    The volumetric productivity for the process is 2 g/L ÷ 10 days = 0.2 g/L/day. Running a 6000L-capacity plant gives you:
    • 12 kilos every 10 days.
    • 8 run slots given the 80-day campaign
    • Maximum product is going to be: 96 kg for the campaign.

    But suppose your Manufacturing Sciences team ordered in-process titer measurements and found that Day 8 titers were 1.8 grams per liter. Harvesting at day 8 means:
    • 10.8 kilos every eight days.
    • 10 run slots given the 80-day campaign
    • Maximum product is going to be 108 kg.
    By harvesting earlier, you gain two additional run slots... during which time you can make 21.6 kg; but since you lost 1.2 kg/run for 8 runs totalling 9.6 kg, the net gain is 12 kgs.

    There are a lot of assumptions here:
    • Your raw material costs are low relative to the price at which you can sell your product
    • Your organization is agnostic to doing more work (ten runs instead of eight).
    It is difficult for plant managers to end a culture early to get 10.8 kg when simply waiting two more days will get you 12 kg. It quickly becomes easy when you see how two run slots open up, giving you the opportunity to make 21.6 kg to make up for the product lost by ending each fermentation early, or rather, at the point of maximum volumetric productivity.
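
    The whole trade-off boils down to a few lines of arithmetic. Here is a sketch using the numbers from this example:

def campaign_yield(titer_g_per_l, run_days, reactor_volume_l, campaign_days):
    """Total product (kg) from back-to-back runs over a fixed campaign window."""
    runs = campaign_days // run_days                  # whole run slots that fit in the campaign
    kg_per_run = titer_g_per_l * reactor_volume_l / 1000.0
    return runs * kg_per_run

print(campaign_yield(2.0, 10, 6000, 80))   # harvest at day 10:  96.0 kg
print(campaign_yield(1.8,  8, 6000, 80))   # harvest at day 8:  108.0 kg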


    Wednesday, October 26, 2011

    How to Compute Production Culture KPI


    Production culture/fermentation is the process step where the active pharmaceutical ingredient (API) is produced. The KPI for production cultures relates to how much product gets produced.

    The product concentration (called "titer") is what is typically measured. Operators submit a sterile sample to QC, who have validated tests that produce a result in dimensions of mass/volume; for biologics processes, this is typically grams/liter.

    Suppose you measured the titer periodically from the point of inoculation; it would look something like this:

    titer curve typical production culture
    The curve for the majority of cell cultures is an "S"-shaped, also called "sigmoidal," curve. The reason for this "S"-shape was described by Malthus in 1798: population growth is geometric (i.e. exponential) while growth in agricultural production is arithmetic (i.e. linear); at some point, the food supply is incapable of carrying the population and thus the population crashes.

    In the early stages of the production culture, there is a surplus of nutrients and cells - unlimited by nutrient supply - grow exponentially. Unlike humans and agriculture, however, a production fermentation does not have an ever-increasing supply of nutrients; the nutrient levels are fixed. Some production culture processes are fed-batch, meaning at some point during the culture, you send in more nutrients. Regardless, at some point, the nutrients run low and the cell population is unable to continue growing. Hence the growth curve flattens and basically heads east.

    In many cases, the titer curve looks similar to the biomass curve. In fact, the titer curve typically mimics the integral (the area under the biomass curve).

    The reason this titer curve is so important is because the slope of the line drawn from the origin (0,0) to the last point on the curve is the volumetric productivity.

    Volumetric Productivity
    Titer/culture duration (g/L culture/day)

    The steeper this slope, the greater the volumetric productivity. Assuming your bioreactors are filled to capacity and that you want to supply the market with as much product as fast as possible, maximizing volumetric productivity ought to be your goal.


    Counter-intuitively, maximizing your rate of production means shortening your culture duration. Due to the Malthusian principles described above, your titer curve flattens out as your cell population stagnates from lack of nutrients. Maximizing your volumetric productivity means stopping your culture when the cells are just beginning to stagnate. End the culture too early and you forgo product you could still have made; end it too late and you've wasted valuable bioreactor time on dying cells.

    The good news is that maximizing your plant's productivity is a scheduling function:


    1. Get non-routine samples going to measure the in-process titer to get the curve.
    2. Study this curve and draw a line from the origin tangent to this curve (a computed version is sketched after this list).
    3. Draw a straight line down to find the culture duration that maximizes volumetric productivity.
    4. Call the Scheduling Department and tell them the new culture duration.
    5. Tell your Manufacturing Sciences department to control chart this KPI to reduce variability.
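
    Here is a minimal sketch of steps 2 and 3, assuming hypothetical in-process titer measurements; the duration that maximizes titer divided by culture duration is where the line from the origin is tangent to the curve:

# Hypothetical in-process titer measurements (day, g/L) for one production culture.
titer_curve = [(2, 0.2), (4, 0.6), (6, 1.2), (8, 1.8), (10, 2.0), (12, 2.1)]

# Volumetric productivity at each sample is the slope of the line from the origin
# (titer / duration); the day that maximizes it is the tangent point of steps 2-3.
best_day, best_vp = max(((day, titer / day) for day, titer in titer_curve),
                        key=lambda point: point[1])
print(f"Harvest around day {best_day}: {best_vp:.2f} g/L/day")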

    There's actually more to this story for Production Culture KPI, which we'll cover next.


    Tuesday, October 25, 2011

    How to Compute Seed Fermentation KPI


    So, if you agree that the purpose of seed fermentation (a.k.a. inoculum culture) is to scale up biomass, then the correct key performance indicator is the final specific growth rate.

    To visualize final specific growth rate, plot biomass against time:


    The cell density increases exponentially, which means on a log-scale, the curve becomes linear. The specific growth rate (μ) is the slope of that line. The final specific growth rate (μF) is the slope fit through all the points recorded in the last 24 hours prior to the end of the culture.

    To compute the final specific growth rate, simply put datetime or culture duration in the first column, biomass in the second column, and the natural log of biomass in the third column:

    tabular inoc culture kpi
    In Excel, use the SLOPE function to compute the slope of the natural log of biomass:

    =SLOPE(C5:C7,A5:A7)
    Alternatively, if you don't want to bother with the third column:

    =SLOPE(LN(B5:B7),A5:A7)
    This number has engineering units of inverse time (day⁻¹). While this measure is somewhat hard to physically understand, we look towards ln(2) ≈ 0.693 as a guide: if a culture has a specific growth rate of ~0.70 day⁻¹, then its cell population is doubling once per day.
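
    If you would rather compute it outside of Excel, here is an equivalent sketch in Python with made-up biomass readings from the last 24 hours of a seed culture:

import math

# Hypothetical readings from the last 24 hours of a seed culture.
times_days = [4.0, 4.5, 5.0]        # culture duration (days)
biomass = [2.1e6, 3.0e6, 4.2e6]     # viable cell density (cells/mL)

ln_x = [math.log(x) for x in biomass]

# Least-squares slope of ln(biomass) vs. time = final specific growth rate (day^-1).
n = len(times_days)
t_bar = sum(times_days) / n
y_bar = sum(ln_x) / n
mu_final = (sum((t - t_bar) * (y - y_bar) for t, y in zip(times_days, ln_x))
            / sum((t - t_bar) ** 2 for t in times_days))
print(f"Final specific growth rate: {mu_final:.2f} per day")   # ~0.69, about one doubling per day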

    Computing this KPI for seed fermentation and then control charting this KPI is the best start you can make towards monitoring and controlling your process variability.


    Monday, October 24, 2011

    KPIs for Cell Culture/Fermentation


    Control charting each process step of your biologics process is a core activity for manufacturing managers that are serious about reducing process variability.

    Sure, there's long-term process understanding gained from the folks in manufacturing sciences, but that work will be applied several campaigns from now.

    What are the key performance indicators (KPIs) for my cell culture process today?

    To answer this question, start with the purpose(s) of cell culture:
    1. Grow cells (increase cell population)
    2. Make product (secrete the active pharmaceutical ingredient)

    Seed Fermentation (Grow Cells)


    There are plenty of words that describe cell cultures whose purpose is to scale-up biomass; to wit: seed fermentation, inoculum cultures, inoc train, etc. Whatever your terminology, the one measurement of seed fermentation success is the growth rate (μ), which is the constant in the exponent of the exponential growth equation:

    X = X0·e^(μ·Δt)

    Where:
    • X = current cell density
    • X0 = initial cell density
    • Δt = elapsed time since inoculation

    For seed fermentation, the correct KPI is the final specific growth rate; which is the growth rate in the final 24-hours prior to transfer. The reason the final specific growth rate is the KPI is because the way seed fermentation ends is more important than how it starts.

    Production Fermentation (Make Product)


    The output of the Production Fermentor is drug substance; the more and the faster, the better. This is why the logical KPI for Production Fermentation is Capacity-Based Volumetric Productivity.

    A lot of folks look at culture titer as their performance metric. Mainly because it's easy. You ship those samples off to QC and after they run their validated tests, you get a number back.

    Culture Titer
    Mass of product per volume of culture (g/L culture)

    The problem with using culture titer is that it does not take into account the rate of production. After all, if culture A takes ten days to make 2 g/L and culture B takes 12 days to make the same 2 g/L, then according to titer they are equivalent, even though A was better. This is why we use volumetric productivity:

    Volumetric Productivity
    Titer/culture duration (g/L culture/day)

    Culture volumetric productivity takes into account the rate of production pretty well, and in our example culture A's performance is 0.20 g/L/day while culture B's performance is 0.17 g/L/day. But what of the differences between the actual amount of product manufactured? I can run a 2L miniferm and get 0.40 g/L/day, but that isn't enough to supply the market. This is why bioreactor capacity must be included in the true KPI for production cultures.

    Capacity-based Volumetric Productivity
    Volumetric Productivity * L culture / L capacity (g/L capacity/day)

    Capacity-based Volumetric Productivity is the Culture Volumetric Productivity multiplied by the percent of fermentor capacity used, such that a filled fermentor scores higher than a half-full fermentor.
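
    To make the three definitions concrete, here is a quick sketch using the culture A and culture B numbers above; the 6000-L fermentor and the half-full case are assumptions for illustration:

def volumetric_productivity(titer_g_per_l, duration_days):
    return titer_g_per_l / duration_days                     # g/L culture/day

def capacity_based_vp(titer_g_per_l, duration_days, culture_l, capacity_l):
    return volumetric_productivity(titer_g_per_l, duration_days) * culture_l / capacity_l

print(volumetric_productivity(2.0, 10))          # culture A: 0.20 g/L/day
print(volumetric_productivity(2.0, 12))          # culture B: ~0.17 g/L/day
# A half-full fermentor scores half as well on the capacity-based KPI.
print(capacity_based_vp(2.0, 10, 3000, 6000))    # 0.10 g/L capacity/day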

    KPIs are generally not product-specific; instead, they are process class specific. For instance, all seed fermentation for CHO processes ought to have the same KPI.

    Generally, KPIs are simple calculations derived from easily measured parameters such that the cost of producing the calculation is insignificant relative to the value it provides.

    KPIs deliver significant value when they can be used to identify anomalous performance and actionable decisions made by Production/Manufacturing in order to amend the special cause variability observed.


    Thursday, September 8, 2011

    Process Capability (CpK)


    From a manufacturing perspective, a capable process is one that can tolerate a lot of input variability. Said another way, a capable process produces the same end result despite large changes in material, controlled parameters or methods.

    As the cornerstone of "planned, predictable performance," a robust/capable process lets manufacturing VPs sleep at night. Conversely, if your processes do not tolerate small changes in materials, parameters or methods, you will not make consistent product and will ultimately end up making scrap.

    To nerd out for a bit, the capability of a process parameter is computed by subtracting the lower specification limit (LSL) from the upper specification limit (USL) and dividing this by six times the standard deviation measured for your at-scale process:

    Cp = (USL - LSL) / (6 × standard deviation)

    The greater the Cp, the more capable your process. There are many other measures of capability, but all involve the specifications in the numerator, the standard deviation in the denominator, and values of 1 or greater mean "capable."

    A closer look at this metric shows why robust processes are rarely found in industry:

    • Development sets the specifications (USL/LSL)
    • Manufacturing controls the at-scale variables that determine standard deviation.

    And most of the time, development is rewarded for specifications that produce high yields rather than wide specifications that increase process robustness.

    Let's visualize a capable process:


    Here, we have a product quality attribute whose specifications are 60 to 90 with 1 stdev = 3. So Cp is (90 - 60)/(6 × 3) = 30/18 ≈ 1.7. The process has no problems meeting this specification and as you can see, the distribution is well within the limits.

    Let's visualize an incapable process:


    Again, USL = 90, LSL = 60. But this time, the standard deviation of the process measurements is 11 with a mean of 87.

    Cp = (90 - 60)/(6 × 11) = 30/66 ≈ 0.45. We can expect the process to meet the specification approximately 45% of the time.

    Closer examination shows that the process is also not centered and vastly overshoots the target; even if variability reduction initiatives succeeded, the process would still fail often because it is not centered.
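
    For those who want to compute this themselves, here is a quick sketch of Cp alongside Cpk, one of the "other measures" that also penalizes an off-center process; centering the capable example at 75 is an assumption for illustration:

def cp(usl, lsl, sigma):
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    # Cpk uses the distance from the mean to the nearest specification limit,
    # so an off-center process is penalized even if its spread is narrow.
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Capable, centered example: specs 60-90, sigma = 3, assumed mean = 75.
print(cp(90, 60, 3), cpk(90, 60, 75, 3))      # 1.67 and 1.67

# Incapable, off-center example: sigma = 11, mean = 87.
print(cp(90, 60, 11), cpk(90, 60, 87, 11))    # ~0.45 and ~0.09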

    If you are having problems with your process reliably meeting its specifications, apply capability studies to assess your situation. If you are not having problems with your process, apply capability studies to see if you are at risk of failing.

    The take-away is that process robustness is a joint manufacturing/development effort, and manufacturing managers must credibly communicate process capability to development in order to improve process robustness.

    Get a Proven Biotech SPC Consultant