
Monday, November 11, 2013

Contamination Control of Cell Culture Bioreactors

"Contamination Control"

A misnomer. I can see how they got that name... from "Pest Control," but I still hate it.

"Bioreactor control" makes sense as cell culture manufacturers try to direct the behavior of pH, dissolved oxygen, temperature, agitation, pressure...

But "contamination control"? No one is trying to direct the behavior of bioreactor contamination: Everyone tries to abolish bioreactor contaminations.

The abolition of bioreactor contamination in a large-scale setting is generally a team sport. It can take just one person to solve the contamination, but the person who figures out what went wrong (the brains) is rarely the same person who implements the fix (the hands). And in a GMP environment, the change implementation is a coordinated process involving many minds, personalities, and motivations. With all those people comes an inordinate amount of politics for a goal that everyone seems to want to reach: no contams.

Immediate-/Medium-term Fixes

The first thing to realize is that operations management is usually the customer when it comes to solving bioreactor contaminations. Everyone's butt is on the line, but no group burns more resources responding to bioreactor contaminations than them. And in my experience, there is no "short-term" vs. "long-term" solution.

There is no long-term. To operations management, there are just the immediate solution and the medium-term solution:
  • Immediate solution :: what are you going to do for me today?
  • Medium-term solution :: what lasting solution are you going to implement after the immediate solution?

Science... if it fits

The second thing to realize is that there's no room for science. The prime objective is to stop the contaminations. The secondary objective is to find the root cause. If identifying the root cause helps stop the contamination, that's a bonus; but root cause or not, you still must stop the contaminations.

For example, if your contamination investigation team thinks that there are five contamination risks, the directive from management will be to implement all CAPAs necessary to address all five risks. If the fixes work, "Great! You met the objective." Do you know what the true root cause was? Not a clue (it was one of those five, but you'll never know which one).

Political

The third thing to realize is that contamination response is as much political as it is technical.
  • You can have the right solution, but present it the wrong way - and it's the wrong solution.
  • You can formulate the right solution, but if it isn't immediately actionable, no one wants to hear about it.
  • You can irrefutably identify the true root cause (thereby shining light on GMP deficiencies) and run into resistance.
Being right is different than being effective. And "bioreactor contamination control" at large-scale requires effectiveness. For in-house resources, it requires a keen understanding of interpersonal dynamics. For organizations that are at either a technical or political impasse, there are external bioreactor consultants who understand how to effectively troubleshoot and abolish bioreactor contaminations.


Abolish Bioreactor Contaminations

Friday, August 23, 2013

10 Ways to Tell If You Suck at Cell Culture Support

Here are 10 ways to tell if your support of large-scale cell culture, well, sucks:
  1. Key performance indicators.
    You don't know what the right KPIs are for cell culture, but you're 100% certain that it's titer.
  2. Proven Acceptable Limits.
    You don't have any defined for your critical process parameters and you failed to demand them of Process Development.
  3. Control charts. You're not using them, or you don't know how to make them, and your bar graphs are just fine, thankyouverymuch. They're not just fine, and it's because you can't identify:
  4. Special cause vs. common cause variability.
    You investigate common cause variability because that titer seemed too low or too high.
  5. Cpk. You don't know what process capability is, and you're not calculating it (see the sketch after this list).
  6. Histograms. You aren't looking at the distribution of your KPIs.
  7. Bi-variate Analysis.
    Linear regressions, ANOVA, Tukey-Kramer. You have no idea what this stuff is; I might as well be writing in Chinese.
  8. Multivariate Analysis.
    You're not doing these, and when you do, Y-responses are treated as X-factors.
  9. Local Lab. You don't have a local MSAT lab to run satellite experiments to confirm the hypotheses generated from the plant.

    A lot of people assume that you can use the resources of a distant process development lab; but then again, a lot of people like blood sausage.
  10. Excel. You're still using Excel to store data. You're still using Excel to analyze data. If you're looking to play varsity cell culture support, you really need to be using a cell culture data management system.
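
For items 3 through 5, here's a minimal sketch of an individuals (I-MR) control chart and a Cpk calculation in Python; the titers and spec limits are made up for illustration, and your KPIs and specs will differ:

    import numpy as np

    # Hypothetical titers (g/L) from 20 large-scale runs -- illustrative only.
    titer = np.array([2.1, 1.9, 2.0, 2.2, 1.8, 2.0, 2.1, 1.9, 2.3, 2.0,
                      1.8, 2.1, 2.0, 1.9, 2.2, 2.0, 2.1, 1.8, 2.0, 2.1])

    # Individuals chart: estimate sigma from the average moving range,
    # using the standard d2 = 1.128 for subgroups of size 2.
    moving_range = np.abs(np.diff(titer))
    sigma_hat = moving_range.mean() / 1.128
    center = titer.mean()
    ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat
    print(f"I-chart: CL={center:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}")

    # Points outside the limits are special-cause signals worth investigating.
    signals = np.where((titer > ucl) | (titer < lcl))[0]
    print("Special-cause runs:", signals)

    # Process capability against hypothetical specs of 1.5-2.5 g/L.
    lsl, usl = 1.5, 2.5
    cpk = min(usl - center, center - lsl) / (3 * sigma_hat)
    print(f"Cpk = {cpk:.2f}")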


Friday, July 19, 2013

What IS Peptone Anyway?

According to The Free Dictionary,

pep·tone
n.
Any of various water-soluble protein derivatives obtained by partial hydrolysis of a protein by an acid or enzyme during digestion and used in culture media in bacteriology.

Question: what is the source of protein?

Answer: Do you ever wonder what happens to the parts of the animal that humans DON'T eat or use?

Peptone vendors take those animal scraps and make peptone by "dissolving" them with acid or digesting them with enzymes, eventually turning them into a powder that gets sold to cell culture manufacturers.

I know of bovine- and porcine-derived peptones (aka "beef" or "pork"). With the mad cow scare from several years back, Process Development departments moved away from bovine toward porcine. And since then, processes that use peptones have tried to move toward non-animal-derived (aka "veggie") peptones.

As I've said before, peptone is that je ne sais quoi that the cells like and that boosts their productivity. A process development department that continues to use peptones does so at the risk of increasing manufacturing variability in exchange for higher small-scale cell culture productivity.

And doing so risks making the process susceptible to peptone lot variability, which ultimately diminishes process robustness.

tl;dr - peptone is dissolved cow/pig/veggie parts ground into a fine powder, used by some biologics manufacturers to increase cell culture productivity.

Sunday, June 30, 2013

Examples of Cell Culture Productivity KPIs

Let's apply what we know of cell culture productivity KPIs.

Below is a control chart of a process that produces a stable, albeit variable titer:
[control chart of titer]
The titer is a very simple data point to collect: QC measures it following their procedures and spits out one number every time a sample gets submitted.

The time required at the production culture stage to achieve this 2 g/L is 10 days, give or take a few hours.
[control chart of culture duration]
The culture duration is also relatively easy to determine since we know the timestamp of inoculation and we know the harvest time. An arithmetic subtraction is all that is required to find this number.

Culture Volumetric Productivity

The culture volumetric productivity is computed by dividing titer by culture duration.
[control chart of volumetric productivity]
It stands to reason that the control chart of culture volumetric productivity shows a stable, in-control KPI.

It turns out there was a scheduled facility shutdown after Run 8. And starting with Run 9, there was a mis-specified parameter that determines the fermentor volume.
[control chart of culture volume]
Our control chart shows special cause signals from Runs 9-12, indicating that it took 4 batches before the QA Change Control system was able to push through the change.

Capacity-based Volumetric Productivity

By including bioreactor volume - which is determined by load cells or radar and known to the control system's process historian - we can compute capacity-based volumetric productivity:
[control chart of capacity-based volumetric productivity]
If you look at it, the control chart for capacity-based VP doesn't look that different from the culture VP chart. Even with Runs 9-12 at 83% capacity, there are still no obvious, control-limit-violating special-cause signals.
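
To make the arithmetic concrete, here is a minimal sketch of both KPIs in Python; the titer, duration, working volume, and vessel capacity are made-up illustrations, not values pulled from the charts above:

    # Hypothetical run data -- illustrative values only.
    titer = 2.0                # g/L at harvest
    duration_days = 10.0       # inoculation to harvest
    culture_volume = 10000.0   # L, working volume from load cells/radar
    vessel_capacity = 12000.0  # L, installed bioreactor capacity

    # Culture volumetric productivity: grams per liter of culture per day.
    culture_vp = titer / duration_days
    print(f"Culture VP = {culture_vp:.3f} g/L/day")          # 0.200

    # Capacity-based VP: credits only the capacity actually used, so a run
    # at 83% fill scores lower even at the same titer and duration.
    capacity_vp = (titer * culture_volume) / (vessel_capacity * duration_days)
    print(f"Capacity-based VP = {capacity_vp:.3f} g/L/day")  # 0.167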

The shutdown lasted seven days, and you can see that even though the bioreactors were cleaned and sterilized, they were left fallow for several days before the plant went back into production. The turn time spikes:
[control chart of turn time]
Let's have a look at plant-based volumetric productivity.

Plant-based Volumetric Productivity

[control chart of plant-based volumetric productivity]
Again, we see here that Runs 9-11 show a depressed plant-based volumetric productivity, but still no obvious control chart violations. Plant-based volumetric productivity is a lot harder to compute, since you're dealing with non-row data (in the SQL sense).

Typically, your manufacturing control system (MCS) enumerates UnitProcedures and stores each UnitProcedure in its own row. To compute the turn time, you have to list out the previous several UnitProcedures and find the previous harvest, and reliably getting this data is a pain in the butt.
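
For flavor, here is a minimal sketch of that cross-row lookup in Python/pandas; the column names, procedure labels, and timestamps are hypothetical stand-ins for whatever your MCS historian actually exports:

    import pandas as pd

    # Hypothetical UnitProcedure rows exported from the MCS -- made-up data.
    ups = pd.DataFrame({
        "reactor_id": ["T-100"] * 4,
        "procedure":  ["INOCULATE", "HARVEST", "INOCULATE", "HARVEST"],
        "timestamp":  pd.to_datetime([
            "2013-01-01 08:00", "2013-01-11 08:00",
            "2013-01-13 09:30", "2013-01-23 10:00",
        ]),
    })

    # Turn time = this batch's inoculation minus the previous batch's
    # harvest -- a lookup across rows, not within one row.
    inoc = ups[ups["procedure"] == "INOCULATE"].sort_values("timestamp")
    harv = ups[ups["procedure"] == "HARVEST"].sort_values("timestamp")
    turn_time = pd.Series(inoc["timestamp"].values[1:] - harv["timestamp"].values[:-1])
    print(turn_time)  # 2 days 01:30:00 for the second batch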

Plant-based volumetric productivity violates Principle #2 of MSAT data:
The benefits derived from collecting the data need to outweigh the costs.
In this case, for this operation where variability in other parameters is relatively high, all this extra work doesn't buy you that much benefit.

Conclusion

In a perfect world, data is easy to get and KPIs tell you a lot. In reality, the data may tell you that you need to reduce your process variability before your KPIs are worth collecting.

Alls I'm saying is that you need not forge ahead and apply every KPI that you learn about. In some cases, getting the data may cost more than the data is worth.


Saturday, May 18, 2013

James T. Kirk, Plant Manager

If you think about it, the starship Enterprise is a plant (i.e. factory).

It's a plant that manufactures light-years.


Construction of the USS Enterprise NCC-1701 as depicted in Star Trek 2009

Kirk is the Plant Manager.

Spock is the Director of Technology.

McCoy is in charge of EH&S.

Scotty is Director of Production (i.e. running the warp drive that produces all those light years).

And the SCADA (supervisory control and data acquisition) system is what they call the Enterprise's "Computer."

Nothing illustrates this better than this one scene from J.J. Abrams' 2009 reboot of Star Trek.

SPOILER ahead... If you haven't seen it, you should stop reading this post and go rent it on Amazon.

Then, you can go look up movie times and get tickets to the sequel (out this week).






Anyway, at some point in the movie, Kirk and Scotty get beamed aboard the Enterprise, but end up in utilities. Scotty gets beamed into the piping so Kirk has to go free him.


Where does he go?



Looks like an HMI (human machine interface) to me....



What's he doing? Oh, manually overriding a valve.



It's hardly recognizable as an HMI with those sexy lights across the top and snazzy faceplate graphics.



I guess they covered basic SCADA in Starfleet ensign training.

But make no mistake. That looks like either a PLC (programmable logic controller) or a DCS (distributed control system).

And this is just for the utilities. The control system for the entirety of the Enterprise would be far more sophisticated.

I keep reading about how long we have to wait before we get Star Trek technologies... or how long before we have hoverboards...

But the fact of the matter is this: so long as we are minting CS and ChemE grads whose purpose in life is to get internet users to click on ads (as opposed to creating and deploying SCADA software), it's going to be a long, long time.

At Zymergi, we're doing our part, furthering the deployment of these technologies by helping install, validate, and use these SCADA systems to manufacture biologics.

How about you?


All screenshots are from the Star Trek 2009 movie from Paramount Pictures, Spyglass Entertainment and Bad Robot Productions.

Friday, May 10, 2013

Automation Engineer's Take on Wall-E

I was talking with a buddy from my Cornell ChemE days (who now works in social media) about the odd trajectory of his career. Having had a successful career in biopharma and hospital administration, he's now a social entrepreneur. And it puzzled me that he finds fulfillment in social media, "not using his degree."

From his side, he was puzzled that I liked running an automation business helping people get and interpret machine data so their factories operate more efficiently.

As an MBA, he explained, "Business is about people and relationships. I want to operate in a world where people matter, and that's what 'social' is."

I have no disagreements with that statement. I did add:
Business is about making money...creating wealth. A world where everyone is wealthy is one where no one has to work; in that world, we have machines at our beck and call. Automation is the means to that world.


Screenshot from Disney Pixar's WALL-E where we find humans have fled Earth in a galactic cruise ship where no one has to work because their life is 100% automated.

Pixar's writers pose the question: What does the world look like when no one has to work?

Don't let Pixar's distinctly American interpretation (out-of-shape, chair-loungers watching TV while robots get us our beverage) distract from the world where everyone gets to enjoy leisure and no one has to work.

Some will jump in and say, "See, employment and working is good for man, else we'll end up all fat and lazy." It's true that some will choose this path, but the vast majority of others would do something else with all that time.

No truer words were spoken than when man first uttered the phrase, "Time is Money."

Having vast wealth is synonymous with having vast amounts of time to do what you want; this time to do whatever we choose is called, "leisure." And the purpose of an economy is to lift as many of us from the bonds of employment as efficiently as possible.

As an aside, it's rather hilarious that our politicians run around trying to decrease unemployment. The world where everyone has the luxury of 100% leisure is a world where unemployment is 100%.

And all this leisure can only be possible because we created the machines to automate the tasks that would otherwise be manual.

But back to my buddy: he's also right. Ultimately, business is handled with strong personal relationships. And even after we've automated ourselves into a world where no one has to work, we'd probably spend all that leisure time socializing anyway.


Wednesday, April 24, 2013

Viral Inactivation of Cell Culture Media HTST

For all this talk of bioreactor sterility, the vast majority of it refers to bacterial contamination.

What about viral contamination?

Viral contamination is mitigated with HTST (high-temperature/short-time) treatment of cell culture media: you put the media in continuous flow while subjecting it to a high temperature for a short period of time.
  • High temperature inactivates the virus.
  • Short time ensures that cell culture media components do not denature.
The best time to put the media in continuous flow is when you pump the media from the media prep tank to the bioreactor, so viral inactivation often happens during this transfer. The HTST unit is essentially:
  • Heater
  • Hold-tube (insulated pipe)
  • Cooler
At large scale, the first plug of media that goes through the HTST may not meet the specification, so this plug cannot be delivered to the bioreactor. It may be sent to drain or recycled through the HTST unit until the unit reaches steady-state.

Once at steady-state, the remainder of the media is pumped (through a sterile-filter and subsequent sterile pipes) into the bioreactor; when batch volume is reached, the remaining media is sent to drain.
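
The "short time" is set by the hold tube: residence time is just hold-tube volume divided by volumetric flow rate. Here's a minimal sketch in Python; the pipe dimensions and flow rate are made up, so plug in your own process values:

    import math

    # Hypothetical hold-tube geometry and flow rate -- illustrative only.
    tube_diameter_m = 0.05   # 50 mm ID insulated pipe
    tube_length_m = 12.0     # hold-tube length
    flow_rate_lpm = 100.0    # media transfer rate, L/min

    # Residence (hold) time = hold-tube volume / volumetric flow rate.
    tube_volume_l = math.pi * (tube_diameter_m / 2) ** 2 * tube_length_m * 1000.0
    hold_time_s = tube_volume_l / (flow_rate_lpm / 60.0)
    print(f"Hold-tube volume: {tube_volume_l:.1f} L")  # ~23.6 L
    print(f"Hold time: {hold_time_s:.1f} s")           # ~14.1 s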

Simple enough, right?

The hard part is when your HTST performance begins to degrade:
  • Perhaps your sterile filter starts clogging
  • Perhaps your heater controller output maxes out
  • Perhaps your media-prep post-use is showing problems
As recently as January of this year, Amgen's Drug Product Development published a paper titled, "Identification and root cause analysis of cell culture media precipitates in the viral deactivation treatment with high-temperature/short-time method."

I haven't read the paper, but my manufacturing sciences consulting experience predicts it to say the following:

The calcium phosphate in the cell culture media becomes insoluble at the high temperatures during the HTST. This calcium phosphate precipitate may collect on the surface of the holding-tubes thereby decreasing the heat-transfer coefficient, sporadically causing the HTST to fail.

This calcium phosphate (sandy white stuff) may also clog up the 0.1 micron sterile filter causing a high delta-pressure across the sterile filter, maybe even diminishing the media flow rate.

The problem is that calcium phosphate stays in solution except when both the temperature is high and the pH is high.


Unfortunately, high temperature is a requirement of HTST, which means the only way to prevent calcium phosphate precipitation - and the ensuing HTST performance degradation and filter clogging - is to run the media through at low pH.

See Our Fix

This is a classic multivariate problem where operating in a different range will solve the problem.



Monday, January 7, 2013

Moneyball for Manufacturing

I'm quite behind the times when it comes to watching movies. The last movie I saw was The Dark Knight Rises...

at a matinee...

so I don't get shot.

A few nights ago, I finally sat down and watched Moneyball, the movie with Brad Pitt and six Oscar nods. It is a "based on a true story" account of how the perennially under-budgeted Oakland A's baseball club builds a near-championship team, only to lose not just playoff games but also its best players to big-money baseball clubs when the players' contracts expire.

The Oakland A's general manager, Billy Beane, realizes his underfunded system will continue to produce good-enough results that will never win the championship. And to continue running his system the same way is insanity:
Doing the same thing over and over again and expecting different results. - Albert Einstein
To win, Beane decides to do something different: focusing on the key performance indicators (KPIs) of winning and getting players who contribute positively to those KPIs. Applying statistics and math to baseball is what they call "Moneyball."

How many of us are in the same boat as this Oakland A's GM?
  • How many of us are getting by with under-funded budgets?
  • How many of us are managing our systems the same way they've been managed for years?
  • How many of us can improve our systems by applying data-driven statistics?
Moneyball is to baseball what Manufacturing Sciences is to manufacturing:
Biotech and pharma manufacturing is in a period of static or diminishing budgets. "Do more with the same" or "make do with less" is the general mantra as the dollars go toward R&D or acquisitions. To make matters worse, biosimilars are coming online to drive revenues even further down.

Questions I'm getting these days are:

What systems do I need to collect the right data?

What KPIs should I be monitoring?

What routine and non-routine analysis capabilities should I have?

Let's Play

p.s. - Watch the movie if you haven't seen it.  It's as good a movie as it is a good business case study.

Friday, March 23, 2012

How Manufacturing Sciences Works

The Manufacturing Sciences laboratory and data groups interact like this:

[Zymergi manufacturing sciences business process flow]
Favorable special cause signals at large-scale give us opportunities for finding significant factors and interactions that produced those special causes. With a significant correlation (for cell culture: adjusted R² > 0.65 and p < 0.05), we can justify expending lab resources to test our hypothesis.
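
As a minimal sketch of that screen in Python (the factor, response, and data are invented; only the thresholds come from the rule of thumb above):

    import numpy as np
    from scipy import stats

    # Hypothetical large-scale data: one process parameter vs. one KPI.
    rng = np.random.default_rng(0)
    osmolality = rng.normal(300, 10, 24)                  # X-factor (made up)
    titer = 0.01 * osmolality + rng.normal(0, 0.05, 24)   # Y-response (made up)

    fit = stats.linregress(osmolality, titer)
    n = len(titer)
    r2 = fit.rvalue ** 2
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - 2)  # adjusted R² for one factor

    # Screen against the thresholds quoted above before spending lab resources.
    if adj_r2 > 0.65 and fit.pvalue < 0.05:
        print(f"Worth a DOE: adj R² = {adj_r2:.2f}, p = {fit.pvalue:.3g}")
    else:
        print(f"Not actionable yet: adj R² = {adj_r2:.2f}, p = {fit.pvalue:.3g}")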

Significant actionable factors from the multivariate analysis of large-scale data become the basis for a DOE. Once the experiment design is vetted, documents can be drafted and experiment prepped to test those conditions.

There are a lot of reasons we go to the lab first. Here are a few:
  1. You have more N (data samples)
  2. You can test beyond the licensed limits
  3. You get to isolate variables
  4. You get the scientific basis for changing your process.

Should your small-scale experiments confirm your hypothesis, your post-experiment memo becomes the justification for plant trials. Depending on how your organization views setpoint changes within the acceptable limits or license limits, you will run into varying degrees of justification to "fix what isn't broken." Usually, the summary of findings attached to the change order is sufficient for within-license changes to process setpoints. If your outside-of-license-limits findings can produce a significant (20 to 50%) increase in yields (or improvements in product quality), you may have to go to the big guns (Process Sciences) to get more N and involve the nice folks in Regulatory Affairs.

From a plant trial perspective, I've seen large-scale process changes run under QA-approved planned deviations for big changes. I've seen on-the-floor, production-supervision-approved changes for within-acceptable-range changes. I've seen managers so panicked by a potentially failing campaign that they shoot first and ask questions later (i.e. initiate the QA discrepancies, address the cGMP concerns later).

Whatever the case, the flow of hypotheses from the plant to the lab is how companies gain process knowledge and process understanding. The flow of plant trials from the lab back to the plant is how we realize continuous improvement.


Credit goes to Jesse Bergevin for inculcating this model under adverse conditions.


Tuesday, March 20, 2012

Manufacturing Sciences - Local Lab

The other wing of the Manufacturing Sciences group was a lab group.

[Manufacturing Sciences lab cycle]
Basically, you enter the virtuous cycle thusly:
  1. Design an experiment
  2. Execute the experiment
  3. Analyze the data for clues
  4. Go to Step 1.

You're thinking, "Gosh, that looks a lot like Process Sciences (aka Process R&D)." And you'd be right. That's exactly what they do; they run experiments at small scale to figure out something about the process.

Territorial disputes are common when it comes to local Manufacturing Sciences groups having local labs. From Process Sciences' perspective, you have these other groups that may be duplicating work, operating outside of your system, basically doing things out of your control. From Manufacturing Sciences' perspective, you need a local resource that works on the timetable of commercial campaigns to address very specific and targeted issues - people who can sit at a table and update the local plant on findings.

If your cashflow can support it, I recommend developing a local lab and here's why:

The lab counterpart of the Manufacturing Sciences group ran an experiment that definitively proved a physical bioreactor part was the true root cause of poor cell growth... poor cell growth that had delayed licensing of the 400+ million dollar plant by 10 months. The hypothesis was unpopular with the Process Sciences department at corporate HQ, and there was much resistance to testing it. In the end, it was the local lab group that ended the political wrangling and provided the data to put the plant back on track toward FDA licensure.

I do have to say that not everything is adversarial. We received quite a bit of help from Process Sciences when starting up the plant, and a lot of our folks hailed from Process Sciences (after all, where do you think we got the know-how?). When new products came to our plant, we liaised with Process Sciences folks.

My point is: in more cases than not, a local manufacturing sciences group with laboratory capability is crucial to the process support mission.

Monday, March 19, 2012

Manufacturing Sciences - Local Data

My second job out of college was to be the fermentation engineer at what was then the largest cell culture plant (by volume) in the United States. As it turns out, being "large" isn't the point; but this was 1999 and we didn't know that yet - we were trying to have the lowest per-gram cost of bulk product. But I digress.

I was hired into a group called Manufacturing Sciences, which reported into the local technology department that reported to the plant manager. My job was to observe the large-scale cell culture process and analyze the data.

Our paramount concern was quantifying process variability and trying to reduce it. The reason, of course, is to make the process stable so that manufacturing is predictable. Should special cause variability show up, the job was to look for clues to improve volumetric productivity.

The circle of life (with respect to data) looks like this:

[data flow for manufacturing support]

Data and observations come from the large-scale process. We applied statistical process control (SPC) and statistical analyses like control charts and ANOVA. From our analysis, we were able to implement within-license changes to make the process more predictable. And should special cause signals arise, we stood ready with more statistical methods to increase volumetric productivity.
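
For example, a one-way ANOVA can flag whether a KPI differs across discrete groups. Here's a minimal sketch; the lots and titers are made up for illustration:

    import numpy as np
    from scipy import stats

    # Hypothetical titers (g/L) grouped by raw-material lot -- made-up data.
    lot_a = np.array([2.02, 1.98, 2.10, 2.05, 1.95])
    lot_b = np.array([1.88, 1.92, 1.85, 1.90, 1.87])
    lot_c = np.array([2.01, 2.04, 1.99, 2.03, 2.00])

    # One-way ANOVA: does mean titer differ across lots?
    f_stat, p_value = stats.f_oneway(lot_a, lot_b, lot_c)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

    # p < 0.05 suggests at least one lot's mean differs -- a lead worth
    # chasing with follow-up analysis (e.g., Tukey-Kramer).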


Get Contract Plant Support!