Transcript For:

Unbridled Excellence #7

May 15, 2024

Navigating Nonclinical Development for CGT Products

Participants

  • Oliver Ball - Host, Dark Horse Consulting
  • Nathan Manley - Senior Principal and Head of Nonclinical, Dark Horse Consulting (5 years at Dark Horse, previously at Serious Biotherapy, PhD and postdoc at Stanford in Gary Steinberg group)
  • Sean O'Farrell - Senior Consultant, Dark Horse Consulting (UK-based, specialist in immunobiology and Gamma Delta T-cells, PhD and postdoc at King's College, previously at Gamma Delta Therapeutics)

Introduction and Welcome

Oliver Ball: Hey! Welcome to those people who have joined the webinar on time! We'll just wait a minute for some more people to dial in, and we'll get started in just a second.

Yeah, I think we'll get going, and the others can join as and when. So, welcome everybody to the seventh of the Unbridled Excellence webinar series. I'm your host, Oliver Ball.

We set up this webinar series to share some of the insights and experience that Dark Horse has generated over the 10 years now that we've been operating. In that period we've had about 375 clients, all within cell and gene, but across all product modalities and all development stages. So there's quite a good level of experience that we wanted to use this webinar series to share across the industry a little bit.

Since starting the webinar series, one of the most popular requests for webinar topics has been on nonclinical development strategy. So the people have spoken, and today we are answering that call for a webinar on this topic.

So across this webinar, we're going to cover overall nonclinical strategy, deep dives into model selection, dose determination, safety study design and a few other things, too.

So I'm delighted to introduce today Nate and Sean, who are presenting the main topic. Nate is a senior principal and the head of nonclinical at Dark Horse. He's been with the practice now for 5 years, and most recently, before that was leading preclinical development at Serious Biotherapy, a public Bay Area biotech developing numerous cell therapy types. And earlier in his career did his PhD and postdoc at Stanford, in the Gary Steinberg group, where he focused on developing neural cell therapies for stroke.

At Dark Horse, Nate works mainly on nonclinical strategy pathways, including nonclinical testing, analytical development, early stage regulatory interactions like INTERACT and pre-INDs, as well as basically everything that is included within preclinical development.

Sean is a UK-based senior consultant who has specialist expertise in immunobiology, and in particular Gamma Delta T-cells, having previously done his PhD and postdoc at King's College, in the Adrian Hayday lab, and then going on to work at Gamma Delta Therapeutics before joining Dark Horse.

So just before we get started, I want to also mention the next webinar that we have coming up. So this will be on the increasingly popular topic of AAV product characterization. This has been a theme that we have published a few pieces on recently that you may already be aware of. And if you saw us at GCC last week, we had another panel discussion on this topic. So it's very much something that is front and center in the minds of people developing AAV products at the moment.

So do sign up for that topic on June 26th. We're going to be having that webinar hosted by Jacob Sexton, one of our principal gene therapy consultants.

So just before I hand over to Nate and Sean for the main topic today, just a quick reminder that you can submit questions throughout the webinar using the Q&A function in the Zoom interface, and we will be having a Q&A session at a later stage of the webinar, where we'll be answering those questions. So don't be shy. Submit your questions there, and we'll try to get to them later in the webinar.

And finally a reminder that the webinar will be available to view on demand. So if you want to share it with your colleagues, or re-watch it in future, there will be that option.

So without further ado, I will hand over to Nate and Sean to get into the meat of the discussion today.

Overview of Nonclinical Development Strategy

Nathan Manley: Great. Thank you, Oli, and thank you, everyone, for attending and taking time out of your busy schedules to join us for this webinar. Today, Sean and I will be discussing nonclinical development for cell and gene therapy products. Suffice it to say there is no one-size-fits-all solution when it comes to nonclinical strategy for cell and gene based products. Rather, these kinds of biologics typically require de novo construction of a nonclinical package that can account for the product's biological complexity and how that may impact the target clinical population. Nonetheless, Sean and I have observed some common challenges and lessons learned, both from our time at Dark Horse and from having previously worked at cell and gene therapy companies, which we'd like to share with you today.

So during this webinar, we will focus on 3 main topics in nonclinical development, namely, model selection, dose determination and design of pivotal nonclinical safety studies, after which, as indicated by Oli, there will then be time for a question and answer session. So please, as you're listening to this talk, if some questions pop into your mind, go ahead and place those into the Q&A panel, and we will try to get to those during the Q&A session.

So before we dig into our 3 focus topics, I would first like to set the stage a bit with an overview of nonclinical development strategy. Generally speaking, formal nonclinical development can be divided into 3 main stages, each of which we will touch on in today's webinar.

Stage 1, starting on the left, involves the selection and development of suitable nonclinical models to characterize product efficacy and safety, which may include a combination of in vitro and in vivo model systems. Stage 1 is also when a candidate product will be verified to have therapeutic potential via generation of pilot proof of concept, or POC, efficacy data, which serves as a key indicator that further product development is warranted.

During Stage 2, moving along to the right, chosen nonclinical models are then used to perform dose finding studies both with respect to efficacy and safety. In addition, Stage 2 activities often include further refinement of things, such as study endpoints as well as gaining a deeper understanding of model variability and collection of pilot safety and biodistribution data.

Then, finally, information gathered during stages 1 and 2 feeds directly into Stage 3, which consists of the final pivotal nonclinical studies to enable first in human, or FIH, studies. Critically, the progression of nonclinical development must be properly aligned with ongoing CMC and regulatory activities to ensure that the resulting nonclinical data are maximally positioned for clinical translation. At the end of the presentation, we'll return to this concept of interdependency, and we'll highlight some of the key CMC and regulatory activities that should align with nonclinical development, which are also represented here on this slide.

So this is, of course, kind of an ideal scenario where there's this nice logical stage progression of nonclinical development. It doesn't always happen this way. And we're going to touch on a couple of scenarios where this may not be the case and what to potentially do about it, but where and when possible, this is certainly the way that we recommend one progresses through their stages of nonclinical development.

Model Selection

So now, moving to our focus topics, let's begin with model selection. And beginning specifically with pharmacology model selection, 2 key parameters to consider in this case are, 1, the model's ability to recapitulate disease state or relevant injury state, and 2, the model's ability to maximally support product engraftment or integration depending on what the product is, and its subsequent persistence.

In the context of pharmacology studies, accurate modeling of disease or injury state should be the primary driver to maximize the likelihood that nonclinical efficacy data will translate to meaningful therapeutic benefit in the target clinical population. Some key questions relevant to modeling disease or injury state include: is there a gold standard model? If you can answer yes to that, then you have your model, and please use that. However, even if that is the case, there are questions you need to ask within that gold standard model or non-gold standard, whatever models are available out there that you are considering, such as how similar is the relevant anatomy and physiology of that model to humans.

If there are limitations, which there almost always are, are those limitations acceptable? We certainly know of scenarios or examples where that is indeed the case. Probably the most common one currently in the industry is the use of the NSG mouse tumor xenograft cancer model as an efficacy model for things like CAR-T or TCR products, etc. This is well accepted as an efficacy model for developing targeted cancer therapies: even though the NSG mouse completely lacks an immune system, which is arguably quite relevant to the function of these types of products, the model is nonetheless supportive of the product's primary mechanism of action, or MOA. And that's one of the real key questions here to always keep in mind. Even if there are limitations to that model, does it enable you to study and demonstrate your product's primary MOA?

Secondly, how good are the endpoints within a given model that you're considering? And "how good" should be judged based on things such as the relative consistency or variability of that model. How much noise are you going to have to deal with? How easy, therefore, will it be to detect a discernible treatment effect? What magnitude of effect will your product have to have for you to demonstrate efficacy? And critically, how relevant are the available endpoints within a given model to clinical outcome, or, thought of another way, patient quality of life? These will be critical for ultimately arguing that risk-benefit profile and trying to move into first in human studies.

Now, if we move to the other side of the balance, which is perhaps not the primary focus of pharmacology model selection, but still nonetheless important, looking specifically at product engraftment or integration and persistence, some key questions to ask for guiding your selection include: what is my product's expected engraftment or integration profile in humans? And similarly, what is my product's expected persistence profile in humans? So a given model for pharmacology must support sufficient engraftment or integration to allow you to measure therapeutic impact, obviously, but from a persistence perspective also needs to be able to allow you to measure that therapeutic effect for a sufficient amount of time to demonstrate durable efficacy, and how we define durable efficacy completely depends on what your target indication is clinically and what your product is meant to do in that context.

In addition, it's critical to understand whether or not that model enables full functionality of your product. And within that, from a prioritization perspective, again, does it support activity of the primary MOA of your product? If there are secondary MOAs that are hypothesized or known for your product and if not supported by that model, can they potentially be supplemented with other data so that you could still potentially go forward with a given model choice?

When we're thinking about transgene containing products specifically, there also is the important question of homology for that given transgene. Will that transgene, once expressed, be functionally active within the model system? Can it bind a cognate ligand or receptor needed to carry out its function? That can be looked at initially by straight-up sequence comparison, but it also needs to be explored functionally in a variety of ways.

And then in the context of in vivo gene therapies specifically, another critical question for pharmacology studies, and arguably safety studies, is are the correct cell types or tissue and organ types transduced or integrated by your gene therapy? We are well aware of some differences in the AAV field, for example, where transduction profiles in a mouse do not at all represent what you see in primates. And so there needs to be careful consideration there, of course, for in vivo gene therapies of whether a model system is appropriately representative of what cell types will be impacted or transduced by your product.

So now, what if there are multiple models to choose from? What do you do? What sort of consideration should you be thinking about? Sean is now going to walk us through this fairly common challenge as a scenario.

Sean O'Farrell: Thank you, Nate. So what we've got for you here in this presentation is a few "what if" scenarios and this is the first of these, and you'll see a few of these pop up later in the presentation as well. So staying in the context of pharmacology model selection, as Nate mentioned to you, you can be in a scenario, for instance, where there are several potentially suitable preclinical in vivo models to choose from.

You could imagine yourself or your company in a situation like this, which is entirely arbitrary, and not really based on a disease model in itself, but is more here for illustrative purposes. So what we have here is we have 4 hypothetical preclinical models, 2 in mice, one in zebrafish, and one in a rabbit, and we have assigned different scores to these, based on 5 preclinical model attributes that are hopefully, you will agree, quite important for these models.

So if I just take you through these one by one: recapitulation of human disease is a very important component of these models. So put very simply, how well is the human disease, the human target indication reflected in animals? For instance, things like tumor growth, injury, wound healing, things like that would be quite important to consider, and how well those are reflected. And you can see here that our models score quite differently across the board for that first attribute.

Moving down, the next key piece is the clinical relevance of endpoints, which usually you can translate quite well. But in this instance we've assigned sort of quite mediocre scores across the board for these models, with zebrafish performing quite poorly on this front. These scores are all out of 3, by the way, on a per row basis.

The next sort of endpoint consideration is actually the readout consistency of your endpoints. Is there going to be a high degree of variation in the data? Or are you going to get some quite tight data points? And you can see here again the board is split. So previously the rabbit scored quite well on recapitulation of human disease, but the endpoint readout consistency only got a 1 out of 3, whereas some of the mouse models have done better as well as the zebrafish.

The next point, then, is the actual permissiveness to use your human product. So can you actually make the human product you intend to make and put it in an animal model without having to add any tweaks, any changes? And again, a lot of these models score quite poorly in this arbitrary situation, whereas one mouse model scored incredibly well, which hadn't been scoring too well so far.

And finally, there's the durability of therapeutic as well as disease effect in these models. Again, where there's some pretty different scores across the board, and then, if you were to find yourself in a situation where you wanted to sort of discern which model might be best, and you sort of add these scores together, and as was intended for this slide, they all score quite similarly. So you may be in a situation where there is no frontrunner model. They all have their strengths and weaknesses. So what do you do in that kind of situation?
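To make the scoring exercise above concrete, here is a minimal Python sketch. All model names, attributes, and scores are hypothetical placeholders invented for illustration (they are not the actual values from the slide); the point is simply that per-attribute scores can sum to near-identical totals, leaving no frontrunner:

```python
# Hypothetical pharmacology model scoring matrix, as in the webinar
# example: each attribute is scored 1-3 per model. All numbers here
# are illustrative, not from a real program.
ATTRIBUTES = [
    "Recapitulation of human disease",
    "Clinical relevance of endpoints",
    "Endpoint readout consistency",
    "Permissiveness to human product",
    "Durability of therapeutic/disease effect",
]

scores = {
    "Mouse model A": [3, 2, 2, 1, 2],
    "Mouse model B": [2, 2, 3, 3, 1],  # scores well on permissiveness
    "Zebrafish":     [2, 1, 3, 2, 2],
    "Rabbit":        [3, 2, 1, 1, 3],  # strong disease model, noisy endpoints
}

# Sum each model's per-attribute scores (max possible is 15).
totals = {model: sum(vals) for model, vals in scores.items()}
for model, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{model}: {total}/15")
```

With scores like these, all four models land within a single point of one another, so the totals alone cannot identify a best model; pilot studies in your own hands become the sensible tiebreaker, exactly as discussed next.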

So some of the next steps in the nonclinical path that you could consider for a situation like this would be twofold. First of all, you could conduct pilot nonclinical studies to evaluate the magnitude and consistency of the therapeutic effect of your drug product candidate. And this one's quite an interesting point, because what I would encourage you to think about is actually, if we look at this table right here, this is telling us one thing, but you might find that in your hands some of these parameters change a bit, right? Maybe you will get better readout consistency in your own hands. Maybe there is a possibility to have a more clinically relevant endpoint, etc. Maybe you have better durability of effect depending on the mouse house that you're in, for example. So these things can change in your own hands. We would always encourage you to consider that.

And, secondly, is considering the potential just to strengthen the risk-benefit argument by actually collecting data in multiple nonclinical models. So if you find yourself in a situation like this one, you may say, "Okay, they're all scoring quite similarly, maybe we could just run those 2 mouse models, for instance, side by side and have a more robust package that way." So there are methods to get around this. But this is a nice "what if" that we do encounter quite often with clients, so hopefully, that's at least some food for thought.

So if we move on to the next slide, please, Nate, we're going to now talk you through a "what if" scenario that's perhaps even more extreme than the one that we just showed you. It's one that's maybe pretty familiar to some of you: when there are no relevant in vivo models, or at least not ones that are rapidly identifiable. What do you do then? What are the considerations that you could maybe take on board?

So you may find yourself in this kind of situation, right? So, just going from top to bottom, you may be in a situation where it's very likely, for example, that you would have to go to clinic without much in vivo efficacy data. However, in our experience, the best recipe for regulatory success on this front is to actually robustly support that position by evaluating all your options. So what do we mean by this?

So if we go to the top left of this graphic and we see literature evaluation, that's typically where you probably start. And let's say, we look at the literature, and we do find, we go down this graph, and we do find maybe there's more than one model possible. We've decided this doesn't sound too bad. Let's do some feasibility pilot studies. But we find out in those studies that the primary mechanism of action is not supported, or there's no engraftment of your cell therapy. So you really are kind of stuck with no in vivo model then.

Conversely, if we go back to the top left and we do our due diligence, we research the literature, and we really find that there's no relevant models for drug product efficacy testing, we always end up in the same green square in the middle, here, and there's sort of 4 points that you then could consider that might help you put together a nonclinical line of argumentation.

So first of all would be literature evaluation and any pilot data that you may have, and what we mean by this is actually presenting that in perhaps a more concise way to really show that you've quote unquote "done your homework." Secondly, and I know this may not apply to all of you, you may find yourself in a situation where there's actually data from similar products available, which you could leverage to your advantage and to support your nonclinical position.

Thirdly, perhaps most importantly, you might say, is actually to have in vitro efficacy data to support your position, right? So if there's no possibility to have any in vivo data, a strong in vitro package is almost certainly required in our experience.

And fourthly, which actually will bring me to the second half of this slide is to consider homologous modeling. So what do we mean by that? If we consider ourselves in a position where there's no relevant in vivo model, and we maybe find ourselves in a position where we do need to do some in vivo work, we need to make some sort of animal equivalent product to test in vivo. And here are some of the things that in our experience can be considered when you find yourself in that situation.

So here we're really thinking about developing a surrogate animal derived product. So again, if the slide is being presented by myself, you will be put through the paradigm of CAR-Ts. Unfortunately, that's my background. So for this scenario here, let's imagine ourselves in a position where we're developing a CAR-T that targets a cancer testis antigen, like NY-ESO-1 on tumor cells. And we've really drilled in and we found there's no really relevant in vivo model to test this example product.

There's a number of variables that we can then consider in our nonclinical argumentation. So first of all, is actually the prevalence of the antigenic target in animals, so is it expressed at all? If not, is there maybe a similar antigen that could be used?

Secondly, is the degree to which the animal product can be made. And I know for some products that can be incredibly difficult, and that might be a show stopper at this point. But, for instance, with CAR-T, if you take a moment to think about it, it's not too bad, right? You could get immune cells from the periphery of the mouse, peripheral blood, or the spleen. You could change all the CAR components to their murine equivalents. Right? You could have murine CD8 transmembrane domain, murine CD28 costimulatory domain, murine CD3 zeta, etc. So that's not too bad.

Thirdly, perhaps most nuanced out of this entire list is the animal product homologue performance testing. So how well does that animal homologue or surrogate actually perform? Is it in any way similar to the human product? Hopefully, yes.

And then finally, just to show you the other side of the coin, and maybe to highlight to you that there's some advantages to homologous modeling, is the clinical representativeness of the tumor model in animals, because as we'll get into a little bit later, in NSG mice, the tumors grow sort of under the skin typically, whereas in homologous modeling situations you may find yourself in a position where you can actually recapitulate the tumors at the intended target site for your clinical application. So you could, for instance, have chemically induced skin carcinogenesis or the DSS colitis model.

So these are some of the things to think about in this "what if" situation. Probably a lot there for you to take in. But I will now pass you back to Nate, who's going to take you through some safety model considerations. So back to you, Nate.

Nathan Manley: Thanks, Sean. Yes. So turning our attention now to safety model selection: now that we've successfully selected our pharmacology model, the balance for safety models is somewhat shifted regarding disease state versus product engraftment or persistence. For safety modeling, the ability to support maximal product engraftment or integration, and persistence, is key, while disease state is not necessarily a must have for all nonclinical safety studies.

There are instances where it is still necessary which are generally captured by asking the following questions, so 1: is disease state required to support the expected degree of engraftment or incorporation of my product? And just as an example of where this is relevant is when you are thinking about developing a say, neural progenitor cell based product that will be implanted into the central nervous system. It's fairly well established in the literature that those cells don't survive too well when you put them in an uninjured, naive brain. They effectively need some sort of post-injury or disease state niche to enable maximal engraftment, and not only that, but potentially to promote some degree of desired differentiation, if that's part of their therapeutic mechanism. And so in that case, disease or injury state is very important for enabling the expected degree of engraftment of that product.

Secondly, as a key question, are there any theoretical safety concerns directly linked to or driven by your product's MOA? And 2 examples here: one, first going back to the NPC or the neural progenitor cell example, we do know of some instances in the literature where post-CNS injury plasticity or disease state plasticity within that host microenvironment can actually drive ectopic tissue formation by progenitor cells such as NPCs, which would be a safety finding of potential concern that you would not pick up if you were studying those cells in a non-disease or uninjured, naive host environment.

Second example, that is probably much more well known within the industry is within CAR-T cells specifically, and their potential to induce cytokine release syndrome which can really only happen upon their stimulation by target antigen, and so putting them into a naive animal where there is no antigen dependent stimulation would not give you any sense of their potential for cytokine release syndrome, or what kind of cytokine repertoire and magnitude of release they produce upon detection of that target antigen.

So, moving back over to the engraftment and integration/persistence side of the balance, which again, here on the safety side, is really our key driver for model selection, key questions to guide selection include: what is the best system to support engraftment or incorporation of my product? Here, again, the goal is the maximum amount, because maximizing your product's ability to be present and persist within your model system in turn maximizes your ability to detect treatment related toxicities.

And then, secondly, critically, what is the expected persistence of my product in humans? Can my chosen safety model accurately reflect that? And so, for example, on the far end of the spectrum for products that are expected to be permanently engrafting or integrating in humans, you're going to need a model that can support a long in-life duration for your safety study. And so you need to consider that accordingly.

Okay, so a last topic within model selection before we move to our second topic of the webinar is, what about large animal studies? When do I need to consider whether my program might require one or more large animal studies as part of nonclinical development?

In the case of cell and gene therapy products there are generally 3 main reasons that large animal studies are required. The first is dosing: specifically, that a clinically relevant dose from a scaling perspective is not achievable in a small animal system. A fairly common example of this is when you're talking about a cell plus scaffold based product that is going to be administered directly onto the surface of a tissue or organ, and the relevant surface area of a small animal cannot proportionally scale to the equivalent size that you would use in humans. So that would be one consideration that might warrant the need for large animal studies.

Secondly, is delivery of your product. Almost always, if you are planning to implement a novel clinical administration device that has never before been used in humans for that purpose, you will likely need to do a large animal study to demonstrate feasibility and provide some degree of safety assurance of that novel device.

Thirdly, what if there are just known important differences in relevant or target anatomy or physiology between small animals and humans that will limit the utility of that small animal data? And there are various examples of this out there. Some of which include use of rabbits for studying ocular disorders, because the architecture of the rabbit eye has more similarities to humans than rodents, although even more true in the case of NHPs, and pigs for various things. They're often used for cardiac studies and also even for studies related to the spinal cord.
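The first reason above, dose scaling, can be pictured with some quick arithmetic. In this Python sketch, every number (body weights, target surface areas, the clinical dose) is a hypothetical placeholder chosen for illustration; the point is only that weight-based and surface-area-based scaling diverge in a mouse, so no single small-animal dose can be clinically representative on both bases:

```python
# Illustrative arithmetic for a hypothetical cell-plus-scaffold product
# dosed onto a tissue surface. All values are invented for this sketch.
human_weight_kg, mouse_weight_kg = 70.0, 0.025
human_target_area_cm2, mouse_target_area_cm2 = 150.0, 1.5

clinical_dose_cells = 3e8  # hypothetical total human dose on the scaffold

# Naive body-weight scaling would give the mouse:
bw_scaled_dose = clinical_dose_cells * (mouse_weight_kg / human_weight_kg)

# But the dose a scaffold delivers is set by tissue surface area, at the
# same seeding density as the clinical product:
density_cells_per_cm2 = clinical_dose_cells / human_target_area_cm2
area_limited_dose = density_cells_per_cm2 * mouse_target_area_cm2

print(f"body-weight-scaled mouse dose: {bw_scaled_dose:.2e} cells")
print(f"surface-area-scaled mouse dose: {area_limited_dose:.2e} cells")
# The two scaling bases disagree by roughly 28-fold here, because weight
# scales with volume while delivery scales with area, illustrating why a
# larger animal with more human-like geometry may be needed.
```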

Now, one other important consideration in the context of large animal studies is specifically for in vivo gene therapy products. There may also be a need for large animal studies driven by one or both of the following. The first is transgene biology, which we touched on previously: if there's really just no or insufficient homology of that transgene to small animal systems, that may be something that needs to be addressed via large animal studies, unless you're going to go the route of homologous modeling, as described by Sean.

Or, secondly, and I think perhaps most commonly, issues with cell and tissue targeting. And this comes up in the case, for example, of novel engineered viral capsids for AAV programs or lentiviral programs with novel surface engineered proteins on them that will alter their tropism in what will be expected to be a new kind of distribution or targeting profile that typically will warrant a need for large animal studies as well.

So these are all considerations that can help guide an understanding of whether that will be required or not. Ultimately, as will be a recurring theme throughout, and perhaps forever in gene therapy, it depends. It depends specifically on your product, your target indication and your development strategy, and it will always, always, always also require buy-in from regulators to really understand necessity or not.

Dose Determination

Okay. So now, with key concepts of model selection covered, we're ready to move into our second stage, Stage 2 of nonclinical development and second webinar topic for which I'll turn it back over to Sean.

Sean O'Farrell: Thank you, Nate. So we now want to talk to you about dose determination, specifically enabling clinical dosing based on the evidence that you may have collected during your nonclinical studies.

So there are kind of 3 parts to this dose determination story. So the first message we'd like to try and get across to you really, is that nonclinical data can really help justify a product's clinical dosing strategy and indeed overall clinical strategy. So a dosing strategy should be well supported by using multiple lines of evidence. As hopefully most of you are aware, this includes, in addition to nonclinical pharmacology and safety data, a few other things.

So number one is, for instance, your understanding of your product's mechanism of action. Secondly, is the expected magnitude of effect needed to impact patient outcome. Now I know that's quite a big sentence there on its own. So if we take a moment to examine that, a great example that's quite relevant is, for instance, CD19 CAR-T. So, for instance, if we're trying to treat a lymphoid tumor, these tumors can be large. They're disseminated throughout the body. You might need quite a high dose of CAR-Ts in order to eliminate these pretty large tumors that have been refractory to multiple lines of treatment.

Conversely, as hopefully, some of you are aware, a lot of these CD19 CAR-Ts are being tested in autoimmune diseases where perhaps the magnitude of effect needed is lower because you're not looking to eliminate large tumors. You're actually looking to eliminate a small number of autoreactive B-cells that are producing autoantibodies, I should say. So that's a bit more of a dive into that second point.

Thirdly, and again, this relates to a previous slide of mine, is any clinical dosing experience with similar products can really help. Again, I'm aware that doesn't really apply necessarily to everyone.

So let's dive into the nonclinical piece. So in terms of pharmacology data to support a clinical dosing strategy, the objective really would be to support the potential benefit of the starting dose as well as informing your proposed dose escalation, which is quite typical, for instance, for cell therapy products. And the strategy would be to conduct dose finding efficacy studies to identify the minimum and the optimal therapeutic doses.

We can't just do pharmacology on its own; unfortunately, there is safety as well. In terms of supporting clinical safety, when you go back to your nonclinical studies, you could, for instance, look at supporting the safety of the highest planned clinical dose. The strategy to support that would include conducting dose-finding safety studies that bracket the clinical doses, sort of capture them, but also include a maximum feasible dose for the safety model.

So we've mentioned that nonclinical data can help justify a dosing strategy. It can actually do a lot more than that. Besides enabling clinical dose levels, nonclinical studies should ideally mirror as well as inform some key clinical dosing parameters, such as the route of administration. For CAR-T, as for most immune cell therapy products, that route is intravenous. There's the delivery device, if that applies; the formulation of the drug product; in-use stability, that is, how long those cells remain functional once they're thawed; and the dosing frequency.

So while nonclinical studies tell you a lot about how your product works and how safe it is, there is always that final piece to tie it all together, where you're actually using those data to support the clinical route that you're setting out in a regulatory submission.

So let's dive into dose determination a little more. The big question you might be asking yourself at this stage is: when should dose-finding studies be performed? And really, we believe this is most commonly and most effectively done during Stage 2 of nonclinical development, and it should be in sync with CMC.

If we just go back to that timeline that Nate showed you earlier, there is that middle stage we've highlighted here for you, dose finding and model refinement, where you may end up getting some data, some beautiful data hopefully, like the data here below, where you've got 3 different dose levels in blue, all with different degrees of effect, and you can pick your optimum dose. Maybe that middle blue one, for example.

So let's consider 2 things: firstly, what you need going into dose finding, and secondly, what you might get out of it. In terms of what goes into it, there are key preceding activities. As Nate mentioned earlier, there is this balance between CMC and nonclinical, the cross talk, and trying to achieve the milestones together so that it's most efficient. And again, that applies here.

There are multiple nonclinical things that can be done to inform a dose-finding study: efficacy and safety models being selected, pilot data with your chosen endpoints, and locking of the route of administration and the dosing frequency. And the parallel CMC activities that are usually really beneficial to have done by this point would include things like at least a path to the Phase 1 process lock; some candidate assays for identity, purity, and maybe even potency, at least being run regularly or being considered; and finally, the intended drug product formulation.

Now let's say you've ticked all of those off, or most of them, and you do your dose finding. What do you get out of it? The key outputs would be things like minimum and optimal efficacy doses, confirmed efficacy endpoints, and maybe some statistical powering for your pivotal study. That would be quite nice, right? Maybe you don't have to use as many NSG mice as you thought, for example. Also the maximum tolerated dose in the safety model and, should you be able to do it, some pilot biodistribution data.
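The statistical powering Sean mentions can be sketched as a standard two-sample sample-size calculation; the effect size and variability below are purely hypothetical, just to show how pilot data translate into pivotal cohort sizes:

```python
import math

def n_per_group(delta, sd, alpha_z=1.959964, power_z=0.841621):
    """Animals per group for a two-sample comparison of means
    (normal approximation; the default z values correspond to
    two-sided alpha = 0.05 and 80% power).

    delta: expected difference in means between dose groups
    sd: common standard deviation of the endpoint
    """
    n = 2 * ((alpha_z + power_z) * sd / delta) ** 2
    return math.ceil(n)

# Hypothetical Stage 2 pilot result: a 400 mm^3 mean difference in
# tumor volume between dose levels, with sd = 300 mm^3.
print(n_per_group(delta=400, sd=300))  # -> 9 mice per group
```

With a smaller expected effect (delta = 300 at the same sd), the same formula gives 16 mice per group, which is exactly the trade-off that pilot effect-size data let you quantify.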

So the main point of all of this, what we'd really like to get across, is that if you do some heavy lifting in Stage 2, then your Stage 3, even though it's still a heavy lift in itself because it's the IND-enabling study, will hopefully be as low risk as possible. You can really pick the model you want, you can go in, and you can almost predict what data you'll get. And hopefully that puts you firmly on the path towards your first-in-human trial.

So finally, before I hand back to Nate, we just want to explore the dose finding piece a little more and really consider using preclinical/nonclinical data to justify a human dose. And again, because it's me, it's going to be the CAR-T paradigm. What we're looking at here is viewed through the lens of a solid tumor CAR-T, and really considering that the nonclinical dose levels can vary quite greatly on a per-kilogram-of-body-weight basis between mouse and human. However, this is usually not a problem so long as robust argumentation to support that difference can be put forward.

If we just consider for one moment, and I know a lot of you will be very aware of this already, but just to get everyone on the same page: if we compare the NSG mouse to the human, there are some pretty strong differences, and a little bit of overlap as well. If you look at disease induction, and where that disease happens: in the NSG mouse, we're growing human tumor cells in the lab and injecting them subcutaneously, growing things like liver tumor cells under the skin, which is an artificial location, whereas in humans the tumors are actually developing at the target organ and, unfortunately, metastasizing to distal sites in some instances.

By contrast, the route of administration of CAR-T in NSG mice and humans is intravenous, so that's a nice overlap to have. And then, finally, another big difference is the immune status of the recipient. NSG mice are heavily immunocompromised, and the efficacy of a CAR-T there is arguably entirely dependent on the CAR-T's ability to lyse tumor cells, whereas in humans, immunocompetency will return a period of time after lymphodepletion, and you may get additional effector cell activation that can then potentiate the efficacy.

So there are some differences to take into account. But then let me actually put some data in front of you. I know we don't have a lot of that in this deck, unfortunately. These are fold differences of nonclinical dose levels over human dose levels for some products that are currently going through Phase 1, 2, or 3. Again, I've tried to keep this in the paradigm of solid tumors: we've got hepatocellular carcinoma, gastrointestinal cancers, and ovarian and pancreatic cancer.

And you can see here that the dose levels on a per-kilo basis vary quite greatly, don't they? If we look at the HCC example in gray, the liver cancer example, the first human dose is actually something like 30-fold lower than the dose the mice got, whereas the maximum dose is actually quite close. By contrast, if we move to the GI cancers, we're looking at approximately a hundred-fold difference between mice and humans. And in the ovarian and pancreatic cancer model, it's even more pronounced: there's a 2,000-fold difference between that first human dose level and the maximum mouse dose.
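The per-kilogram arithmetic behind those fold differences is simple to sketch; the mouse weight, cell doses, and patient weight below are illustrative assumptions, not the actual values from the slide:

```python
def fold_difference(animal_dose, animal_kg, human_dose, human_kg):
    """Fold difference in per-kilogram dose, animal over human.

    Doses are total cells administered; weights are in kg.
    """
    return (animal_dose / animal_kg) / (human_dose / human_kg)

# Illustrative numbers only: an NSG mouse (~0.025 kg) dosed with
# 5e6 CAR-T cells versus a 70 kg patient starting at 1e8 cells.
print(f"{fold_difference(5e6, 0.025, 1e8, 70):.0f}x")  # -> 140x
```

The same total cell dose looks very different once normalized to body weight, which is why the per-kilo framing, and the argumentation around it, matters in a submission.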

So that statement I've put in front of you here, that the dose levels may vary greatly, is actually quite true. However, there are ways to present that argumentation in your regulatory submission. If we consider the next steps on the nonclinical path, we would always advocate for focusing on the robustness of the nonclinical argumentation. What are the strengths of the package? What's the promising data? Things like focusing on the high, albeit arbitrary, safety margin, for instance. So if we look at that red group over there, we can say, "Okay, we've had to use really high doses. But we also know that they're safe." So there are 2 sides to the coin.

And also acknowledging the efficacy limitations of the nonclinical model system. In addition, moving to that second point, we could consider things like why the product might work more effectively in humans. And maybe, if we really wanted to push the envelope, we could consider some in vitro data; at least in the context of CAR-T, I have seen sponsors do that. So, for instance, seeing tumor cell lysis at lower effector-to-target ratios may actually enable you to say, "In humans we think this would work better, because in vitro we still see, say, 50% tumor lysis at a 1 to 1 ratio."
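That in vitro lysis argument is typically quantified as percent specific lysis from a release-based killing assay (for example chromium or LDH release); the signal values below are made up purely for illustration:

```python
def percent_specific_lysis(experimental, spontaneous, maximum):
    """Standard specific-lysis formula for a release-based killing assay.

    experimental: signal from targets co-cultured with CAR-T cells
    spontaneous: signal from targets alone (background release)
    maximum: signal from fully lysed targets (e.g. detergent control)
    """
    return 100 * (experimental - spontaneous) / (maximum - spontaneous)

# Hypothetical readings at a 1:1 effector-to-target ratio:
print(percent_specific_lysis(experimental=1400, spontaneous=400,
                             maximum=2400))  # -> 50.0 (% lysis)
```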

So these are the kind of things to consider. Again, one size doesn't necessarily fit all. But this is a typical situation that you may find yourself in if you're developing an immune cell therapy product. So that was the dose justification piece. I'm now going to turn it back to Nate to talk to you about safety and take you through to the end of the presentation.

Pivotal Safety Study Design

Nathan Manley: Great, thanks, Sean. So yeah, we have now officially moved into Stage 3 of nonclinical development and are ready to perform our final FIH-enabling nonclinical safety studies. Key objectives for our pivotal nonclinical safety study include providing evidence that the clinical dosing strategy will be safe, optimally even with some safety margin above the highest planned clinical dose, as Sean illustrated previously, using appropriate dose extrapolation methods based on your route of administration.

Secondly, providing support for the proposed clinical drug product release testing. In order to do this, the test article being used for the safety study should ideally pass your intended release specs. And for gene edited products specifically, any confirmed off-target editing or translocation events should be present at representative levels in the chosen test article.

And thirdly, the primary purpose, of course: identifying treatment-associated acute or long-term toxicities, the specific endpoints of which need to be informed by product biology and, ideally, pilot safety studies that may have indicated any potential concerns. The outcome is that you then have results that can inform patient safety monitoring and a corresponding action plan as needed.

So, walking through some specific study design attributes for our pivotal safety study, one by one. Typically, pivotal studies should be conducted in compliance with GLP regulations to ensure overall study integrity, both from an execution and a data perspective. Model: we've already talked about how support of the product is primary, while disease state modeling is secondary and less essential, for the reasons already discussed.

Test article selection, as I alluded to, needs to be representative of your clinical product, both with respect to process and product attributes, including release tests. Dose levels typically should include a maximum tolerated or maximum feasible dose in the chosen model; they should enable the starting dose and optimally provide a safety margin over the highest planned clinical dose. Cohort sizes are very species dependent. There are some typical numbers there, but it can vary, and we are seeing shifts in health authority mindset toward refining and reducing the number of animals needed for nonclinical studies. So that is shifting.

Interim sampling time points should really rely on pilot study persistence data to identify the timing of peak product level detection and the changes thereafter, to build out an appropriate kinetic time course. And then, lastly, something we can come back to if there's curiosity in the Q&A section: the in-life duration of your nonclinical pivotal safety study. It depends, but largely on the expected persistence of your product in humans and whether there is potential for long-term safety concerns such as tumorigenicity. Again, this is something we can come back to if people are curious about more details.
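Two of the dose-level checks described here, bracketing the planned clinical range and the safety margin over the highest planned clinical dose, amount to simple arithmetic; all dose numbers below are hypothetical:

```python
def safety_margin(model_max_per_kg, clinical_max_per_kg):
    """Fold excess of the maximum tolerated/feasible dose in the safety
    model over the highest planned clinical dose (both per kg)."""
    return model_max_per_kg / clinical_max_per_kg

def brackets_clinical_range(study_doses, clinical_doses):
    """True if the nonclinical dose levels capture the full planned
    clinical dose range."""
    return min(study_doses) <= min(clinical_doses) and \
           max(study_doses) >= max(clinical_doses)

# Hypothetical per-kg cell doses:
study_doses = [1e7, 1e8, 4e8]   # nonclinical safety study arms
clinical_doses = [2e7, 5e7]     # planned clinical escalation range
print(safety_margin(4e8, 5e7))                               # -> 8.0
print(brackets_clinical_range(study_doses, clinical_doses))  # -> True
```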

So within each of these, it's very common that sponsors will look for opportunities to streamline, given that this pivotal safety study can be one of the more expensive and longer lead time events on the overall nonclinical path. And indeed, we have seen some successful strategies to both maximize efficiency and reduce cost of pivotal safety study designs within each of the design attributes listed here. However, any attempts to streamline or reduce must be robustly supported scientifically and ideally de-risked via regulatory buy-in.

So before we move to closing remarks, I do want to present one more "what if" scenario in the context of a pivotal safety study. Specifically: what if we have a safety study that we would like to utilize as pivotal, but it was conducted at an academic center using earlier process material, specifically in a non-GLP setting with some gaps in record keeping or documentation? The process used to generate the test article involved some manual operations that were later upgraded to be more automated or controlled, and even used a different grade of raw materials for some of those unit operations. And the study itself didn't include the full battery of safety readouts that you might typically see for a full-scale GLP study.

This happens from time to time. Is it possible to use this as your definitive nonclinical safety study? Potentially. Mitigation strategies to consider, and factors that determine the likelihood of this being acceptable, include whether there's an opportunity to focus on the strengths of the overall presented nonclinical package. If the safety data is somewhat lacking, how strong is the efficacy data? Those are both important sides of the risk-benefit calculus.

Certainly, where possible, provide drug product analytical testing data to further the argument that the material going into this earlier study is still sufficiently representative. Supplement with in vitro safety studies, since they're cheaper and easier and can ideally be done much later in development, utilizing fully representative test article. And where relevant, use available published nonclinical or clinical data from similar products. All of this can be used to bolster that argument.

It's not a guarantee, and so it's something that has to be considered carefully and ideally de-risked through an early engagement prior to a final regulatory submission.

Alright. So, lastly, what about biodistribution? We've kind of been skipping it. For all the traditional small molecule developers in the room, this is the cell and gene version of pharmacokinetics, and it's an essential piece of the nonclinical data package for most cell and gene products.

Model selection must be guided by an understanding of product biology: as with safety studies, the species must be supportive of maximal product engraftment or integration and persistence, and you must consider the potential impact of disease state on product dissemination.

Ideally, pilot studies can be performed during Stage 2 to identify the key tissues that need to be included in the definitive biodistribution study, and can ideally be done during, or as part of, dose finding so you can explore the potential for dose-related effects on product distribution. You may even be able to utilize some sort of genetic modification or tagging system in your pilot studies to allow a more streamlined design and a reduced n to gather that pilot data.

Moving into the definitive biodistribution study, you must use representative test article, so get rid of those tags, and a quantitative, sensitive method for detection. Specifically, that method must be demonstrated as fit for purpose, fully quantitative, and capable of detecting rare events. Very commonly we see qPCR-based platforms used for that approach.
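For a qPCR-based platform, tissue signal is typically back-calculated from a standard curve of Ct versus log10 copy number; the slope, intercept, Ct value, and DNA input below are hypothetical, purely to illustrate the calculation:

```python
def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Back-calculate transgene copies in a qPCR reaction from a linear
    standard curve: Ct = slope * log10(copies) + intercept.
    A slope near -3.32 corresponds to ~100% amplification efficiency;
    both curve parameters here are hypothetical."""
    return 10 ** ((ct - intercept) / slope)

def copies_per_ug(ct, input_ng, slope=-3.32, intercept=38.0):
    """Normalize to copies per microgram of genomic DNA assayed."""
    return copies_from_ct(ct, slope, intercept) * 1000.0 / input_ng

# Hypothetical: Ct of 28.0 measured from 200 ng of liver gDNA.
print(round(copies_from_ct(28.0)))      # copies in the reaction
print(round(copies_per_ug(28.0, 200)))  # copies per ug of gDNA
```

Normalizing to copies per microgram of genomic DNA is what makes the readout comparable across tissues and time points, which is the kinetic picture the definitive study is after.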

Integration with CMC and Regulatory Activities

So to bring it all home, and before we turn to some of the questions that have been coming in, let's return to our overall view of the nonclinical development path. We've outlined for you these 3 stages in which we like to think about nonclinical development and the activities that ideally occur within them, and as alluded to at the beginning and throughout, there are critical CMC and regulatory milestones that need to align with what's happening on the nonclinical side.

Looking across at CMC, for example: moving into Stage 1, there needs to be some early-version candidate process that can generate material to go into those POC studies. But ideally, before you move into Stage 2 and are really determining dose and refining the formulation of your drug product, you should have a baseline process that's getting closer to what you'll finally use. And ideally, towards the end, with some time still to perform some degree of pilot work, you then lock that manufacturing process and have your analytical testing plan in place so that you can generate the material that will feed into Stage 3.

And thinking about this from a regulatory perspective: you can, one, have an early engagement such as an INTERACT with FDA, or a scientific advice meeting with MHRA or EMA, to show them what you're thinking about from a model selection and POC data perspective. And then a second early engagement, such as a pre-IND or pre-CTA meeting, to showcase all of the Stage 1 and 2 data and vet the designs of your pivotal nonclinical studies before their execution. Those then occur thereafter, prior to submission of that final FIH application, whether that be an IND, CTA, or something else.

So with that, I would like to close the presentation, and we'd like to now open up our Q&A session. So thank you. We're already seeing some questions coming in. And Sean, I'll leave it to you to decide which one you'd like to have us address first.

Q&A Session

Sean O'Farrell: Sure. Yeah, thank you everyone for taking the time out of your day to join us. We're getting a number of great questions coming in through the chat, quite a few actually. Hopefully we can answer most or all of them. So I'll start rapid fire. One word answers... no, I'm joking.

So first question is, what additional data/endpoints need to be evaluated for gene editing therapies?

Nathan Manley: Tricky question, but let's go for it. So, additional considerations for gene edited therapies. Of course, there is the whole off-target analysis and translocation analysis analytical pipeline that must be developed to understand the overall genotoxicity risk of gene edited products. This is an entirely in vitro exercise, but it also feeds into other nonclinical study considerations, such as whether you need to do an in vitro autonomous cell growth assay, also commonly called a cytokine or growth factor withdrawal assay. This is commonly required for gene edited products, given that gene editing has some potential to introduce genome instability, and it will also influence the needed duration and endpoints of your pivotal safety study.

So gene edited products may require a somewhat longer in-life duration, but it has to be considered in the context of other aspects of your product. Is it short-lived and transient, like an NK cell? Or is it meant to be permanently engrafted, or something in between? Those considerations need to come together to determine how long that in-life duration needs to be, but almost certainly, at the end of it, there will need to be tumorigenicity assessments, typically by histopathology, when you're talking about gene edited products.

So that's a quick answer. There's some other things to consider as well. So certainly can answer some more details around that if there's some follow up questions on that, but maybe we'll keep it moving. Sean, what should we do next?

Sean O'Farrell: Sure. Yeah. Now we have a question, a very good one actually: how often do we see combined tox studies used as the pivotal safety study? My answer here is yes, often, absolutely. Nate?

Nathan Manley: Agreed. Yeah. That's a nice example of where sponsors look to streamline their overall nonclinical program. And you can do that if the model selection fits, right? If you can use the same model in both cases, you can potentially combine that into one study. Also if the dosing considerations fit, which they don't always: sometimes you're using a lower dose to demonstrate efficacy and provide that bridge to your starting clinical dose and the potential for therapeutic benefit of that clinical dose, versus having a maximum tolerated dose from a safety perspective. You can still do that as one study, though you may need a couple of doses, and you've got to squeeze biodistribution in there somewhere as well, unless you're going to do that as a standalone. So yes, we often do see those combined, as long as it works with the models, the dosing considerations, and all the necessary sampling to cover all bases.

Sean O'Farrell: Brilliant. And then sticking to the in vivo study questions, there was a question here from someone who said, "Great overview," so thank you very much for that feedback. Is there any reason to conduct pivotal nonclinical safety studies in both male and female NSG mice? I'm not familiar with clinical data in the CAR-T space where sex influences CAR-T efficacy or toxicity. So how do we account for that in nonclinical studies? I think, typically, you try to look at both in even numbers, right?

Nathan Manley: Yeah, that's right. Even if there is no expected difference, if you intend to treat both males and females in the human population, then generally your pivotal safety studies need to include both sexes of the model species.

Sean O'Farrell: Great. Thank you. Oli, I'm not sure. There's 2 minutes left on our clock, do we have time for one or 2 more questions?

Oliver Ball: One more Sean! One more.

Sean O'Farrell: One more, one more. Okay. So: for in vivo gene therapy products, how do we navigate first-in-human dose selection if the vector is novel? So, for example, a novel pseudotyped lentivirus. Would it be appropriate to base it on the NOEL in a monkey model, even though that's not a disease model, given non-human primates cannot be engrafted with tumors? Pretty technical one there. But any thoughts on that, Nate?

Nathan Manley: Yeah. So I think there we're going to have to use multiple lines of reasoning to get to a dose justification. I definitely agree that the no-observed-adverse-effect level within the primate would be a piece of it, as well as some of the other things that Sean touched on: understanding of mechanism, in vitro data around how the product works, and any relation to similar products that are out there. So it's going to be a composite argument, or rationale, around dose justification. But you're right, that primate data is a piece of it, even though it's from a non-disease state.

Sean O'Farrell: Great. I think we'll probably leave it there, right? Given the time.

Nathan Manley: That's right.

Closing Remarks

Oliver Ball: Yeah, thank you. If anyone else has questions that haven't been answered, just email us, and we can have a more personal conversation to answer them. But thank you to everybody who dialed in, listened, and submitted questions. I, for one, have learned a lot today. So hopefully you found it useful. You will be able to access this on demand, as I mentioned earlier, and we'll send more information about how to access that after the webinar finishes.

It only remains for me to thank Nate and Sean for their efforts in putting this fantastic webinar together, and we look forward to seeing you at the next one. Thank you all for joining today.

Nathan Manley: Thank you, and I'll just leave you with this last slide, saying that beyond what we covered today, there's a lot more that the Dark Horse nonclinical development team can help with ranging from strategy to technical study design and oversight to regulatory, all sorts of filings and so forth. So if any of these resonate with you, please reach out, we'd love to support your programs and help them advance forward.

Oliver Ball: Thank you all.

Sean O'Farrell: Thanks everyone.