22 | Sep | 2023
Genotoxins are chemicals, drugs, or other agents that damage chromosomes, DNA, or RNA. The damage can result in mutations, single- or double-stranded DNA breaks, and impaired transcription and translation. If the damage occurs in somatic cells, the consequences can include tumor development, cell death, and inflammation; if it occurs in germ cells, it can cause heritable diseases, reproductive issues, and birth defects. Cellular repair mechanisms may or may not fix the damage caused by genotoxic drugs; when repair is inadequate, mutations with disease-causing potential are generated.
Due to the significant potential impact of genotoxic damage, it is critical to test new therapies for genotoxic potential. Since the endpoints of genotoxicity testing are well defined, several relatively simple bacterial and mammalian cell models are available1. One of the earliest genotoxicity tests was the bacterial Ames assay, which assesses mutagenic potential using specific strains of Salmonella that carry a mutation in a gene required to synthesize the amino acid histidine. The bacteria are exposed to candidate drugs and plated on histidine-deficient media, so only bacteria that acquire reverse mutations restoring histidine synthesis can grow. The number of revertant colonies relative to controls is a gauge of high, medium, or low mutagenic potential2.
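As a simple illustration of how revertant colony counts are often interpreted, the sketch below applies the common "two-fold rule," in which a treated plate showing at least twice the revertant count of the vehicle control is scored as positive. The thresholds and function names here are illustrative assumptions, not taken from any regulatory guideline.

```python
def mean(xs):
    return sum(xs) / len(xs)

def classify_mutagenicity(treated_counts, control_counts):
    """Score an Ames dose group by the fold increase in revertant
    colonies over the vehicle control (illustrative 'two-fold rule')."""
    fold = mean(treated_counts) / mean(control_counts)
    if fold >= 2.0:
        return "positive"
    elif fold >= 1.5:
        return "equivocal"  # borderline increases usually trigger a repeat test
    else:
        return "negative"
```

In practice, a positive call also requires a dose-response relationship and replication across bacterial strains, which this sketch omits.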
Currently, two assays are widely used to assess genotoxic stress: the Comet assay and the micronucleus assay. The Comet assay uses single-cell gel electrophoresis to assess genotoxicity. It measures single- or double-stranded DNA breaks caused by drugs: when current is applied (i.e., electrophoresis), cleaved DNA fragments migrate out of the cell to form the comet tail, while the undamaged DNA remains in the cell and forms the head of the comet. The denatured DNA, both intact and cleaved, is stained with a DNA-intercalating dye and visualized using fluorescence. While the Comet assay is simple and rapid and can be run on almost any eukaryotic cell, it does not shed any light on the mechanism of genotoxicity. The micronucleus test is also widely used, as micronuclei are extranuclear bodies containing damaged chromosome fragments that result from chromosomal aberrations or the genotoxic stress of specific drugs3. Because these fragments are not incorporated into the nucleus after mitosis or meiosis, the genotoxic potential of drugs can be determined by counting micronuclei. In many cases, the Comet and micronucleus assays are both performed to assess the potential of drugs to cause DNA damage as well as chromosomal aberrations4. An interesting study from 2013 compared the two assays and found that the Comet assay required higher doses of the test drugs and was less sensitive4. Nevertheless, both assay types provide valuable data on genotoxic stress. Research into the underlying mechanisms of genotoxicity is limited, but some work has been done on drugs such as dacarbazine, a chemotherapeutic approved to treat melanoma and Hodgkin's lymphoma5. Dacarbazine is known to cause DNA methylation that impacts transcription and translation.
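Comet assay images are typically scored with metrics such as percent tail DNA and the Olive tail moment (percent tail DNA multiplied by the distance between the head and tail intensity centroids). A minimal sketch of those two calculations, using hypothetical fluorescence intensity values, might look like this:

```python
def percent_tail_dna(head_intensity, tail_intensity):
    """Fraction of total comet fluorescence found in the tail, as a percentage."""
    total = head_intensity + tail_intensity
    return 100.0 * tail_intensity / total

def olive_tail_moment(head_intensity, tail_intensity, head_centroid, tail_centroid):
    """Olive tail moment: percent tail DNA (as a fraction) times the
    head-to-tail centroid distance, in the same units as the centroids."""
    pct = percent_tail_dna(head_intensity, tail_intensity)
    return abs(tail_centroid - head_centroid) * pct / 100.0
```

For example, a cell with 20% of its DNA in a tail whose centroid sits 25 µm from the head centroid would score an Olive tail moment of 5.0.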
At this time, the field is focused on using these assays to determine whether specific chemicals, drugs, or environmental toxins cause DNA damage using simple endpoints, but it is likely that more complex assays using next-generation sequencing will be broadly adopted to assess genome-wide genotoxic stress and to map mechanisms and hotspots of DNA damage6.
30 | Aug | 2023
Animal models have been the cornerstone of cancer drug development for decades, and different types of tumor mouse models have been used extensively to study cancer biology and evaluate single and combination therapies. However, mouse models of cancer are also widely acknowledged to have limited translational value and, in many cases, do not accurately recapitulate tumor biology. This is especially true in immuno-oncology, where there are fundamental differences between the mouse and human immune systems. It is important to note that both simple and complex mouse models have a role in oncology drug development, and the selection of the model depends on the scientific question being asked. For example, mice bearing subcutaneous tumors are useful for screening multiple drug assets for efficacy using simple endpoints such as tumor killing1. Once promising assets are identified, more complex models are needed to understand the drug's mechanism of action and off-target effects.
There are several types of more complex mouse models, broadly segmented into transplanted models, carcinogen-induced models, and genetically engineered models. In the past several years, there has been an increased focus on transplanting patient tumors into mice. Patient-derived xenograft (PDX) models have become a mainstay of oncology drug development, primarily due to the availability of patient tumors via biopsies and surgical excisions. The patient tumors are implanted into animals with compromised immune systems so that the mouse does not reject the human tumor. While this model is useful for studying tumor growth and development in an in vivo setting, it cannot be used to evaluate therapies that target immune cells, such as checkpoint inhibitors. Several research model providers have therefore developed humanized mice, in which components of the human immune system are introduced into immune-compromised strains such as NSG or NCG. Human peripheral blood mononuclear cells (PBMCs) isolated from donors can be injected into the mice to mimic the human in vivo immune response to a xenografted tumor. One such model was reported in which colorectal cancer xenografts were implanted into NSG mice injected with human PBMCs, and the effect of a combination of nivolumab (an anti-PD-1 therapy) and regorafenib (a multi-kinase inhibitor) was evaluated2. Interestingly, the model was most predictive in an autologous setting where the tumor tissue and PBMCs came from the same patient, as the allogeneic model showed nonspecific graft-versus-host effects2. These results suggest that humanized models currently have a limited role in evaluating responses to anticancer therapies, and that there is an unmet need for robust allogeneic humanized mouse models. Another type of transplant-based model is the syngeneic model, in which mice with an intact immune system are injected with tumor cells derived from mice of the same genetic background.
Essentially, syngeneic models are mouse-focused: a mouse tumor is evaluated in the context of a mouse immune system. While this can be a useful proxy for the human state in some situations, syngeneic models are best suited to short efficacy studies, where they are reliable and cost-effective. However, there are a limited number of syngeneic cell lines and models, and in many cases translation to human disease is limited.
Genetically engineered mouse models (GEMMs) have been developed for decades, with the first GEMM reported in the 1980s3. The development of GEMMs has expanded rapidly as more advanced gene editing methods, such as Cre-loxP, CRISPR-Cas9, and RNA interference, have become available3. As gene editing has become more precise, with fewer off-target effects, GEMMs have become more advanced and now recapitulate several hallmarks of the disease state. However, developing a GEMM is an expensive and time-consuming exercise and, in many cases, requires detailed knowledge of disease drivers. The genetic engineering required to build a relevant GEMM can be complicated, with no guarantee of success. Once successfully developed, however, a GEMM can be used to study disease development and progression, identify biomarkers for diagnostic use and prognostic monitoring, and evaluate anticancer therapies. Somatically engineered mouse models (SEMMs) are another type of engineered model, in which somatic cells in the organ of interest are genetically engineered to express oncogenes or to disrupt tumor suppressors4.
While several types of mouse cancer models are available, selecting the best model is not easy and requires a deep understanding of disease biology4. Multiple model types may be used within a single anticancer therapy development program, depending on the stage of drug development and the scientific questions being asked.
09 | Aug | 2023
The organs most affected by drug toxicity are the liver, intestine, and kidney, which are the primary sites of drug metabolism and elimination. Drugs administered through various routes (oral, intravenous, intramuscular, etc.) are distributed throughout the body via the vasculature and are metabolized primarily in the liver and intestine before excretion via the kidney or rectum. Since the liver is typically the first organ exposed to an absorbed drug, it is the most vulnerable to drug-induced toxicity, an effect known as drug-induced liver injury (DILI). DILI has a low incidence rate but accounts for most cases of acute liver failure1. In severe cases, a liver transplant may be the only therapeutic option. DILI can be segmented into intrinsic and idiosyncratic types2. Intrinsic DILI is typically dose dependent and driven by the capacity of the drug itself to damage liver tissue; it is more predictable because information is available on the drug's structure and function. Idiosyncratic DILI is less predictable, is not dose dependent, and is believed to be caused by genetic variation across the human population.
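In clinical practice, the biochemical pattern of liver injury is often summarized with the R value: the ratio of ALT to ALP, each normalized to its upper limit of normal (ULN), where R ≥ 5 suggests hepatocellular injury, R ≤ 2 cholestatic injury, and intermediate values a mixed pattern. A minimal sketch follows; the ULN defaults are illustrative placeholders, as actual limits are assay- and lab-specific.

```python
def dili_pattern(alt, alp, alt_uln=40.0, alp_uln=120.0):
    """Classify the biochemical pattern of liver injury using the R value:
    R = (ALT/ULN) / (ALP/ULN). Cutoffs follow common clinical convention."""
    r = (alt / alt_uln) / (alp / alp_uln)
    if r >= 5:
        return "hepatocellular", r
    elif r <= 2:
        return "cholestatic", r
    else:
        return "mixed", r
```

For example, an ALT of 400 U/L (10x a 40 U/L ULN) with a normal ALP gives R = 10, a hepatocellular pattern.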
Several models are currently available to detect DILI preclinically, including animal models, 2D human hepatocyte cultures, and 3D cell models that can incorporate microfluidics. One of the major challenges in developing animal models of DILI is that the mechanism of toxicity is not always clear. Animal models of DILI caused by specific drugs have been developed; one example is the mouse model of acetaminophen-induced DILI2. Acetaminophen is a widely used pain medication known to cause liver failure in overdose and was one of the earliest models of DILI. Typically, to measure DILI, a drug is administered to rodents at different doses, and the extent of liver injury is evaluated using specific biomarkers and changes in liver histology. This approach, while straightforward, does not address how the drug causes liver injury2, information that is critical for smarter drug design and improved next-generation therapeutics.
Increasingly, cell-based models are being used to study DILI, as these systems are fully human and are becoming more complex and therefore more predictive of the in vivo state. Cell-based models range from 2D cultures to complex organ-on-chip systems. Primary human hepatocytes are considered the gold standard for evaluating hepatotoxicity in vitro, but they are difficult to source. The HepG2 cell line has been used to form 3D spheroids, but those spheroids are generated from a single cell type and do not represent the 3D microenvironment. An alternative source is induced pluripotent stem (iPS) cells differentiated into hepatocytes. Co-cultures of hepatocytes with endothelial cells, stellate cells, and Kupffer cells better recapitulate the native environment of the liver and have been used to evaluate DILI3.
Biomarkers to measure DILI can be broadly divided into two types: biochemical markers and genetic markers. Biochemical markers of DILI are typically measured in serum and range from common markers of liver damage, such as glutamate dehydrogenase and cleaved keratin 18 (K18), to specific circulating microRNAs (miRs); miR-122 has shown some clinical relevance as a DILI marker4. While several biomarkers of liver injury have been reported and evaluated, it has been challenging to identify a comprehensive biomarker that reliably predicts DILI across the board. The ideal biomarker should be sensitive, reproducible, and truly predictive of DILI rather than reflecting transient variation in expression levels. Several biomarkers have shown significant variation in circulating levels across patient cohorts and, in some cases, within the same individual sampled at different times4. Genetic markers of DILI are being explored primarily in the context of idiosyncratic DILI, with a focus on HLA variants5. A recent publication showed correlations between specific HLA alleles and sensitivity to specific drugs, but so far no single HLA allele has been identified as a predictive marker for DILI. Genome-wide association study (GWAS) datasets are being used to identify non-HLA genetic markers, but again no significant biomarker has emerged. The search for genetic markers is complicated by the low prevalence of DILI and by variation across populations. Nevertheless, the availability of GWAS data will continue to fuel the search for genetic markers to predict DILI in clinical trials.
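HLA association studies of idiosyncratic DILI typically report an odds ratio computed from a 2x2 table of risk-allele carriage in DILI cases versus drug-tolerant controls. A hedged sketch of that calculation is below; the counts in the test case are invented purely for illustration.

```python
import math

def odds_ratio_ci(case_carriers, case_noncarriers, ctrl_carriers, ctrl_noncarriers):
    """Odds ratio and Woolf 95% confidence interval for carrying a risk
    allele in cases versus controls (assumes all cell counts are nonzero)."""
    or_ = (case_carriers * ctrl_noncarriers) / (case_noncarriers * ctrl_carriers)
    # Standard error of log(OR) is the root of the summed reciprocal counts
    se_log = math.sqrt(1 / case_carriers + 1 / case_noncarriers +
                       1 / ctrl_carriers + 1 / ctrl_noncarriers)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)
```

An interval excluding 1.0 is the usual (pre-multiple-testing) signal of association; GWAS analyses then apply genome-wide significance corrections on top of this.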
10 | Jul | 2023
The success rate of new therapies in the clinic is low: it is estimated that only 3.3% of cancer therapies entering trials between 2000 and 2015 were approved1. One of the main reasons many anticancer therapies fail is inter- and intra-patient tumor heterogeneity in morphology, gene expression, metastatic potential, and mutational and epigenetic profiles. To capture this heterogeneity and identify likely failures early, more physiologically relevant preclinical cancer models are needed. 3D cell culture models are increasingly used to evaluate new anticancer therapies, largely due to the availability of cancer cell lines, primary patient tumors, and patient tumor xenografts.
Patient-derived xenografts (PDX) are among the most well-established models; they are developed by direct implantation and expansion of primary human tumor samples in immunocompromised mice. PDXs retain the tumor's native architecture, so successful PDX models provide physiologically relevant source material for cell-based assays. While data from PDX models have translational relevance, they have some challenges: engraftment in immunocompromised mice is not guaranteed, and the process is time consuming and expensive.
Patient-derived explants (PDEs) are ex vivo models in which fresh tumors from biopsies or surgical resections are used directly for drug studies. PDEs are generated with little to no tissue disruption and include tumor cells, stroma, immune cells, and vasculature, so they are an accurate microcosm of the native tumor environment2. PDEs allow molecular and histological tumor characteristics to be interrogated in a single sample, building a more complete picture of the tumor. However, PDEs can be extremely fragile and liable to disintegrate and degrade rapidly, so optimal culture conditions are necessary to obtain sufficient data. PDEs have several advantages and limitations compared to other 3D cell models. Because explants are generated from fresh tissue, they are more predictive of patient response, and the data they generate can be correlated with the individual patient's response. PDEs are a very useful model for studying changes in immune cells in response to checkpoint inhibitors, which are primary drug targets across most tumor indications. Their limitations relate mainly to fresh tissue availability and the culture time frame. PDEs are not suited for longitudinal studies, as they tend to start degrading in about 3 days, leaving a tight timeline to generate as much data as possible. Due to the short culture time, it is difficult to measure the direct tumor-killing effects of immunotherapies, which can take several weeks to induce cytotoxicity. Despite these limitations, PDEs have a unique role in the preclinical development of novel cancer therapies, as they are the only model that truly represents the native tumor state.
Patient-derived organoids (PDOs) have become an established platform for preclinical validation of cancer drug assets. Primary tumor cells are used to develop organoids grown in a matrix that mimics the in vivo basement membrane. PDOs can be generated from small amounts of patient tissue and can be grown and expanded to support drug screening and mechanism-of-action studies. Organoids cultured directly from patient samples can grow in days, compared to PDX growth in animals, which can take several months. Additionally, PDOs are more efficient than PDXs at capturing the heterogeneity, polarity, cell-cell interactions, and structure of the native tumor3. However, PDOs do not fully recapitulate the tumor microenvironment and lack vasculature; to overcome these limitations, primary tumor cells can be co-cultured with immune cells and cancer-associated fibroblasts4. Another limitation is that PDOs may not represent the full genetic heterogeneity of tumors, as a single clonal population with a growth advantage can come to dominate the organoid. Despite these limitations, PDOs are promising tools for disease modeling, gene therapy, understanding tumor growth and metastasis pathways, drug screening, personalized and regenerative therapies, and evaluating the mechanism of action of single or combination therapies5.
22 | Jun | 2023
Biologic drugs encompass a wide range of therapies, including monoclonal antibodies, vaccines, protein and peptide therapeutics, cell therapies, viral vector gene therapies, and nucleic acid (DNA and RNA) based therapies. These therapies are typically administered as injections or infusions and, in specific cases, via inhalation. Historically, new drugs have been primarily small molecules, but newer modalities such as cell and gene therapies and mRNA-based vaccines have grown from 11 to 21 percent of the drug development pipeline, the fastest growth seen in the sector1. Along with pipeline growth, approvals of biologic therapies have increased: in 2021, the Center for Drug Evaluation and Research (CDER) approved 50 new drugs, of which 34% were monoclonal antibodies and other biologics2.
The growing pipeline and rising drug approvals require increased high-quality manufacturing capacity. Manufacturing biologics for clinical trials and commercial use is challenging compared to chemically defined small molecule drugs: the process requires high-quality input material such as producer cell lines, sterility throughout, and multiple testing points3, and scaling up a biologics manufacturing process can result in quality issues and insufficient supply to meet demand3. Due to the complexity of scale-up manufacturing, drug developers are increasingly turning to contract development and manufacturing organizations (CDMOs) instead of building manufacturing facilities in-house.
CDMOs typically offer end-to-end services, from drug development through manufacturing, filling, and packaging of drug products. Driven by the rapid growth of the biologics pipeline, the biologics CDMO market is expected to almost double, from $9.9B in 2020 to $18B in 20264. However, selecting the right CDMO matters, and there are some key considerations. It is critical that the CDMO has the right mix of talent to support manufacturing of the drug modality of interest and to build a scalable product development process. Good CDMOs have a mix of top-tier scientists, technicians, process engineers, and quality control personnel, along with other core functions. Another critical consideration is the importance the CDMO places on quality assurance and compliance with regulatory requirements. Most CDMOs have product development and manufacturing capacity and processes in place, but reputable, experienced CDMOs understand the critical importance of robust quality control systems that monitor and document every step of the manufacturing process. Along with quality control, a CDMO should have expertise in global regulatory requirements, especially if the drug will be launched in multiple markets5. From a practical point of view, it is important for a CDMO to stay engaged with the drug developer and communicate frequently, especially when there are supply chain issues, manufacturing delays, or process development challenges.
Given these considerations, biopharma companies often prefer to work with a single CDMO that fits their needs. The partnership between a drug developer and a CDMO is typically a long engagement, so finding a partner that is financially stable and has an open, transparent culture is critical for success. A recent survey of 50 drug developers identified the top three reasons to outsource to a biologics CDMO as risk mitigation, speed, and access to a portfolio of skills6. Mitigating process development and manufacturing risks typically requires robust infrastructure, scientific expertise, strong quality control systems, and regulatory compliance expertise. Available manufacturing capacity and personnel experience are key contributors to meeting program timelines.
In summary, selecting a CDMO requires looking beyond manufacturing capacity and processes to ensure that a biologics manufacturing program succeeds and achieves the timelines, scale, and quality required for clinical trials and commercial use.
13 | May | 2023
Temporal lobe epilepsy (TLE) is a chronic brain disorder in which recurrent seizures originate in the temporal lobe. TLE can cause psychological issues and loss of short-term memory, among other effects, and significantly impacts quality of life. Its prevalence is reported to range between 0.04% and 0.1% of the global population, making it one of the more common neurological diseases2. TLE can result from multiple causes, including traumatic brain injury, cancer, stroke, infections, or scarring in the hippocampus1. Current treatment paradigms include antiseizure medications, surgery, and deep brain stimulation3, but in some cases seizures are not fully managed with available therapies. Consequently, there is an ongoing need for improved therapies to manage TLE.
Mouse models of TLE that use pilocarpine or kainic acid to induce seizures have been used to study the disease4 and test therapies, but there are fundamental differences between rodent and human brains in anatomy, physiology, and function, so rodent data have limited translatability to human patients. Nonhuman primates (NHPs) are more physiologically relevant models of TLE due to similarities in structure, function, and neurochemical activity with the human brain. Interestingly, epilepsy can develop naturally in NHPs, likely due to genetic factors, injury, or infection, but it can also be induced by a wide range of stimuli. Depending on the stimulus used, NHPs can develop generalized or focal epilepsy5. Focal epilepsy is induced via alumina gel, pilocarpine, kainic acid, or electrical kindling, which uses an implanted stimulation electrode to induce seizures5. Relatively simple NHP epilepsy models have been used widely for the development of antiseizure therapies. However, an unmet need is the development of therapies for treatment-refractory epilepsy, which must be evaluated in more complex models that use a combination of stimuli to induce more refractory seizures. One such example is the combination of pilocarpine and pentylenetetrazol (PTZ), where pilocarpine induces an epilepsy phenotype and low doses of PTZ trigger limbic seizures that are more frequent and severe6. In this model, available therapies reduced seizure intensity and frequency to varying degrees but did not completely suppress seizures6. These data suggest that complex models mimicking treatment-refractory epilepsy could be used to screen for more efficacious therapies.
Epileptic seizures are commonly detected using electroencephalograms (EEGs), in which electrodes positioned around the head detect changes in brainwave activity. Apart from EEG analysis, diagnostic imaging such as PET, CT, and MRI is used to identify regions of altered brain activity. Epilepsy patients typically undergo long-term EEG monitoring, with data collected frequently over several days7. Manual analysis of such large datasets is slow, prone to error, and requires a trained reader or experienced neurologist, making it a significant bottleneck for the timely diagnosis and management of epilepsy. Artificial intelligence (AI) can be a valuable aid here, reducing both error rates and analysis time. In 2017, the Cleveland Clinic partnered with Google to develop deep learning models for analyzing a huge dataset (20 terabytes) from epilepsy patients7. The collaboration produced a temporal graph convolutional network (TGCN) built from the EEG data of 995 patients; this model combines spatial data over a set time period7 and showed impressive sensitivity and specificity7. Recently, a group at University College London developed an AI algorithm to identify areas of cortical dysplasia that can lead to epileptic seizures, using MRI data from 538 patients8. The algorithm detected brain abnormalities in about 67% of cases.
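The sensitivity and specificity reported for seizure-detection models reduce to simple ratios over a confusion matrix of model calls versus expert EEG labels. A minimal sketch, with counts invented purely for illustration:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = fraction of true seizure segments the model flags;
    specificity = fraction of non-seizure segments it correctly ignores."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```

For example, a detector that catches 90 of 100 labeled seizure segments while wrongly flagging 20 of 100 seizure-free segments has 90% sensitivity and 80% specificity; for long-term EEG monitoring, low specificity is particularly costly because non-seizure segments vastly outnumber seizure segments.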
It is clear that AI is being actively used to identify and monitor epileptic seizures in human patients. However, it will be important to reverse-translate these AI algorithms to NHP epilepsy models so that AI and machine learning platforms can aid the preclinical development of new therapies for treatment-resistant epilepsy.
12 | Apr | 2023
Preclinical drug development involves several in vitro and in vivo models to screen and validate new therapeutic modalities. The most widely used in vitro models are cell based, and two-dimensional (2D) cell culture is commonly used to identify and screen new therapies. However, cell monolayers have limited translational value, as they do not fully recapitulate complex tissue architecture and function. Organoids are more physiologically relevant cell-based models: three-dimensional (3D) cell clusters that self-organize to form functional tissues and mini-organs1. Organoids are typically derived from stem cells, which can proliferate and differentiate into multiple cell types. Stem cells come from three sources: embryonic stem (ES) cells, adult stem (AS) cells, and induced pluripotent stem (iPS) cells. The use of ES cells raises ethical and regulatory issues, while AS cells are limited to specific tissues like the intestine. The development of iPS cells, however, has revolutionized organoid development from various tissues, and iPS cells are also a source of patient-derived organoids.
Patient-derived organoids (PDOs) are often referred to as a "disease in a dish," as they contain the genetic drivers of a given disease state. PDOs are considered better models than organoids generated from healthy tissue that must be manipulated or stimulated to induce a disease phenotype. PDOs facilitate the understanding of genetic and disease development differences across patient populations, which is both an advantage and a challenge. There can be multiple underlying mechanisms for a given disease, and PDOs allow granular analysis of disease development in different patient segments, which is very useful for personalized therapies. Conversely, having several PDO populations poses analytical and statistical challenges, as it can be tricky to analyze multiple PDOs derived from a single tumor indication. PDOs are nevertheless ideal for precision oncology, where therapeutic regimens are customized for individual patients. Another important application of PDOs is in understanding drug-gene interactions at the individual patient level2, which can indicate whether a patient can adequately metabolize and distribute a drug, or whether two drugs will interact adversely in that patient.
PDOs derived from human tumors are steadily becoming an established platform for preclinical validation of cancer drug assets. Currently, PDOs are available for several tumor indications, including liver, prostate, breast, colon, and pancreatic cancers, and the list of indication-specific PDOs is expected to grow. PDO generation starts with the culture of small pieces of tumor in a hydrogel or scaffold, with specialized media that supports the growth of 3D constructs. Cultured PDOs can be biobanked to support cancer research and are very valuable models for studying disease biology and the altered signaling caused by one or more disease drivers.
However, PDOs do not fully recapitulate the tumor microenvironment and lack vasculature. Several strategies are being used to overcome this challenge, including complex co-culture systems with stroma, plasma growth factors, and immune cells. Recently, Xilis, a precision oncology company, has been developing PDOs using its MicroOrganoSphere (MOS) technology, which encapsulates the native tumor microenvironment in droplets3. The company combines organoid development methods from the Hubrecht Institute in the Netherlands with MOS technology developed at Duke University. Xilis' platform supports scalable culture of tumor organoids in the patient's own microenvironment and is being promoted as a complete system to test therapeutic responses and drug interactions. This advancement allows optimal therapies that slow growth or induce tumor killing to be identified within 14 days3. The scalability and rapid turnaround make Xilis' technology attractive to pharma companies and investors4 for testing new therapies and combinations, and it has the potential to change how therapeutic regimens are designed for cancer patients.
1. Corro C, Novellasdemunt L, Li VSW. A brief history of organoids. American Journal of Physiology-Cell Physiology. 2020;319(1):C151-C165.
2. Busslinger GA, Lissendorp F, Franken IA, van Hillegersberg R, Ruurda JP, Clevers H, de Maat MFG. The potential and challenges of patient-derived organoids in guiding the multimodality treatment of upper gastrointestinal malignancies. Open Biol. 2020;10:190274.
15 | Mar | 2023
Nanobodies are unique single-domain antibodies expressed in camels, alpacas, llamas, and other camelids. These tiny antibodies are about 10% the size of conventional antibodies, with a mass of about 15 kDa; their small size earned them the name nano-antibodies, or nanobodies. They were discovered in the 1980s and found to consist of only the single variable domain of the antibody heavy chain1. This single variable domain contains an antigen-binding site and is considered the smallest functional antibody fragment discovered so far. Nanobodies have desirable biophysical characteristics, such as prolonged shelf life, resistance to heat and to chemical or proteolytic degradation, effective tissue penetration, and low immunogenicity2. There are also reports that nanobodies can refold into functional conformations after heat denaturation2, though this is currently under debate. Nanobodies have a unique structure, forming a finger-shaped loop that can penetrate the antigen-binding site or active site of an enzyme target; in contrast, conventional antibodies form a cup-shaped structure that may not bind directly to the target site on the antigen3. Initially, scientists developed nanobodies by immunizing llamas and other camelids and then screening their sera for target-specific nanobodies. However, this method had limited success, was expensive and time consuming, and depended on limited access to large animal facilities for immunization and collection. Scientists at Harvard have since developed a yeast-based system to express nanobodies, avoiding the need to immunize large animals4. Nanobodies have relatively simple monomeric structures without post-translational modifications, allowing scalable expression in bacterial or yeast systems at milligram-per-liter levels. This low-cost, reproducible manufacturing process is highly desirable for therapeutic antibody production2.
Because of their small size, nanobodies can be delivered by multiple routes, including aerosols, which can help broaden patient access to these therapies.
Nanobodies attracted considerable interest as a therapeutic modality when the first nanobody-based therapeutic, caplacizumab, was approved by the European Medicines Agency (EMA) and, in 2019, by the FDA5. Caplacizumab was initially developed by Ablynx, which was subsequently acquired by Sanofi5. The FDA approved caplacizumab for the treatment of acquired thrombotic thrombocytopenic purpura (aTTP), a rare clotting disorder5. In aTTP, disruption of the clotting cascade leads to the formation of large multimers of the von Willebrand factor protein that bind platelets, forming clots (thrombi) that can cause emboli and other complications. Caplacizumab binds von Willebrand factor at a specific site and prevents the formation of the large multimers responsible for these clots and emboli.
The global SARS-CoV-2 pandemic, which has resulted in millions of deaths, triggered urgent efforts to find effective treatments. Currently, three monoclonal antibody-based therapeutic regimens have received Emergency Use Authorizations (EUAs) from the FDA for use in mild to moderate cases6. However, these antibodies are effective only during a short window in the early stages of infection, and administering them poses significant logistical challenges7. There is therefore a strong clinical need for effective therapies that can be manufactured rapidly and cost-effectively and that offer optimal stability with minimal lot-to-lot variation. Nanobodies are a viable option as a SARS-CoV-2 therapeutic, and recently a group of US and EU scientists developed nanobodies targeting the SARS-CoV-2 spike protein8. They engineered a biparatopic nanobody, which recognizes two distinct regions of the spike protein, that can be delivered by aerosol directly into the lungs of infected patients to inhibit viral infection. These nanobodies neutralized spike binding to the cell receptor via a unique mechanism: they bound the spike protein and induced a conformational change that resulted in its premature inactivation. In other words, the nanobodies altered the spike protein structure so that the virus could no longer bind to and infect cells8. These results suggest that engineered nanobodies could be a valuable therapeutic answer to a pandemic that has ravaged the world.
15 | Feb | 2023
Artificial intelligence (AI) has been touted as a groundbreaking approach to more efficient and cost-effective drug discovery. At its core, AI is a combination of computational techniques, requiring programming and training, that can rapidly analyze enormous datasets. It is important to note, however, that the output of an AI platform will only be as good as its algorithms and the size and quality of its input datasets1. Given that drug development consists of several steps, each generating large amounts of data, AI applications can help streamline the process and potentially cut time and costs. An added benefit is that AI can minimize human inefficiency and error, helping to standardize the drug design, screening and validation process2. AI can also help weed out drug candidates that are likely to fail in downstream validation, allowing drug developers to focus on viable candidates with a higher likelihood of success.
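The "output is only as good as the input data" point above can be illustrated with a minimal, purely illustrative sketch (plain Python, synthetic one-dimensional data — not a real drug-discovery pipeline): the same simple nearest-centroid model scores well when trained and tested on clean measurements and poorly when the measurements are noisy.

```python
import random
from statistics import mean

def make_data(n, noise_sd, seed):
    """Synthetic dataset: two classes centred at 0.0 and 4.0,
    with Gaussian measurement noise of the given standard deviation."""
    rng = random.Random(seed)
    return [((4.0 if y else 0.0) + rng.gauss(0, noise_sd), y)
            for y in (rng.randint(0, 1) for _ in range(n))]

def train_centroids(data):
    """'Train' a nearest-centroid model: one mean value per class."""
    return {c: mean(x for x, y in data if y == c) for c in (0, 1)}

def accuracy(model, data):
    """Fraction of points assigned to the class with the nearest centroid."""
    correct = sum(1 for x, y in data
                  if min(model, key=lambda c: abs(x - model[c])) == y)
    return correct / len(data)

# Same model, same task -- only the quality of the input data differs.
clean_train, clean_test = make_data(1000, 1.0, 1), make_data(500, 1.0, 2)
noisy_train, noisy_test = make_data(1000, 4.0, 3), make_data(500, 4.0, 4)

clean_acc = accuracy(train_centroids(clean_train), clean_test)
noisy_acc = accuracy(train_centroids(noisy_train), noisy_test)
```

With well-separated classes (noise SD of 1.0 against a class gap of 4.0), accuracy is high; quadrupling the measurement noise makes the classes overlap and accuracy drops sharply, even though the algorithm is unchanged — a toy version of why dataset quality dominates AI platform performance.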
Recently, several startups have been developing cutting-edge AI methods, and the pharmaceutical industry has been quick to leverage this expertise via high-value collaborations. Sanofi, for example, has announced two large AI collaborations: in January 2022, it expanded a partnership with Exscientia to deliver up to 15 new targets in oncology and immunology for an upfront payment of $100 million3. If the candidate search leads to clinical and commercial success, the deal could net Exscientia up to $5.2 billion. Sanofi also recently signed a deal with Atomwise, an AI company with a proprietary platform for structure-based drug design, for $20 million upfront and up to $1.2 billion if the program succeeds4. Not to be outdone, Merck has teamed up with Absci in a deal valued at up to $610 million to use Absci's Integrated Drug Creation platform to identify three disease targets along with therapies for those targets5. Amgen has partnered with Generate Biomedicines, an AI company generating considerable interest, to identify multi-specific drugs across various disease indications6. As in other AI deals, Amgen has committed $50 million upfront and up to $1.9 billion in milestones if the targets achieve success6.
These collaborations follow a similar pattern: pharma companies essentially fund small AI companies to refine and test their platforms upfront, with the promise of huge payouts if the platform generates viable candidates that show clinical and commercial success. This suggests that AI-driven drug discovery is still considered to be in its early days, especially since there are as yet no data showing that AI methods actually yield more effective or cheaper drugs. Indeed, a poll by a pharma trade magazine found that about a third of respondents believe AI in drug discovery will reach its peak only after about a decade7.
One area where AI is having a more widespread effect is diagnostics. The most commonly used diagnostic method is pathology-based, in which tissue samples are analyzed histologically by hand. Manual diagnosis is time consuming and prone to human error arising from the subjective analysis of tissue sections. AI-based methods have the potential to speed up accurate diagnosis, reduce human error and provide insights into disease biology8. Digital pathology has made significant strides in recent years, and complete FDA-approved digital pathology workflow systems are now available9. Advances in digital pathology-based diagnosis have been most visible in the cancer space, helping pathologists provide more accurate diagnoses and assess biomarker expression for targeted therapies9. It is evident that AI will continue to advance precise diagnostics in support of targeted therapies and precision medicine.
31 | Jan | 2023
Mouse models have been used extensively to study the onset and progression of neurological diseases and to evaluate responses to therapies. Models of neurodegenerative disease have been generated using multiple approaches, including genetic engineering, pharmacological insults and seeding with diseased-cell lysates1. For example, several transgenic models of Alzheimer's disease (AD) that focus on amyloid precursor protein (APP)/beta-amyloid or tau pathologies have been developed to study the pathophysiology of AD as well as other dementias. Parkinson's disease models can be broadly segmented into two types: 1) pharmacological models, in which chemicals such as 6-hydroxydopamine are used to damage and destroy dopaminergic neurons, or 2) transgenic models carrying mutations in genes known to be associated with Parkinson's disease. Despite decades of work and billions of dollars spent on these models, it is evident that mouse models of neurodegenerative disease are not fully representative of the disease state and do not recapitulate the overall disease phenotype1. Several critical differences between human disease and its modeling in mice limit the translatability of mouse model data to human patients, such as differences in biomarker endpoints and the physiological differences between mouse and human brains. The lack of physiologically relevant animal models that recapitulate an acceptable level of disease pathophysiology is one of the main reasons no curative therapies have been developed for neurodegenerative diseases such as AD, Parkinson's or ALS.
Despite these challenges, mouse models remain critical tools for preclinical drug development, so, in an effort to improve translatability, scientists are developing chimeric mouse models. Chimeric mice, as the name suggests, are created by transplanting human cells into the mouse brain. The transplanted human cells are typically derived from induced pluripotent stem cells (iPSCs), which can be genetically modified if required2. A 2019 study from a Belgian research group demonstrated that embryonic stem cell-derived cortical pyramidal neurons injected into the mouse cortex with EGTA (to facilitate integration) not only integrated but also migrated through the cortex while remaining viable and functional2. A proportion of the transplanted neurons was shown to respond to visual stimuli. This finding suggests that transplanting stem cell-derived neurons into mouse brains could serve as a model for studying neuronal plasticity, and could even point toward a cell therapy-based strategy to reverse brain damage2.
There is a growing body of work on developing chimeric mice to model specific disease states, including Alzheimer's disease. One of the earliest reports of a chimeric AD model appeared in 2017, when researchers transplanted iPSC-derived neurons into the brains of AD mice3. Unfortunately, this strategy had limited success, as the neurons died before neurofibrillary tangles developed. A recent paper advanced the work by transplanting astrocytes derived from the iPSCs of AD patients into a transgenic AD model4. The iPSC-derived astrocytes expressed either ApoE3, which is not associated with AD, or ApoE4, which is strongly associated with late-onset AD. The transplanted astrocytes integrated into the mouse brain and acquired human astrocyte-specific morphologies distinct from those of rodent astrocytes. More interestingly, the transplanted human astrocytes responded to the amyloid-beta deposits in the mouse AD model: some astrocytes became hypertrophic while others atrophied. Astrocyte hypertrophy is considered a defense against AD pathology, whereas atrophy is a loss of function associated with aging and neurodegenerative disease. The presence of both hypertrophy and atrophy in the AD mouse brain suggests that a chimeric model could provide valuable insights into early AD development, information that is critical for developing therapies that can significantly delay, halt or even reverse early AD. While these are early days, the data suggest that chimeric mice may be the next generation of mouse models for studying the onset and progression of complex neurodegenerative diseases such as Alzheimer's disease.