Deepcell Dives into Cell Morphology for a New Level of Visibility

Deepcell, a biotechnology company spun out of Stanford University in 2017, focuses on quantifying and understanding morphology – part of the cell’s phenotype.

Deepcell CEO Maddison Masaeli/courtesy of Deepcell, Inc.

The recent boom in sequencing technologies has deepened our understanding of what lies inside cells: transcriptomes, epigenomes and genomes alike. Deepcell, which spun out of Stanford University in 2017, focuses instead on quantifying and understanding morphology – part of the cell’s phenotype.

Deepcell uses artificial intelligence to image and sort single cells based on their morphology for downstream applications such as sequencing or clonal analysis. CEO Maddison Masaeli, Ph.D., co-founded Deepcell based on research from Euan Ashley’s laboratory at Stanford, where she worked as a postdoctoral scientist.

“He [Ashley] had never had an engineer as a postdoc before,” Masaeli said with a laugh. “After I left, he hired a few engineers – having had that interesting experience working together. I had an amazing time there, learning a lot from clinicians and all the cutting-edge research that was going on.”

One of Masaeli’s abstracts on her work in Ashley’s laboratory explored using microfluidics to sort induced pluripotent stem cell-derived cardiomyocytes by size and shape. Another paper, from her time at the University of California at Los Angeles and published in the journal Scientific Reports, explored using a combination of high-speed imaging, stretching and machine learning to sort cells by their physical properties.

For Masaeli, it became apparent that the technology she and others had developed was meant to “become a product in the hands of many, many investigators.” Cell morphology, as she described it, is commonly used as a qualitative metric in basic research, where scientists frequently check “how their cells look.” Likewise, Masaeli noted that in the clinic, cell morphology is used as a diagnostic tool; an acute myeloid leukemia diagnosis, for example, can be confirmed by studying the morphology of bone marrow cells.

“There’s a lot of deep information and insights in the cell morphology, which is one of perhaps the most important phenotypes of the cell,” Masaeli explained. “I think that cell morphology doesn’t just show you what the cells are, but perhaps can also give you some sense of what they’re doing.”

By automating the sorting process and using imaging coupled with artificial intelligence through deep neural nets, Masaeli said that implicit bias or lack of resolution from the human eye can be avoided. “If the size differences are subtle, it is difficult to have a unified approach to assess size with the human eye,” she explained. “You can only do that if you are looking at the same sample, and you have different populations in the same sample – something smaller, something larger. But if one sample is two microns larger than the other sample, it’s very difficult for a human to identify that.”

To sort cells based on their morphology, the Deepcell platform starts with a single-cell suspension, in which cells float in a gentle buffer. After the sample is loaded onto the platform, a microscope rapidly images each cell from different angles using modified brightfield imaging that can visualize intracellular organelles. As the cells move downstream, the images are passed to a pre-trained neural net model that analyzes the data and passes back a sorting “decision” in real time.
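
Deepcell has not published its internal interface, but the image-in, decision-out loop can be sketched in broad strokes. In the minimal sketch below, the function names are hypothetical stand-ins and the classifier logic is a placeholder, not the company’s actual model:

```python
import numpy as np

# Hypothetical stand-in for the platform's pre-trained neural net;
# the real model and its interface are not public.
def classify(image: np.ndarray) -> float:
    """Return a probability that the imaged cell matches the target morphology."""
    return float(image.mean() > 0.5)  # placeholder logic, not a real model

def sort_decision(image: np.ndarray, threshold: float = 0.9) -> bool:
    """Image in, sorting 'decision' out: keep the cell only if the
    model is confident it is a target cell."""
    return classify(image) >= threshold

# One 64x64 array stands in for a single brightfield capture of one cell.
cell_image = np.random.rand(64, 64)
print("route cell to target outlet:", sort_decision(cell_image))
```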

The time from image to analysis, Masaeli said, is a millisecond. “We had to innovate a lot in the area of being able to run these deep neural nets in that timeframe, sometimes on par with, but sometimes even faster than autonomous driving!”

Masaeli also described two workflows of the platform: supervised and unsupervised. “In the supervised regime, the model is trained to identify a certain cell of interest within a certain background,” she said. “In the unsupervised regime, we have to teach the algorithm to learn how to differentiate between cell types. You’re asking the algorithm to identify unique groups of cells based on the way they look.”
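
In machine-learning terms, the two regimes map onto classification with labels versus clustering without them. Here is a minimal sketch using scikit-learn and synthetic “morphology embeddings”; all names and data are illustrative, not Deepcell’s pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic "morphology embeddings" for two cell populations.
embeddings = np.vstack([rng.normal(0.0, 1.0, (200, 16)),
                        rng.normal(3.0, 1.0, (200, 16))])

# Supervised regime: labels for the cell of interest exist, so a
# classifier learns to find that cell within a known background.
labels = np.array([0] * 200 + [1] * 200)
classifier = LogisticRegression(max_iter=1000).fit(embeddings, labels)

# Unsupervised regime: no labels; the algorithm is asked to surface
# unique groups of cells based only on how they look.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings)
print(classifier.score(embeddings, labels), np.bincount(clusters))
```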

Because the cells remain alive throughout the sorting process, Masaeli said, sorting can enrich a specific subpopulation of interest for clonal expansion or sequencing. The company has presented results on the identification of non-small-cell lung carcinoma (NSCLC) cells from a whole-blood suspension, in which a more than 10,000-fold enrichment of NSCLC cells was seen following sorting. Additionally, the transcriptomes of sorted and non-sorted cells were compared and found to be highly correlated – showing that the sorting process did not significantly change mRNA composition.
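
Both of those readouts are simple to compute: fold enrichment compares the target cells’ abundance before and after sorting, and transcriptome agreement can be summarized as a Pearson correlation across genes. The numbers below are invented for illustration and are not Deepcell’s data:

```python
import numpy as np

def fold_enrichment(frac_after: float, frac_before: float) -> float:
    """Fraction of target cells after sorting divided by the fraction before."""
    return frac_after / frac_before

# Illustrative only: 1 NSCLC cell per 100,000 blood cells before sorting
# and 1 in 10 afterward would be a 10,000-fold enrichment.
print(fold_enrichment(0.1, 1e-5))  # 10000.0

# Transcriptome agreement: Pearson correlation of per-gene expression
# between sorted and unsorted cells (synthetic values, not real data).
rng = np.random.default_rng(1)
unsorted = rng.lognormal(size=1000)
sorted_cells = unsorted * rng.normal(1.0, 0.05, 1000)  # nearly identical
print(np.corrcoef(unsorted, sorted_cells)[0, 1])  # close to 1.0
```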

Because the Deepcell platform relies on deep neural nets as the ultimate cell classifier, Masaeli said that the model learns continuously from the images that are collected. Currently, the platform has around 1.5 billion images. Many of these images are annotated with cellular identity, functionality or other characteristics of interest (e.g., whether the cells come from a breast cancer patient or a healthy individual) to help the model learn.

As with any artificial intelligence model, there are risks of bias, limited data diversity and batch effects. To combat these potential problems, Masaeli explained, each sample is run across multiple systems and cartridges. Additionally, she said, similarly classified samples are collected from different sources to increase sample diversity. Batch effects are corrected through a combination of standardized sample processing – the data from Deepcell sorting are derived from identical instruments – and downstream data processing.
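
Deepcell has not detailed that downstream correction, but one common, minimal form of batch correction is per-batch centering of learned embeddings, sketched here with hypothetical names and simulated data:

```python
import numpy as np

def center_per_batch(embeddings: np.ndarray, batch_ids: np.ndarray) -> np.ndarray:
    """Subtract each batch's mean embedding so that instrument- or
    cartridge-specific offsets do not masquerade as biology."""
    corrected = embeddings.astype(float).copy()
    for batch in np.unique(batch_ids):
        mask = batch_ids == batch
        corrected[mask] -= corrected[mask].mean(axis=0)
    return corrected

# Two runs of the same sample with a simulated instrument offset.
rng = np.random.default_rng(2)
run_a = rng.normal(0.0, 1.0, (100, 8))
run_b = rng.normal(0.5, 1.0, (100, 8))  # shifted by a batch effect
data = np.vstack([run_a, run_b])
ids = np.array([0] * 100 + [1] * 100)
corrected = center_per_batch(data, ids)
```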

“Lastly, the way that we finally check whether our models are accurate or whether we have somehow biased them is that we can take the cells the model identified as target cells and do biological validation,” Masaeli said. “That’s what we do rigorously on a regular basis.” Biological validation, in this case, can mean sequencing the cells sorted by the platform (for example, T cells) to confirm that they are, in fact, the cells of interest.

Looking forward, Masaeli pointed to the company’s many academic collaborations with researchers at Stanford, the University of California at Los Angeles and the University of Zurich – studying melanoma microenvironments and malignant cancer cells.

Deepcell is also working with the Tabula Sapiens project – a human cell atlas effort funded by the Chan Zuckerberg Initiative that sequences organs from human subjects at the single-cell level. In Deepcell’s early days, Masaeli was excited by the project as a means of getting more information about single cells.

“It has been super exciting for us to really visualize the amount of heterogeneity in biological samples,” she said. “They [the Tabula Sapiens Project] have been excited to add a new analyte, add new insights, to understand samples. It’s come full circle, because we weren’t really considering seriously, back then, to become part of this huge endeavor. And here we are.”