L7 Informatics Manages and Supports the Entire Lab Workflow from Sample Processing to Data Analysis and Insight Generation Employing Workflow Orchestration

The Result Is An Automated, Highly Efficient, and Cost-Effective Lab Management Process

This month’s “Company Spotlight” provides a closer look at L7 Informatics, the provider of a scientific process and data management (SPDM) solution designed to streamline and optimize lab processes and accelerate insights delivery for research, diagnostics, and therapeutics, with the overall goal of better patient outcomes. I spoke with Vasu Rangadass, CEO of L7 Informatics, about L7’s unified platform L7|ESP™, which facilitates workflow orchestration, ultimately leading to streamlined processes, increased efficiency, and reduced cost, all within a platform that supports collaboration and compliance. L7 Informatics is headquartered in Austin (TX), has about 180 employees, and has received ~$62.2M in total funding.

Following is my interview with L7 Informatics’ President and CEO, Vasu Rangadass, Ph.D.

 

EB: Tell us more about L7 Informatics – what business and market needs are you addressing, and with what products and services?

VR:  L7, with its products and services, addresses the life sciences market – essentially anything in the biology and sciences sector. To answer more succinctly: while we initially focused on the research biology sector, we quickly realized that our platform could easily support any aspect of life science. The life sciences field is very broad; think basic research, drug development, diagnostic test development, clinical trials, or even aspects outside of classical life sciences, including agriculture. L7 with its platform supports all the needs individual contributors in those areas have throughout their journey of science, all the way from research to discovery to the development of diagnostics or therapeutics, and beyond. For example, we also support the diagnostics and healthcare sector, where those diagnostics and therapeutics are used to help physicians with treatment decisions, making for better patient outcomes. With our L7|ESP (Enterprise Science Platform) platform we support the entire value chain of science, including healthcare. This includes organizing, tracking, and quality-controlling each and every entity, from a sample all the way to results, insights, and next steps such as the next experiment, a diagnostic test result, a drug candidate, or the final treatment decision. L7|ESP controls and streamlines the various workflows associated with these processes and increases the return on investment (ROI). Instead of looking at the problem through a siloed Laboratory Information Management System (LIMS), Electronic Laboratory Notebook (ELN), Manufacturing Execution System (MES), or Laboratory Execution System (LES), L7|ESP takes a holistic point of view and addresses the scientific problem from both the technology and business sides.

Specifically, L7|ESP:

  • Addresses scientific problems that need to be resolved, both from a technology side and an application side
    • Addresses the challenges of sample and data management
    • Addresses the challenges of workflow management
  • Addresses scientific problems related to efficiency
    • Addresses the need to execute faster, better, and more economically

 

The overall outcome is a higher return on investment.

One of the domains that we are active in is the domain of genomics. We are talking about advanced therapies like cell and gene therapies (CGT), RNA medicines, molecular diagnostics, or the development of new seeds with genetically modified, improved traits. In a broader sense, it’s all about the biological sciences of genomics and its implications (see the L7 Precision Medicine workflow as depicted in Figure 1).

Figure 1: The L7 Precision Medicine workflow as supported with the L7|ESP platform.

 

EB: You offer L7|ESP as a software solution – with L7|ESP standing for Enterprise Science Platform? What makes L7|ESP unique?

VR:  What makes L7|ESP unique is the way it was built from the start. It was designed from the ground up like an operating system or a platform for science. We did not set out to solve a particular laboratory process problem, or a particular area in chemistry, biology, agriculture, diagnostics, or pharma. Rather, we took a step back and realized that, looking across all these different segments, there are many LIMS offerings from hundreds of LIMS companies, and there are also hundreds of ELN offerings on the market. We said, let’s build one platform that solves any science workflow/process problem.

“This is what makes L7|ESP unique; it is not designed to solve a particular type of science problem. It is designed to solve any science workflow problem.”

And it does not matter whether it’s in biology, chemistry, diagnostics, research, or in manufacturing of drugs. Because, at the end of the day, these are all scientific processes or scientific workflows that need to be executed, monitored, controlled, and done so repeatedly.

 

EB: When thinking of other players in the market, how is L7 positioned in that market and how do you differentiate yourself from them? Why should one consider L7 over some of them?

VR:  With its L7|ESP platform and the multitude of apps within, L7 competes with many players in the market, predominantly because we see that the future lies within a single data/process platform to run workflows across all scientific processes – call it Industry 4.0 or fully digitally transformed businesses. Running research and clinical processes in multiple systems – in other words, performing assay development or product development in one system, managing manufacturing in another, and running tests with yet another – does not make any sense! Having to transfer and manage the knowledge and insights gained across the entire value chain, from research to drug development to clinical trials, to manufacturing, and eventually to commercialization, slows down and complicates the entire process if it is managed via individual freestanding solutions. The result is high drug development costs, which the industry is facing across the board. Not having an optimized, efficient, single-platform process increases the cost and time at which we can bring new therapeutics and diagnostics to market.

“The IT component, with its broken enterprise architecture, is a huge contributing factor to the sky-high average cost of developing a new drug which is approximately $2.3 billion. A fully integrated data platform will help speed up the development of diagnostics and therapeutics, in addition to reducing the overall development cost.”

 

EB: What excites you about L7 Informatics, why did you join the company? What keeps you going every day?

VR:  My Ph.D. is in AI and machine learning, and still today, I struggle with the way enterprise organizations collect and manage data. To be able to use AI and machine learning successfully, lots of well-organized and structured data is required. I am a firm believer in bringing better health to this world, but at a faster pace and lower cost, and not the way it is done today. Today, everything is expensive and slow. To address both the cost and speed components, automation is required, in addition to predictive modeling and self-optimizing systems. But again, to do that, you need clean data. And to get to clean data and a state where things work better, faster, and cheaper, one needs to put all the technologies together into one platform.

“Current enterprise architectures are too siloed.”

My vision and passion are to optimize the way we bring medicines, diagnostics, and overall better health to the larger population. That includes better medicines and more targeted treatments, and all at lower cost. It’s all around better health or more precise health and how we can help companies bring that to the market. While there is a lot of innovation happening, innovation can get killed if you are not economical and cost-effective.

 

EB: The L7|ESP platform is populated with a large library of content, such as workflows, data models, and analytics, and includes a series of apps running on top of it. What is the flexibility for the end user to pick and choose a set of standard content and apps for their specific use?

VR:  In designing and building L7|ESP, we took a lot of concepts from how the smartphone industry evolved. All the content is stored in what we call the L7|HUB, which is like an app store. L7 customers have their own app store – or in other words, their repository of portable content, including workflows, data models, and analytics packages – that is connected to the L7 app store where we load all the content. Customers can pick and choose what content they want to download into their repository. Content includes equipment, connectors, protocols, and customers’ own scientific experiments. This allows for tailored and optimized product content, because not everyone is performing the same science experiments.

Our goal is to build a library of scientific protocols, instruments, equipment, reagents, and connectors – again, to reduce the cost of software implementation, but also to dramatically increase the productivity of each organization and its scientists. More importantly, it also ensures that all the workflow data is captured in a structured way. If the tools do not support easy capture of structured data by talking directly to the instrument, processes cannot be efficient, because scientists do not have the bandwidth to do this consistently across the entire experimental workflow. Hence, through the content store/repository (i.e., L7|ESP Apps, L7|ESP Content, or L7|ESP Connectors), we enable the rapid creation of new experiments or new workflows without scientists having to do the actual implementation work or IT personnel having to create those workflows. We see this as a high-value content strategy.

Another critical dimension is collaboration support among research scientists, but also among individuals working in a diagnostics laboratory. Sharing scientific experiments, including protocols, data, results, and insights, is an absolute must, but there are still limitations to doing all of this easily in today’s science environment. While researchers can share results or data, there is no easy, standardized way to share the methods/protocols that were used to run the experiments and create the data. Still today, scientific protocols are predominantly shared as a Word/PDF document, but that is not an approach one should ever have to take. Rather, you want a protocol that can immediately be executed to produce the same type of result. The analogy is very similar to sharing a song. One does not want to share the sheet music; one rather wants to share and receive the MP3 file that one can immediately listen to.

“The field of science is still quite backwards.”

We have advanced in the science of instruments: we can investigate cells at the subcellular and even molecular level, we can split and grow them, we can transform them and grow them into induced pluripotent stem cells (iPSCs) or organoids and do all kinds of cool experiments; but as scientists, we cannot easily share scientific experiments with each other. The L7|ESP platform has an integrated scientific language that allows individuals to share experimental information, not just on paper, but in an executable format. Of course, you need to have L7|ESP on both sides to play the “music of science,” just like you need an iPhone or a computer to play music or open a media file. This is what L7|ESP is all about and what we are bringing to the market.

 

EB: Let’s talk about the scientific content. Is your platform supporting, for example, the annotation of a gene panel analysis when reporting the results? Does L7|ESP include annotating some of the findings, or is that something that the end user can do by linking their internal data to it?

VR:  It depends on where you are on the scientific spectrum. For example, if you use an ELN, we allow users to run the experiment, get the data, and then annotate it. But if someone runs a high-throughput setup with thousands of samples in a few hours, then people write business rules within L7|ESP to make sure the annotation happens electronically, because you don’t have time to do it manually. A good example is the massive-scale COVID testing that we did. To do that efficiently, you need to run a statistical analysis tool whose output indicates with high confidence whether a patient has COVID or not. This cannot be performed manually. Rather, the solution is a backend analytics program linked to a 384-well plate reader that then annotates all 384 wells. This is all fully automated. It really depends on the specific experiment and what you are using it for. L7|ESP supports both low-scale, manual and high-scale, automated workflows.

What you can do in L7|ESP is write execution pipelines that integrate instruments, equipment, ERP systems, or a knowledgebase like ClinVar that holds variant-to-disease information. And that is the beauty of L7|ESP: it does not just support the wet lab side of scientific experiments; it goes beyond that and supports the dry lab components just as well. Therefore, L7|ESP can talk to any system from sample (wet lab) to data and insights (dry lab) following strict business rules. At the end of the day, automatic annotation means I need a physician or some scientist to say, “this is how you need to annotate it,” and our technology supports that; but of course, someone needs to write the rules for specific use cases, such as, for example, a diagnostics test pipeline. We provide a low-code/no-code environment that allows what I call a citizen developer or a master scientist to set up the rule set without having to depend on a computational researcher/programmer to do it.

“L7|ESP can talk to any system from samples (wet lab) to data and insights (dry lab) following strict business rules.”
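To illustrate the kind of electronic business rule described above – annotating a 384-well plate automatically rather than manually – here is a minimal sketch. The function names, thresholds, and data layout are hypothetical for illustration; they are not L7|ESP APIs:

```python
# Hypothetical rule-based annotation of a qPCR plate.
# All names and thresholds are illustrative, not L7|ESP APIs.

def annotate_well(ct_value, control_passed, positive_cutoff=37.0):
    """Apply a business rule to a single well's Ct value."""
    if not control_passed:
        return "invalid"      # run control failed: no call can be made
    if ct_value is None:
        return "negative"     # no amplification detected
    return "positive" if ct_value <= positive_cutoff else "negative"

def annotate_plate(wells):
    """Annotate every well; `wells` maps well ID -> (ct_value, control_ok)."""
    return {well: annotate_well(ct, ok) for well, (ct, ok) in wells.items()}

plate = {"A1": (24.3, True), "A2": (None, True),
         "A3": (38.9, True), "A4": (22.1, False)}
print(annotate_plate(plate))
# {'A1': 'positive', 'A2': 'negative', 'A3': 'negative', 'A4': 'invalid'}
```

In a real pipeline, a rule set like this would run as a backend step fed by the plate reader, so every sample is annotated consistently without manual review.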

 

EB: From an infrastructure perspective, what is required to run L7|ESP? Does it run exclusively in the cloud, or can it also be deployed behind the firewall or as a local install?

VR:  L7|ESP is cloud agnostic and therefore can run on any cloud, e.g., Microsoft Azure, Google Cloud, or Amazon Web Services. We also have L7|ESP running behind the firewall, on premises, or in a private customer cloud. The reason we are cloud agnostic is that we work a lot in regulated environments, and in regulated environments you cannot easily upgrade software, which also means we can only upgrade when the customer wants to upgrade. Hence, it does not make sense to provide L7|ESP as a SaaS-managed system. We let customers decide when they want an upgrade and do not force them into one. Since many of our customers use the cloud, we simply ship our upgrades as Docker containers – operated via Kubernetes – into those various cloud environments.

“L7|ESP is cloud agnostic and therefore can be run on any cloud.”

 

EB: How does L7 Informatics prioritize product extensions/updates? Are they market-, business-, or customer-driven? How many product updates/releases do you have per year?

VR:  All of the above. We learn a lot from our sales cycles and the feedback we receive. In addition, of course, we always talk to existing and prospective customers, who guide us in understanding what kind of product enhancements they would like to see implemented, or what new applications they are developing to support new processes and businesses. Lastly, we also have an internal product strategy team that is actively looking into what research and business areas we need to invest in, what new features and capabilities we should be adding, and how we can make the product easier to use, from creating new experiments to running them, to collecting and analyzing data. So, we are constantly working on making L7|ESP better for our customers.

Product updates (i.e., point releases, like 3.1 to 3.2) happen twice a year, with major releases (i.e., whole number releases, like a change from 3.x to 4.x) only happening about every two to three years.

 

EB:  Who are some prominent L7 Informatics customers and how do they utilize L7|ESP for their internal purposes? How has your platform improved their overall processes?

VR:  One prominent customer that comes to mind in the molecular diagnostics sector, and that has actively implemented the L7|ESP platform to support all of their workflow needs, is QIAGEN. They are active in the companion diagnostics IVD sector. They had been using paper-based records for capturing all of their scientific data – we are talking large amounts of scientific data, 100 pages or more. Using L7|ESP, we turned an extremely manual process into an electronic and fully automated one, with individual workflow contributors pushing the pipeline forward once certain components are finished and metrics are met. We achieved this by integrating all workflow components, including all instruments (e.g., sequencers), reagents, sample processing information, proprietary algorithms for data analysis, and insights capture.

Another good example of how our customers use L7|ESP is Quest Diagnostics. They have highly automated laboratories and processes, and L7|ESP is fully integrated with their robotic systems in real time. The robots move samples from one station to another in a completely automated, touchless fashion. As you can imagine, they process thousands of samples every month, which results in the automatic generation of high volumes of scientific data and reports. This is a highly complex process that we fully support.

“L7|ESP can support both high volume, automated robotic processes or semi-automated and human quality-checked processes with a human allowing the process to move forward with the push of a button.”

 

EB:  We live in a time of data explosion: complex clinical processes with many stakeholders are the norm, clinical whole-genome sequencing is established, multi-omics data applications will soon require clinical support, and ChatGPT and other AI approaches are on the horizon or already being applied to EHR clinical decision-making. What is your take on this?

VR:  My Ph.D. is in AI and machine learning, and people keep forgetting that the contextualization of the different pieces of data is the most important aspect when it comes to AI and data science. For AI applications to work, one needs lots of contextualized and structured data. It is very difficult to contextualize data after the fact; anytime you try, the result is an expensive, failure-prone approach. Automating the execution of any workflow is the cleanest way of contextualizing any data set. Without contextualization, machine learning algorithms will be trained with wrong or bad input data, which results in a subpar algorithm that provides bad predictions/answers. So, the feeder data used to train an AI/machine learning algorithm or a ChatGPT kind of tool – whether it’s generative AI or predictive AI – must be optimal. Hence, getting the most accurate and contextualized data is the most important aspect of making these AI and machine learning algorithms work. AI tools will always give you an answer, but you want an answer that is not far from the truth.

This is where L7|ESP comes into play. L7|ESP helps contextualize all data from wet to dry lab, including which vendor reagents were used, what equipment/instruments were used, which scientists performed which experimental step, whether the equipment was calibrated and when exactly, what the quality metrics were at certain workflow steps, and so on. All the data and metadata are collected from any system that is integrated with L7|ESP. Clearly, a lot goes into making a successful scientific experiment, and it can get very complex in high-throughput clinical molecular diagnostics labs. Sometimes an experiment works and sometimes it fails, and high-throughput labs can additionally be challenged by bad reagents or instrument issues that have an impact on the entire pipeline. To be and stay in control of the lab data generated, ideally all those various data points are collected in a contextual way, because only then can one troubleshoot an issue quickly. Because L7|ESP runs those workflows, all the data generated is contextualized and is therefore ready for research and clinical applications, including test outcome predictions, assay comparisons, and process troubleshooting – if there is a need for that.

That, of course, includes metadata. For example, if I run a PCR experiment and use a particular set of QIAGEN reagents with a Fluidigm instrument, this contextualizes all the information. It is not just about the results of the experiment; it is also about how the experiment was performed. In some cases an experiment fails because I am using a reagent from vendor X (e.g., lot B), while in other cases it succeeds because I am using a reagent from vendor Y. One can then go back and use this information to understand that lot B from vendor X was consistently bad, maybe contaminated. It is not just the science, the experiment, and the sample; there is so much that goes along with any experiment. It is important to feed an AI engine with all the contextualized data, not just the scientific data.
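The reagent-lot scenario described above can be sketched as a simple analysis over contextualized run records. The record fields and lot names below are hypothetical; in practice, this metadata would be captured automatically from the integrated instruments and inventory systems:

```python
# Hypothetical contextualized run records: each result carries the
# reagent vendor/lot metadata captured alongside the scientific data.
from collections import defaultdict

runs = [
    {"run": 1, "vendor": "X", "lot": "B", "passed": False},
    {"run": 2, "vendor": "X", "lot": "B", "passed": False},
    {"run": 3, "vendor": "Y", "lot": "C", "passed": True},
    {"run": 4, "vendor": "X", "lot": "B", "passed": False},
    {"run": 5, "vendor": "Y", "lot": "C", "passed": True},
]

def failure_rate_by_lot(records):
    """Group runs by (vendor, lot) and compute each lot's failure rate."""
    tally = defaultdict(lambda: [0, 0])  # (vendor, lot) -> [failures, total]
    for r in records:
        key = (r["vendor"], r["lot"])
        tally[key][0] += 0 if r["passed"] else 1
        tally[key][1] += 1
    return {key: fails / total for key, (fails, total) in tally.items()}

print(failure_rate_by_lot(runs))
# {('X', 'B'): 1.0, ('Y', 'C'): 0.0}
```

Because the lot metadata was recorded at execution time rather than reconstructed afterward, a consistently failing lot (here, vendor X lot B) stands out immediately.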

 

EB: So, if you were to think about the whole ChatGPT and AI approach that is taking off right now, independent of L7|ESP, but still thinking about the entire process, what are some of the pros and cons resulting from that?

VR:  I think it is a little too early. I know ChatGPT is already out of the cage; it is running wild, and you cannot ignore it for sure. There are lots of approaches one can take and optimizations that can help businesses. We are talking to a few customers that are providing us with examples of actual use cases where it will help the business. I want to make clear that I am not a big believer in using technology for the sake of technology. I am always a believer in saying that technology needs to add value, either in terms of bringing a drug to market faster or making a process more efficient to save money. It must make some economic sense to use the system, not just, “Oh, it’s cool technology.”

Certainly, we at L7 are also playing around with ChatGPT. From a generative AI perspective, it creates some interesting possibilities, and hopefully in the future one can say, “Hey ChatGPT, generate an experiment using the Illumina NovaSeq X sequencer, but for RNA extraction use the QIAGEN QIAcube HT platform in combination with reagents from vendor A.” With L7|ESP, you already do not have to manually create such a pipeline. It will learn from how you previously ran your experiments and hopefully come 99.9% close to creating that experiment. In L7|ESP, we have defined a meta language for representing experiments. So, we can teach ChatGPT not just English, but also meta languages that are proprietary to L7 Informatics. And then we can teach it to rapidly create new experiments without having a scientist create them.



Brigitte Ganter, Ph.D. – enlightenbio Guest Blogger

Brigitte Ganter is currently Sr. Director of Product Marketing at L7 Informatics. Prior to her work at L7, Brigitte founded enlightenbio LLC, where, as General Manager, she oversaw all aspects of the company’s activities, with a particular focus on product management and product marketing services, including market research reports. Brigitte holds a Ph.D. from the Swiss Federal Institute of Technology Zurich (ETH Zurich) and conducted her postdoctoral work at Stanford University. Her broad industry experience includes positions of increasing responsibility, predominantly as Director of Product Management, at several biotechnology/technology startups (Iconix Biosciences, Ingenuity Systems [now a QIAGEN company], DNAnexus).