Agentic AI in Life Sciences: What’s Real, What’s Hype, and What It Actually Takes
by James Ryan, Chris Burke, and Sean Hinds | posted on April 23, 2026
Ask ten people in the industry today what an “agent” is, and you might get twenty different answers. What gets called “agentic” today is, honestly, a harness that collates and curates information for an LLM, gives it some memory, and logs what it did. That’s genuinely useful for augmenting expert work: process document analysis, deviation investigation, report drafting, and ad-hoc data querying. But it’s not the autonomous digital workforce people are being sold (yet, anyway).
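To make that concrete: stripped of vendor framing, the harness pattern looks roughly like the sketch below. The class and function names are illustrative placeholders, not any particular product's API, and the retrieval and LLM calls are deliberately left abstract.

```python
# Illustrative sketch of the "harness" pattern described above: collate context,
# call an LLM, keep some memory, and log what happened. Names are placeholders,
# not any specific vendor's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentHarness:
    memory: list[str] = field(default_factory=list)       # prior turns the LLM can see
    audit_log: list[dict] = field(default_factory=list)   # everything it did, for review

    def run(self, task: str) -> str:
        context = self.collate_context(task)               # pull relevant documents/records
        prompt = "\n\n".join([*self.memory, *context, task])
        draft = self.call_llm(prompt)                       # the LLM produces a draft, not a decision
        self.memory.append(f"Task: {task}\nDraft: {draft}")
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "task": task,
            "context_items": len(context),
            "output": draft,
        })
        return draft                                        # handed to a human for review

    def collate_context(self, task: str) -> list[str]:
        raise NotImplementedError  # e.g. search batch records, SOPs, prior CAPAs

    def call_llm(self, prompt: str) -> str:
        raise NotImplementedError  # any LLM provider; swappable
```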
Gartner¹ published three research notes on agentic AI in March 2026, and, taken together, they describe an ecosystem that’s maturing fast but is earlier in its development than most vendors want to admit. For pharma and life sciences leaders trying to decide where to spend attention and budget this year, the honest read is more useful than the hyped one.
The Protocols: One Worth Adopting, One Worth Watching
Two standards have emerged for how agents talk to tools and to each other: MCP (Model Context Protocol) and A2A (Agent2Agent). The short version:
MCP is real and useful today. It’s the standard for connecting an agent to tools, data sources, and APIs. If you have systems an agent needs to read from or act on, MCP is how you expose them. Anthropic donated MCP to the Linux Foundation in late 2025, and adoption across the industry is broad enough that standardizing on it is a low-regret decision.
A2A is a bet on a future that hasn’t arrived. It’s designed for agents from different systems to discover each other and coordinate across organizational boundaries. That’s a real problem eventually (CDMOs talking to sponsors, CROs talking to regulatory partners), but most organizations don’t have that problem yet. Gartner’s own data suggests most software engineering teams make minimal or no use of A2A in their work. Our recommendation: understand what it’s for, implement systems with it in mind, but don’t let its absence block anything you’re doing now.
The practical test: if you’re being sold something that requires A2A to deliver near-term value, ask hard questions about the use case.
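For teams that want to see what “exposing a system over MCP” actually involves, here is a minimal sketch using the MCP Python SDK’s FastMCP helper. The LIMS status lookup is a hypothetical stand-in for whatever system you would really expose; the shape of the code is the point, not the specific tool.

```python
# Minimal MCP server exposing one read-only tool, using the MCP Python SDK's
# FastMCP helper. The sample_status lookup is a hypothetical stand-in for a
# real LIMS/MES query -- hard-coded here so the sketch stays self-contained.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("lims-demo")


@mcp.tool()
def sample_status(sample_id: str) -> str:
    """Return the current workflow status of a sample (illustrative only)."""
    # In practice this would call your LIMS API instead of a fake lookup table.
    fake_db = {"S-1001": "In QC review", "S-1002": "Released"}
    return fake_db.get(sample_id, "Unknown sample")


if __name__ == "__main__":
    # Runs over stdio so an MCP-capable client (an agent host) can connect to it.
    mcp.run(transport="stdio")
```

The same shape applies whether the underlying system is a LIMS, an ELN, or a document store; what changes is the implementation behind the tool, not the interface the agent sees.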
What Agents Actually Do Well (And What They Don’t)
The “agentic AI will monitor your execution flow for out-of-spec conditions and alert humans” pitch is ubiquitous right now. It’s also something software solutions have done for decades without any LLMs involved. Deterministic systems belong in the critical path of regulated decisions. You don’t want an LLM deciding whether to release a batch.
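To underline how little that monitoring pitch needs an LLM, here is the kind of deterministic out-of-spec check process software has run for decades. The spec limits are invented for illustration.

```python
# A deterministic out-of-spec check -- the kind of logic process software has done
# for decades with no LLM involved. Limits are illustrative, not from any real process.
SPEC_LOW, SPEC_HIGH = 6.8, 7.4   # e.g. pH limits for a hypothetical process step


def check_measurement(value: float) -> str | None:
    """Return an alert string if the value is out of spec, else None."""
    if value < SPEC_LOW:
        return f"OUT OF SPEC: {value} below lower limit {SPEC_LOW}"
    if value > SPEC_HIGH:
        return f"OUT OF SPEC: {value} above upper limit {SPEC_HIGH}"
    return None


alert = check_measurement(7.6)
if alert:
    print(alert)  # notify the responsible human; no model in the loop
```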
Where LLM-based agents genuinely earn their keep in life sciences today:
- Reading unstructured process documents and turning them into structured workflow definitions. This is real, and it’s work that used to take weeks of consultant time per process.
- Summarizing and investigating deviations. Not detecting them (SPC does that fine), but pulling relevant context from batch records, maintenance logs, and prior CAPAs, and drafting an investigation report for human review (sketched after this list). That’s real labor displaced.
- Ad-hoc analysis against structured data. When your APIs and schemas are legible to an LLM, scientists and analysts can ask questions that previously required a data engineer. We’ve seen this internally: our own development velocity on deterministic content has gone up substantially because our APIs are structured in ways LLMs can reason about.
- Drafting, not deciding. Agents are good at producing first drafts of structured outputs (reports, summaries, proposed CAPAs, annotated workflows) for human review. And the final call in a regulated context isn’t theirs to make: that’s not a capability question, it’s an accountability one. Agents can help better inform the accountable humans.
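To make the “drafting, not deciding” pattern concrete, here is a sketch of a deviation-investigation draft, using the Anthropic Python SDK as one example provider. The context records, the model choice, and the wrapper function are all illustrative assumptions, and the output goes to a human reviewer, not into the record.

```python
# Sketch of the "draft, don't decide" pattern for a deviation investigation.
# Uses the Anthropic Python SDK as one example LLM provider; the helper function,
# model choice, and record contents are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def draft_deviation_report(deviation_summary: str, context_records: list[str]) -> str:
    """Produce a first-draft investigation report for human review.
    The agent drafts; the accountable human decides."""
    prompt = (
        "Draft a deviation investigation report for QA review.\n\n"
        f"Deviation: {deviation_summary}\n\n"
        "Relevant records:\n" + "\n---\n".join(context_records) +
        "\n\nStructure the draft as: description, relevant history, possible root causes, "
        "and proposed CAPAs for reviewer consideration. Flag anything uncertain."
    )
    response = client.messages.create(
        model="claude-sonnet-4-5",   # illustrative model choice
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text  # a draft, routed to a human reviewer


# context_records would come from batch records, maintenance logs, and prior CAPAs,
# pulled by deterministic retrieval -- e.g. over MCP tools like the one sketched earlier.
```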
None of this requires autonomous decision-making. All of it requires that the underlying data and workflows are structured, well-documented, and accessible in ways the agent can reason about. The quality of the substrate matters more than the cleverness of the agent.
Foundation-First vs. Iterative, and Why Substrate Matters Either Way
A framing we often see is “build the governed data and execution layer first, then deploy agents.” There’s something to that, but it risks becoming waterfall dressed up in 2026 vocabulary. The organizations most likely to extract real value are the ones that deploy scoped agents into the platforms they already have, in contexts where being wrong is cheap, and expand governance alongside capability rather than ahead of it.
That means starting in low-risk contexts: discovery research, internal knowledge work, and process document analysis (places where an agent’s mistake produces a draft to be corrected rather than a deviation to be investigated). Learn which governance controls actually get exercised in practice. Expand toward regulated workflows as the tooling matures and your organization’s comfort with it grows. In 2026, LLMs don’t belong in the batch-release decision path.
What matters either way is substrate quality. An agent operating against a pile of disconnected systems, inconsistent schemas, and undocumented APIs will produce worse results than an agent operating against structured, orchestrated workflows, even if the LLM is identical. Whether you’re building that substrate ahead of your agents or alongside them, it’s the work that determines how much value you get.
What to Actually Do in 2026
If you’re a pharma or life sciences leader trying to figure out how to approach agentic AI this year, our honest advice:
- Standardize on MCP. It’s the one protocol bet that’s low-regret today.
- Deploy agents into what you already have, where you can. The highest-value early use cases (document analysis, deviation investigation, ad hoc querying, draft generation) don’t require replatforming: they require exposing your existing systems to agents through well-structured interfaces.
- Start with augmentation, not automation. Draft, summarize, investigate, propose. Let humans decide.
- Invest in substrate quality. The limiting factor on agent usefulness is almost always the state of the data and workflow definitions they’re reasoning over, not the model. Structured, orchestrated processes give agents something coherent to work with; fragmented ones don’t.
- Be skeptical of anyone who claims that agents will monitor your regulated workflows autonomously. Broader analysis of patterns and possible areas of attention, perhaps, but autonomous monitoring of regulated processes is not an LLM-based agent use case.
The organizations that get real value from agentic AI over the next few years won’t be the ones that deployed the most agents fastest. They’ll be the ones that understand what agents are actually good at, deploy them there, and leave the regulated critical path to the deterministic systems that belong there.
Where L7 Fits
We built L7|ESP® as a workflow orchestration platform for life sciences long before anyone was talking about agents. It unifies LIMS, ELN, MES, scheduling, and inventory functions into a single system where scientific processes are explicitly modeled, data is structured, and compliance requirements are enforced at the workflow level rather than bolted on.
That turns out to be useful for agentic patterns, for a reason that’s less about agents and more about structure: LLMs work better when the thing they’re reasoning over is contextualized, well-organized, and well-described. Our APIs are LLM-legible because they were designed to be process-aware from the start. We’ve used this internally to accelerate our own deterministic content development (the content that actually runs customer workflows) and to enable ad-hoc data analysis that previously required engineering support. When we help customers codify their processes into L7|ESP, we’re doing the substrate work that makes agentic patterns viable, whether or not agents are on the immediate roadmap.
L7|SYNAPSE is where we’re pulling agent-oriented capabilities together: MCP-based integration, contextual retrieval over L7|ESP data, and the harness pieces we described earlier. It supports MCP today and will support A2A as that protocol’s value becomes concrete. It’s not a replacement for thinking carefully about where agents fit in your workflows; it’s a place for the pieces that do fit to operate against a substrate that makes them more reliable.
Maybe the definition of “agent” will settle out, maybe it will change entirely, maybe it will disappear in favor of a slicker new concept. More important is the substrate question: whether your scientific data, workflow definitions, and APIs are contextualized and coherent enough for anything to reason over reliably. This matters regardless of who is doing the reasoning, human or LLM or otherwise. That’s the work we’ve been doing with customers for years, and it’s why organizations building on L7 are well-positioned for whatever “agentic” turns out to actually mean.
¹ Gartner, “When to Use MCP Versus A2A for Building Multiagent Solutions,” Cary Pillers and Steve Deng, March 23, 2026 (ID G00845015); Gartner, “Best Practices to Mitigate Security Risks With Agentic Coding Tools,” Aaron Lord and Manjunath Bhat, March 24, 2026 (ID G00847413); and Gartner, “How to Adopt Anthropic’s Claude Code at Scale,” C.A. Swan, March 17, 2026 (ID G00850617).