The Syntax Tax

Most organizations have at least one piece of enterprise software that could do more than it is being used for. Salesforce is the obvious one. The platform can be bent into almost any workflow application, yet most teams use it as a CRM with dashboards. The same pattern shows up in R and MATLAB for statistics, Galaxy and CLC Workbench in bioinformatics, Aspen and COMSOL in process modeling. Each of these tools has a long tail of capability sitting behind a learning curve most domain experts never cross.

The barrier has never been conceptual. A BD director can articulate “win rate by agency, by NAICS, by team composition” without trouble. So can the biologist who wants differential expression with batch correction across a four-arm study, or the process engineer who needs a sensitivity analysis on feed composition between the reference case and an upset case. Knowing what you want has never been the problem. Getting the tool to produce it has been, because the tool speaks a language the domain expert does not.
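To make the gap concrete: the BD director's question is a few lines of analysis code once someone translates it. A minimal sketch in Python with pandas, where the data frame, its column names, and the values are all hypothetical stand-ins for a CRM export:

```python
import pandas as pd

# Hypothetical export of past opportunities; the column names are assumptions,
# not any particular CRM's schema.
opps = pd.DataFrame({
    "agency": ["DOE", "DOE", "HHS", "HHS", "DOE"],
    "naics":  ["541511", "541511", "541611", "541611", "541512"],
    "won":    [True, False, True, True, False],
})

# "Win rate by agency, by NAICS": group and take the mean of the boolean outcome.
win_rates = (
    opps.groupby(["agency", "naics"])["won"]
        .mean()
        .rename("win_rate")
        .reset_index()
)
print(win_rates)
```

The point is not that this snippet is hard; it is that the director can state the question precisely while the `groupby` vocabulary, not the concept, is what stood in the way.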

The way that gap has traditionally been closed is through a specialist partnership: a Salesforce developer, a bioinformatician, an R consultant, a process modeling engineer. The domain expert describes what they want. The specialist translates it into syntax the tool understands. The two iterate until the output lines up with the question. It works, and the specialist’s training is the reason it works. It also means the work happens in a serial handoff, and each round trip is time the domain expert spends waiting and time the specialist spends on translation rather than architecture.

What the assistants actually do

AI coding assistants do not write perfect Apex, or perfect ggplot, or perfect Aspen plug-ins. What they do is compress the first translation step. The pattern library is already in them. A domain expert can describe what they want in the vocabulary of the problem and the assistant produces a working first draft in the vocabulary of the tool. When the draft is wrong, iteration is conversational: “that is close, but it needs to exclude opportunities under fifty thousand dollars,” or “re-run with the batch effect nested inside the treatment arm,” and the next draft adjusts.
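That kind of conversational revision usually corresponds to a one- or two-line change in the generated draft. A hypothetical illustration of the first request in the paragraph above, again using invented data and column names:

```python
import pandas as pd

# Invented opportunity data; amounts in dollars.
opps = pd.DataFrame({
    "amount": [30_000, 120_000, 75_000, 45_000],
    "won":    [True, True, False, True],
})

# First draft: win rate over everything.
draft_rate = opps["won"].mean()

# Revision after "exclude opportunities under fifty thousand dollars":
# one boolean filter added, nothing else changes.
revised = opps[opps["amount"] >= 50_000]
revised_rate = revised["won"].mean()
print(draft_rate, revised_rate)
```

The domain expert never needs to know that the revision is a boolean mask; they only need to notice that the numbers now answer the question they actually asked.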

The floor has moved. Reaching into the long tail of these tools used to require either a specialist engagement or years of platform expertise. It now takes clear domain knowledge and the willingness to iterate until the output matches the question.

Where the work shifts

The Salesforce case is concrete. A BD team running federal proposals can now build, in a single afternoon, the kind of integration that used to require a scoped engagement: pulling patterns out of past opportunities so win rates are visible, matching active solicitations against those patterns, and tying project delivery data back to the original win conditions so “what is worth pursuing” becomes a question with an answer. Once that draft exists, the Salesforce architect has a working artifact to review, harden against edge cases, and fold into the broader platform. That is a better use of their time than writing the boilerplate first pass.

The same redistribution plays out wherever the pattern holds. A lab scientist can sketch their own transcriptomic analysis and hand a reproducible version to the bioinformatics team for review and production hardening. A process engineer can wire up sensitivity sweeps against their own model before bringing the modeling team in to pressure-test assumptions. An analyst can get a figure in front of the team in R without waiting for a statistician, freeing that statistician to spend time on the harder interpretive questions.
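The sensitivity-sweep case is worth sketching, because the scaffolding is the part an assistant drafts instantly. In the toy version below, `conversion` is a made-up stand-in for the process model; a real sweep would call the simulator's API at that point:

```python
# Toy stand-in for a process model; the curve is purely illustrative.
def conversion(feed_fraction: float) -> float:
    return 0.9 * feed_fraction / (0.1 + feed_fraction)

# Sweep feed composition between a reference case and an upset case
# (the 0.40 and 0.70 endpoints are invented for the example).
reference, upset = 0.40, 0.70
points = [reference + i * (upset - reference) / 10 for i in range(11)]
sweep = [(x, conversion(x)) for x in points]
for x, y in sweep:
    print(f"feed={x:.2f} conversion={y:.3f}")
```

The loop is boilerplate; deciding whether the endpoints, the step count, and the response variable are the right ones is the engineer's judgment, which is exactly the division of labor the paragraph above describes.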

In each case, nobody’s expertise stops mattering. What moves is the part of the work the expert is doing: away from boilerplate translation, toward the judgment calls that are the reason they were hired in the first place.

The constraint that is left

When syntax stops being the bottleneck, domain knowledge becomes it. That is the right constraint to have.

The domain expert holds the BD process, or the biological question, or the plant’s operating envelope. With the translation layer compressed, that expert can work directly with the tool instead of queueing to describe the work through a handoff. The specialist on the other side of the handoff stops spending hours writing boilerplate and starts spending them on architecture, review, and the edge cases that an AI-generated first draft does not catch.

The organizational implication is quiet but permanent. Custom automation against enterprise tools used to be a capital project that had to be justified, scoped, and staffed before it could begin. Today it is closer to an operational activity that starts with a conversation and gets formalized once it proves useful. The work still has to be done well. It just no longer has to begin with a six-month engagement.

Technical skill did not stop mattering. The translation between domain and tool stopped being a bottleneck, and specialist hours moved up the value stack to the work that actually needed their judgment. That is the change.