We design, train, deploy, and maintain supervised machine learning models that turn your operational data into specific predictions — churn risk, demand forecasts, anomaly flags — your team can act on before the event. Built on your data. Deployed in your stack. Maintained against drift.
Engagements start at $5,000 for the build. Variable from there based on data availability and complexity. Hosting and dashboards are optional, never required.
A supervised learning model does one thing well. It learns the relationship between inputs you already collect (customer tenure, last-order date, support-ticket count) and a single output you want to predict (will this customer churn; how many units will sell; is this transaction worth flagging).
Unlike a chat assistant — which produces different responses each time, calibrated to a fuzzy range of acceptable outputs — a supervised model produces a single calibrated prediction with a confidence interval, repeatable for the same inputs, auditable against actuals once ground truth lands. It's the right tool when you need to compare your model against the world and improve it over time.
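A minimal sketch of what "repeatable and auditable" means in practice, using scikit-learn and made-up churn features (tenure, days since last order, ticket count are illustrative, not a prescribed feature set):

```python
from sklearn.ensemble import GradientBoostingClassifier

# Toy training rows: [tenure_months, days_since_last_order, support_tickets]
X = [
    [24, 10, 0], [3, 90, 4], [36, 5, 1], [2, 120, 6],
    [18, 30, 2], [1, 60, 5], [48, 7, 0], [5, 75, 3],
]
y = [0, 1, 0, 1, 0, 1, 0, 1]  # 1 = churned

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# The same input always yields the same score, so every prediction can
# later be audited against what actually happened.
p1 = model.predict_proba([[4, 80, 3]])[0][1]
p2 = model.predict_proba([[4, 80, 3]])[0][1]
assert p1 == p2
```

The fixed `random_state` plus fixed inputs is what makes the score reproducible; a chat assistant offers no equivalent guarantee.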
Most middle-market firms have the data to answer these questions today — what they lack is the assembled training table and the model on top of it. A first model ships in roughly four weeks; subsequent models compound on the data infrastructure the first one created.
Rank every customer by probability of churn this quarter. The CRO doesn't guess where to spend retention budget — the top decile typically contains 30-50% of all churners.
A per-SKU, per-week order quantity that meaningfully outperforms category averages on the long tail. Forecasts ship with calibrated confidence intervals so the buyer can read uncertainty directly.
Score every transaction, sensor reading, or expense report on its similarity to past anomalies. The reviewer's time goes to the cases that matter; routine cases close themselves.
Predict customer-level price sensitivity from order history, segment, and competitive context. Margin compression where it doesn't move conversion, margin protection where it does.
When will a contract close, a payment land, a piece of equipment fail? Survival models on your historical data produce calibrated time-to-event predictions for capacity and cash planning.
Route inbound — leads, tickets, claims, applications — to the right person, queue, or workflow based on patterns learned from your historical routing. The triage that used to consume a senior operator's morning.
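To make the "forecasts ship with confidence intervals" claim concrete: one common approach (a sketch, not the only method) is to fit one quantile model for the low end and one for the high end. Data here is synthetic; in a real engagement the interval's calibration is checked against held-out history.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Toy two-year weekly demand history for one SKU: mild trend plus noise.
weeks = np.arange(104).reshape(-1, 1)
demand = 50 + 0.3 * weeks.ravel() + rng.normal(0, 5, 104)

# One model per quantile gives the buyer a range, not a point guess.
lo = GradientBoostingRegressor(loss="quantile", alpha=0.1, random_state=0).fit(weeks, demand)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.9, random_state=0).fit(weeks, demand)

next_week = [[104]]
low, high = lo.predict(next_week)[0], hi.predict(next_week)[0]
# The buyer reads: "next week's demand should land between low and high."
```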
Most of the work in any supervised learning engagement is data engineering, not modeling. A firm with a clean training table is two-thirds of the way through.
We pull your historical data into a single training table — customer attributes, transaction history, support records, behavioral logs, exogenous signals. Most of the engagement work lives here, not in the modeling.
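In code, "a single training table" usually means aggregating each source to one row per entity and joining. A sketch with pandas and hypothetical extracts (the column names and sources are illustrative; real ones come from your CRM, order system, and helpdesk):

```python
import pandas as pd

# Hypothetical source extracts.
customers = pd.DataFrame({"customer_id": [1, 2, 3], "tenure_months": [24, 3, 18]})
orders = pd.DataFrame({"customer_id": [1, 1, 2, 3], "amount": [120, 80, 40, 200]})
tickets = pd.DataFrame({"customer_id": [2, 2, 3], "ticket_id": [10, 11, 12]})

# Aggregate each source to one row per customer, then join everything
# into the single table the model will train on.
order_feats = orders.groupby("customer_id")["amount"].agg(["count", "sum"]).add_prefix("order_")
ticket_feats = tickets.groupby("customer_id").size().rename("ticket_count")

training_table = (
    customers.set_index("customer_id")
    .join(order_feats)
    .join(ticket_feats)
    .fillna(0)  # customers with no tickets get 0, not a missing value
)
```

The hard part in practice is not the joins themselves but agreeing on definitions (what counts as an order, which timestamp is authoritative), which is why this step dominates the engagement.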
Feature engineering, model selection (gradient-boosted trees, survival models, isolation forests — whatever fits the problem), and validation on data the model has not seen. You receive an honest accuracy report with named failure modes.
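"Validation on data the model has not seen" is a holdout split: the accuracy number is measured only on rows excluded from training. A sketch on synthetic stand-in data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real engagement validates on held-out history.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + rng.normal(0, 0.5, 500) > 0).astype(int)

# The split is the point: auc below is computed only on rows the model
# never saw during training, which is what makes the report honest.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
```

For time-ordered problems the split would be chronological rather than random, so the test set sits strictly after the training window.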
The model ships in the simplest shape that fits your operating workflow: a scheduled batch job that writes scores to your database, a real-time API your app can call, or a custom dashboard if your team will read the predictions directly.
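The scheduled-batch shape can be very small. A sketch using an in-memory SQLite database as a stand-in for your warehouse (the table name and score tuples are hypothetical):

```python
import sqlite3
from datetime import date

# Hypothetical scores produced by a trained model earlier in the pipeline.
scores = [(1, 0.82), (2, 0.11), (3, 0.45)]  # (customer_id, churn_probability)

# The batch job writes dated scores to a table your CRM or BI tool
# already reads; no new consumer-side integration is needed.
conn = sqlite3.connect(":memory:")  # stand-in for your database
conn.execute("CREATE TABLE churn_scores (customer_id INTEGER, score REAL, run_date TEXT)")
conn.executemany(
    "INSERT INTO churn_scores VALUES (?, ?, ?)",
    [(cid, s, date.today().isoformat()) for cid, s in scores],
)
conn.commit()

row_count = conn.execute("SELECT COUNT(*) FROM churn_scores").fetchone()[0]
```

Wrapped in a cron entry or workflow scheduler, this is the whole deployment for teams that consume predictions through existing tools.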
Models drift. We monitor predictions vs. actuals, retrain on rolling windows of fresh data, and govern the upstream features the model depends on so nothing breaks silently when an upstream system changes.
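A stripped-down version of the predictions-vs-actuals check: once ground truth lands for a window, recompute error and compare it to the error measured at deployment. All numbers below are made up, and the 1.5x threshold is a policy choice, not a constant.

```python
def mean_abs_error(preds, actuals):
    """Average absolute gap between predicted scores and observed outcomes."""
    return sum(abs(p - a) for p, a in zip(preds, actuals)) / len(preds)

baseline_mae = 0.12            # error measured when the model shipped
recent_preds = [0.8, 0.2, 0.6, 0.4]
recent_actuals = [1, 0, 0, 1]  # churn outcomes observed this window

recent_mae = mean_abs_error(recent_preds, recent_actuals)
# If recent error drifts well past the baseline, trigger a retrain on a
# rolling window of fresh data instead of failing silently.
needs_retrain = recent_mae > 1.5 * baseline_mae
```

The same loop also watches upstream feature distributions, since a renamed column or a changed unit in a source system shifts inputs before it ever shows up in error metrics.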
The build fee is what it costs to design, train, validate, and deploy. The monthly retainer (when chosen) covers the maintenance, monitoring, and ongoing tuning that keeps the model honest. Self-host any time — the code is yours.
From $5,000
Build · variable with complexity
We deliver a trained model, the code that ingests data and produces predictions, and a runbook so your team can operate it. You self-host on your infrastructure. Ideal when you have engineering capacity and prefer to own the keys.
From $5,000
Build · variable with complexity
$75 / mo
covers retraining + monitoring + serving
Same build, but we keep the model running. Monthly retainer covers retraining on fresh data, drift monitoring, and serving via API or scheduled batch. Ideal when you'd rather your team consume predictions than operate the pipeline.
From $5,000
Build · variable with complexity
From $150 / mo
scales with dashboard complexity
When the operator wants to sort, filter, and drill into the predictions directly — not consume them through someone else's tool. Custom dashboard tuned to your model's outputs and your team's workflow. Higher monthly because the dashboard ships with the same care as the model.
Final scope and pricing agreed in the audit. Data availability is the most common scope variable, not model complexity.
Tell us the question your team is currently answering with a spreadsheet hunch. We send a fixed-price scope within 48 hours. The brief is yours either way.