Predictive models · supervised learning, deployed

The questions a spreadsheet can’t answer.

We design, train, deploy, and maintain supervised machine learning models that turn your operational data into specific predictions — churn risk, demand forecasts, anomaly flags — that your team can act on before the event. Built on your data. Deployed in your stack. Maintained against drift.

Engagements start at $5,000 for the build. Variable from there based on data availability and complexity. Hosting and dashboards are optional, never required.

What it is

Generative AI writes the next sentence. Supervised models predict the next number.

A supervised learning model does one thing well. It learns the relationship between inputs you already collect (customer tenure, last-order date, support-ticket count) and a single output you want to predict (will this customer churn; how many units will sell; is this transaction worth flagging).

Unlike a chat assistant — which produces different responses each time, calibrated to a fuzzy range of acceptable outputs — a supervised model produces a single calibrated prediction with a confidence interval, repeatable for the same inputs, auditable against actuals once ground truth lands. It’s the right tool when you need to compare your model against the world and improve it over time.
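
A minimal sketch of that difference, using scikit-learn and made-up churn features (tenure, recency, and ticket count are stand-ins, not a prescribed feature set):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Stand-in training table: three inputs you already collect,
# one outcome you already know for past customers.
X = rng.normal(size=(1000, 3))           # tenure, recency, tickets (scaled)
y = (X[:, 1] - X[:, 0] + rng.normal(size=1000) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

customer = [[1.2, 0.8, -0.3]]
p1 = model.predict_proba(customer)[0, 1]
p2 = model.predict_proba(customer)[0, 1]
assert p1 == p2                          # same inputs, same prediction
print(f"churn probability: {p1:.2f}")    # a number you can audit later
```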

Read the long-form essay →

Where it earns its keep

Six recurring questions a supervised model answers cleanly.

Most middle-market firms have the data to answer these questions today — what they lack is the assembled training table and the model on top of it. A first model ships in roughly four weeks; subsequent models compound on the data infrastructure the first one created.

Churn risk

Rank every customer by probability of churn this quarter. The CRO doesn't guess where to spend retention budget — the top decile typically contains 30-50% of all churners.
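
A sketch of how that decile claim gets checked on a holdout quarter, with synthetic stand-ins for scores and outcomes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins: actual churn outcomes and the model's churn scores.
churned = rng.random(5000) < 0.10
scores = np.where(churned, rng.beta(5, 2, 5000), rng.beta(2, 5, 5000))

# What share of all churners sit in the top 10% of scores?
top_decile = scores >= np.quantile(scores, 0.90)
capture = churned[top_decile].sum() / churned.sum()
print(f"churners captured in top decile: {capture:.0%}")
```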

SKU demand forecasting

A per-SKU, per-week order quantity that meaningfully outperforms category averages on the long tail. Forecasts ship with calibrated confidence intervals so the buyer can read uncertainty directly.
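
One way to produce those intervals (a sketch; quantile regression with gradient boosting is one common route, not the only one):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)

# Stand-in SKU-week features: lagged demand, promo flag, seasonality.
X = rng.normal(size=(2000, 3))
y = 50 + 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(scale=8, size=2000)

# One model per quantile; the 10th-90th band is the interval the
# buyer reads as "order somewhere between these two numbers".
models = {q: GradientBoostingRegressor(loss="quantile", alpha=q,
                                       random_state=0).fit(X, y)
          for q in (0.1, 0.5, 0.9)}

x_next = rng.normal(size=(1, 3))
lo, mid, hi = (models[q].predict(x_next)[0] for q in (0.1, 0.5, 0.9))
print(f"next week: {mid:.0f} units (80% interval {lo:.0f}-{hi:.0f})")
```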

Anomaly detection

Score every transaction, sensor reading, or expense report on how far it departs from your normal patterns. The reviewer's time goes to the cases that matter; routine cases close themselves.
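
A sketch with an isolation forest (one of the model families named later on this page), on stand-in transaction features:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)

# Stand-in records: mostly routine, a handful far from the pattern.
routine = rng.normal(size=(2000, 3))
odd = rng.normal(loc=4, size=(20, 3))
X = np.vstack([routine, odd])

iso = IsolationForest(random_state=0).fit(X)
scores = -iso.score_samples(X)            # higher = more anomalous

# Only the top 1% reach a human; everything else closes itself.
flagged = np.where(scores >= np.quantile(scores, 0.99))[0]
print(f"{len(flagged)} of {len(X)} records queued for review")
```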

Pricing & elasticity

Predict customer-level price sensitivity from order history, segment, and competitive context. Protect margin where price doesn't move conversion; concede it only where it does.
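
A sketch of the mechanic: fit a win-probability model with price as a feature, then sweep price for one customer while holding everything else fixed (all names and numbers are stand-ins):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(4)

# Stand-in order history: quoted price, segment, competitor index -> won?
price = rng.uniform(80, 120, 3000)
segment = rng.integers(0, 3, 3000)
comp = rng.normal(size=3000)
won = (rng.random(3000) <
       1 / (1 + np.exp(0.08 * (price - 100) - 0.3 * comp))).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(
    np.column_stack([price, segment, comp]), won)

# Sensitivity for one customer: vary price, hold the rest fixed.
grid = np.linspace(85, 115, 7)
probes = np.column_stack([grid, np.full(7, 1), np.zeros(7)])
for p, w in zip(grid, model.predict_proba(probes)[:, 1]):
    print(f"price {p:6.2f} -> win probability {w:.2f}")
```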

Time-to-event

When will a contract close, a payment land, a piece of equipment fail. Survival models on your historical data produce calibrated time-to-event predictions for capacity and cash planning.
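
A sketch with a Cox model from the lifelines library (one common choice for survival modeling; your data and covariates will differ):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)

# Stand-in equipment history: load drives time-to-failure; units
# still running at month 12 are censored (observed = 0).
n = 500
load = rng.normal(size=n)
age = rng.normal(size=n)
t = rng.exponential(np.exp(2.0 - 0.5 * load))
df = pd.DataFrame({"load": load, "age": age,
                   "duration": np.minimum(t, 12.0),
                   "observed": (t < 12.0).astype(int)})

cph = CoxPHFitter().fit(df, duration_col="duration", event_col="observed")
# Median predicted time-to-failure per unit, for capacity planning.
print(cph.predict_median(df[["load", "age"]]).head())
```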

Classification

Route inbound — leads, tickets, claims, applications — to the right person, queue, or workflow based on patterns learned from your historical routing. The triage that used to consume a senior operator's morning.
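
A toy sketch of learned routing (a tf-idf plus logistic regression pipeline; a real build trains on thousands of historical tickets, not six):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-in history: inbound text and the queue a human routed it to.
tickets = [
    "invoice shows the wrong amount for March",
    "cannot log in after password reset",
    "need a quote for 500 additional seats",
    "refund the duplicate invoice from last week",
    "app crashes when exporting the report",
    "want to upgrade our plan before renewal",
]
queues = ["billing", "support", "sales", "billing", "support", "sales"]

router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(tickets, queues)

print(router.predict(["charged twice on one invoice"]))  # likely 'billing'
```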

How we work

Four phases. Each depends on the previous.

Most of the work in any supervised learning engagement is data engineering, not modeling. A firm with a clean training table is two-thirds of the way through.

  1. Ingest

    We pull your historical data into a single training table — customer attributes, transaction history, support records, behavioral logs, exogenous signals. Most of the engagement work lives here, not in the modeling; the first sketch after this list shows the shape of that table.

  2. Train

    Feature engineering, model selection (gradient-boosted trees, survival models, isolation forests — whatever fits the problem), and validation on data the model has not seen. You receive an honest accuracy report with named failure modes; the second sketch after this list shows what that holdout validation looks like.

  3. Deploy

    The model ships in the simplest shape that fits your operating workflow: a scheduled batch job that writes scores to your database, a real-time API your app can call, or a custom dashboard if your team will read the predictions directly. The third sketch after this list shows the batch shape.

  4. Maintain

    Models drift. We monitor predictions vs. actuals, retrain on rolling windows of fresh data, and govern the upstream features the model depends on so nothing breaks silently when a source system changes. The fourth sketch after this list shows one simple drift check.
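
First sketch, the training table: a pandas join over hypothetical extracts (your sources and keys will differ):

```python
import pandas as pd

# Stand-in extracts keyed on a shared customer_id.
customers = pd.DataFrame({"customer_id": [1, 2], "tenure_months": [26, 4]})
orders = pd.DataFrame({"customer_id": [1, 1, 2],
                       "order_date": pd.to_datetime(
                           ["2024-01-05", "2024-03-02", "2024-02-20"])})
tickets = pd.DataFrame({"customer_id": [2, 2], "opened": [1, 1]})

# One row per customer, one column per signal.
table = (customers
         .merge(orders.groupby("customer_id")["order_date"].max()
                      .rename("last_order"), on="customer_id", how="left")
         .merge(tickets.groupby("customer_id")["opened"].sum()
                       .rename("ticket_count"), on="customer_id", how="left")
         .fillna({"ticket_count": 0}))
print(table)
```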
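
Second sketch, validation on data the model has not seen (synthetic data; the split is time-ordered because shuffled splits are the most common way an accuracy report flatters itself):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report, roc_auc_score

rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + rng.normal(size=2000) > 0).astype(int)

# Train on the first 80% of history, report on the most recent 20%.
cut = int(0.8 * len(X))
model = GradientBoostingClassifier(random_state=0).fit(X[:cut], y[:cut])

scores = model.predict_proba(X[cut:])[:, 1]
print(f"holdout AUC: {roc_auc_score(y[cut:], scores):.2f}")
print(classification_report(y[cut:], (scores > 0.5).astype(int)))
```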
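
Third sketch, the scheduled-batch shape (an in-memory database and a placeholder model stand in for your warehouse and the trained artifact):

```python
import sqlite3
import pandas as pd
from sklearn.dummy import DummyClassifier

# Placeholder for the trained model the build phase produced.
model = DummyClassifier(strategy="uniform", random_state=0)
model.fit([[0.0], [1.0]], [0, 1])

conn = sqlite3.connect(":memory:")     # stand-in for your database
batch = pd.DataFrame({"customer_id": [101, 102, 103],
                      "recency_days": [3.0, 41.0, 88.0]})

batch["churn_score"] = model.predict_proba(
    batch[["recency_days"]].to_numpy())[:, 1]
batch["scored_at"] = pd.Timestamp.now(tz="UTC").isoformat()

# The nightly write your CRM or BI layer reads from.
batch.to_sql("churn_scores", conn, if_exists="replace", index=False)
print(pd.read_sql("SELECT * FROM churn_scores", conn))
```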
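
Fourth sketch, one simple drift check: the Population Stability Index between a feature's distribution at training time and in production (the thresholds are rules of thumb, not guarantees):

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of one feature."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e = np.histogram(expected, edges)[0] / len(expected) + 1e-6
    a = np.histogram(actual, edges)[0] / len(actual) + 1e-6
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(7)
at_training = rng.exponential(30, 10_000)   # recency when the model trained
in_production = rng.exponential(45, 2_000)  # recency this week

score = psi(at_training, in_production)
print(f"PSI = {score:.2f}"                  # ~0.1: watch; ~0.25: retrain
      + (" -> retrain" if score > 0.25 else ""))
```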

Engagement options

Three ways to engage. Same model, different operating shape.

The build fee is what it costs to design, train, validate, and deploy. The monthly retainer (when chosen) covers the maintenance, monitoring, and ongoing tuning that keeps the model honest. Self-host any time — the code is yours.

01

Build & Handoff. Yours, end-to-end.

From $5,000

Build · variable with complexity

We deliver a trained model, the code that ingests data and produces predictions, and a runbook so your team can operate it. You self-host on your infrastructure. Ideal when you have engineering capacity and prefer to own the keys.

  • Trained model + validation report
  • Code: ingestion, training, scoring
  • Deployment to your environment
  • Runbook + handoff session
  • 30-day post-launch support window
Discuss a build
02 · RECOMMENDED

Hosted Predictions. We run it.

From $5,000

Build · variable with complexity

$75 / mo

covers retraining + monitoring + serving

Same build, but we keep the model running. Monthly retainer covers retraining on fresh data, drift monitoring, and serving via API or scheduled batch. Ideal when you'd rather your team consume predictions than operate the pipeline.

  • Everything in Build & Handoff
  • Hosted scoring API or scheduled batch
  • Quarterly retraining on rolling data
  • Drift monitoring and alerts
  • Same-business-day incident response
Quote a hosted build
03

Hosted + Dashboard. Predictions your team will actually read.

From $5,000

Build · variable with complexity

From $150 / mo

scales with dashboard complexity

When the operator wants to sort, filter, and drill into the predictions directly — not consume them through someone else's tool. Custom dashboard tuned to your model's outputs and your team's workflow. Higher monthly because the dashboard ships with the same care as the model.

  • Everything in Hosted Predictions
  • Custom dashboard tailored to model outputs
  • Drill-down into individual prediction rationale
  • Team access controls + audit trail
  • Quarterly UX iteration based on usage
Discuss the full stack

Final scope and pricing agreed in the audit. Data availability is the most common scope variable, not model complexity.

Brief us in a 30-minute call.

Tell us the question your team is currently answering with a spreadsheet hunch. We send a fixed-price scope within 48 hours. The brief is yours either way.