When Will I Get My Next Client? A Mathy Look at Coach Assignments (with Charts)
At Dancing Dragons, we source clients and assign them to coaches via a ranking / priority system.
The guiding goal is simple: a 5:1 long-run ratio of clients to coaches.
That is not a per-coach cap, and it is not a “right now” snapshot target.
But “on average” hides the interesting part.
If you take a snapshot, some coaches have 0 clients, some have 6–7 concurrent assignments, and many are in the middle.
This post builds a small, explicit math model for that snapshot and uses it to answer three practical questions:
- When will I get my next client?
- How many clients will I have at a time on average?
- How long will a client stay on average?
The goal is not perfect prediction.
The goal is a model that is simple enough to reason about, close enough to reality to be useful, and concrete enough to turn into dashboards driven by the assignments ledger (see src/server/databases/neondb/tables/assignments-table.ts).
At a glance:
- 5:1 target ratio of clients to coaches
- −10% assignment probability per extra current client
- 2 & 10 sessions: the "camel hump" tenure modes
- Birth–death process: the model for load over time
🚨 The Real Problem: long-run ratios can still feel uneven in snapshots
- The platform can be at the right long-run 5:1 client:coach ratio while some coaches are temporarily at 0–2 clients.
- The platform can be at the right long-run ratio while other coaches are temporarily at 6–7+ clients.
- Our policy is not a hard cap: we downrank / deprioritize coaches with more current assignments to create a natural throttle.
💡 The Model: diminishing assignment odds + client churn = a steady-state distribution
We’ll model each coach’s active-client count K as a stochastic process.
- Birth: a client gets assigned to you (so K → K + 1)
- Death: a client unassigns / churns (so K → K - 1)
We then connect that model to:
- Your policy: each additional current client makes you 10% less likely to get a new one in the same timeframe
- Your reality: client tenure is bimodal (“camel humps” near 2 and 10 sessions)
Step 1: Write down the policy as a weight function
Let k be how many clients a coach currently has.
Your policy can be expressed as a simple multiplicative weight:
w(k) = 0.9^k
Interpretation: if two coaches are identical except one has 3 clients and one has 0, the 3-client coach has weight 0.9^3 ≈ 0.729, so is about 27% less likely to receive the next assignment (all else equal).
This is a clean “soft fairness” rule: it doesn’t hard-cap anyone, but it continuously pushes the system toward balance.
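As a sketch, the 10% rule is one line of code. `assignmentShare` below is a hypothetical helper (not the production dispatcher) showing what weight-proportional selection looks like:

```typescript
// A minimal sketch of the 10% rule, assuming w(k) = 0.9^k.
// `assignmentShare` is a hypothetical helper, not the production dispatcher.
function weight(k: number): number {
  return Math.pow(0.9, k);
}

// Probability that coach `i` receives the next assignment when selection
// is proportional to weight among all eligible coaches.
function assignmentShare(loads: number[], i: number): number {
  const total = loads.reduce((sum, k) => sum + weight(k), 0);
  return weight(loads[i]) / total;
}
```

With one coach at 0 clients and one at 3, the busier coach's selection share is 0.729 / (1 + 0.729) ≈ 42%: the same ~27% weight gap as above, expressed as a probability.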
Step 2: Turn “assignments + churn” into a birth–death process
We treat your current client count K(t) as a birth–death Markov process with:
- Birth rate λ_k: how quickly you get assigned a new client when you currently have k
- Death rate μ_k: how quickly clients leave when you currently have k
One minimal model consistent with your policy:
λ_k = λ_0 · 0.9^k
For churn, the simplest assumption is independent per-client churn (each client has its own chance to leave):
μ_k = k · μ
This says: the more clients you have, the more “churn surface area” you have.
What this buys you
This specific birth–death model has an explicit stationary distribution (a steady state).
That’s the math object behind “some coaches have 0 and some have 7” even when your overall system is stable.
In steady state, the probability of having k clients is:
π(k) ∝ (λ_0 / μ)^k · 0.9^(k(k−1)/2) / k!
You don’t need to memorize this.
The important part is the shape:
- The k! in the denominator pushes down high k (overload is possible but increasingly rare)
- The 0.9^(k(k−1)/2) term adds an extra “fairness squeeze” that makes the tail lighter than a plain Poisson
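To see that squeeze concretely, here is a sketch that computes the stationary distribution (truncated at a finite kMax) next to a plain Poisson with the same ratio r = λ_0/μ; the value r = 5 is illustrative, not measured:

```typescript
// Stationary distribution pi(k) ∝ r^k · 0.9^(k(k-1)/2) / k!, truncated,
// versus a plain Poisson with the same ratio r = lambda0/mu (r = 5 is
// an assumed, illustrative value).
function squeezedPmf(r: number, kMax = 30): number[] {
  const raw: number[] = [];
  for (let k = 0; k <= kMax; k++) {
    let term = 1;
    for (let j = 1; j <= k; j++) term *= (r * Math.pow(0.9, j - 1)) / j;
    raw.push(term);
  }
  const z = raw.reduce((a, b) => a + b, 0);
  return raw.map((p) => p / z);
}

function poissonPmf(r: number, kMax = 30): number[] {
  const raw: number[] = [];
  for (let k = 0; k <= kMax; k++) {
    let term = 1;
    for (let j = 1; j <= k; j++) term *= r / j;
    raw.push(term);
  }
  const z = raw.reduce((a, b) => a + b, 0);
  return raw.map((p) => p / z);
}

// Total probability mass at k >= from.
const tailMass = (pmf: number[], from: number) =>
  pmf.slice(from).reduce((a, b) => a + b, 0);
```

At r = 5, the squeezed tail P(K ≥ 8) comes out near 1%, versus roughly 13% for the plain Poisson: the k! term and the 0.9 squeeze together keep overload rare.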
Step 3: What the snapshot distribution looks like (illustrative)
Below is an illustrative steady-state snapshot distribution consistent with a mean near 5.
It’s not claiming your exact numbers.
It’s showing the kind of shape the policy naturally creates: mass around 4–6, a non-trivial left tail (0–2), and a thin right tail (7+).
📊 Distribution of active clients per coach (snapshot)

0 → 5%, 1 → 20%, 2 → 30%, 3 → 30%, 4 → 10%, 5 → 3%, 6 → 1%, 7+ → 1%
Interpretation: even with a “5:1 target”, it can still be normal to see a meaningful share of coaches at 0–2 clients at any moment, depending on demand and churn.
The probability you have 0, 1, 2, 3, 4, 5, or 6 clients (made explicit)
The clean way to phrase this is:
At a random point in time (a snapshot), P(K=k) is the fraction of coaches who have exactly k active clients.
Here is a simple illustrative distribution (the same one in the chart above), written as explicit probabilities:
| Active clients k | Probability P(K=k) |
|---|---|
| 0 | 5% |
| 1 | 20% |
| 2 | 30% |
| 3 | 30% |
| 4 | 10% |
| 5 | 3% |
| 6 | 1% |
| 7+ | 1% |
If you specifically care about 0 and 1, that’s:
- P(K=0)=5%
- P(K=1)=20%
And if you specifically care about 5 and 6:
- P(K=5)=3%
- P(K=6)=1%
📈 Probability mass function P(K=k): the same numbers as a clean “math graph”

P(K=0) = 5%, P(K=1) = 20%, P(K=2) = 30%, P(K=3) = 30%, P(K=4) = 10%, P(K=5) = 3%, P(K=6) = 1%, P(K≥7) = 1%
This PMF chart is exactly the same distribution as the table and the earlier snapshot bar chart; it’s just a more “mathy” visualization.
The birth–death math that generates P(K=k)
The shortest way to make P(K=k) “real math” is the birth–death recursion.
We define the coach-load process K(t) with:
λ_k = λ_0 · 0.9^k
μ_k = k · μ
This produces the canonical birth–death chain:
🔁 Birth–death state diagram for active-client count
At state k, “birth” happens at rate λk and “death” happens at rate μk.
The recursion for the steady-state probabilities
At stationarity, the birth–death chain satisfies detailed balance:
π(k) · λ_k = π(k+1) · μ_{k+1}
Which gives the recursion:
π(k+1) = π(k) · λ_k / μ_{k+1}
So you can compute the entire distribution from π(0) and the rate ratios:
π(1) = π(0) · λ_0 / μ_1
π(2) = π(1) · λ_1 / μ_2
And so on.
Finally, π(0) is determined by normalization:
∑_{k=0}^{∞} π(k) = 1
This is the cleanest “math pipeline” for turning real observed rates into explicit probabilities: P(K=0), P(K=1), P(K=2), and so on.
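The recursion can be run directly. In the sketch below, `lambda0` and `mu` are placeholder rates; in production they would be estimated from the assignments ledger:

```typescript
// Detailed-balance recursion for the coach-load chain, using the assumed
// rate forms lambda_k = lambda0 * 0.9^k and mu_k = k * mu. lambda0 and mu
// are placeholders to be estimated from the assignments ledger.
function steadyState(lambda0: number, mu: number, kMax = 30): number[] {
  const birth = (k: number) => lambda0 * Math.pow(0.9, k); // lambda_k
  const death = (k: number) => k * mu; // mu_k
  const pi = [1]; // unnormalized, with pi(0) set to 1
  for (let k = 0; k < kMax; k++) {
    // pi(k+1) = pi(k) * lambda_k / mu_{k+1}
    pi.push(pi[k] * (birth(k) / death(k + 1)));
  }
  const z = pi.reduce((a, b) => a + b, 0); // normalization determines pi(0)
  return pi.map((p) => p / z);
}
```

For example, steadyState(1, 0.2) (so λ_0/μ = 5) produces a distribution with its mode at k = 3, visible mass at 0–2, and a thin tail past 7, the same shape as the snapshot chart.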
The queueing-theory identity that links demand, tenure, and average load
The birth–death model describes fluctuations and the shape of P(K=k).
Queueing theory adds the macro identity:
E[K] = λ_coach · E[T]
Where:
- λ_coach is how frequently a coach gets assigned clients (arrivals per unit time)
- E[T] is the average tenure of a client with a coach (in time)
This is why “camel hump” tenure matters operationally: if you move weight out of the early 2-session hump and into the longer-tenure hump, you increase E[T], which increases E[K] even if assignment demand stays the same.
Question 1: When will I get my next client?
There are two separate layers to “when”:
- Global demand: how many new assignments are happening per day/week overall
- Dispatch odds: how your current k changes your slice of those assignments
A practical approximation for the wait time
Let:
- N = number of active coaches eligible for assignment
- Λ = total platform assignment rate (new assignments per day)
- w(k) = 0.9^k: your weight given your current client count
If assignment chooses coaches proportional to weight, then a coach with k clients has an approximate assignment hazard:
h(k) ≈ (Λ/N) · w(k) / E[w(K)]
That converts directly into an expected waiting time:
E[time to next client | K=k] ≈ 1 / h(k)
What the 10% rule does (shape, not absolute time)
Even if you don’t know Λ precisely, you can see how your wait time scales:
h(k+1) / h(k) = 0.9
So each additional current client increases your expected wait by about:
1 / 0.9 ≈ 1.11
That is, ~11% longer expected wait per additional current client, holding everything else constant.
A concrete, easy-to-interpret example (illustrative)
Suppose:
- N=100 eligible coaches
- Λ=12 new assignments per day (roughly 84/week)
- E[w(K)] ≈ 0.60 (typical if K is often around 4–6)
Then h(0) ≈ (12/100) · (1/0.60) = 0.2 assignments/day, which is ~1 assignment every 5 days when you have 0 clients.
And for k=5, w(5) = 0.9^5 ≈ 0.59, so h(5) is about 59% of h(0), i.e. ~1 assignment every 8–9 days.
⏱️ Expected wait time vs current clients (illustrative)

k=0 → 5d, k=1 → 5.6d, k=2 → 6.2d, k=3 → 6.9d, k=4 → 7.7d, k=5 → 8.6d, k=6 → 9.5d, k=7 → 10.6d
The absolute numbers depend on platform demand (Λ). The slope with respect to k comes directly from the 0.9^k policy.
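The slope of the wait-time chart can be reproduced from the hazard formula; N, Λ, and E[w(K)] below are the same illustrative inputs as the worked example:

```typescript
// Expected wait (in days) until the next assignment, given k current
// clients. N = 100, Lambda = 12/day, and meanWeight = 0.60 are the same
// illustrative values as the worked example; the slope is the policy.
function expectedWaitDays(
  k: number,
  N = 100,
  Lambda = 12,
  meanWeight = 0.6,
): number {
  const hazard = (Lambda / N) * (Math.pow(0.9, k) / meanWeight); // h(k)
  return 1 / hazard; // E[wait | K = k]
}
```

expectedWaitDays(0) is exactly 5 days here, and each extra client multiplies the wait by 1/0.9 ≈ 1.11, so expectedWaitDays(5) lands near 8.5 days.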
Question 2: How many clients will I have at a time on average?
At the platform level, the “5:1” target is the statement:
E[K]≈5
But an average is incomplete without variability.
What you care about is something like:
- What fraction of time am I below 3?
- What fraction of time am I around 5?
- How likely am I to hit 7+?
The queueing identity hiding in plain sight (Little’s Law)
Even before you get fancy, there is a powerful back-of-the-envelope relationship:
E[K] = λ_coach · E[T]
Where:
- λ_coach is the average assignment rate per coach (clients arriving to a coach per unit time)
- E[T] is average client tenure with a coach (in the same time units)
This is why tenure matters so much.
If you keep the assignment rate constant but double average tenure, average concurrent clients doubles.
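A minimal sketch of Little's law with made-up numbers (one new client every 5 days, 25-day average tenure):

```typescript
// Little's law sketch: E[K] = lambda_coach * E[T].
// Inputs are illustrative: 0.2 assignments/day is one new client every
// 5 days; 25 days is an assumed average tenure.
function averageLoad(assignmentsPerDay: number, meanTenureDays: number): number {
  return assignmentsPerDay * meanTenureDays;
}
```

averageLoad(0.2, 25) gives 5 concurrent clients, the 5:1 target; doubling tenure to 50 days doubles the load at the same assignment rate.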
What the 10% rule does to the mean and the spread
The 0.9^k policy is a negative feedback loop:
- If you are below average, you become more likely to get the next client.
- If you are above average, you become less likely to get the next client.
That feedback tends to:
- Keep E[K] near your intended target (given enough demand)
- Reduce extreme overload events compared to “pure random” assignment
- Still allow tails, because churn and arrivals are stochastic
In practice, you should expect:
- A stable center of mass around 4–6
- A persistent left tail (new coaches, churn spikes, eligibility filtering, temporary pauses)
- A thin right tail (brief overload windows before churn brings things down)
Question 3: How long will a client stay on average?
You described a key empirical fact: tenure is bimodal.
It’s not “one smooth hump”.
It’s more like a camel: one peak near 2 sessions and another near 10 sessions.
That means the average tenure is a mixture of two subpopulations:
- Clients who try coaching and bounce early
- Clients who commit and stay longer
A minimal model: a two-component mixture over session counts
Let S be the number of sessions a client completes with a coach.
Model S as a mixture:
S = S_short with probability p, and S = S_long with probability 1 − p
If the short group clusters near 2 sessions and the long group clusters near 10 sessions, then:
E[S] ≈ p · 2 + (1 − p) · 10
Example: if p = 0.55, then E[S] ≈ 0.55 · 2 + 0.45 · 10 = 5.6 sessions.
To convert sessions to time, multiply by cadence (e.g. weekly sessions implies ~5.6 weeks on average).
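The mixture mean is one line of code; the modes (2 and 10 sessions) are the camel humps, and p is the share of early-churn clients:

```typescript
// Mean of the two-component "camel hump" tenure mixture.
// shortMode = 2 and longMode = 10 are the illustrative hump locations;
// p is the share of clients in the early-churn hump.
function meanSessions(p: number, shortMode = 2, longMode = 10): number {
  return p * shortMode + (1 - p) * longMode;
}
```

meanSessions(0.55) is 5.6 sessions, matching the worked example; shrinking p (less early churn) moves the mean toward 10.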
🐪 Tenure distribution (sessions): the “camel hump” (illustrative)

1 → 9%, 2 → 36%, 3 → 13%, 4 → 7%, 5 → 5%, 6 → 6%, 7 → 9%, 8–9 → 15%, 10 → 39%, 11+ → 8%
Two humps: many relationships end after “trying it” (~2 sessions), and another large cohort ends around “completion” (~10 sessions).
Why this matters for “how many clients will I have?”
If you improve retention by reducing early churn (shrinking the 2-session hump), you increase average tenure E[T], which increases E[K] at fixed demand.
If you improve assignment throughput (increasing Λ), you also increase E[K] at fixed tenure.
How this connects back to the assignments ledger
In your system, the assignments table behaves like an event ledger: each row is an assign/unassign event.
To build these charts from real data, you typically compute three derived datasets:
1) Snapshot: active clients per coach
- Pick a snapshot time t (e.g. “now” or “end of week”).
- For each coach, count distinct clients whose latest assignment state at time t is “ASSIGNED”.
- Plot the histogram of those counts across coaches.
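A sketch of the snapshot step in application code, assuming a simplified event shape (`coachId`, `clientId`, `kind`, `at`); the real assignments-table.ts schema may use different field names:

```typescript
// Simplified assignment-ledger event; the real table schema may differ.
type AssignmentEvent = {
  coachId: string;
  clientId: string;
  kind: "ASSIGNED" | "UNASSIGNED";
  at: number; // epoch millis
};

// Active clients per coach at `snapshotAt`: take the latest event per
// (coach, client) pair and count pairs whose latest state is ASSIGNED.
function activeCounts(
  events: AssignmentEvent[],
  snapshotAt: number,
): Map<string, number> {
  const latest = new Map<string, AssignmentEvent>();
  for (const e of events) {
    if (e.at > snapshotAt) continue; // ignore events after the snapshot
    const key = `${e.coachId}:${e.clientId}`;
    const prev = latest.get(key);
    if (!prev || e.at >= prev.at) latest.set(key, e);
  }
  const counts = new Map<string, number>();
  for (const e of latest.values()) {
    if (e.kind === "ASSIGNED") {
      counts.set(e.coachId, (counts.get(e.coachId) ?? 0) + 1);
    }
  }
  return counts;
}
```

The histogram across coaches is then just the distribution of `counts.values()`; recompute at a weekly snapshot time to track the K-distribution over time.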
2) Arrival rate: Λ
- Count “ASSIGNED” events per day/week (filtered to real assignment events, excluding soft-deletes).
- Smooth with a rolling window to remove day-of-week seasonality.
3) Tenure: session-based and time-based
You can define tenure multiple ways; two common ones:
- Session tenure: number of completed sessions between first session and last session in an assignment relationship
- Time tenure: time between assignment start and unassignment (or last activity)
If you want the “camel hump” in sessions, you need session events (not just assignment events).
Practical takeaways for coaches
- If you have fewer clients right now, your odds are structurally better: the 0.9^k policy actively pulls you toward the center.
- Your expected wait time scales by ~11% per additional current client: that’s the policy, algebraically.
- The system can be healthy and still feel uneven: snapshot variance is not a bug; it’s what stochastic matching plus churn creates.
If you want this to become a real dashboard
Here are three metrics that make this blog operational:
- K-distribution: histogram of active clients per coach (weekly snapshot)
- Hazard curve: estimated E[time to next client∣K=k] from observed assignments, stratified by k
- Camel decomposition: fit a two-component mixture to session tenure and track over time (the size of the early-churn hump)
A natural follow-up is a second post that uses real numbers from the database (and auto-generates the inline HTML chart heights from an exported JSON snapshot) so the charts match production exactly.
Book a Discovery Call