More People Benefit. Is That Enough?
Two worlds.
In World A, 1,000 people need kidneys. 999 of them wait the same amount of time. One rich person, who can afford a transplant consultant, waits half as long. Total waiting across the population: roughly 999 standard waits plus one short wait. Inequality exists, but it is tiny. Almost everyone has the same experience.
In World B, someone builds a free tool that lets 500 of those 1,000 people optimize their listing location. Those 500 now wait significantly less time than anyone in World A. The other 500, who didn't use the tool (or couldn't act on its recommendations because they couldn't relocate), wait slightly longer than before, because more people are competing in the favorable territories.
World B has less total waiting. More people benefit. The aggregate outcome is better by any utilitarian measure. But World B also has more inequality. The gap between the best-off and worst-off is wider. The experience of being in the bottom half feels worse, even though the top half is doing better than anyone in World A.
Which world do you want to live in?
This is not a hypothetical. This is the question that tools like TransPlan force you to answer.
The formal paradox
Let's make it precise.
State A: 0.1% of transplant patients can game the system (the wealthy, the connected, the ones with consultants). The other 99.9% are functionally equal: they all face the same waitlist, the same information, the same odds. The system is unequal, but the inequality is so concentrated that almost nobody feels it.
State B: 50% of patients now have access to optimization tools. They use the analysis to list at better centers, in better territories, at better times. Their outcomes improve. The other 50%, who can't relocate, who don't have insurance that covers out-of-network evaluations, who can't take time off work, see their effective position on the waitlist get slightly worse, because the favorable territories are now more crowded.
State B has less total suffering. More kidneys go to more people faster. The aggregate is better.
State B also has a wider gap between the optimizers and the non-optimizers. The felt inequality is worse. The patients who didn't benefit don't know that total suffering decreased; they know that their wait got longer while other people's waits got shorter.
And here is the part that makes ethicists uncomfortable: both of these observations are true simultaneously. More people benefit, and inequality increases. The utilitarian metric and the egalitarian metric point in opposite directions.
Measuring the tradeoff
To reason about this clearly, you need to separate the concepts and measure them independently. Four metrics do most of the work:
Total harm (the utilitarian metric). Sum up all the suffering (waiting time, mortality risk, quality-of-life loss) across the entire population. Whatever policy produces the lowest total is best, regardless of how that total is distributed.
Gini coefficient (the inequality metric). How unevenly is harm distributed? A Gini of 0 means perfectly equal distribution. A Gini approaching 1 means all the harm is concentrated on a few people. The Gini doesn't care about the total; it cares about the shape of the distribution.
Worst-off person (the Rawlsian metric). What is the experience of the single person who has it worst? John Rawls argued that a just society is one designed from behind a "veil of ignorance," where you don't know which position you'll occupy. Behind that veil, rational people would choose the system that makes the worst position as good as possible.
Median harm (the typical-person metric). What does the middle of the distribution look like? This captures the experience of the average patient, as opposed to the aggregate or the extreme.
These four metrics often agree. A policy that reduces total harm, lowers the Gini, improves the worst-off, and lowers the median is straightforwardly good. The interesting cases are when they disagree.
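To make the four metrics concrete, here is a minimal Python sketch. This is my own illustrative code, not the simulator's internals, and the specific wait-time numbers are assumptions chosen to match the shape of the two-worlds story above, not real data:

```python
from statistics import median

def total_harm(harms):
    # Utilitarian metric: sum of all harm across the population.
    return sum(harms)

def gini(harms):
    # Inequality metric via mean absolute difference:
    # 0 = perfectly equal, approaching 1 = concentrated on a few.
    n, mean = len(harms), sum(harms) / len(harms)
    return sum(abs(x - y) for x in harms for y in harms) / (2 * n * n * mean)

def worst_off(harms):
    # Rawlsian metric: the harm borne by the single worst-off person.
    return max(harms)

def typical(harms):
    # Typical-person metric: the middle of the distribution.
    return median(harms)

# Toy wait-time distributions (illustrative numbers, not real data).
world_a = [100] * 999 + [50]         # 999 standard waits, one short wait
world_b = [40] * 500 + [105] * 500   # 500 optimizers, 500 left behind

for name, w in [("World A", world_a), ("World B", world_b)]:
    print(f"{name}: total={total_harm(w)}, gini={gini(w):.3f}, "
          f"worst={worst_off(w)}, median={typical(w)}")

# Expected output (for these toy numbers):
# World A: total=99950, gini=0.000, worst=100, median=100.0
# World B: total=72500, gini=0.224, worst=105, median=72.5
```

On these toy numbers, World B improves the total and the median while worsening the Gini and the worst-off position, which is exactly the disagreement the rest of this article turns on.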
Harm Distribution Simulator
Explore the tradeoff between reducing total harm and distributing it equitably.
[Interactive widget: sliders set how many of the 100 people fall into the concentrated-harm group and how much harm each member bears; an "Equalize" button spreads the same total evenly. Readouts track the four metrics: total harm (utilitarian), Gini coefficient (inequality, 0 = equal, 1 = max), worst-off person (Rawlsian maximin), and median harm (the typical person's experience).]
What the interactive reveals
Play with the sliders above. Here is what you will find:
Start with the default: 10% of the population bears 500 units of harm each, while 90% bears 1 unit. Total harm comes to 5,090 units, low relative to a world where everyone bore the concentrated amount. The Gini is extreme (about 0.88). The worst-off person is suffering 500x more than the typical person. This is State A: concentrated inequality, low total harm.
Now click "Equalize." Same total harm, spread evenly. The Gini drops to zero. The worst-off person is doing much better. But the median person's harm goes up, because everyone is now bearing a share of what was previously concentrated on a few.
Now try increasing the concentrated group to 50% and dropping their harm to 50. Compare this to the equalized version. You'll find configurations where total harm is lower, the Gini is higher, and the worst-off person is better off. The metrics disagree with each other.
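If you would rather check those three configurations in code than with sliders, here is a hedged recomputation. The numbers come straight from the prose above; the gini helper is the same mean-absolute-difference formula as in the earlier sketch:

```python
from statistics import median

def gini(harms):
    # 0 = perfectly equal, approaching 1 = concentrated on a few.
    n, mean = len(harms), sum(harms) / len(harms)
    return sum(abs(x - y) for x in harms for y in harms) / (2 * n * n * mean)

default    = [500] * 10 + [1] * 90       # default: 10% bear 500, 90% bear 1
equalized  = [sum(default) / 100] * 100  # same total, spread evenly
half_at_50 = [50] * 50 + [1] * 50        # 50% bear 50, 50% bear 1

for name, h in [("default", default), ("equalized", equalized),
                ("50% at 50", half_at_50)]:
    print(f"{name:>10}: total={sum(h):6.0f}  gini={gini(h):.2f}  "
          f"worst={max(h):5.1f}  median={median(h):4.1f}")

# Expected output:
#    default: total=  5090  gini=0.88  worst=500.0  median= 1.0
#  equalized: total=  5090  gini=0.00  worst= 50.9  median=50.9
#  50% at 50: total=  2550  gini=0.48  worst= 50.0  median=25.5
```

The last configuration is the one described above: total harm is lower than the equalized version, the Gini is higher, and the worst-off person is slightly better off (50 versus 50.9).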
This is the transplant equity problem in miniature. There is no single number that tells you whether a given distribution of outcomes is "fair." It depends on which metric you prioritize, and that is a philosophical question, not a mathematical one.
What Rawls would say
John Rawls proposed the maximin principle: choose the policy that maximizes the minimum outcome. Behind the veil of ignorance, if you don't know whether you'll be the person who benefits from optimization or the person who doesn't, you should prefer the system where the worst position is as good as possible.
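Stated as a decision rule, maximin is short enough to write down. A sketch, reusing the illustrative world_a/world_b distributions from earlier (since we are measuring harm rather than well-being, maximizing the minimum outcome means minimizing the maximum harm):

```python
# Illustrative distributions from the earlier sketch (not real data).
world_a = [100] * 999 + [50]
world_b = [40] * 500 + [105] * 500
options = {"World A": world_a, "World B": world_b}

# Utilitarian rule: pick the world with the lowest total harm.
utilitarian_pick = min(options, key=lambda k: sum(options[k]))
# Rawlsian maximin rule: pick the world whose worst-off person bears the least.
maximin_pick = min(options, key=lambda k: max(options[k]))

print(utilitarian_pick)  # World B: 72,500 total harm vs 99,950
print(maximin_pick)      # World A: worst-off bears 100 vs 105
```

Same data, different winners.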
Applied to transplant: Rawls would ask whether making optimization tools widely available improves or worsens the position of the patient who is least able to act on them. If TransPlan helps 500 people but makes things marginally worse for the other 500, the Rawlsian question is about that other 500. Are they worse off than they were before the tool existed?
The answer is genuinely uncertain. If optimization tools shift demand toward favorable territories, the patients in those territories who were already listed there may face longer waits as competition increases. The patients who can't relocate at all gain nothing. The worst-off position may not improve, even as the average improves.
Rawls would not necessarily oppose the tool. But he would insist that the evaluation focus on the bottom of the distribution, not the aggregate.
What utilitarians would say
The utilitarian position is more straightforward: if total waiting decreases and total transplants increase, the tool is good. Full stop. The distribution of that benefit is a secondary concern. More people alive is better than fewer people alive, even if the gains are unevenly distributed.
This is a powerful argument. In a system where 17 people die every day on the waitlist, any intervention that increases the total number of successful transplants has a strong utilitarian claim. If optimization leads patients to list at centers that accept more organs (reducing organ discard rates), the aggregate effect could be more lives saved, not just redistributed wait times.
The utilitarian acknowledges that some people may be made worse off, but argues that the net benefit outweighs the net harm, and that opposing the tool to protect the distribution effectively sacrifices lives for the sake of a principle about fairness.
Luck egalitarianism: who chose this?
There is a third framework that cuts differently from both Rawls and the utilitarians. Luck egalitarianism, associated with philosophers like Ronald Dworkin and G.A. Cohen, draws a sharp distinction between inequalities that result from choice and inequalities that result from circumstances beyond your control.
If you need a kidney because of a genetic condition, the luck egalitarian says society has a strong obligation to compensate you. If you need a kidney because of choices you made (the classic, oversimplified example: substance use), the obligation is weaker. The distinction is between "brute luck" (unchosen) and "option luck" (chosen).
Applied to transplant geography: whether you live in a favorable OPO territory is almost entirely brute luck. You were born there, or you moved there for work, or you stayed because your family is there. You did not choose your zip code to optimize your transplant odds. The geographic inequality in the transplant system is, by the luck egalitarian standard, a paradigmatic case of unchosen disadvantage that society should correct.
But TransPlan introduces a new kind of inequality: the ability to act on information. If the tool is freely available and someone doesn't use it, is that option luck? If they can't use it because they can't afford to relocate, is that brute luck? The framework starts to buckle when the inequality is not in information access but in the ability to act on information.
The capabilities approach
Amartya Sen and Martha Nussbaum developed the capabilities approach, which shifts the question from "what do people have?" to "what are people actually able to do?" The relevant metric is not whether information exists but whether people have the real freedom to act on it.
By this standard, making TransPlan free is necessary but not sufficient. If a patient in rural Mississippi receives a recommendation to list at a center in Houston, but cannot afford the trip, cannot get time off work, and has no childcare arrangement for the evaluation week, they have the information but not the capability. The tool increased their theoretical options without increasing their actual freedom.
The capabilities approach would say: the tool is a partial solution. A complete solution would require not just information but the material conditions to act on it. Transportation assistance. Lodging support. Insurance reform. Employer accommodation.
This is the most demanding framework, and arguably the most honest. It refuses to let the existence of a free tool settle the equity question.
The coercion gradient
There is a darker version of the equity question that none of the four frameworks fully resolves. When transplant demand exceeds supply by a factor of five (roughly 104,000 people waiting, roughly 20,000 transplants per year), the system is fundamentally one of scarcity. Every optimization by one patient comes, in some small way, at the expense of another.
This connects to a set of questions that each deserve their own treatment: whether we should allow organ markets (Iran is the only country that does), whether prisoners should be able to donate for sentence reduction (China has done this, though not ethically), whether behavioral criteria like sobriety requirements for liver transplants are justified, and whether any of these frameworks survives contact with a future where transplant happens off-Earth with no donor pool at all.
Each of these pushes the coercion gradient further. Each tests whether consent can be meaningful under conditions of desperation. And each ultimately asks the same question the simulator asks: who bears the harm, and did they choose to?
No clean answer
If you have been waiting for the paragraph where I tell you what the right answer is, it isn't coming.
Different ethical frameworks disagree on what matters. The utilitarian says: maximize aggregate welfare. The Rawlsian says: protect the worst-off. The luck egalitarian says: compensate for unchosen disadvantage. The capabilities theorist says: measure real freedom, not theoretical access.
TransPlan, or any tool like it, sits at the intersection of all four. It reduces information asymmetry (good by any framework). It may increase aggregate welfare (good by utilitarian standards). It may widen the gap between those who can act on information and those who cannot (concerning by Rawlsian standards). It compensates for geographic brute luck but only for those with the capabilities to use it (partial by luck egalitarian and capabilities standards).
The interactive simulator above was designed to make one thing visceral: you cannot optimize for all four metrics simultaneously. The choice of which metric to prioritize is not empirical. It is moral. And reasonable people, looking at the same distribution, will disagree.
What I believe is this: the information should be free. The analysis should be public. The barrier to navigating the transplant system should not be wealth. Whether that is sufficient for justice is a question I don't think any single tool can answer. But it is a necessary condition, and it is a condition that the current system does not meet.
The tool shouldn't need to exist. But it does.
This is the final article in The Transplant Problem series. Start from the beginning: Who Gets the Organ?