My recent Fantasy Modeling post struck a nerve. Due to the fracturing of the modern internet, comments are everywhere (email, Mastodon, LinkedIn, even Twitter). I will round up the best that are not already on Substack, and some replies.
On LinkedIn, the Transport Modellers were highly defensive. [Also, LinkedIn truncated the full post, but no one reads to the … ]
On the other platforms, people were much more antagonistic to modellers. Obviously comments are continuing, so I stopped collating when I stopped collating. If your comment on some other platform didn’t make it, please feel free to comment below.
I will admit to being a bit less precise in the wording of the problems than I should have been given the diversity of my audience (modellers/adjacent professionals/advocates/non-modellers, Americans/Australians/Europeans/others, developers/users/economists/decision-makers) who will all perceive things differently. There are a few inter-related problems:
Time-frames: I was primarily talking about long-term (30- or 40-year) models. The underlying issue here is the absurd time frames these models operate within, disregarding the adaptability of society to the infrastructure it encounters.
Transport determinism has been a leading approach in the planning and implementation of infrastructure projects. There are two sides to this. One, the models are deterministic: they always produce the same outcome for the same inputs. There are of course scenarios, but in the end there is a single answer. The second side is the implicit assumption that transport systems directly influence the social and economic aspects of urban areas. Given the multitude of variables involved, the calibration and validation of these models demand an unobtainable level of exactitude for what is asked of them. Compare with traffic simulation models in particular, whose modellers insist on a precision that strategic models cannot dream of.
Tools (which of course vary, and whose accuracy deteriorates with time). It is possible in theory to explain much of today in the aggregate, with some high error. That error is however sufficiently high that it should concern us about the embedded modeling assumptions. We can consider both:
Software: How many different software packages can the industry justify? Competition is good, but software is a classic high fixed-cost, low variable-cost industry, and we might be better off with fewer rather than more suppliers. And of course, since the client is almost always the public, it should be a really good Open Source (OS) package. All (or most) of the public agencies should probably get together and fund ongoing development of a single OS package. I know there have been discussions along these lines; I haven’t seen evidence of it actually happening. There are plenty of OS packages that can serve as foundations (MatSim being one example); the Zephyr Foundation hosts others, though the website hasn’t been updated for a few years.
Models: But the real problem is the models. The classic 1950s four-step model remains with us somehow. Agent-based models are an improvement (both theoretically and from a policy usefulness perspective - though not for forecasting) but are not yet standard. We need not just standardised open source software, but standardised open source models, with all of the details published for whoever wants to dismantle and critique them.
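To make concrete how simple the core of that 1950s framework is, here is a toy sketch of its second step, trip distribution via a doubly-constrained gravity model balanced by iterative proportional fitting (the Furness method). All numbers are invented for illustration; a real model adds generation, mode choice, assignment, and feedback.

```python
import math

def gravity_distribution(productions, attractions, cost, beta=0.1, iters=100):
    """Doubly-constrained gravity model: trips T_ij are proportional to a
    deterrence function exp(-beta * cost_ij), with row and column balancing
    factors iterated so row sums match zone productions and column sums
    match zone attractions."""
    n, m = len(productions), len(attractions)
    # deterrence matrix: longer/costlier trips are exponentially less likely
    f = [[math.exp(-beta * cost[i][j]) for j in range(m)] for i in range(n)]
    a = [1.0] * n
    b = [1.0] * m
    for _ in range(iters):
        # rescale rows toward production totals, columns toward attractions
        a = [productions[i] / sum(f[i][j] * b[j] for j in range(m)) for i in range(n)]
        b = [attractions[j] / sum(f[i][j] * a[i] for i in range(n)) for j in range(m)]
    return [[a[i] * b[j] * f[i][j] for j in range(m)] for i in range(n)]

# Two zones: zone 0 produces 100 trips, zone 1 produces 200; each attracts
# 150; intrazonal travel (diagonal) is cheaper than interzonal.
T = gravity_distribution([100.0, 200.0], [150.0, 150.0],
                         [[1.0, 2.0],
                          [2.0, 1.0]])
```

Note the determinism discussed above: rerun this with the same inputs and the identical trip table comes back, whatever the world actually does.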
My general exasperation about this is “how hard can this be?” We were talking about this 30 years ago.
Despite the massive investments — I’d estimate in the billions of dollars globally in the seven decades since we began modelling, with hundreds of millions spent annually on the software, data collection and formatting, data sciences, and coding just to build models that can be applied for various projects — little progress has been made. The continuous tweaking, adding multipliers and constants, doesn't seem to deliver more accurate results. We could of course spend that money better, but we should also ask: if our models are so good, why don’t they transfer between cities? Are people really different, or just their environment?
But even if you get the models right for today (er, for marketing purposes we call them Digital Twins), and reduce the inaccuracy to an acceptable level, you still haven’t solved the computational irreducibility problem. We cannot know the future without living through it. The real world is sufficiently complicated that even a model that perfectly replicated today would be no better than a random number generator in 40 years.
Society's Adaptability and the Potential for Nowcasting
A few additional points.
We should not get too fixated on minor differences in proposed projects.
Transport models which do not endogenize changes in land use (and they cannot do that accurately either), will completely miss the adaptation process.
Society invariably adapts to whatever infrastructure it finds. Travelers mold and reshape behaviours, and patterns in response to changes in the built environment, while the infrastructure and land development themselves are mutually causal.
You build the bridge here, the town grows here. You build it there, the town grows there. In 100 years, building the replacement bridge there instead of here won’t make sense, even if your model originally said there was better than here.
This observation brings to the forefront the idea of "nowcasting." Instead of getting bogged down with forecasting, it might be more fruitful to focus on the present, analyzing the current implications of a proposed infrastructure project or land use change. That is, we take today’s conditions and insert the project and nothing else. We should be asking: does this work according to the preferred values today? How long until the payoff justifies the costs involved, both monetary and social? Things that pay off sooner (have a higher rate of return) are generally better than those that promise pay-offs far into the future.
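The payoff logic can be made explicit with a back-of-envelope discounted payback calculation (all figures invented for illustration; a real appraisal would use proper benefit streams and a justified discount rate):

```python
def discounted_payback_years(capital_cost, annual_benefit, rate=0.04, horizon=50):
    """Return the first year in which cumulative discounted benefits cover
    the upfront capital cost, or None if that never happens within the
    horizon. Benefits are assumed constant, which is the point: a nowcast
    holds today's conditions fixed rather than forecasting growth."""
    cumulative = 0.0
    for year in range(1, horizon + 1):
        cumulative += annual_benefit / (1 + rate) ** year
        if cumulative >= capital_cost:
            return year
    return None

# A project returning 15/year on a cost of 100 pays back in under a decade;
# one returning 5/year takes four decades, deep in unknowable territory.
fast = discounted_payback_years(100.0, 15.0)   # -> 8
slow = discounted_payback_years(100.0, 5.0)    # -> 42
```

On this view the nowcast question is simply whether a project is of the `fast` kind under today's conditions; the `slow` kind depends on forecasts nobody can responsibly make.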
In this context, we should recognise that the wisdom of 19th century UK parliamentarians, who had less dependence on data and more on ground realities and human insights, and successfully picked winners for the London Underground, might sometimes prevail over today's forecasting models.
Jobs that need not be done: or throwing the baby out with the bath water.
Clayton Christensen and Horace Dediu propose “Jobs to Be Done” (JTBD) as a framework. JTBD encapsulates a business strategy and mindset focused on understanding the underlying reasons customers purchase products. This theory contends that consumers do not simply buy products, but rather "employ" them to accomplish specific tasks, like addressing an issue or satisfying a need.
What are the JTBD by models? In the Wikibook Fundamentals of Transportation I wrote:
There are a variety of reasons to model. Modeling helps
gain insight into complex situations by understanding simpler situations resembling them
optimize the use of resources in building or maintaining systems
operate systems, particularly by testing alternative operational scenarios
educate and provide experience for model-builders
provide a platform for testing contending ideas and use in negotiations.
Particular applications in transportation include:
Forecasting traffic
Testing scenarios (alternative land uses, networks, policies)
Planning projects/corridor studies
Regulating land use: Growth management/public facility adequacy
Managing complexity, when eyeballs are insufficient (different people have different intuitions)
Understanding travel behavior
Influencing decisions
Estimating traffic in the absence of data
I think we should differentiate the Interpolation functions of these models (looking at today) and the Extrapolation functions (looking at the future).
The latter only serve the JTBD of justifying ex post political decisions and providing a veneer of rationality over a subjective decision arrived at through other means. That is not an Engineering or Planning professional role; that is Public Relations and Marketing. Extrapolation far into the future is a job that need not be done (JNNBD).
The counter-argument will always be, “if I don’t do it someone else will.” First, that’s their ethical problem. At some point, someone will want the person who has ethics, because no one else can be trusted to speak the truth. Further, the profession can develop ethical rules about what uses of models are appropriate.
AITPM has a Code of Conduct, developed by the Transport Modelling Network of AITPM, which all sounds good, but it expressly permits long run forecasting, and describes the expectations for doing so.
Principle 1: Integrity and professional conduct
Act with the utmost integrity which encompasses the highest standards of professional conduct, honesty and ethics.
Maintain high standards of personal and professional conduct to meet industry expectations, reflect favourably upon the transport planning/modelling profession and serve as an example to others.
Meet “industry expectations”? What if industry expectations are not good enough? This is a mutual defense pact among modellers.
The modeling profession can do better by doing less.
Now onto the modeling feedback.
JK
I think this is my fave Transportist yet - thanks for exposing modelling for what it is in such a clear and concise way.
TS
You are pessimistic about modeling. Don’t be. For me models are attempts to try to understand what is going on, what will happen if something changes in the world. I would like to respect the professionals creating models using up-to-date software and know-how, and thus enhancing the understanding of transport.
You are totally right in criticising the large crowd of ”modelers” who are incompetent and give a bad name to the profession.
I have a proposal:
- find out how the other professions create their models (e.g. economic forecasts, socioeconomic forecasts…): could transport modeling benefit from those?
- gather real data of transport flows, create a “simple” model for producing growth factors which can be used to multiply the observed volumes
- test how Monte Carlo simulations could benefit transport forecasts. Now the forecasts are 50/50%, which is never true. It would be better to know the forecast deviations (when the model is run, say, 1000 times).
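TS's Monte Carlo suggestion can be sketched in a few lines (every parameter here is invented for illustration): instead of one point forecast, sample the uncertain growth assumption and report a band.

```python
import random

random.seed(42)  # reproducible draws for the sketch

def forecast_once(base_demand=10_000, years=30, mean_growth=0.015, sd=0.02):
    """One forecast draw: annual growth is sampled from an assumed
    distribution rather than fixed at a single point estimate."""
    demand = base_demand
    for _ in range(years):
        demand *= 1 + random.gauss(mean_growth, sd)
    return demand

# Run the 'model' 1000 times and report a band, not a number.
runs = sorted(forecast_once() for _ in range(1000))
p5, p50, p95 = runs[49], runs[499], runs[949]
# Report "between p5 and p95 trips in year 30" rather than one figure.
```

This only captures parameter uncertainty within the assumed model, of course; it says nothing about structural error or the computational irreducibility problem.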
KV
I really enjoyed reading this and there is a lot of truth in what you say. When you argue for small incremental moves, how exactly would you achieve that? Or are you calling for a whole rethink of the modeling process (4 step/ABM)? If we are to bring in visioning...how do we quantify it? Or I am totally off base in trying to think of this within the traditional paradigms? Thanks for writing this.
Small incremental moves refers to favoring small additions to infrastructure (an infill station, a new interchange) rather than grandiose new lines or highways. Even more focus on operations (reorganizing buses and traffic signals etc) rather than concrete.
Obviously ABM is preferred to traditional aggregate models, but they don’t solve the problem of looking ahead too far, and I have not seen evidence yet (it may be too early) of any improved accuracy. Model forecast accuracy actually matters if we want to use models for decisions, but is impossible due to computational irreducibility and the issue is just ignored.
and you are right about computational irreducibility being ignored. Even NCHRP 08-110 (nap.nationalacademies.org/down…), a very good report otherwise, does not mention it. I wonder if it's due to the lack of budget to do so? TMIP-EMAT can potentially be a tool to evaluate multiple future scenarios and can take a bite at the computational irreducibility question?
it is hopeless. At best you can compare alternative scenarios and determine that one move tends to outperform another. But given all the complexity in the world (technology and land use being the two most obvious here), long term forecasting should be abandoned. The future transport demand pattern is only solvable by running the computer that is the Universe for a few decades and seeing what happens.
—
The issue I am seeing with ABMs in practice is that agencies (and their consultants) find them too cumbersome to use and some want a traditional 4-step model. The argument is that it's too complex and they are unable to make changes to the model. ActivitySIM (when it gets released) is supposed to overcome these issues, but my understanding is that it is the CT-RAMP structure (but with Python).
we’ve been talking about ABMs at least since I started 34 years ago. I was playing with them (toys) in the early 1990s. 30 years on and they are still not widespread. And they still won’t solve the fundamental problems of forecasting.
SH
Thank you for this!
As summarized here we are trying to change things in Denmark. Uphill? Oh, yes. But we haven’t given up just yet.
https://concito.dk/en/concito-bloggen/groen-omstilling-kraever-ny-tilgang-til-infrastrukturbeslutninger
JFC
the approach of "predict and build" has shown its limits recently but still has a lot of support. Others are pushing toward the "decide and provide" approach. What is your opinion on these?
—> DH
—> there are excellent D and P approaches that are useful in prediction - the issue is selecting futures that offer up solutions to society that are denied by extrapolation of the past
“Decide and provide” is certainly better than “predict and provide”, in that it somewhat avoids the self-fulfilling forecast that induces the very demand it proposes to address. Obviously the decision-makers may still make wrong decisions, but they cannot hide behind the forecasts and the models; the subjectivity of their decisions would be laid bare.
JR
There are issues of course with modelling and how models are used poorly etc. But why shouldn't we study complex systems using models? We do that in other fields/sciences! Our tools are not perfect, but no one thinks that they actually are. A model should give you more information than you put into it, and four step models do that. I think the modelling community would love a right of reply to your post David!
Studying systems is fine. Trying to replicate the present to learn is good. Using travel demand forecasts in decisions is misleading at best. Making decisions with these models, based on benefits forecast 40 years into the future, is a problem. I think everyone has a right of reply on their own forum, or LinkedIn. [I am not offering up my blog for anyone who wants to disagree with me to have free access to subscribers.]
For instance, how many toll road forecasts have widely missed the mark? How many were built that never should have been, because investors relied on those forecasts? How many toll roads have gone bankrupt? (We can also see the misses on the rail side.)
IP
David Levinson not to be the devil's advocate here but forecasting cash flow on toll roads and public transport alike still remains a very practical task that everyone involved needs some solutions to.
I have also worked with one of the world's busiest public transport systems where flow forecasting matters a lot in exploring capacity issues - and pouring more money and oversizing everything 'just in case' has not always been a viable approach under a limited budget.
Investors and politicians alike are hardly ever happy to stake their billions of dollars solely on benchmarking and expert opinion (even though an educated guess is known to provide something much closer to the eventual outcomes than whatever the modellers might have come up with - that's what I have experienced with public transport), and accessibility measures, as much as they indicate in many ways the potential success or failure, are not easily translated into fiscal terms.
What in your opinion can we do methodologically about these things? Are better forecasting tools possible? Are better approaches possible?
Startups are funded based on hope, some expectations of founder experience, and a good pitch, not actual forecast numbers. Why should new build infrastructure be much different? [Given the similar apparent uncertainties involved] There are different kinds of contracts that don’t depend on forecasts, but allow the project promoter to collect revenue until they have paid back costs plus some return. Seems plausible.
David Levinson all startups are somehow funded, and nearly all of them inevitably fail. That in most cases happens with little to no systemic impact whatsoever. The venture capital framework accounts for all of that and is very different from how investment in infrastructure is conceived.
We can plan transport infrastructure for being useful, and we can prove (with accessibility tools) that we are getting the best possible outcome with a specific option for a given budget - but this is hardly ever enough - even when we are talking about projects entirely paid for with public money.
Forecasting becomes a yuuuge focus and pain point when talking revenue collection and subsidies, even when gross-cost contract schemes are involved (buying y accessibility for $x/year), let alone anything in the realm of public-private partnerships or cash-flow-dependent schemes (for instance, net-cost contracts on public transport).
As a consultant, I am expected to be able to elaborate on these matters without relying solely on my own expert opinion. The only thing I can practically do today is... to go to strategic modellers and try getting something meaningful out of their tools, the way they are, with the best intentions (hopes) in mind.
(A) Forecasting is impossible (with certainty). (B) People believe certain things. Make them have skin in the game. Forecasters only get paid if the project goes ahead and exceeds their projections (and are paid a stipend if it doesn’t). A consultancy that wants money upfront can sell its future earnings at a discount.
—> IP
—> this kind of framework would be great and would definitely get the industry rid of most egregious instances of malpractice.
But what about the forecasting tools themselves?
Modern strategic modelling tools (like PTV) seem to be extremely sophisticated to work with; the outcomes seem to be extremely volatile in relation to both the parameters and the quality of input, and the outputs may sometimes be very hard to follow. At least, that's my feeling after reading my fair share of traffic modelling presentations and reports.
Is there, in your opinion, any direction for getting more credibility and trustworthiness with any improvement in those (assuming the abovementioned skin-in-the-game framework gets somehow set up)?
(Apologies for throwing so many questions at you all at once!).
I am not going to comment on particular software. But complexity and sophistication are not inherently good things in a model. Credibility comes from success. Success comes from skill and luck. I don’t think anyone should be expecting to build things that don’t pay off for 40 years, even if the model lets them. Let’s look at a 10-year pay-off period (which is still huge for the private sector). Does it pay off in 10 years under otherwise current conditions (land use, population, other network, technology, etc.)? Does it pay off if the technology changes somehow? [E.g. WfH takes off, AVs, etc.]
DH
Good modellers know this and are cautious - a talk I gave to AITPM and the Planning Association a couple of years ago covered this. Be careful not to blame it on the models, but recognise the many who develop and use models with limited experience as to their role in complementing good common sense. The fantasy is often the uninformed modeller!
TVV
...or the uninformed and uncritical user of model results...
…
I think it's time that transport planners in Australia get over the toll road forecasting debacles of the last decade, and stop blaming the models for all ills of the transport infrastructure world. My ex-colleague Tim Veitch ran their Zenith models for one of these projects and came up with forecasts that were a lot closer, a lot, lot closer to the actual outturn demand than what was used for the decisions to finance these roads - and by using very similar 4-stage strategic models (but with quite different assumptions for, for example, willingness to pay). It wasn't the model per se that was at fault, but the assumptions.
Models are tools, and modellers are generally skilled and highly dedicated operators. But for forecasting they rely on inputs, and responsibility for the assumptions needs to be shared across all stakeholders, and all experts involved in their use.
TAG is clear - both about the need for sensitivity testing and about the need for realism testing. Relying on a single coefficient for every variable based on a single survey at a single point in time, is no longer considered good practice, certainly not for major, expensive, transformational projects.
CF
These models frame projects as discrete entities to make them palatable to an overloaded decision maker. They are deemed to prioritise according to a finite budget envelope. They are not made to evaluate the indirect impacts of these projects. I witnessed the limit of this rationale in Istanbul 20 years ago….
TS
Agree with some of this David. Let’s be honest, we’re talking mainly about traffic modelling to support major road projects, and in this I agree that models have played an outsized role in justifying projects that really wouldn’t pass a strategic merit test if we had our priorities right. However, my sense is the problem lies more with the transport economic appraisal process (measuring the wrong things, with the wrong values) than the modelling per se. Most sensible modellers i know (yes, there are a few) are realistic about the accuracy and usefulness of their traffic models. I also don’t think you’d find many instances where additional government funds have been allocated to transport from other sectors on the basis of traffic modelling evidence, but I am not as informed on this topic.
TVV
I have responded elsewhere with my opinion, but I fully agree that in terms of the impacts of model results on good or bad transport infrastructure decisions, we need to separate the roles and responsibilities of a) the models and their developers, b) the modellers who use the model for forecasting, c) the model sponsors commissioning the model forecasts, d) the economist end users of the model outputs to calculate (dis)benefits of the investment and e) the decision-makers who use model results as part of their responsibilities. I have made this point many times in the past: it's a bad workman who blames his tools.
This is a nice breakdown. But the culpability of the misuse lies with b-e. And (e) the decision-makers don’t generally have the technical expertise (and maybe (d) the economists as well).
CCW
In my humble opinion, from some experience, traffic models are most useful as a tool to investigate and understand the complexities of the world as it has happened (base case or base scenario), with ‘what if’ scenarios changing specific dimensions to exercise and learn (modelled) effects. Experimentation, in other words. This contrasts with using a model as a “crystal ball” to “see into the future”. It seems a human trait (understandable, in an uncertain world) to ‘wish’ for the latter and to believe it can provide a truth.
Yup. The next problem is that the calibration (and validation) are generally not very good. Or if they are calibrated to match flows, they don’t match speeds and travel times.
TV
This is a provocative post, but inaccurate in my view. "Models are a fantasy" - clearly, models are not reality. I don't know of a modeller who suggests they are.
"models are almost always estimated based on a single survey at a single point in time” - sure, but the modelling underpinning investment decisions rarely produces a single forecast. Sensitivity testing is standard practice. I agree that practice could improve. We could do more backcasting, and more retrospective evaluation of past forecasts. We should also make more allowance for 'unknown unknowns'. However, there is no escaping the fact that the future is uncertain. e.g. estimating models based on longitudinal surveys would not have helped you foresee COVID-19, or, in my view, materially reduce uncertainty in a 2050 forecast. This uncertainty doesn't just complicate transport decisions - it complicates all decisions (e.g. defence).
The key is to make reasonable assumptions about the future (not necessarily assume that future == past), and to then properly grapple with uncertainty / risk. This is more about how we use models, than the models themselves. In this respect, I don't think modelling practice is as simplistic as you suggest, though agree we can improve.
Practice is more complicated than a 750-word post. I am familiar with this, having been a modeler in a previous life. But the relevant practice is the justification of immediate or future decisions. In the end, a single set of numbers is used for what in Australia is called the “Business Case”. The economists may do their own sensitivity analyses (varying interest rates, e.g.), but a single number is what gets reported in the press release to support the project. This may be 6,200 passengers/hour by 2056 (growing from 880 in 2026), as is apparently the case for Sydney Metro Western Sydney Airport according to this from Infrastructure Australia (the actual Business Case is not public), with a capacity for up to 52,000 passengers/hour if the maximum number of trains were provided. It may be around 71,000 daily trips on SRL East in 2036, as in the Melbourne Big Build Suburban Rail Loop materials.
MB
Very interesting point. I have been in discussions with a wider global community about how these models may be of limited usefulness in the contexts of the Global South, especially LICs and LMICs.
MR
Fantasy Modeling dominates US transport policy, enabling vast spending on road expansions that destroy our environment and communities.
must revise rules to demand reality checks on models for planning & environmental analysis
JW
Transport planning needs less predictive modeling and more values-led thinking
NB
The broader point about over-reliance is accurate, but the specific criticisms are misplaced. The problem with transportation models isn't the modeling frameworks used; it's the obsession with (and often regulatory requirement of) far-away horizon years.
It is both. They are not unrelated. A model that can replicate today (they cannot, but that is a different matter) will fall apart the farther in the future they are applied. This ought to be more widely understood than it is, both inside and outside the community.
JW
So how far into the future do you think we can trust models to predict?
Next Tuesday. (No, I lied, not even next Tuesday.) Today’s models cannot replicate the base year adequately across multiple dimensions (travel time, link speed, trip distances, origins and destinations, and traffic or transit flows). Fortunately there are new data sources from cell phones and connected vehicles that can be used to better replicate today (though largely a-causally; that is, they can reproduce the present, but don’t provide any underlying scientific basis for future forecasting).
PS
I did my masters thesis in travel behaviour models, and this article is correct. It’s complete fantasy.
YDS
Thanks! That's super relevant to what we are trying to achieve. Here in Israel we are moving away from modal split prediction and towards Modal Split Goals. Most transport consultants I work with are still figuring out how to use this new approach, but we're getting there.
FM
Agreed — sometimes a model can’t anticipate human behavior and changes in human behavior. At times leaders must overcome inertia and resolve to try something new, and commit to helping communities learn how to use what you’ve deployed.
KM
Spot on. And it undermines the entire EIS framework for transpo. project analysis/approval which hinges on submission of "fantasy" modeling of ridership, new building construction, and projected commute models for decades into the future.
RS
We need more scrutiny of the pillars of conventional transport planning. The futile quest for certainty underpins most failures in public policy.
DC
Broadly agree … here. A really good crystallisation of the often fake science of the 4-step transport planning (or "integrated transport planning"/ ITP) process
I became passionately interested in this issue as a kind of hobby some years ago. I quickly formed some ideas about where everyone was going wrong. Before long I was admitted to the esteemed Transportation Modeling Improvement Program listserve, and posted my concerns. I had an interesting few months of participation but came away with minimal hope that the big obstacles to accurate modeling were going to be mastered.
Besides "endogenous changes in land use" there are also endogenous changes in land RENT or price. But it is important to understand it as economic rent.
And land rent changes in response to policy inputs or evolution of the economy or society, can vary quite widely from one jurisdiction to another, with the greatest variability being due to the extent of rationing of available land to the urban economy as a whole. Rationing the land supply results in a SYSTEMIC change in urban land rent, not just a change by some "factor".
When you have land available to the urban economy that is superabundant in quantity and the next competing use is rural, all that is necessary for developers to add that land to the urban economy, is to outbid rural land users. This land price is so cheap that it barely matters for the resulting "developed" housing or commercial premises. Then the existence of this land supply and the competitive developments that pass this land on at minimal capital gain, suppresses the potential capture of economic rent everywhere else in the urban economy. Land rents under these circumstances are "differential"; the base is the competing rural land price; and then "differentials" are added according to local productivity and location advantage relative to the exurbs.
This is what underpins house price median multiples of 3 or little more, everywhere they exist.
When urban land supply is rationed, usually by explicit "anti sprawl" regulations (but numerous proxies exist) urban land rent is derived "extractively". Instead of a value being added on top of the competing rural land-user's bid, every site in the urban economy has a price derived from "the maximum that can be extracted from the ultimate consumers of the site".
Under these conditions, policy changes and public investments are reflected in extractive site rents extremely aggressively. For example, if housing supply is mandated (even if assumed to be only to some extent) to be via intensification rather than "market price" driven "sprawl" the result is always a new housing unit that is priced at least at double that of the "differential rent" or median-multiple-3 market. The housing unit might be considerably smaller, stacked and packed, and of lower quality, but it will always cost in real terms, more than double the price of the McMansion that is the "median home" in the counterfactual situation.
What this means is a more dramatic "pricing out" effect that negates the intentions of the planners for mode shift that assumes co-location efficiencies. Hong Kong is not an outlier, in that its housing price median multiple is around 5 times higher than a median-multiple-3 city while its density is 20 times higher - and its average commute is a monster, not the planners' imagined elevator ride, stroll down the street, and a few stops on the subway. The land price is literally thousands of times higher, with the price elasticity to allowed density running "the wrong way".
Everything that can capitalize into site prices will capitalize more aggressively in an "extractive rent" urban economy. Upzoning will cause site rents to rise so much faster than actual housing provision, that the outcomes will be more expensive housing, both the new high-density units AND the as-yet unredeveloped houses with gardens and yards. Grimes and Aitken (2010) should be better known for their assessment that "all the profit potential from redevelopment is captured in rising site prices" so that actual builders of new housing are squeezed between site acquisition costs and what end consumers can be gouged for housing.
My line for describing urban planners who do not understand this process is that they are like rocket scientists who do not understand gravity. Economic rent is the "gravity" of an economic system. What I have found is that the urban planning profession prefers to remain in a state of denial that their models have been failing for causes UNKNOWN for decades now; let alone taking the next step of trying to work out if there is a BIG reason why!
Ironically, if you look at upzoning and more liberal redevelopment and changes of existing urban land use allowed in the median-multiple-3 city, you will find that these have outcomes that are NOT thwarted by extractive economic rent and "pricing out" - the locations remain in competitive tension with those locations where prices are suppressed by the availability of superabundant cheap land.
Those attracted to Planning as a profession hate the idea that participants in an economy might CHOOSE what THEY, the Anointed, do not approve of. Hence those location choices that attract participants by reason of the massive consumer surplus in their price must be removed by regulatory force. The unintended consequences that follow are an economics lesson that really should have a book written about it. Maybe we are not even as clear as we should be about the unintended consequences of Central Planning "instead of markets" in the great Communist experiment (see Bertaud and Renaud, "Cities Without Land Markets"). But we desperately need to understand "unintended consequences" where land markets are allowed to amass economic rent throughout them, by "the land market" being rigged to do so.
If we rationed the supply of land for food production back to the amount that existed for each local economy before refrigeration and rail freight, we would find that extractive economic rent could return to food-producing land to such an extent that once again it would consume 50% of household budget. Maybe we wouldn't stand for it. We shouldn't stand for it in housing either. But it is not just a question of housing affordability - Britain's mysterious "productivity gap" for decades has to be due to their Town and Country Planning system, which was adopted some decades before other first world countries adopted similar policies of rationing urban land. The best economists have acknowledged this; most just still regard it as "a mystery".
...and we haven't even talked about the data collection issues!