
Transportation policy shapes how cities and regions grow, move, and thrive. Yet even the most carefully modeled policies can falter when they meet public opinion. Traditional approaches rely on mathematical optimization to identify “best” solutions under formal objectives and constraints—but these models often rest on rigid assumptions about behaviors and preferences. The result is a persistent gap between policies that look good on paper and those that communities actually support.
Our new study explores whether large language models (LLMs)—the same AI systems that power tools like ChatGPT—can help bridge that gap. LLMs are trained on vast troves of human language, giving them a rich contextual understanding of social values, trade-offs, and reasoning. Could such models serve as a new kind of decision-support tool, helping policymakers anticipate public preferences before a policy is implemented?
To test this idea, we built a multi-agent voting framework in which autonomous LLM “citizens” represent diverse communities within a large city. These agents participate in simulated referendums over transit policies involving three levers: sales taxes, transit fares, and driver fees such as congestion charges. We then compared their collective choices against benchmarks from a standard travel demand model grounded in transportation economics.
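For readers who want a concrete picture of the setup, here is a minimal Python sketch of that referendum loop. It is an illustration under our own assumptions, not the study’s actual code: the `query_llm` function is a hypothetical placeholder for a call to a model such as GPT-4o or Claude-3.5, and the personas and ballot values are invented for the example.

```python
# Minimal sketch of a multi-agent LLM referendum over three policy levers.
# `query_llm` is a hypothetical stand-in for a real LLM API call; here it
# returns a random option so the sketch runs end to end.
import random
from collections import Counter
from dataclasses import dataclass


@dataclass
class Citizen:
    """One simulated voter with a simple demographic persona."""
    neighborhood: str
    income: str        # e.g. "low", "middle", "high"
    commute_mode: str  # e.g. "car", "transit"


# Each ballot option bundles the three levers: sales tax rate,
# transit fare (USD), and driver fee such as a congestion charge (USD).
BALLOTS = {
    "A": {"sales_tax": 0.005, "transit_fare": 2.50, "driver_fee": 5.00},
    "B": {"sales_tax": 0.010, "transit_fare": 1.50, "driver_fee": 2.00},
    "C": {"sales_tax": 0.000, "transit_fare": 3.00, "driver_fee": 8.00},
}


def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real API client in practice."""
    return random.choice(list(BALLOTS))


def cast_vote(citizen: Citizen) -> str:
    """Build a persona-conditioned prompt and parse the agent's choice."""
    prompt = (
        f"You are a {citizen.income}-income resident of "
        f"{citizen.neighborhood} who commutes by {citizen.commute_mode}. "
        f"A referendum offers these transit funding packages: {BALLOTS}. "
        f"Reply with a single letter."
    )
    return query_llm(prompt).strip()[0]


def run_referendum(citizens: list[Citizen]) -> Counter:
    """Tally one simulated referendum across all agents."""
    return Counter(cast_vote(c) for c in citizens)


if __name__ == "__main__":
    electorate = [
        Citizen("Downtown", "middle", "transit"),
        Citizen("Suburb", "high", "car"),
        Citizen("East Side", "low", "transit"),
    ]
    print(run_referendum(electorate))
```

In the full framework, the personas would be grounded in real demographic data for each city, and the resulting tallies set against the travel demand model’s benchmark, as described above.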
The results are intriguing. Across cities and models, the LLM-based referendums produced collective preferences that broadly aligned with model-based predictions, except that the AI “voters” showed a stronger aversion to taxes. GPT-4o agents tended to vote more consistently and decisively, while Claude-3.5 agents were more nuanced, yet both converged on similar priorities. Interestingly, their choices also shifted between cities: in Houston, for instance, GPT-4o agents favored lower taxes and higher driver fees than in Chicago.
These findings suggest that LLMs can mimic public reasoning in meaningful ways—offering policymakers a new tool to explore how communities might respond to complex, real-world trade-offs.
You can download the preprint here.
