AI’s “Waze” Problem: Traffic Algorithms, AI, and Human Value

From Traffic Reporting to Traffic Jamming

Gil Ben-David

In the early days of Waze, as the navigation app grew from a small user base to a significant share of the drivers on the road, it encountered a fundamental problem that offers valuable insights for our AI-driven future: individual optimization can create collective problems at scale. This challenge reveals important lessons about algorithms – and about where humans fit in an increasingly AI-dominated landscape.

The Original Waze Problem: When Individual Optimization Fails at Scale

When Waze first launched, its algorithm worked beautifully with a small number of users. Each driver received optimal routing based on current traffic conditions. However, as the user base grew exponentially, a fundamental paradox emerged: when too many drivers were directed to the same "optimal" route simultaneously, Waze inadvertently created new traffic jams on previously clear roads.

This "selfish routing problem" demonstrates how individually optimal decisions can lead to collectively poor outcomes when scaled. What was optimal for individual users became suboptimal for the system as a whole. Waze had to evolve its algorithms to balance individual routing preferences with overall network efficiency – a much more complex optimization problem than simply finding the fastest route for each user.
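The dynamic can be sketched with a toy two-road model. The cost functions and numbers below are invented purely for illustration – they are not Waze's actual routing model – but they capture why sending every driver down the individually "optimal" road backfires:

```python
# Toy two-road network between A and B (hypothetical cost functions,
# chosen only to illustrate the effect):
#   road 1, the "shortcut": fast when empty, congests quickly
#   road 2, the "highway":  slower baseline, but high capacity

def travel_time(road, load):
    """Minutes of travel as a function of how many drivers take the road."""
    if road == 1:
        return 10 + 1.0 * load
    return 20 + 0.1 * load

def naive_assignment(n):
    """Every driver gets the route that was fastest *before* anyone left.
    This is the early failure mode: all n drivers pile onto road 1."""
    best = min((1, 2), key=lambda road: travel_time(road, 0))
    other = 2 if best == 1 else 1
    return {best: n, other: 0}

def balanced_assignment(n):
    """Network-aware routing: choose the split that minimizes total time."""
    def total_time(x):
        return travel_time(1, x) * x + travel_time(2, n - x) * (n - x)
    x = min(range(n + 1), key=total_time)
    return {1: x, 2: n - x}

def average_time(loads):
    n = sum(loads.values())
    return sum(travel_time(r, c) * c for r, c in loads.items()) / n

# With 100 drivers, routing everyone to the individually "optimal" road
# is far worse for everyone than a balanced split.
print(average_time(naive_assignment(100)))     # 110.0 minutes each
print(average_time(balanced_assignment(100)))  # roughly 28 minutes
```

The "shortcut" really is the best choice for any single driver looking at an empty map; it only becomes the worst choice when everyone receives the same recommendation at once.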

The AI Parallel

This same problem is beginning to manifest across AI systems in multiple domains. One striking example comes from financial markets. AI trading systems that each optimize for individual profit can create market volatility, or even flash crashes, when they operate at scale and follow similar strategies. When financial institutions and investment firms rely on AI models trained on similar historical data and using similar methodologies, they can inadvertently create synchronized market movements, potentially amplifying both booms and busts.

We see another example of how AI optimization challenges traditional structures in the legal industry. As AI systems increasingly handle document review, contract analysis, and research – tasks that have traditionally formed the foundation of legal training – firms must reconsider their entire business models. This isn't merely about efficiency; it represents a fundamental restructuring of how legal services are delivered and how legal professionals develop expertise.

The examples above point to a more fundamental issue with current AI development: the datasets used to train large language models and other AI systems. When most major AI companies train their models on largely overlapping datasets (Common Crawl, Wikipedia, books, etc.), they inevitably develop systems that reach similar conclusions and exhibit similar biases and blind spots.

The Looming "Garbage In, Garbage Out" Crisis

While current AI models rely predominantly on human-generated content for training, we're rapidly approaching a tipping point where AI-generated content will constitute a majority of the data available online. This creates a dangerous feedback loop: new models trained on datasets containing AI-generated content will amplify the patterns, limitations, and errors of previous models. Each generation of AI effectively magnifies the weaknesses of its predecessors, creating a compounding "garbage in, garbage out" scenario. Without deliberate intervention, this recursive training could lead to increasingly homogenized and, more importantly, increasingly inaccurate outputs – essentially creating an echo chamber of artificial thought that drifts further from human-generated insights and novel perspectives. The consequences for fields requiring genuine innovation and diversity of thought could be profound.
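A toy simulation makes the feedback loop concrete. This illustrates only the statistical mechanism – it is not a claim about any real LLM training pipeline: each "generation" fits a simple distribution to the previous generation's output and then samples its own training data from that fit, and diversity steadily collapses.

```python
import random
import statistics

random.seed(0)

def train_generation(data, n_samples=10):
    """Fit a normal distribution to the previous generation's output,
    then 'publish' n_samples new data points drawn from that fit."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n_samples)]

# Generation 0: genuine "human" data.
data = [random.gauss(0.0, 1.0) for _ in range(10)]
spread = [statistics.stdev(data)]

# Each later generation trains only on its predecessor's output.
for _ in range(300):
    data = train_generation(data)
    spread.append(statistics.stdev(data))

# The measured spread drifts toward zero: later models reproduce an
# ever-narrower slice of the original distribution.
print(f"generation   0 spread: {spread[0]:.3f}")
print(f"generation 300 spread: {spread[-1]:.6f}")
```

Each refit introduces a small estimation error, and with no fresh human data to correct it, the errors compound instead of averaging out – the echo chamber in miniature.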

The Cyber Security Imperative

This homogenization problem creates critical issues in cyber security as well. As organizations increasingly rely on AI-powered security tools, we face unprecedented systemic risk: a convergence of security approaches that threatens our entire digital infrastructure. Security has always been a numbers game that favors attackers, who need to find just one weakness. Security systems trained on similar data and using similar approaches may all fail simultaneously when faced with novel attack vectors outside their training distributions. The natural security advantage of heterogeneous systems – where different technologies have different vulnerabilities – diminishes as AI security approaches converge.
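The arithmetic of correlated failure is easy to sketch. The miss rate below is invented for illustration: three independent layers that each miss 30% of novel attacks let only about 2.7% through, but three layers sharing the same blind spot behave like a single layer and let about 30% through.

```python
import random

random.seed(1)

MISS_PROB = 0.3   # chance one layer misses a novel attack (illustrative)
LAYERS = 3
TRIALS = 100_000

def breach_rate(shared_blind_spot):
    """Fraction of simulated attacks that get past *all* layers."""
    breaches = 0
    for _ in range(TRIALS):
        if shared_blind_spot:
            # Monoculture: the layers were trained alike, so a single
            # draw decides whether the attack evades every layer at once.
            missed_by_all = random.random() < MISS_PROB
        else:
            # Diverse stack: each layer fails independently.
            missed_by_all = all(
                random.random() < MISS_PROB for _ in range(LAYERS)
            )
        breaches += missed_by_all
    return breaches / TRIALS

print(f"diverse stack:     {breach_rate(False):.3f}")  # near 0.3**3 = 0.027
print(f"monoculture stack: {breach_rate(True):.3f}")   # near 0.3
```

Stacking converged tools buys the appearance of defense in depth without the substance: the layers stand or fall together.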

The Security Ecosystem at Risk

The cyber security field has long recognized that diversity in defense strategies provides resilience against attacks. The biological parallel is clear: monoculture crops are vulnerable to catastrophic failures when faced with a single pathogen, while diverse ecosystems demonstrate resilience.

As AI increasingly drives security systems, maintaining this diversity becomes both more challenging and more essential than ever. Organizations that recognize this risk early will invest not just in the most advanced AI security tools, but in ensuring they have diverse, complementary security approaches that don't share the same fundamental weaknesses.

Where Does This Leave Humans?

As AI systems converge on similar conclusions from shared data, several distinctly human capabilities become increasingly valuable: unique perspectives, independent judgment, and creativity.

Genuinely novel human insights and unusual perspectives that fall outside the patterns dominant in training data become premium assets. So does the ability to make unexpected connections or generate truly original ideas grounded in lived experience rather than statistical patterns. And as more decisions are influenced by AI systems trained on similar data, human judgment about when to override algorithmic recommendations becomes crucial.

The selfish routing problem we witnessed with Waze offers an important lesson for our AI future. As systems scale, optimization challenges shift from individual performance to balancing system-wide outcomes. The most valuable AI systems won't be those that simply make the best individual predictions, but those that account for their aggregate impact.

Perhaps most importantly, this evolution suggests that human intelligence won't be replaced so much as redirected – toward higher-order thinking that involves questioning assumptions, bringing diverse perspectives, and providing wisdom that transcends what can be learned from existing data patterns.

In a world of algorithmic homogeneity, human diversity of thought becomes one of our most valuable assets – especially in cyber security, where uniformity creates vulnerabilities and diversity creates resilience.

Breaking the Mold: Your Security Advantage

The insights from the Waze paradox reveal a clear imperative for organizations serious about their security posture: algorithmic homogeneity creates shared vulnerabilities that sophisticated attackers will inevitably exploit. The question isn't if these shared blind spots will be discovered, but when.

This is where our approach differs. While most security providers rely on the same datasets and the same detection methodologies, ultimately creating the same security gaps, we've built our services and solutions on the principle of deliberate diversity – creating layers of protection that don't share the same fundamental weaknesses.

Our team combines AI-powered tools with human expertise specifically cultivated to think differently from mainstream security approaches. We identify the blind spots that affect most security systems and develop compensatory measures that address these industry-wide vulnerabilities.

Don't let your organization become part of a predictable security monoculture. Contact our team today to discuss how we can help you implement truly differentiated security strategies that provide protection when everyone else's systems fail simultaneously. In cyber security, being different isn't just an advantage – it's your best defense.

Gil Ben-David is the founder and CEO of Cyfenders – a cyber-security services firm. For more than two decades, he has served as a consultant and in-house security expert to government agencies, Fortune 500 companies, and financial, technology, and industrial companies.