The Cursor Story: How Shipping Velocity Became an Existential Advantage

The Core Truth

In just 27 months, Cursor went from a basic code editor to the defining AI development environment by making one crucial bet: shipping velocity is more valuable than shipping perfection. They turned their ability to learn and iterate faster than anyone else into an insurmountable competitive moat. While competitors debated features in boardrooms, Cursor shipped, learned, and evolved in real-time with their users.

Cursor's changelog: https://www.cursor.com/changelog


The Evidence: How They Did It

The Great Migration: Betting Big While Moving Fast

The most audacious moment came just one month into their journey. On April 6, 2023 (v0.2.0), they made a decision that could have killed their startup:

"We've transitioned to building Cursor on top of a fork of VSCodium, moving away from our previous Codemirror-based approach. This allows us to focus on AI features while leveraging VSCode's mature text editing capabilities."

This wasn't a small iteration—it was a complete platform migration. But instead of disappearing for months to perfect it, they shipped the migration immediately and iterated in public. Within days: v0.2.1 (April 6), v0.2.2 (April 7), v0.2.3 (April 7)—each fixing issues with the new platform in real-time.

The principle: When you need to make big bets, ship them boldly and iterate quickly rather than trying to perfect them in isolation.

Radical Transparency: Turning Users Into Partners

Cursor didn't hide their experimental nature—they embraced it. When launching Copilot++ in November 2023 (v0.15.0):

"Copilot++ (beta): this is an 'add-on' to Copilot that suggests diffs around your cursor, using your recent edits as context... This is very experimental, so don't expect too much yet! Your feedback will decide which direction we take this."

When introducing Background Agent in May 2025 (v0.50):

"We're curious to hear what you think. While it is still early, we've found background agents useful internally for fixing nits, doing investigations, and writing first drafts of medium-sized PRs."

Throughout their 27-month journey, phrases like "We'd love your feedback," "Please let us know what you think in the Discord," and "Your feedback will decide which direction we take this" appear dozens of times. They turned transparency about feature maturity into trust and users into collaborators.

The Nightly Laboratory: Safe Spaces for Revolutionary Ideas

Starting in June 2023, Cursor built a parallel universe for wild experiments. On December 31, 2023 (v0.21.3-nightly), they shipped something audacious:

"Hold down command, press and release shift, and continue holding down command. This will trigger tha AI to rewrite code around your Cursor — you can think of it as a manually triggered GPT-4-powered Copilot++."

Earlier, on July 10, 2023 (v0.2.46-nightly), they had experimented with interface agents:

"This nightly build comes with experimental interface agent support! The goal: you write an interface specification, and an agent writes both the tests and the implementation for you."

Some experiments disappeared. Others graduated from nightly to beta to general availability. The nightly builds let them test paradigm shifts without betting the company on each one.

The Hotfix Culture: Rapid Response Over Perfect Prevention

Cursor didn't try to ship perfect software—they built systems for rapid response. Every major version spawned immediate fixes that show real-time learning:

v0.47.x series (March 2025):

  • 0.47.1: Improved performance and added back the play button to apply code blocks
  • 0.47.2: Made Cursor Tab accepts work with single-line selections
  • 0.47.3: Fixed an issue with tool call errors on file edits
  • 0.47.4: Fixed an edge case where red diffs stuck around in the editor

v0.46.x series (February 2025):

  • 0.46.1: Fixed HTTP2 and system certificate errors, resolved memory leaks
  • 0.46.2: Improved MCP reliability, added option to disable yolo mode for MCP
  • 0.46.3: Enhanced download reliability, fixed keybinding issue

This wasn't sloppy engineering—it was strategic. They optimized for learning velocity over initial perfection. Each hotfix was a mini-product cycle: identify, fix, ship, learn.

Progressive Rollouts: De-Risking Bold Moves

By March 2025 (v0.47.x), Cursor had developed sophisticated release strategies:

"Following this update, future updates should come as staged rollouts. This will mean greater guarantees of stability and more frequent updates."

They built multiple safety nets:

  • Staged rollouts to small user percentages first
  • Beta features users could opt into
  • Nightly builds for experimental features
  • Clear communication about feature maturity

This let them ship aggressively while minimizing risk—the best of both worlds.
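
Mechanically, a staged rollout is usually just a deterministic percentage gate on top of a feature flag. Here is a minimal sketch of the idea (illustrative only, not Cursor's actual implementation; the function and feature names are invented):

```python
import hashlib

def in_rollout(user_id: str, feature: str, rollout_pct: float) -> bool:
    """Deterministically bucket a user into [0, 100) for a feature rollout.

    Hashing the user id together with the feature name gives each feature
    an independent, stable bucketing: a user stays in (or out of) a given
    rollout across sessions, and widening the percentage only adds users.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10_000 / 100.0  # 0.00 .. 99.99
    return bucket < rollout_pct

# Ramp a release: 1% -> 10% -> 50% -> 100%, pausing at each stage to
# watch crash rates and user feedback before widening the audience.
for pct in (1, 10, 50, 100):
    cohort = sum(in_rollout(f"user-{i}", "v0.47", pct) for i in range(10_000))
    print(f"{pct:>3}% stage -> {cohort:>5} of 10,000 users")
```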

Emergent Vision: Letting the Platform Reveal Itself

Cursor didn't start with a grand vision of AI agents. They let user behavior guide them toward bigger ideas, shipping capabilities and discovering what platform they were actually building:

March 2023 (v0.1.x): Basic AI chat and editing

"AI now requires login. Use an OpenAI API key for unlimited requests at cost"

June 2023 (v0.2.27): Codebase-wide context

"We've improved codebase context! In order to take full advantage, navigate to Settings, then 'Sync the current codebase'"

November 2023 (v0.15.0): Enhanced completions

"Copilot++ (beta): this is an 'add-on' to Copilot that suggests diffs around your cursor"

March 2025 (v0.48.x): Multi-conversation workflows

"Create new tabs (⌘T) in chat to have multiple conversations in parallel"

May 2025 (v0.50): Autonomous agents

"Background Agent for everyone... You can start using it right away by clicking the cloud icon in chat"

June 2025 (v1.0): Automated code review

"BugBot automatically reviews your PRs and catches potential bugs and issues"

Each step built on the previous one, but they never waited to have the full vision before shipping the next piece.

The Compound Effect: When Velocity Becomes Unbeatable

By June 2025, when they shipped v1.0, the scope was breathtaking:

"Cursor 1.0 is here! This release brings BugBot for code review, a first look at memories, one-click MCP setup, Jupyter support, and general availability of Background Agent."

This wasn't because they had better AI models or more funding—it was because they had compressed multiple product generations into 27 months of relentless iteration. Their competitors couldn't match their learning velocity. Every month Cursor shipped features, gathered feedback, and evolved, they pulled further ahead.


The Core Truth (Reprise)

Cursor's story proves that in fast-moving technical fields, shipping velocity isn't just a competitive advantage—it's existential. They turned their ability to ship, learn, and evolve quickly into their most defensible moat.

While their competitors built in stealth mode and planned perfect launches, Cursor was already three iterations ahead, learning from real users and discovering new possibilities. They didn't just build a better code editor—they built a better way of building products.

For startups: The question isn't whether you can build great features, but whether you can evolve faster than the market changes around you. In Cursor's own words, repeated throughout their journey: "Your feedback will decide which direction we take this." Make that your north star, ship relentlessly, and let your users show you what you're actually building.

The team that learns fastest doesn't just win—they often discover they're playing an entirely different game.

The Collapse of Collaborative Dialogue: A Crisis of Value Creation

We are living through a fundamental crisis in how humans create and share value with each other. What appears to be political polarization or social media dysfunction is actually something deeper: the systematic erosion of collaborative dialogue that has been the foundation of human flourishing for millennia.

The Science of Lost Connection

Recent research reveals the scope of this crisis. MIT's Sherry Turkle documents how digital communication has created what she calls "alone together"—physically connected but emotionally isolated (Turkle, 2017). Studies by the Pew Research Center show that despite unprecedented connectivity, rates of loneliness and social isolation have reached epidemic levels, particularly among young adults who've grown up primarily in digital environments (Anderson & Jiang, 2018).

Neuroscientist Matthew Lieberman's research demonstrates that our brains are fundamentally wired for social connection—that collaboration and empathy activate the same neural networks as physical needs like hunger and thirst (Lieberman, 2013). And the Harvard Study of Adult Development, tracking lives for over 80 years, shows that the quality of our relationships is the strongest predictor of life satisfaction and health outcomes (Waldinger & Schulz, 2023).

The disconnect is stark: we're biologically designed for collaborative meaning-making, but our asynchronous digital communication systems now actively discourage it.

Makiguchi's Framework: Beauty, Benefit, and Good

To understand what we've lost, we can turn to the educational philosopher Tsunesaburo Makiguchi, whose theory of value creation offers a profound lens for examining human communication. Makiguchi identified three fundamental types of value that humans create through interaction: beauty (aesthetic/emotional value), benefit (practical value), and good (moral/ethical value).

The philosophy of value creation stresses the autonomous capacities of learners. For Makiguchi, children were anything but empty vessels to be filled with the knowledge prescribed for them by adults. Children arrived in the classroom already possessing experience, knowledge, and a capacity to learn.

"The aim of education is not to transfer knowledge; it is to guide the learning process, to equip the learner with the methods of research. It is not the piecemeal merchandizing of information; it is to enable the acquisition of the methods for learning on one's own; it is the provision of keys to unlock the vault of knowledge. Rather than encouraging students to appropriate the intellectual treasures uncovered by others, we should enable them to undertake on their own the process of discovery and invention. [1934]"

In traditional human dialogue—the kind that built civilizations—all three forms of value emerge naturally:

Beauty manifests in the emotional resonance of shared stories, the aesthetic pleasure of collaborative discovery, and the inherent satisfaction of being truly heard and understood.

Benefit comes through practical wisdom exchange, problem-solving together, and the mutual learning that emerges when different perspectives combine constructively.

Good develops through the moral growth that happens when we genuinely encounter other viewpoints, build empathy across difference, and strengthen the social bonds that create ethical communities.

The Algorithmic Destruction of Value

Modern social media platforms systematically destroy Makiguchi's three forms of value. Research by the Center for Humane Technology shows how engagement-optimization algorithms specifically reward content that triggers negative emotional responses—anger, outrage, fear—while suppressing content that builds understanding or connection (Harris, 2019).

Studies by MIT's Sinan Aral reveal that false information spreads six times faster than truth on social platforms, not because people intentionally share misinformation, but because falsehoods tend to be more emotionally provocative than nuanced truth (Vosoughi et al., 2018). The algorithmic preference for engagement over accuracy creates an information ecosystem that rewards the most inflammatory takes while drowning out collaborative, value-creating dialogue.

The result is what researchers call "context collapse"—the flattening of complex human experiences into bite-sized, context-free content optimized for viral spread rather than genuine understanding (Boyd, 2011). We've traded the collaborative meaning-making that creates Makiguchi's three forms of value for systems that extract attention and monetize division.

The Wisdom Crisis

Anthropologist Helen Fisher's research on human pair bonding shows that deep conversation—what she calls "intricate conversation"—is one of the primary mechanisms through which humans build trust and connection (Fisher, 2016). Yet studies by the American Psychological Association demonstrate that the average person now spends less than 30 minutes per day in meaningful face-to-face conversation (APA, 2019).

Every day, profound human wisdom disappears without being captured or shared. Research by the MacArthur Foundation's How We Get To Next project shows that traditional knowledge transfer—the passing of wisdom from elders to younger generations through story and dialogue—has declined dramatically in industrialized societies (MacArthur Foundation, 2020).

A grandmother's insights about resilience, learned through decades of hardship and joy. An immigrant's story of adaptation and belonging. A founder's real journey through failure and breakthrough. These stories contain what Makiguchi would recognize as the fullest expression of human value creation—beauty in their emotional truth, benefit in their practical wisdom, and good in their capacity to build empathy and connection.

But in our current information ecosystem, this wisdom has no place. It's too personal for news, too unpolished for social media, too deep for algorithmic feeds optimized for quick engagement rather than lasting value.

Value Creation Through Voice: A Research-Based Solution

The solution lies in what MIT's Rosalind Picard calls "affective computing"—technology designed to recognize and respond to human emotional and social needs rather than simply optimizing for engagement metrics (Picard, 1997). Recent advances in AI make it possible to preserve and surface the collaborative essence of human dialogue at scale.

Research by Stanford's Center for Compassion and Altruism shows that hearing someone's actual voice—as opposed to reading their words—activates mirror neurons and empathy responses in ways that text-based communication cannot (Doty, 2016). Studies by the University of Chicago's Behavioral Science Lab demonstrate that voice-based storytelling creates stronger emotional connections and better retention of complex information than text-based alternatives (Schroeder & Epley, 2015).

This research points toward a solution: collecting, preserving, and sharing the lived wisdom of everyday people through voice-recorded conversations that create all three of Makiguchi's forms of value.

Here are two examples:

Story Collection involves trained interviewers having rich, meaningful conversations with people about their life journeys, relationships, challenges, and transformations. These conversations create:

  • Beauty through the emotional resonance of authentic human stories told in people's own voices
  • Benefit through the practical wisdom and insights that emerge from lived experience
  • Good through the empathy and connection that develop when we truly hear others' experiences

Voice Journaling creates a simple practice of reflection and self-discovery. People call a number, receive a thoughtful prompt, speak freely about their experience, and receive an AI-generated summary that helps them process their own thoughts and feelings over time. This creates value through:

  • Beauty in the aesthetic satisfaction of self-reflection and personal growth
  • Benefit through improved emotional awareness and decision-making capacity
  • Good through the moral development that comes from regular self-examination

AI as Value Preservation, Not Replacement

The key innovation lies in using AI to preserve and surface Makiguchi's three forms of value rather than optimizing for engagement metrics. Advanced natural language processing can identify and highlight moments of genuine insight, emotional resonance, and collaborative meaning-making within conversations.

Instead of reducing complex human experiences to viral soundbites, AI summarization can preserve the texture of collaborative dialogue—the moments where understanding emerges through exchange, where people build on each other's ideas, where genuine learning happens through respectful disagreement.

Research by MIT's Computer Science and Artificial Intelligence Laboratory shows that AI systems trained to recognize collaborative dialogue patterns can help surface the most valuable aspects of human conversation while maintaining their authentic, emotionally resonant qualities (Cao et al., 2020).

Building Social Infrastructure for Human Flourishing

Makiguchi understood that education—in its deepest sense—is about creating value through human interaction. This approach represents a new kind of educational infrastructure: a searchable, emotionally resonant library of human insight that serves all three forms of value creation.

Imagine searching for wisdom about career transitions and finding not expert advice, but the actual voices of dozens of people who've navigated similar changes—their fears, their insights, their hard-won understanding creating beauty through emotional connection, benefit through practical wisdom, and good through expanded empathy.

This isn't just a media project or a tech platform. It's social infrastructure designed around Makiguchi's insight that human value is created through the collaborative exchange of experience, wisdom, and understanding.

The Path Forward

Research across neuroscience, psychology, and sociology points to the same conclusion: humans are fundamentally collaborative meaning-making creatures. The current digital landscape has pushed us away from these collaborative instincts, but emerging technologies make it possible to restore what we've lost at unprecedented scale.

By intentionally capturing and preserving genuine human dialogue, we can begin to rebuild communication systems that create Makiguchi's three forms of value rather than destroying them. We can move beyond the engagement-optimization that has fractured human connection toward technology that genuinely serves human flourishing.

The question isn't whether technology will continue to shape human communication—it will. The question is whether we'll build systems that create beauty, benefit, and good through collaborative dialogue, or continue to drift toward platforms that extract attention while destroying the social bonds that make life meaningful.

Makiguchi believed that the purpose of education—and by extension, all human communication—is value creation (Makiguchi, 1930/2002). In an age of artificial intelligence and algorithmic feeds, this vision offers both a diagnosis of what's gone wrong and a blueprint for building something better.

The conversation starts now. The value we create together will determine not just our individual flourishing, but the kind of civilization we become.


References

American Psychological Association. (2019). Stress in America 2019: Stress and current events. APA.

Anderson, M., & Jiang, J. (2018). Teens, social media & technology 2018. Pew Research Center.

Boyd, D. (2011). Social network sites as networked publics: Affordances, dynamics, and implications. In Z. Papacharissi (Ed.), A networked self: Identity, community, and culture on social network sites (pp. 39-58). Routledge.

Cao, Y., Li, S., Liu, Y., Yan, Z., Dai, Y., Yu, P. S., & Sun, L. (2020). A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT. arXiv preprint.

Doty, J. R. (2016). Into the magic shop: A neurosurgeon's quest to discover the mysteries of the brain and the secrets of the heart. Avery.

Fisher, H. (2016). Anatomy of love: A natural history of mating, marriage, and why we stray. W. W. Norton & Company.

Harris, T. (2019). The tech industry's psychological war on kids. Center for Humane Technology.

Lieberman, M. D. (2013). Social: Why our brains are wired to connect. Crown Publishers.

MacArthur Foundation. (2020). How we get to next: Traditional knowledge systems in the digital age. MacArthur Foundation Reports.

Makiguchi, T. (2002). A geography of human life (D. M. Bethel, Trans.). Caddo Gap Press. (Original work published 1930)

Picard, R. W. (1997). Affective computing. MIT Press.

Schroeder, J., & Epley, N. (2015). The sound of intellect: Speech reveals a thoughtful mind, increasing a job candidate's appeal. Psychological Science, 26(6), 877-891.

Turkle, S. (2017). Alone together: Why we expect more from technology and less from each other. Basic Books.

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.

Waldinger, R. J., & Schulz, M. S. (2023). The good life: Lessons from the world's longest scientific study of happiness. Simon & Schuster.

Makiguchi, T. Complete Works of Tsunesaburo Makiguchi [in Japanese]. Daisan Bunmeisha, Vol. 6, p. 285. (Cf. Bethel, 1989, p. 168.)


Thanks to Tee Ponsukcharoen for reading drafts and giving feedback.

The Compound Error Crisis: Why LLM Agents Are Failing Like Broken Robots (And Why Computer Science Warned Us)

A 6-axis robot arm reaches for a coffee cup. Joint 1 is off by 0.5 degrees. Joint 2 compensates but overshoots by 0.8 degrees. By the time the arm reaches the cup, it's 3 inches to the left and crashes into the table.

An LLM agent analyzes quarterly sales data. It misinterprets Q2 growth as 15% instead of 5%. This becomes the baseline for Q3 projections. The agent then builds a hiring plan based on the inflated projections. By step 5, it's recommending the company triple its workforce.

Both scenarios showcase the same fundamental problem: error propagation. Yet while computer science theory predicted this decades ago and robotics engineers have spent decades developing sophisticated error correction mechanisms, the AI community is deploying multi-step LLM agents with barely a whisper about compound failures.

The Computer Science Foundation We're Ignoring

Long before robots or LLMs existed, computer science established the mathematical foundations of error propagation. Wilkinson (1963) in "Rounding Errors in Algebraic Processes" proved that numerical errors compound predictably in sequential computations. His work on condition numbers showed exactly how input uncertainties amplify through algorithmic chains.

Goldberg (1991) in "What Every Computer Scientist Should Know About Floating-Point Arithmetic" demonstrated that even simple arithmetic operations suffer from cumulative precision loss. The IEEE 754 standard exists precisely because early computer scientists recognized that ignoring error propagation leads to catastrophic failures in computational systems.

The theoretical framework was clear: any sequential system without error correction will experience reliability degradation proportional to the number of operations. This isn't just theory—it's why financial systems use decimal arithmetic instead of floating-point, and why NASA's flight computers employ triple redundancy.

The Robotics Response: Engineering for Reality

The robotics community didn't just acknowledge these mathematical realities—they engineered solutions. The transition from theoretical computer science to physical systems revealed new dimensions of the error propagation problem.

Chatila and Laumond (1985) in "Position Referencing and Consistent World Modeling for Mobile Robots" showed that sensor noise compounds quadratically with the number of observations. This led to the development of simultaneous localization and mapping (SLAM) algorithms that explicitly model and correct for cumulative uncertainty.

LaValle (2006) in "Planning Algorithms" formalized the concept of configuration space obstacles created by uncertainty propagation. His work showed that without explicit error modeling, path planning algorithms become unreliable after just a few waypoints.

The robotics solution was systematic:

  1. Model uncertainty explicitly at every step
  2. Implement closed-loop feedback to correct accumulated errors
  3. Use probabilistic frameworks (Kalman filters, particle filters) to track confidence
  4. Design for graceful degradation when uncertainty exceeds acceptable bounds
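
To make points 1 through 3 concrete, here is a minimal one-dimensional Kalman filter sketch (illustrative values, not any particular robot's tuning): the estimate's variance is modeled explicitly, and each feedback measurement shrinks it instead of letting error accumulate open-loop.

```python
def kalman_1d(measurements, x0=0.0, p0=1.0, r=0.5, q=0.01):
    """Track a scalar state with explicit uncertainty.

    x: state estimate, p: estimate variance,
    q: process noise (uncertainty added per step),
    r: measurement noise variance.
    """
    x, p = x0, p0
    for z in measurements:
        p += q                # predict: uncertainty grows between steps
        k = p / (p + r)       # gain: how much to trust the new measurement
        x += k * (z - x)      # correct: feedback pulls the estimate toward z
        p *= (1 - k)          # corrected variance shrinks after feedback
        yield x, p

# Noisy readings of a true position of 5.0. The variance p falls step by
# step instead of compounding, which is the point of closed-loop correction.
for x, p in kalman_1d([5.2, 4.7, 5.1, 4.9, 5.0]):
    print(f"estimate={x:.3f}  variance={p:.3f}")
```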

LLMs: A New Class of Sequential System

LLM agents represent a fascinating convergence of computer science theory and robotics practice, but operating in the space of semantic computation rather than numerical calculation or physical manipulation.

The Computer Science Parallel: Like floating-point arithmetic, each LLM inference introduces uncertainty. Bengio et al. (2013) in "Representation Learning: A Review and New Perspectives" showed that deep networks accumulate representational errors through their layers. LLM agents simply extend this to the temporal dimension—errors accumulate across reasoning steps rather than network layers.

The Robotics Parallel: Like sensor fusion, LLM agents must integrate information from multiple sources (context, tools, memory) while maintaining coherent world models. Thrun et al. (2005) in "Probabilistic Robotics" demonstrated that without explicit uncertainty tracking, integrated information becomes unreliable exponentially fast.

The Unique Challenge: Unlike numerical computation (where errors are well-defined) or robotics (where errors are measurable), LLM semantic errors are often undetectable until propagation makes them catastrophic. A hallucinated fact looks identical to a real fact until it causes downstream failures.

The Mathematical Reality: Why This Was Predictable

The error propagation in LLM agents follows well-established mathematical principles, but manifests in ways that make traditional solutions challenging:

From Numerical Analysis: Higham (2002) in "Accuracy and Stability of Numerical Algorithms" proved that error propagation follows condition number mathematics. For LLM agents, the "condition number" is effectively the semantic sensitivity of each reasoning step to input uncertainty.

From Information Theory: Shannon (1948) established that information transmission through noisy channels degrades predictably. LLM reasoning chains are essentially semantic channels where each step introduces noise, but unlike digital channels, we lack error-correcting codes for meaning.

From Control Theory: Åström and Murray (2021) in "Feedback Systems: An Introduction for Scientists and Engineers" showed that open-loop systems (like current LLM agents) are inherently unstable over multiple iterations, while closed-loop systems with feedback can maintain stability.

The mathematics predicted exactly what we're observing: sequential systems without error correction mechanisms will fail predictably as chain length increases.

The Counterargument: Why LLMs Might Be Different

Before accepting the doom-and-gloom narrative, we should examine why some researchers believe LLMs might escape the traditional error propagation trap:

Emergent Error Correction: Brown et al. (2020) in the GPT-3 paper showed that new capabilities can emerge with scale, and some researchers argue that sufficiently large models can detect and correct their own errors through their training on self-consistent text.

Semantic Robustness: Hendrycks et al. (2021) in "Measuring Massive Multitask Language Understanding" found that large models show surprising robustness to input perturbations. This suggests that semantic reasoning might be more fault-tolerant than numerical computation.

Context-Driven Recovery: Wei et al. (2022) in "Chain-of-Thought Prompting" showed that models can sometimes recover from early errors when provided with sufficient context. The argument: semantic systems might have self-healing properties that numerical systems lack.

The Scale Hypothesis: Kaplan et al. (2020) in "Scaling Laws for Neural Language Models" suggested that error rates decrease predictably with model size. If true, sufficiently large models might achieve error rates low enough to make propagation manageable.

Why the Counterargument Falls Short

However, empirical evidence suggests these optimistic views don't hold under systematic analysis:

Emergent Correction is Inconsistent: Kadavath et al. (2022) in "Language Models (Mostly) Know What They Know" found that while models can sometimes self-correct, this ability is unpredictable and doesn't scale systematically with task complexity.

Semantic Robustness Has Limits: Ribeiro et al. (2020) in "Beyond Accuracy: Behavioral Testing of NLP Models" showed that apparent robustness often masks brittleness to specific types of semantic perturbations—exactly the kind that propagate through reasoning chains.

Context Recovery Requires Perfect Context: The self-healing properties depend on maintaining perfect contextual information, but Liu et al. (2023) in "Lost in the Middle" demonstrated that long-context reasoning degrades significantly as context length increases.

Scale Doesn't Solve Systemic Issues: Ganguli et al. (2022) in "Predictability and Surprise in Large Generative Models" found that while individual error rates decrease with scale, systemic issues like hallucination and reasoning failures persist even in the largest models.

The Empirical Evidence: Measuring the Invisible Failure

The theoretical debates matter less than empirical evidence. Recent systematic studies provide clear data on error propagation in LLM systems:

Wei et al. (2022) in "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" found that reasoning accuracy degrades significantly in multi-step problems. Their analysis of GPT-3 on arithmetic word problems showed:

  • 2-step problems: 78% accuracy
  • 4-step problems: 58% accuracy
  • 8-step problems: 31% accuracy

This follows the exponential decay predicted by classical error propagation theory.
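
A quick sanity check on those numbers: if end-to-end accuracy were simply per-step accuracy raised to the number of steps, the implied per-step accuracy should come out roughly constant across the three data points. It does (a small sketch using the figures above):

```python
# If overall accuracy ~= p ** n for n steps, then p ~= accuracy ** (1 / n).
observed = {2: 0.78, 4: 0.58, 8: 0.31}  # accuracy by step count, from above

for n, acc in observed.items():
    print(f"{n}-step problems: implied per-step accuracy = {acc ** (1 / n):.3f}")
# -> ~0.88, ~0.87, ~0.86: consistent with a roughly constant per-step error
#    compounding geometrically, as classical error propagation predicts.
```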

Press et al. (2023) in "Measuring and Narrowing the Compositionality Gap in Language Models" provided the most comprehensive analysis to date. Their key finding: "Performance degradation follows a power law as the number of composition steps increases." This is consistent with what Wilkinson (1963) predicted for sequential computational systems.

Huang et al. (2023) in "A Survey on Hallucination in Large Language Models" documented the mechanism: factual errors propagate through reasoning chains with 73% probability of causing downstream failures. Critically, they found that error detection decreases as chain length increases—the system becomes less capable of recognizing its own mistakes precisely when it's making more of them.

Three Perspectives, One Conclusion

The convergence is striking:

Computer Science Theory (1960s-1990s): Sequential computation without error correction is inherently unstable. Mathematical proof exists.

Robotics Practice (1980s-2010s): Physical systems confirm the theory. Engineering solutions developed through necessity.

LLM Empirics (2020s): Semantic reasoning systems exhibit identical patterns. The mathematics still holds.

All three fields arrived at the same conclusion through different paths: systems that chain operations without explicit error correction will fail predictably as chain length increases.

The Compound Interest of Being Wrong

The mathematics of error propagation are well-established in control theory. Kalman (1960) laid the groundwork in "A New Approach to Linear Filtering and Prediction Problems," showing how errors accumulate in dynamic systems.

For LLM agents, Dziri et al. (2023) in "Faith and Fate: Limits of Transformers on Compositionality" provided concrete measurements. They found that if each step in an agent workflow has a 90% accuracy rate, a 10-step process has only 35% reliability (0.9^10 = 0.35).
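
That arithmetic generalizes; a short sketch of how quickly end-to-end reliability collapses with chain length, and how accurate each step must be to keep long chains viable:

```python
def chain_reliability(per_step: float, steps: int) -> float:
    """End-to-end success probability for independent sequential steps."""
    return per_step ** steps

for steps in (1, 5, 10, 20, 50):
    print(f"{steps:>2} steps at 90% each -> {chain_reliability(0.9, steps):.1%}")
# 10 steps -> 34.9%; 50 steps -> 0.5%.

# To keep a 20-step workflow above 95% overall, each step needs ~99.7%:
print(f"required per-step accuracy: {0.95 ** (1 / 20):.4f}")
```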

More concerning, Gao et al. (2023) in "Retrieval-Augmented Generation for AI-Generated Content: A Survey" showed that errors don't just multiply—they accelerate. In their study of multi-step reasoning:

  • Steps 1-3: Linear error accumulation
  • Steps 4-7: Exponential error growth
  • Steps 8+: Catastrophic failure rates above 80%

This matches exactly what robotics engineers discovered in the 1980s with manipulator arms and mobile robots.

This isn't theoretical. Shinn et al. (2023) in "Reflexion: Language Agents with Verbal Reinforcement Learning" documented systematic failures in production-like environments:

  • WebShop task: Agents successfully completed simple purchases but failed 68% of multi-step transactions due to context loss and error accumulation
  • HotPotQA: Multi-hop question answering saw accuracy drop from 82% (single-hop) to 34% (four-hop) as reasoning chains grew longer
  • Programming tasks: Code generation agents produced syntactically correct but functionally broken applications when initial architectural decisions were flawed

Yao et al. (2023) in "Tree of Thoughts: Deliberate Problem Solving with Large Language Models" showed similar patterns across multiple domains, concluding that "the reliability of multi-step reasoning degrades faster than previously assumed."

Why the Silence? Understanding the Resistance

The reluctance to address error propagation in LLM agents stems from several sources:

Historical Perspective: Computer science developed error analysis because early systems failed catastrophically without it. Robotics adopted these principles because physical failures are impossible to ignore. LLM agents produce plausible-sounding failures that can be dismissed as "edge cases" or "prompt engineering problems."

Economic Incentives: The current AI boom rewards rapid deployment over systematic reliability. Christensen (1997) in "The Innovator's Dilemma" predicted this pattern: disruptive technologies initially prioritize capability over reliability, often to their eventual detriment.

Complexity Illusion: The sophistication of modern LLMs masks the brittleness of multi-step systems built on top of them. This is analogous to what Perrow (1984) called "normal accidents" in "Normal Accidents: Living with High-Risk Technologies"—complex systems fail in ways that seem impossible until they happen.

Domain Transfer Resistance: Each field (computer science, robotics, AI) tends to believe its problems are unique. The mathematical foundations are identical, but the surface differences create cognitive barriers to knowledge transfer.

Learning from All Three Domains: The Path Forward

The solution isn't to abandon LLM agents, but to apply the hard-won lessons from computer science theory and robotics practice:

From Numerical Analysis: Implement semantic condition numbers—metrics that quantify how sensitive each reasoning step is to input uncertainty. Demmel (1997) showed how to compute these for numerical algorithms; we need equivalent measures for semantic reasoning.

From Robotics: Deploy closed-loop verification systems. Instead of open-loop agent workflows, implement verification steps that validate outputs before proceeding. Siciliano and Khatib (2016) in "Springer Handbook of Robotics" provide extensive frameworks for fault-tolerant control that could be adapted to semantic reasoning.

From Information Theory: Develop semantic error-correcting codes. MacKay (2003) in "Information Theory, Inference, and Learning Algorithms" showed how redundancy can correct transmission errors. LLM agents need similar redundancy mechanisms for reasoning errors.

Specific Engineering Solutions:

  1. Confidence Tracking: Implement explicit uncertainty quantification at each step, following Gal and Ghahramani (2016) on Bayesian deep learning
  2. Redundant Reasoning: Use multiple independent reasoning paths and voting mechanisms, inspired by fault-tolerant computing principles from Pradhan (1996)
  3. Semantic Checksums: Develop verification procedures that can detect reasoning errors, analogous to CRC checks in digital communication
  4. Graceful Degradation: Design systems that recognize when uncertainty exceeds acceptable bounds and hand off to human operators
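
As one concrete illustration of solutions 2 and 4, here is a minimal sketch of redundant reasoning with majority voting and an escalation path. The ask_model function is a stand-in for whatever model call you use; nothing here is a specific library's API.

```python
from collections import Counter
import random

def ask_model(question: str) -> str:
    """Stand-in for an LLM call that answers correctly 80% of the time."""
    return "42" if random.random() < 0.8 else str(random.randint(0, 99))

def vote(question: str, n: int = 5, min_agreement: float = 0.6):
    """Sample n independent answers and accept the majority only if it is
    large enough; otherwise return None and escalate (graceful degradation)."""
    answers = Counter(ask_model(question) for _ in range(n))
    best, count = answers.most_common(1)[0]
    return best if count / n >= min_agreement else None

random.seed(0)
result = vote("What is 6 * 7?")
print(result if result is not None else "low confidence: hand off to a human")
```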

The False Dichotomy: The current debate often presents a false choice between "LLM agents are magic and will solve everything" versus "LLM agents are hopeless and will always fail." The reality is that they're engineering systems subject to well-understood mathematical constraints. We can build reliable systems if we apply the same rigor that computer science and robotics developed over decades.

The Stakes Are Rising

As LLM agents move from demos to production systems handling financial transactions, medical decisions, and infrastructure management, the cost of compound errors grows exponentially. We're not just building cool demos—we're building systems that could cause real harm when they fail.

The robotics community learned to design for reliability because physical failures were impossible to ignore. The AI community needs to learn the same lesson before our invisible failures become visible disasters.

Time to Build Better

Error propagation isn't an unsolvable problem—it's an engineering challenge that robotics has already addressed. The question is whether the AI community will learn from these lessons or repeat the same mistakes at scale.

The next time you see a demo of an LLM agent completing a complex multi-step task, ask the hard question: "What's the failure rate when this runs 1,000 times in production?"

Because in the world of compound errors, being impressive once means nothing if you're unreliable twice.

Beyond Net Worth: How Technology Could Enable Value Systems That Actually Value What Matters

How nested digital currencies could align economic incentives with community values


The Misalignment Crisis

Sarah runs a small organic farm outside Portland. She spends her mornings testing soil pH and her evenings calculating whether she can afford health insurance. Despite growing food that nourishes her community and stewarding land that sequesters carbon¹, she watches cryptocurrency speculators make more in a day than she earns in a year. Her neighbor Dave flips houses for profit, contributing nothing to local food security, but his "net worth" dwarfs hers.

This isn't just unfair—it's economically irrational. Our monetary system rewards financial extraction over value creation, speculation over stewardship. As economist Kate Raworth demonstrates in Doughnut Economics², current GDP measurements fail to capture ecological and social value creation, leading to systematic undervaluation of regenerative practices.

The Single-Metric Problem

Asking the dollar to measure Sarah's farm value, Portland's housing market, and global semiconductor trade all at once exemplifies what systems theorist Donella Meadows called "policy resistance"³—when systems generate the opposite of intended outcomes. One currency cannot effectively represent multiple, often conflicting forms of value.

Research by the New Economics Foundation shows that local food systems generate $1.90 in local economic activity for every dollar spent, compared to $1.15 for conventional food retail⁴. Yet financial markets systematically undervalue these multiplier effects because they occur outside monetized exchange systems.

Sarah's farm creates what economists call "positive externalities"—benefits not captured in market prices. Soil carbon sequestration, watershed protection, biodiversity conservation, and community resilience don't appear on balance sheets, despite their measurable economic value⁵.

Nested Currency Systems: Theory to Practice

Recent advances in blockchain technology and smart contracts enable what computer scientist Silvio Micali calls "Algorand consensus"⁶—decentralized systems that maintain integrity without central authorities. Applied to supply chains, these technologies could create what we might call "value-differentiated exchange systems."

Consider Maria's coffee import business. Currently, she pays farmers commodity prices that fluctuate with global speculation, often disconnected from production costs or quality. The Fairtrade Foundation has documented how price volatility forces farmers into unsustainable practices⁷.

A nested currency system could separate different value streams:

  • Labor tokens maintaining stable purchasing power for farmer compensation

  • Environmental tokens rewarding measurable sustainability practices

  • Community tokens funding local infrastructure and education

  • Quality tokens recognizing superior products and craftsmanship

Each system maintains internal stability while enabling seamless conversion through automated market makers, similar to those used in decentralized finance protocols⁸.
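
For concreteness, the conversion step could use the constant-product rule behind Uniswap-style automated market makers. A minimal sketch (pool sizes and token names are illustrative, not a production design):

```python
def swap(x_reserve: float, y_reserve: float, dx: float, fee: float = 0.003):
    """Constant-product market maker: x * y = k stays (nearly) invariant.

    Selling dx of token X into the pool returns dy of token Y. The price
    moves along the curve, so large trades pay more per unit, which also
    makes draining a community pool for speculation expensive.
    """
    k = x_reserve * y_reserve
    new_x = x_reserve + dx * (1 - fee)   # the fee stays in the pool
    dy = y_reserve - k / new_x
    return dy, (x_reserve + dx, y_reserve - dy)

# Convert 100 labor tokens into environmental tokens against a
# 10,000 / 10,000 pool:
dy, (x, y) = swap(10_000, 10_000, 100)
print(f"received {dy:.2f} environmental tokens; pool is now {x:,.0f}/{y:,.0f}")
```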

Addressing Speculation Capture

The primary risk is speculation capture—wealthy actors accumulating tokens designed for community circulation. Behavioral economist Richard Thaler's work on "nudge theory"⁹ suggests design solutions that make speculation unattractive while preserving legitimate use.

Freicoin, launched in 2012, implemented demurrage (holding costs) that discouraged hoarding while maintaining transaction utility¹⁰. Estonia's e-Residency program demonstrates how digital identity verification can restrict token ownership to legitimate participants¹¹.
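
Demurrage is simply a negative interest rate on idle balances. A small sketch of the arithmetic, assuming an illustrative 5% annual carrying cost:

```python
def after_demurrage(balance: float, annual_rate: float, days_held: int) -> float:
    """Balance remaining after a daily-compounding holding fee."""
    daily_factor = (1 - annual_rate) ** (1 / 365)
    return balance * daily_factor ** days_held

for days in (1, 30, 365, 5 * 365):
    print(f"{days:>5} days: 1000 tokens -> {after_demurrage(1000, 0.05, days):.2f}")
# Spending within days costs almost nothing; hoarding for years does not pay.
```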

Existing Implementation Examples

These concepts aren't theoretical. Multiple real-world examples demonstrate viability:

Local Currencies:

  • Ithaca Hours has circulated over $100,000 since 1991, supporting 900+ local businesses¹²

  • BerkShares has facilitated millions in regional commerce with 400+ participating businesses¹³

  • Mountain Hours and Bay Bucks demonstrate scalability across different regional contexts

Supply Chain Applications:

  • Walmart's pilots with IBM used blockchain to trace pork in China and mangoes in the Americas from farm to shelf¹⁴

  • Blockchain-based token compositions have been used to trace multi-stage manufacturing processes¹⁵

  • Studies of enterprise deployments find blockchain advancing core supply chain objectives such as transparency and provenance¹⁶

Environmental Markets:

  • California's cap-and-trade program has generated over $17 billion for climate investments¹⁷

  • Nori creates marketplaces where farmers earn $15+ per ton of CO2 sequestered through regenerative practices¹⁸

Implementation Strategy

Research by MIT's Community Innovators Lab suggests successful alternative currency adoption requires addressing specific friction points rather than wholesale system replacement¹⁹.

Phase 1: Identify Pain Points Visit local farmers markets, credit unions, and community land trusts. Document specific challenges: seasonal cash flow variations, supply chain opacity, difficulty accessing capital for sustainable practices.

Phase 2: Build Minimal Viable Products Create simple digital tools addressing identified problems. A CSA management app with automated seasonal pricing adjustments. A supply chain tracker providing transparency that 73% of consumers report wanting²⁰.

Phase 3: Test Interoperability Connect successful local implementations. Enable value transfer between different regional systems while preserving local control and priorities.

Toward Regenerative Economics

The deeper promise extends beyond technology to what economist John Fullerton calls "regenerative capitalism"²¹—economic systems that enhance rather than degrade the conditions for life. When soil stewardship, community building, and ecological restoration generate appropriate economic returns, practitioners like Sarah don't choose between values and viability.

Current pilot programs demonstrate feasibility. The question isn't whether nested currency systems can work, but who will scale them first.


References

  1. Paustian, K., et al. (2016). Climate-smart soils. Nature, 532(7597), 49-57.

  2. Raworth, K. (2017). Doughnut Economics: Seven Ways to Think Like a 21st-Century Economist. Chelsea Green Publishing.

  3. Meadows, D. (2008). Thinking in Systems: A Primer. Chelsea Green Publishing.

  4. New Economics Foundation. (2005). Plugging the Leaks: Making the most of every pound that enters your local economy.

  5. Costanza, R., et al. (2017). Twenty years of ecosystem services: How far have we come and how far do we still need to go? Ecosystem Services, 28, 1-16.

  6. Micali, S. (2016). ALGORAND: the efficient and democratic ledger. arXiv preprint arXiv:1607.01341.

  7. Fairtrade Foundation. (2018). Driving Income Security for Cocoa Farmers.

  8. Adams, H., et al. (2018). Uniswap v2 core. Technical whitepaper.

  9. Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.

  10. Gesell, S. (1916). The Natural Economic Order. Free-Economy Publishing.

  11. Korjus, K. (2014). Estonia's digital revolution. Government Technology, 27(8), 14-17.

  12. Glover, P. (2014). Ithaca Hours: Local Currency That Works. Ithaca Hours.

  13. Berkshire Regional Planning Commission. (2020). BerkShares Impact Report.

  14. Kamath, R. (2018). Food traceability on blockchain: Walmart's pork and mango pilots with IBM. Journal of the British Blockchain Association, 1(1), 1-12.

  15. Westerkamp, M., et al. (2018). Tracing manufacturing processes using blockchain-based token compositions. Digital Communications and Networks, 6(2), 167-176.

  16. Kshetri, N. (2018). Blockchain's roles in meeting key supply chain management objectives. International Journal of Information Management, 39, 80-89.

  17. California Air Resources Board. (2020). Cap-and-Trade Program Summary.

  18. Sanderman, J., et al. (2017). Soil carbon debt of 12,000 years of human land use. Proceedings of the National Academy of Sciences, 114(36), 9575-9580.

  19. MIT Community Innovators Lab. (2019). Alternative Currency Design Principles.

  20. Nielsen. (2015). The Sustainability Imperative: New Insights on Consumer Expectations.

  21. Fullerton, J. (2015). Regenerative Capitalism: How Universal Principles and Patterns Will Shape Our New Economy. Capital Institute.



The Power-Knowledge Gap: Why Good Ideas Don't Always Win

We've all been there. You see a problem clearly, have a solution that makes obvious sense, but can't get anyone in authority to listen. Meanwhile, decisions get made by people who seem disconnected from the reality you're experiencing.

Here's the uncomfortable truth: political influence and actual knowledge often exist on opposite ends of the spectrum. The people making decisions frequently aren't the ones who best understand the issues or their consequences. Those with the deepest expertise—frontline workers, subject matter experts, affected communities—typically have the least say in solutions.

Enter Large Language Models

Something interesting is happening with AI. Both sides of the power-knowledge equation now have access to a sophisticated thinking partner that exists outside traditional hierarchies.

For those with knowledge but limited power:

  • Test controversial ideas without career risk
  • Validate thinking outside of local groupthink
  • Develop stronger arguments before presenting upward
  • Practice articulating complex ideas accessibly

For those with power but limited domain knowledge:

  • Rapidly get up to speed on complex topics before deciding
  • Explore multiple perspectives without revealing knowledge gaps
  • Test the logic of proposals before committing resources
  • Challenge assumptions in a safe environment

What Those in Power Can Do

  • Actively seek disconfirming information. Create channels for hearing bad news and dissenting views. Regularly ask "What am I missing?" and "Who disagrees with this?"
  • Reward truth-telling over agreement. Make it safe—even advantageous—for people to disagree with you.
  • Design knowledge-seeking processes. Create structured ways to gather input from experts and people closest to problems, rather than defaulting to whoever speaks loudest.
  • Use LLMs to test your decision-making. Explore different angles and potential consequences through AI conversations to identify blind spots privately.
  • Leverage AI for rapid learning. When making decisions outside your expertise, use LLMs to understand complex topics and ask better questions of your experts.

What Those Without Power Can Do

  • Build undeniable expertise. Become so knowledgeable that your insights are harder to dismiss. Credibility is your primary currency.
  • Find strategic allies. Identify people with influence who can amplify your voice or bring your ideas to higher-level discussions.
  • Choose battles wisely. Focus on issues where you have the strongest case and best chance of being heard.
  • Make ideas accessible. Present insights clearly, anticipate objections, and show you understand the broader context.
  • Build coalitions. Connect with others who have similar concerns. A collective voice is harder to ignore.
  • Leverage AI conversations strategically. Use LLMs to refine ideas, anticipate objections, and develop stronger arguments before presenting them.

Beyond Individual Solutions

While personal strategies matter, the deeper solution requires systemic changes: creating institutions that naturally surface dissenting views, protect truth-tellers, and align incentives so those in power benefit from seeking challenging perspectives.

The goal isn't to eliminate hierarchy—it's to create better information flow between those who know and those who decide.

The Bottom Line

The power-knowledge gap isn't going away, but it doesn't have to be permanent. LLMs are creating new opportunities for both sides to bridge this divide more effectively. Those with power can systematically seek out knowledge. Those with knowledge can find strategic ways to influence power.

The organizations that figure this out will make better decisions. The ones that don't will keep making predictable mistakes, wondering why good ideas never seem to win.

Curiosity About Others: The Superpower for Human Connection

When we direct the same curiosity we apply to technology toward other human beings, something remarkable happens. The same mechanisms that build technical intuition can create profound human understanding and connection. Here's how this transformation works and why it matters for creating a better world.

Part 1: Moving Beyond Assumptions to Genuine Discovery

In technology, curiosity leads us to ask, "How does this work?" instead of assuming we already know. When directed toward other people, this same curiosity transforms how we approach human differences.

Instead of making assumptions about someone based on surface characteristics or preconceived notions, curiosity prompts us to wonder: "What experiences shaped this person? What might I not understand about their perspective? What could I learn from their unique journey?"

This shift from assumption to inquiry is transformative. It moves us from judgment to genuine discovery.

When we encounter someone with different political views, cultural backgrounds, or life choices, curiosity leads us to ask questions rather than make declarations. It creates space for understanding rather than debate. It opens doors rather than builds walls.

The curious mind approaches human difference with the same excitement it brings to a new technology: "This is interesting! I wonder how this perspective works and what I might learn from it."

This curiosity-driven approach to human difference doesn't mean abandoning our own values. Rather, it means approaching others with the humility to recognize that our understanding is incomplete and with the genuine interest to learn more.

Part 2: Transforming Interpersonal Failures into Growth

Just as curiosity helps tech professionals learn from technical failures, it can transform our interpersonal missteps into opportunities for deeper connection.

When a conversation goes poorly or a relationship hits a roadblock, the curious mind doesn't just feel bad or assign blame. It wonders: "What happened there? What did I miss? What can I learn about this person or about myself from this difficulty?"

This curiosity-driven approach to interpersonal challenges creates resilient relationships. Rather than seeing conflicts as evidence that a connection isn't viable, we see them as interesting data points that reveal something important about the other person's needs, values, or boundaries.

We can approach our own emotional reactions with the same curiosity: "That's interesting—why did I respond so strongly to what they said? What might that reveal about my own values or unexamined assumptions?"

This approach transforms our failures of understanding from sources of shame or frustration into opportunities for growth and deeper connection. Each misunderstanding becomes a doorway to greater empathy rather than a wall between people.

Part 3: Building Bridges Across Human Differences

Just as technical curiosity creates connections between different domains of knowledge, human curiosity builds bridges across different life experiences and perspectives.

When we cultivate genuine curiosity about people unlike ourselves—those from different generations, cultural backgrounds, socioeconomic circumstances, or belief systems—we create mental networks that can recognize common humanity across apparent divides.

This bridge-building capacity is increasingly crucial in our polarized world. The person who has curiously explored many different human perspectives can see connections and possibilities for common ground that others miss. They become translators between different worldviews, helping each side understand the legitimate concerns and values of the other.

These curiosity-built bridges don't erase important differences or paper over real conflicts. Instead, they create the conditions where different perspectives can interact productively rather than destructively. They make space for the creative tension that drives social innovation.

The curious person becomes invaluable in diverse teams, communities, and organizations precisely because they can connect seemingly disconnected human experiences and find pathways for collaboration that others cannot see.

Part 4: The Compassion Explosion

The most powerful effect of human curiosity is what we might call the "compassion explosion"—the exponential growth in our capacity to understand and care for others that happens when we've curiously explored many different human experiences.

Just as technical curiosity creates combinatorial insights, human curiosity creates combinatorial compassion. Each new perspective we genuinely explore doesn't just add linearly to our understanding—it multiplies it by creating new connections with everything we've previously learned about human experience.

This explains why the most effective bridge-builders, peacemakers, and community leaders often have unusually diverse human connections. Their curiosity has taken them across many different human boundaries, and the interaction between these different perspectives creates a rich mental model of human experience that allows them to connect with almost anyone.

In our complex global society, this capacity for combinatorial compassion is essential. Our biggest challenges—from climate change to economic inequality to technological disruption—require unprecedented collaboration across different perspectives. Only the curious mind can build the bridges these collaborations require.

Cultivating Curiosity About Others

How do we develop this superpower of human curiosity? Here are some practical approaches:

  1. Practice question-first conversations. When meeting someone new or discussing sensitive topics, challenge yourself to ask three genuine questions before sharing your own perspective.

  2. Seek out "curiosity frontiers." Identify groups or perspectives you know little about but could learn from. Find respectful ways to explore these different experiences.

  3. Notice judgment. When you catch yourself making quick judgments about others, pause and replace the judgment with a question: "I wonder why they see things that way?"

  4. Consume diverse narratives. Read books, watch films, and listen to podcasts featuring perspectives significantly different from your own. Approach them with genuine curiosity rather than evaluation.

  5. Practice "perspective taking." Regularly challenge yourself to imagine complex issues from viewpoints you don't share. The goal isn't to agree but to understand.

  6. Create diverse spaces. Build environments—physical or virtual—where different perspectives can interact regularly in psychologically safe ways.

The World-Changing Power of Human Curiosity

Imagine a world where we approached human difference with the same curiosity tech innovators bring to new technologies.

We would see conflicts not as battles to be won but as interesting problems to be understood. We would approach social challenges with the humble recognition that our current understanding is incomplete. We would treat each new human perspective as a potential source of insight rather than a threat to our existing beliefs.

This curiosity-driven approach to human connection wouldn't eliminate disagreement or conflict. But it would transform how we engage with those inevitable human differences—moving us from polarization to productive tension, from demonization to discovery, from monologue to dialogue.

The greatest challenges we face as a species—from climate change to poverty to technological disruption—are too complex for any single perspective to solve alone. They require the combinatorial creativity that only emerges when diverse viewpoints connect through bridges of mutual understanding.

And those bridges are built, one conversation at a time, by people who have cultivated the superpower of curiosity about others.

When we direct our curiosity toward other human beings, we don't just build better technology—we build a better world.

Curiosity: The Hidden Superpower Behind Tech Success


Part 1: Curiosity Means Saying "Yes" When Others Say "Not Now"

Have you ever wondered why some people seem to navigate technology with such ease?

It's not because they're geniuses or because they were born with a keyboard in their hands. Their secret weapon is much simpler: curiosity.

Curiosity is what drives someone to install a new app just to see how it works. It's what makes them volunteer for the project no one else understands. It's that inner voice that says, "I wonder what would happen if..." when everyone else is saying, "Let's stick with what we know."

In the tech world, curiosity is the fundamental difference between those who merely use technology and those who shape it.

I've noticed that the most successful tech professionals aren't necessarily those with the highest IQs or the most prestigious degrees. They're the ones who approach new technologies with genuine interest rather than apprehension. While others are groaning about having to learn something new, they're thinking, "This could be interesting. Let me see how it works."

This willingness to explore—to be a beginner again and again—isn't about natural courage. It's about cultivating curiosity as a habit. As many tech innovators have noted, the key is getting comfortable with not knowing, trusting that your curiosity will guide you to understanding.

Think about learning to cook or play an instrument. The first attempts are always rough. But curiosity pushes you to try again, to experiment, to wonder "what if I tried it this way instead?"

That's the first superpower of curiosity: it transforms the uncomfortable into the intriguing.

Try This: Identify one piece of technology you've been avoiding or postponing learning. Approach it with pure curiosity this week—not with the pressure to master it, but with the simple question: "I wonder how this works?" Notice how this mindset feels different from "I have to learn this."

Part 2: Curiosity Turns Failures Into Data

Here's something fascinating about curious people in tech: they have a completely different relationship with failure than most of us.

For many people, technical failures feel like personal failures. But the curious mind sees them differently—as interesting data points, as puzzles to be solved.

This is why some forward-thinking tech professionals actually document their failures. Not to punish themselves, but because they're genuinely curious about what went wrong and why. Each error becomes a case study driven by questions like: "That's interesting—why did it break that way?" or "What does this failure teach me about how this system actually works?"

When a curious person's code crashes or their design fails usability testing, they don't just feel bad and move on. Their curiosity kicks in: "Why did this happen? What assumptions did I make that weren't true? How does this change my understanding?"

Children learning to walk embody this curious approach to failure. Each fall isn't demoralizing—it's information. They adjust, try again, fall differently, and their curiosity about walking propels them forward despite hundreds of failures.

That's the second superpower of curiosity: it transforms failures from disappointments into discoveries.

Try This: The next time something goes wrong with technology you're using, pause before finding the quickest fix. Get curious instead. Ask: "Why exactly did this happen? What does this tell me about how this really works?" Write down what you discover. You're building your curiosity muscle.

Part 3: Curiosity Creates Unexpected Connections

As you follow your curiosity across different technologies and domains, something remarkable begins to happen in your brain: it starts connecting dots between seemingly unrelated areas.

This is where curiosity truly becomes a superpower.

The tech industry is full of examples. The person who explored both design and programming out of curiosity suddenly sees user interface solutions that neither pure designers nor pure programmers would imagine. The professional who followed their curiosity from marketing into data analysis brings insights about customer behavior that transform product development.

These connections aren't random—they're the natural result of a curious mind exploring diverse territories. When you're curious about many things, your brain naturally looks for patterns, similarities, and relationships between them.

I've observed this in collaborative tech environments: the most valuable insights often come from someone saying, "This reminds me of something I encountered in a completely different context." That's not coincidence—it's curiosity bearing fruit.

Unlike specialized expertise, which goes deep but narrow, curiosity creates a web of understanding that spans disciplines. This broad network of knowledge becomes invaluable when tackling complex problems that don't fit neatly into a single specialty.

That's the third superpower of curiosity: it builds bridges where others see separate islands.

Try This: Consider a technology challenge you're facing. Now think about a completely different domain you're curious about (could be gardening, music, cooking, sports—anything). Ask yourself: "Are there any principles or approaches from that area that might apply to my tech challenge?" Let your curiosity connect worlds that don't usually meet.

Part 4: The Curiosity Compound Effect

Now here's where curiosity becomes truly explosive in its impact.

When you've followed your curiosity in multiple directions and built numerous mental connections, you reach a tipping point. Your understanding doesn't just add up—it multiplies.

Mathematically, if you have curiosity-driven knowledge in five different areas, that doesn't give you 5 units of knowledge. Those areas can intersect: five fields yield ten distinct pairs (5 choose 2), 31 possible groupings overall (2^5 minus 1), and 120 ordered ways to chain insights together (that's 5 factorial).
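
If you want to sanity-check that arithmetic, here is a tiny Python sketch. It is purely illustrative: the numbers stand in for areas of knowledge, and the three counting schemes (pairs, groupings, orderings) are just different ways to formalize "combinations."

    from math import comb, factorial

    n = 5  # five areas of curiosity-driven knowledge

    pairs = comb(n, 2)        # distinct two-area pairings: 10
    groupings = 2 ** n - 1    # nonempty subsets of the areas: 31
    orderings = factorial(n)  # ordered chains through all five: 120

    print(pairs, groupings, orderings)  # 10 31 120

The exact counting scheme matters less than the shape of the curve: add a sixth area and every one of those numbers jumps again.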

This explains why the most innovative solutions in tech often come from people with unusual combinations of interests. Their curiosity has taken them into diverse territories, and the interaction between these different knowledge areas creates possibilities that specialists simply cannot see.

In tech companies, you can witness this when a stubborn problem finally yields to someone who says, "You know, this reminds me of something I explored in a completely different field." Their curiosity-driven explorations across domains created the perfect mental toolkit for that specific challenge.

Each new area your curiosity leads you to explore doesn't just add linearly to your capabilities—it multiplies them by creating new combinations with everything you've previously discovered.

That's the fourth and most powerful superpower of curiosity: it creates exponential rather than linear growth in your ability to solve problems.

Try This: List all the different areas of technology you've explored out of curiosity, even briefly. Don't just list work skills—include hobbies, interests, and side explorations. Now consider how many potential combinations exist between these different areas. That diverse curiosity-driven background is your unique advantage.

Curiosity: The Superpower Hidden in Plain Sight

The secret to technological intuition isn't innate brilliance. It isn't memorizing specifications or mastering every programming language.

It's curiosity—consistent, genuine curiosity about how things work.

Curiosity is what makes you say yes to new experiences when others hesitate. It's what helps you see failures as fascinating data rather than discouraging setbacks. It's what creates connections between different domains in your thinking. And ultimately, it's what creates the combinatorial explosion of insights that looks like tech brilliance to outside observers.

The most powerful aspect of this superpower? Anyone can develop it. Curiosity isn't fixed at birth—it's a habit you can cultivate, a muscle you can strengthen.

The next time you encounter someone who seems to have an almost magical ability with technology, look past the surface impression. What you're really seeing is the compound interest of curiosity—years of wondering, exploring, connecting, and discovering.

And the best news? You can start building your curiosity superpower today. Just follow that little voice that says, "I wonder..."

Your future self will thank you when your understanding of technology doesn't just grow—it explodes. 💥

The Hidden Cost of AI: Losing Our Human Connections in Pursuit of Efficiency

In our rush to embrace AI and data-driven decision making, we're making a fundamental error that could have profound consequences for how we think, work, and live together. We're building systems that prioritize "human + machine" interactions when what we truly need is "human + human + machine" frameworks.

The Difference Matters

The distinction is subtle but crucial. Throughout human history, our greatest achievements and resilience have come through collective intelligence—people thinking together, challenging each other, providing emotional support, and creating shared meaning. These social processes aren't just nice-to-have features; they're the foundation of what makes us human.

The "human + machine" paradigm isolates individuals with technology, while the "human + human + machine" approach preserves the social fabric that has been essential to human flourishing.

False Conviction and Distributed Harm: The Self-Driving Car Paradox

What makes our current AI trajectory particularly concerning is the false conviction people develop when interacting with AI systems. We've established rigorous safety standards and public scrutiny for self-driving cars because the risk is obvious: one malfunction could cause immediate, visible harm.

Yet we're not applying the same scrutiny to AI systems that influence our information, decisions, and values.

Unlike the dramatic crash of a self-driving vehicle, AI systems like large language models operate through what I call "death by a thousand paper cuts" - a fundamentally different harm model that's:

  • Distributed across millions of daily interactions
  • Often subtle and impossible to trace back to a single source
  • Cumulative in their societal impact over time
  • Targeting our information ecosystem, decision-making processes, and social bonds

Consider how we interact with these systems: A physician relies on an AI diagnostic tool without consulting colleagues. A judge reviews an algorithm's sentencing recommendation without community input. A student crafts essays with AI assistance rather than through peer review and discussion.

The errors or biases in these interactions may not be immediately catastrophic like a car accident, but their cumulative effect on healthcare outcomes, justice, and education could be equally devastating over time.

Just because error attribution is harder doesn't mean companies should avoid accountability. In fact, the moral and value decay potential rivals or exceeds that of more visible technologies. We're rightfully concerned about physical safety on our roads—shouldn't we be equally vigilant about the health of our information ecosystems and social institutions?

The Social Fabric at Risk

What we're seeing now is a subtle but profound shift:

  • Individuals increasingly turning to AI rather than peers for information
  • Decision-making becoming privatized rather than socially deliberated
  • Knowledge validation happening through algorithms rather than communities
  • Cultural transmission occurring through machines rather than intergenerational human contact

The risk isn't just about getting bad information—it's about atrophying our social thinking muscles. When we outsource thinking to machines rather than engaging with other humans, we lose the productive friction that generates new ideas, the emotional connection that builds trust, and the shared context that creates meaning.

Overindexing on Data

This connects to a broader problem of overindexing on data in organizational decision-making:

  1. We mistake data volume for insight - Collecting massive amounts of information doesn't automatically translate to better decisions
  2. We create false precision - Numbers can create an illusion of certainty even when based on flawed assumptions
  3. We ignore unmeasurable factors - Things like community trust, organizational culture, or human dignity don't easily translate into metrics
  4. We abdicate responsibility - Decision-makers sometimes hide behind "what the data tells us" rather than acknowledging subjective judgments

A Better Path Forward

The solution isn't rejecting technology but reframing its role. We need systems that:

  • Augment human collaboration rather than replace it
  • Make the limitations of AI transparent to users
  • Value qualitative insights alongside quantitative data
  • Preserve spaces for human deliberation and connection
  • Balance efficiency with maintaining social capital

As leaders, we must ask not just "How can AI make us more efficient?" but "How can AI strengthen rather than erode the human connections that make our organizations and society function?"

The technologies that will truly advance humanity aren't those that isolate us with machines, but those that enhance our ability to think, create, and solve problems together.

Starting a technology business: Part 1 - Full-stack business

Take two pieces of paper. Stack them on top of each other. To make them stick, throw some glue between them. You get a big messy middle of glue.
What does a mess of glue have to do with business? Let me explain.

Anytime two different concepts or ideas come together, they create a mess, like glue. “Glue code” is the term software engineers use for the code that sticks two layers or systems together. A stack is a formal way to represent things layered on top of each other; a stack is also a data structure in computer science. A web application is an example of a stack of a frontend and a backend. The hidden magic of software lies in the ability to stack and glue things together. Glue graphical user interfaces to mathematical modeling and you get Excel. Glue Excel to the cloud and you get Google Sheets. You get the point. One might argue software engineering is less about algorithms and more about gluing.


Let’s look at how this "glue" enables web applications to create value for their users. The frontend is the part of your application that customers interact with. It helps your customers create value for their customers or themselves, and it defines the entry point for value creation in your application. The backend enables the operations that meet the needs of the frontend. In software, glue work includes API design, performance, and reliability. Without the glue work, using the technology feels like eating bad pasta: the pasta and the sauce are out of proportion and poorly mixed. It leaves a bad taste in the customer's mouth.
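
To make the glue concrete, here is a minimal, hypothetical sketch in Python: a thin adapter that reshapes a backend payload into what a frontend component expects. Every name here is invented for illustration; real glue work also spans API contracts, retries, caching, and monitoring.

    # Hypothetical glue code between a backend and a frontend.
    def backend_fetch_user(user_id: int) -> dict:
        # Stand-in for a real backend call (database, internal service, etc.).
        return {
            "id": user_id,
            "first_name": "Ada",
            "last_name": "Lovelace",
            "internal_flags": 5,  # backend detail the frontend never needs
        }

    def to_frontend_user(payload: dict) -> dict:
        # The glue: rename, combine, and hide fields so the frontend
        # gets exactly the shape it expects and nothing more.
        return {
            "userId": payload["id"],
            "displayName": f"{payload['first_name']} {payload['last_name']}",
        }

    print(to_frontend_user(backend_fetch_user(42)))
    # {'userId': 42, 'displayName': 'Ada Lovelace'}

Unglamorous work, but this is exactly where the pasta and the sauce get mixed.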


What about technology businesses? Do they have similar gluing challenges? Sure, they do. The glue layer of a company helps customers convert their desires into reality. Product management and customer support are among the early-stage "glue layer" functions. As the company grows, HR, finance, and legal start functioning as internal glue. These functions drive increased internal and external cohesion through their work.

The role of the frontend of a business is to understand the desires of all customers. The frontend includes the sales and marketing functions. They exist to grow users, grow revenue, and ensure that customers’ interests are always met.

The business backend exists to convert customer expectations into reality. Operations, engineering, and domain expertise fall into this category. These teams use their skills to create value for customers effectively, which means accounting for cost-value tradeoffs. They answer questions like: do you need to grow your own tomatoes for the best pasta? What is the cheapest way to get those tomatoes to our restaurant's kitchen?


"(Value creation is) the capacity to find meaning, to enhance one's own existence and contribute to the well-being of others, under any circumstance.” - Daisaku Ikeda

The stack and the messy glue are metaphors that might resonate with technologists. Let's recap the value-creation system. Web applications enable the creation and exchange of digital value for their users. Businesses exist to create and exchange financial value with other people or businesses. Value creation in the middle of different technologies or markets is going to be messy; the metaphorical glue provides cohesion in that messy middle. Daisaku Ikeda's definition above is one of my favorite ways to summarize value creation. Always ask yourself: what's the missing glue here?

This is the first of a three-part series on starting a business as a technical co-founder:

1. Full-stack business
2. Full-stack entrepreneur
3. Do you need a co-founder? If yes, what to look for?

Maximize context, maximize impact


At some point in their career, every individual in the technology industry struggles with the question, “How do I maximize my impact?” If you feel stuck in your career, I hope this provides some clarity.

Every job I’ve taken has been based on the potential for learning and the potential for impact. I started my career in a roughly 300-person engineering team. You get started on the job and learn the ropes in your first few months. Soon after, you start to feel that luck is a big part of who gets to work on the most interesting products. What does it take to work on the most interesting projects? How do you determine which work is high-impact and which is not?

The secret to maximizing impact is to maximize context. Context is the set of invisible rules that guide individual and collective action in an organization. Three things are key in building context to maximize impact: 

  1. A story about the future state of the business

  2. Understanding of incentives of others

  3. Networks of people and teams 

A story about the future state of the business

The ideal future state of any business is happier paying customers, higher profit, and more innovation. In the early days of Explorer.ai, our initial goal was to be more innovative at solving mapping problems for self-driving car customers. We spoke to over 90% of the companies in that space. A key driver of continued engagement was that customers resonated with our stories as individuals.

You need to be good at telling your story to many diverse people, and the only way to get better at it is to practice. The more revolutionary your story, the wider the support it can garner. Think like the CEO of a profit-making enterprise: presented with two ideas, one that increases profit margin by 5% and one by 40%, which will capture your initial attention?

The stories you craft about yourself, your projects, and your teams show your team and peers how big you are thinking. This ability to keep thinking big requires the courage to believe in bold ideas. To keep your story believable, you need to keep executing toward the big idea.

Let’s say you want to increase the profit margin of the products you are selling, by either 5% or 40%, within one year. Here are a few ways to do it:

  1. You can do it by increasing the price: Possible in the 5% case, tough to execute in the 40% case.

  2. You can increase the number of customers: At 5%, you might get there as part of your existing sales quota goals. Depending on your product, 40% might be hard.

  3. You can reduce the cost: 5% seems achievable in comparison to 40%. 

You can see that each approach can plausibly achieve a 5% change. Even if you pull all three levers at the 5% scope, you reach 15% and are still 25% short of the 40% goal. That gap forces you to talk to more people, and those conversations might lead to the launch of a new product offering. Change the objective and the paths to it, and I am sure you can find similar examples in other companies. One side effect of a bigger goal is that you will need to work with others in your organization to hit it: the 5% goal could have been achieved by a smaller group, but not the 40%.
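
Here is a back-of-the-envelope sketch of that gap in Python, with illustrative numbers only:

    # Three levers, each plausibly worth about a 5-point margin gain.
    levers = {"raise prices": 5, "add customers": 5, "cut costs": 5}
    goal = 40  # the bold 40-point target

    achieved = sum(levers.values())  # 5 + 5 + 5 = 15
    shortfall = goal - achieved      # 40 - 15 = 25: the gap that forces bigger ideas

    print(f"Achieved {achieved} points; still {shortfall} short of {goal}.")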

You cannot be only a storyteller in your organization; you need to take action in the direction of increasing profits by 40%. Every milestone you hit toward your fabled destination keeps transforming your fiction into fact.

Understanding of incentives of others

Early in my career, I struggled to understand why people acted the way they did. The actions of others often felt misaligned with the company’s goals and objectives. People would say one thing in a meeting and then do something very different. This caused me immense frustration. Then I learned about intrinsic incentives.

Two main kinds of incentives are at play in a company. One, extrinsic incentives: bonuses, promotions, and other obvious rewards. Two, intrinsic incentives: the self-driven ones, often rooted in an individual's internal principles or values. Many people struggle to articulate their intrinsic incentives for working, which makes understanding co-workers tough.

One shortcut to understanding the incentives of others is to observe their actions. In the example of increasing profits, you will encounter different people: some support your cause, some oppose it, and some are still figuring out whether your ideas are worth engaging with. It takes months of collaboration to begin to understand other people and teams. Notice which working relationships click and which don't; the ones that don't will require a lot more work.

Each team leader or individual you work with will have different incentives, so avoid generalization and focus on understanding what makes people tick. If someone loves working with data, ask them to do the financial analysis to understand the current sources of profit. If someone likes being organized, ask them to run the meetings. People feel empowered when their work is aligned with their intrinsic incentives. Many people are not aware of their own intrinsic incentives, which can make the process of understanding others harder. Be kind to yourself through the process.

In understanding the incentives of others, you will experience a wide range of feelings. Your feelings can be classified into over fifty labels, including relaxed, supported, helpless, warm, angry, and humiliated. Here’s a more comprehensive list from the Hoffman Lab for future reference. It will give you language to understand your own feelings in the situation.

Thinking about the incentives that drive people beyond the financial ones will allow you to work with a wider range of people. And the more people you can collaborate with, the more enjoyable your experience of working with others will be.

Networks of people and teams

Crafting your story is not enough; you need to start delivering value to the company. There are three levels at which you can deliver value. Each level is progressively harder in the short term but compounds in the long term.

 

Level 1: Ask for a problem that you think matches your skills. Do it yourself.

Level 2: Ask for a problem that you think matches your team’s skills. Work with a teammate to get it done.

Level 3: Ask for a problem that you think matches the skills within your entire company. Work with the different parts of the company to get it done.

 

Solving a level 1 task teaches you how to work with the tools, processes, and ecosystem within your company. Don't skip this level: that knowledge is valuable when you move on to level 2 and level 3 tasks. Level 1 tasks are especially valuable in larger organizations. They expose the various gatekeepers in the organization and set your expectations for how long it takes your organization to do certain basic actions.

Level 2 tasks are great for getting to know people. This is a good time to set up one-on-one meetings with your collaborators. In these one-on-ones, you can share progress on your level 1 task and get feedback; you will always be enlightened by the suggestions. These one-on-ones are also a great place to start understanding the incentives of your team, and they force you to test your understanding of the company so far. You can share your understanding of why a certain level 2 task is important to your team. This will earn you feedback from your teammates and clarify your understanding of how your team works.

Level 3 tasks are the toughest to make progress on, but succeeding in them will teach you how your company functions and what the incentives of different teams are. Level 3 tasks are typically not assigned to you; you define them and get them done. The skills you gained doing level 1 and 2 tasks help here: you will need to convince a large group of stakeholders of what they need to do and why. The timeline of such a task is often a few months to a few years, depending on the size of your company. Accomplishing it will require you to build strong networks within the company and understand business priorities better. Success will also cement your place as a valuable employee.

What are the principles of being effective in leveling up?

  1. Ruthlessly prioritize against a timeline.

  2. Cut complaints and maximize action.

  3. Be open-minded to feedback.

  4. Have a plan A, plan B, and plan C. If needed, a plan D, plan E, plan F, and so on.

Developing the context of different teams and individuals helps you bypass a lot of steps for your next big idea.

Closing thoughts 

Attempts to maximize context will expose you to business-aligned priorities. This gives you a head start on identifying and solving high-impact problems for your company. Often there are high-impact, low-effort business problems staring right at you; you only need to zoom out a little.

Organizations are constantly changing and evolving, and so are the people inside them. Context-building is a muscle worth strengthening in a rapidly changing world. Maximizing context frees up a lot of your time. I hope you can use this insight to make the impact you wish for; the greater your impact, the happier you will be at work. A personal inspiration to challenge my circumstances at work has come from this quote by Daisaku Ikeda: “If you’re passive, you’ll feel trapped and unhappy in even the freest of environments. But if you take an active approach and challenge your circumstances, you will be free, no matter how confining your situation may actually be.”

A note about rabbit holes

What do you do about rabbit holes that you keep discovering? Working on interesting problems yields an endless list of rabbit holes to explore. Be curious and follow some rabbit holes. They help you develop a broader context that is often useful at a later date.