In just 27 months, Cursor went from a basic code editor to the defining AI development environment by making one crucial bet: shipping velocity is more valuable than shipping perfection. They turned their ability to learn and iterate faster than anyone else into an insurmountable competitive moat. While competitors debated features in boardrooms, Cursor shipped, learned, and evolved in real-time with their users.
All quotations that follow come from Cursor's changelog: https://www.cursor.com/changelog
The most audacious moment came just one month into their journey. On April 6, 2023 (v0.2.0), they made a decision that could have killed their startup:
"We've transitioned to building Cursor on top of a fork of VSCodium, moving away from our previous Codemirror-based approach. This allows us to focus on AI features while leveraging VSCode's mature text editing capabilities."
This wasn't a small iteration—it was a complete platform migration. But instead of disappearing for months to perfect it, they shipped the migration immediately and iterated in public. Within days: v0.2.1 (April 6), v0.2.2 (April 7), v0.2.3 (April 7)—each fixing issues with the new platform in real-time.
The principle: When you need to make big bets, ship them boldly and iterate quickly rather than trying to perfect them in isolation.
Cursor didn't hide their experimental nature—they embraced it. When launching Copilot++ in November 2023 (v0.15.0):
"Copilot++ (beta): this is an 'add-on' to Copilot that suggests diffs around your cursor, using your recent edits as context... This is very experimental, so don't expect too much yet! Your feedback will decide which direction we take this."
When introducing Background Agent in March 2025 (v0.50):
"We're curious to hear what you think. While it is still early, we've found background agents useful internally for fixing nits, doing investigations, and writing first drafts of medium-sized PRs."
Throughout their 27-month journey, phrases like "We'd love your feedback," "Please let us know what you think in the Discord," and "Your feedback will decide which direction we take this" appear dozens of times. They turned transparency about feature maturity into trust and users into collaborators.
Starting in June 2023, Cursor built a parallel universe for wild experiments. On December 31, 2023 (v0.21.3-nightly), they shipped something audacious:
"Hold down command, press and release shift, and continue holding down command. This will trigger tha AI to rewrite code around your Cursor — you can think of it as a manually triggered GPT-4-powered Copilot++."
On July 10, 2023 (v0.2.46-nightly), they experimented with interface agents:
"This nightly build comes with experimental interface agent support! The goal: you write an interface specification, and an agent writes both the tests and the implementation for you."
Some experiments disappeared. Others graduated from nightly to beta to general availability. The nightly builds let them test paradigm shifts without betting the company on each one.
Cursor didn't try to ship perfect software—they built systems for rapid response. Every major version spawned immediate fixes that show real-time learning:
v0.47.x series (March 2025):
- 0.47.1: Improved performance, added back the play button to apply code blocks
- 0.47.2: Cursor Tab acceptance now works with single-line selections
- 0.47.3: Fixed an issue with tool call errors on file edits
- 0.47.4: Fixed an edge case where red diffs stuck around in the editor
v0.46.x series (February 2025):
- 0.46.1: Fixed HTTP2 and system certificate errors, resolved memory leaks
- 0.46.2: Improved MCP reliability, added option to disable yolo mode for MCP
- 0.46.3: Enhanced download reliability, fixed keybinding issue
This wasn't sloppy engineering—it was strategic. They optimized for learning velocity over initial perfection. Each hotfix was a mini-product cycle: identify, fix, ship, learn.
By March 2025 (v0.47.x), Cursor had developed sophisticated release strategies:
"Following this update, future updates should come as staged rollouts. This will mean greater guarantees of stability and more frequent updates."
They built multiple safety nets: nightly builds for wild experiments, staged rollouts for stability, and rapid hotfix cycles for recovery (a sketch of rollout gating follows below). This let them ship aggressively while minimizing risk—the best of both worlds.
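To see how staged-rollout gating typically works under the hood, here is a minimal sketch in Python. The hash-bucket approach and the feature-flag name are illustrative assumptions — a common industry pattern, not Cursor's published mechanism.

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically bucket a user for a staged rollout."""
    # Hashing (feature, user) gives each user a stable value in [0, 100),
    # so raising `percent` only ever adds users to the cohort.
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10_000 / 100
    return bucket < percent

# Ship to 5% of users first, then widen to 25% and 100% as stability data arrives.
print(in_rollout("user-42", "background-agent", 5.0))
```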
Cursor didn't start with a grand vision of AI agents. They let user behavior guide them toward bigger ideas, shipping capabilities and discovering what platform they were actually building:
March 2023 (v0.1.x): Basic AI chat and editing
"AI now requires login. Use an OpenAI API key for unlimited requests at cost"
June 2023 (v0.2.27): Codebase-wide context
"We've improved codebase context! In order to take full advantage, navigate to Settings, then 'Sync the current codebase'"
November 2023 (v0.15.0): Enhanced completions
"Copilot++ (beta): this is an 'add-on' to Copilot that suggests diffs around your cursor"
March 2025 (v0.48.x): Multi-conversation workflows
"Create new tabs (⌘T) in chat to have multiple conversations in parallel"
March 2025 (v0.50): Autonomous agents
"Background Agent for everyone... You can start using it right away by clicking the cloud icon in chat"
June 2025 (v1.0): Automated code review
"BugBot automatically reviews your PRs and catches potential bugs and issues"
Each step built on the previous one, but they never waited to have the full vision before shipping the next piece.
By June 2025, when they shipped v1.0, the scope was breathtaking:
"Cursor 1.0 is here! This release brings BugBot for code review, a first look at memories, one-click MCP setup, Jupyter support, and general availability of Background Agent."
This wasn't because they had better AI models or more funding—it was because they had compressed multiple product generations into 27 months of relentless iteration. Their competitors couldn't match their learning velocity. Every month Cursor shipped features, gathered feedback, and evolved, they pulled further ahead.
Cursor's story proves that in fast-moving technical fields, shipping velocity isn't just a competitive advantage—it's existential. They turned their ability to ship, learn, and evolve quickly into their most defensible moat.
While their competitors built in stealth mode and planned perfect launches, Cursor was already three iterations ahead, learning from real users and discovering new possibilities. They didn't just build a better code editor—they built a better way of building products.
For startups: The question isn't whether you can build great features, but whether you can evolve faster than the market changes around you. In Cursor's own words, repeated throughout their journey: "Your feedback will decide which direction we take this." Make that your north star, ship relentlessly, and let your users show you what you're actually building.
The team that learns fastest doesn't just win—they often discover they're playing an entirely different game.
Recent research reveals the scope of this crisis. MIT's Sherry Turkle documents how digital communication has created what she calls "alone together"—physically connected but emotionally isolated (Turkle, 2017). Studies by the Pew Research Center show that despite unprecedented connectivity, rates of loneliness and social isolation have reached epidemic levels, particularly among young adults who've grown up primarily in digital environments (Anderson & Jiang, 2018).
Neuroscientist Matthew Lieberman's research demonstrates that our brains are fundamentally wired for social connection—that collaboration and empathy activate the same neural networks as physical needs like hunger and thirst (Lieberman, 2013). Yet the Harvard Study of Adult Development, tracking lives for over 80 years, shows that the quality of our relationships is the strongest predictor of life satisfaction and health outcomes (Waldinger & Schulz, 2023).
The disconnect is stark: we're biologically designed for collaborative meaning-making, but our asynchronous digital communication systems now actively discourage it.
To understand what we've lost, we can turn to the educational philosopher Tsunesaburo Makiguchi, whose theory of value creation offers a profound lens for examining human communication. Makiguchi identified three fundamental types of value that humans create through interaction: beauty (aesthetic/emotional value), benefit (practical value), and good (moral/ethical value).
The philosophy of value creation stresses the autonomous capacities of learners. For Makiguchi, children were anything but empty vessels to be filled with the knowledge prescribed for them by adults. Children arrived in the classroom already possessing experience, knowledge, and a capacity to learn.
"The aim of education is not to transfer knowledge; it is to guide the learning process, to equip the learner with the methods of research. It is not the piecemeal merchandizing of information; it is to enable the acquisition of the methods for learning on one's own; it is the provision of keys to unlock the vault of knowledge. Rather than encouraging students to appropriate the intellectual treasures uncovered by others, we should enable them to undertake on their own the process of discovery and invention. [1934]"
In traditional human dialogue—the kind that built civilizations—all three forms of value emerge naturally:
Beauty manifests in the emotional resonance of shared stories, the aesthetic pleasure of collaborative discovery, and the inherent satisfaction of being truly heard and understood.
Benefit comes through practical wisdom exchange, problem-solving together, and the mutual learning that emerges when different perspectives combine constructively.
Good develops through the moral growth that happens when we genuinely encounter other viewpoints, build empathy across difference, and strengthen the social bonds that create ethical communities.
Modern social media platforms systematically destroy Makiguchi's three forms of value. Research by the Center for Humane Technology shows how engagement-optimization algorithms specifically reward content that triggers negative emotional responses—anger, outrage, fear—while suppressing content that builds understanding or connection (Harris, 2019).
Studies by MIT's Sinan Aral reveal that false information spreads six times faster than truth on social platforms, not because people intentionally share misinformation, but because falsehoods are designed to be more emotionally provocative than nuanced truth (Vosoughi et al., 2018). The algorithmic preference for engagement over accuracy creates an information ecosystem that rewards the most inflammatory takes while drowning out collaborative, value-creating dialogue.
The result is what researchers call "context collapse"—the flattening of complex human experiences into bite-sized, context-free content optimized for viral spread rather than genuine understanding (Boyd, 2011). We've traded the collaborative meaning-making that creates Makiguchi's three forms of value for systems that extract attention and monetize division.
Anthropologist Helen Fisher's research on human pair bonding shows that deep conversation—what she calls "intricate conversation"—is one of the primary mechanisms through which humans build trust and connection (Fisher, 2016). Yet studies by the American Psychological Association demonstrate that the average person now spends less than 30 minutes per day in meaningful face-to-face conversation (APA, 2019).
Every day, profound human wisdom disappears without being captured or shared. Research by the MacArthur Foundation's How We Get To Next project shows that traditional knowledge transfer—the passing of wisdom from elders to younger generations through story and dialogue—has declined dramatically in industrialized societies (MacArthur Foundation, 2020).
A grandmother's insights about resilience, learned through decades of hardship and joy. An immigrant's story of adaptation and belonging. A founder's real journey through failure and breakthrough. These stories contain what Makiguchi would recognize as the fullest expression of human value creation—beauty in their emotional truth, benefit in their practical wisdom, and good in their capacity to build empathy and connection.
But in our current information ecosystem, this wisdom has no place. It's too personal for news, too unpolished for social media, too deep for algorithmic feeds optimized for quick engagement rather than lasting value.
The solution lies in what MIT's Rosalind Picard calls "affective computing"—technology designed to recognize and respond to human emotional and social needs rather than simply optimizing for engagement metrics (Picard, 1997). Recent advances in AI make it possible to preserve and surface the collaborative essence of human dialogue at scale.
Research by Stanford's Center for Compassion and Altruism shows that hearing someone's actual voice—as opposed to reading their words—activates mirror neurons and empathy responses in ways that text-based communication cannot (Doty, 2016). Studies by the University of Chicago's Behavioral Science Lab demonstrate that voice-based storytelling creates stronger emotional connections and better retention of complex information than any other medium (Schroeder & Epley, 2015).
This research points toward a solution: collecting, preserving, and sharing the lived wisdom of everyday people through voice-recorded conversations that create all three of Makiguchi's forms of value.
Here are two examples:
Story Collection involves trained interviewers having rich, meaningful conversations with people about their life journeys, relationships, challenges, and transformations. These conversations create beauty through emotional resonance, benefit through the practical wisdom exchanged, and good through the empathy they build.
Voice Journaling creates a simple practice of reflection and self-discovery. People call a number, receive a thoughtful prompt, speak freely about their experience, and receive an AI-generated summary that helps them process their own thoughts and feelings over time.
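To make the mechanics concrete, here is a minimal sketch of that loop; `transcribe` and `summarize` are hypothetical stubs standing in for a real speech-to-text service and an LLM summarizer.

```python
def transcribe(audio_path: str) -> str:
    # Stub: a production system would call a speech-to-text API here.
    return "Today I kept thinking about a conversation with my father..."

def summarize(transcript: str, prompt: str) -> str:
    # Stub: a production system would ask an LLM to reflect the entry back.
    return f"In response to '{prompt}', you returned to themes of family and gratitude."

def journal_entry(audio_path: str, prompt: str) -> dict:
    transcript = transcribe(audio_path)
    return {
        "prompt": prompt,
        "transcript": transcript,
        "summary": summarize(transcript, prompt),
    }

entry = journal_entry("call-2025-06-01.wav", "What surprised you this week?")
print(entry["summary"])
```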
The key innovation lies in using AI to preserve and surface Makiguchi's three forms of value rather than optimizing for engagement metrics. Advanced natural language processing can identify and highlight moments of genuine insight, emotional resonance, and collaborative meaning-making within conversations.
Instead of reducing complex human experiences to viral soundbites, AI summarization can preserve the texture of collaborative dialogue—the moments where understanding emerges through exchange, where people build on each other's ideas, where genuine learning happens through respectful disagreement.
Research by MIT's Computer Science and Artificial Intelligence Laboratory shows that AI systems trained to recognize collaborative dialogue patterns can help surface the most valuable aspects of human conversation while maintaining their authentic, emotionally resonant qualities (Cao et al., 2020).
Makiguchi understood that education—in its deepest sense—is about creating value through human interaction. This approach represents a new kind of educational infrastructure: a searchable, emotionally resonant library of human insight that serves all three forms of value creation.
Imagine searching for wisdom about career transitions and finding not expert advice, but the actual voices of dozens of people who've navigated similar changes—their fears, their insights, their hard-won understanding creating beauty through emotional connection, benefit through practical wisdom, and good through expanded empathy.
This isn't just a media project or a tech platform. It's social infrastructure designed around Makiguchi's insight that human value is created through the collaborative exchange of experience, wisdom, and understanding.
Research across neuroscience, psychology, and sociology points to the same conclusion: humans are fundamentally collaborative meaning-making creatures. The current digital landscape has pushed us away from these collaborative instincts, but emerging technologies make it possible to restore what we've lost at unprecedented scale.
By intentionally capturing and preserving genuine human dialogue, we can begin to rebuild communication systems that create Makiguchi's three forms of value rather than destroying them. We can move beyond the engagement-optimization that has fractured human connection toward technology that genuinely serves human flourishing.
The question isn't whether technology will continue to shape human communication—it will. The question is whether we'll build systems that create beauty, benefit, and good through collaborative dialogue, or continue to drift toward platforms that extract attention while destroying the social bonds that make life meaningful.
Makiguchi believed that the purpose of education—and by extension, all human communication—is value creation (Makiguchi, 1930/2002). In an age of artificial intelligence and algorithmic feeds, this vision offers both a diagnosis of what's gone wrong and a blueprint for building something better.
The conversation starts now. The value we create together will determine not just our individual flourishing, but the kind of civilization we become.
American Psychological Association. (2019). Stress in America 2019: Stress and current events. APA.
Anderson, M., & Jiang, J. (2018). Teens, social media & technology 2018. Pew Research Center.
Boyd, D. (2011). Social network sites as networked publics: Affordances, dynamics, and implications. In Z. Papacharissi (Ed.), A networked self: Identity, community, and culture on social network sites (pp. 39-58). Routledge.
Cao, Y., Li, S., Liu, Y., Yan, Z., Dai, Y., Yu, P. S., & Sun, L. (2020). A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT. arXiv preprint.
Doty, J. R. (2016). Into the magic shop: A neurosurgeon's quest to discover the mysteries of the brain and the secrets of the heart. Avery.
Fisher, H. (2016). Anatomy of love: A natural history of mating, marriage, and why we stray. W. W. Norton & Company.
Harris, T. (2019). The tech industry's psychological war on kids. Center for Humane Technology.
Lieberman, M. D. (2013). Social: Why our brains are wired to connect. Crown Publishers.
MacArthur Foundation. (2020). How we get to next: Traditional knowledge systems in the digital age. MacArthur Foundation Reports.
Makiguchi, T. (2002). A geography of human life (D. M. Bethel, Trans.). Caddo Gap Press. (Original work published 1930)
Picard, R. W. (1997). Affective computing. MIT Press.
Schroeder, J., & Epley, N. (2015). The sound of intellect: Speech reveals a thoughtful mind, increasing a job candidate's appeal. Psychological Science, 26(6), 877-891.
Turkle, S. (2017). Alone together: Why we expect more from technology and less from each other. Basic Books.
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.
Waldinger, R. J., & Schulz, M. S. (2023). The good life: Lessons from the world's longest scientific study of happiness. Simon & Schuster.
Makiguchi, T. Complete Works of Tsunesaburo Makiguchi (in Japanese). Daisan Bunmeisha, Vol. 6, p. 285. (Cf. Bethel 1989, p. 168)
Thanks to Tee Ponsukcharoen for reading drafts and giving feedback.
]]>An LLM agent analyzes quarterly sales data. It misinterprets Q2 growth as 15% instead of 5%. This becomes the baseline for Q3 projections. The agent then builds a hiring plan based on the inflated projections. By step 5, it's recommending the company triple its workforce.
Both scenarios showcase the same fundamental problem: error propagation. Yet while computer science theory predicted this decades ago and robotics engineers have spent decades developing sophisticated error correction mechanisms, the AI community is deploying multi-step LLM agents with barely a whisper about compound failures.
Long before robots or LLMs existed, computer science established the mathematical foundations of error propagation. Wilkinson (1963) in "Rounding Errors in Algebraic Processes" proved that numerical errors compound predictably in sequential computations. His work on condition numbers showed exactly how input uncertainties amplify through algorithmic chains.
Goldberg (1991) in "What Every Computer Scientist Should Know About Floating-Point Arithmetic" demonstrated that even simple arithmetic operations suffer from cumulative precision loss. The IEEE 754 standard exists precisely because early computer scientists recognized that ignoring error propagation leads to catastrophic failures in computational systems.
The theoretical framework was clear: any sequential system without error correction will experience reliability degradation proportional to the number of operations. This isn't just theory—it's why financial systems use decimal arithmetic instead of floating-point, and why NASA's flight computers employ triple redundancy.
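Goldberg's point is easy to demonstrate: binary floating point cannot represent 0.1 exactly, so tiny rounding errors accumulate across repeated operations, which is exactly why financial systems reach for decimal arithmetic.

```python
from decimal import Decimal

# A million additions of 0.1: float drifts, Decimal stays exact.
total_float = sum(0.1 for _ in range(1_000_000))
total_decimal = sum(Decimal("0.1") for _ in range(1_000_000))

print(total_float)    # 100000.00000133288 -- rounding error has crept in
print(total_decimal)  # 100000.0
```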
The robotics community didn't just acknowledge these mathematical realities—they engineered solutions. The transition from theoretical computer science to physical systems revealed new dimensions of the error propagation problem.
Chatila and Laumond (1985) in "Position Referencing and Consistent World Modeling for Mobile Robots" showed that sensor noise compounds quadratically with the number of observations. This led to the development of simultaneous localization and mapping (SLAM) algorithms that explicitly model and correct for cumulative uncertainty.
LaValle (2006) in "Planning Algorithms" formalized the concept of configuration space obstacles created by uncertainty propagation. His work showed that without explicit error modeling, path planning algorithms become unreliable after just a few waypoints.
The robotics solution was systematic: model uncertainty explicitly, fuse redundant sensor readings, and correct continuously in closed loops rather than letting errors accumulate unchecked.
LLM agents represent a fascinating convergence of computer science theory and robotics practice, but operating in the space of semantic computation rather than numerical calculation or physical manipulation.
The Computer Science Parallel: Like floating-point arithmetic, each LLM inference introduces uncertainty. Bengio et al. (2013) in "Representation Learning: A Review and New Perspectives" showed that deep networks accumulate representational errors through their layers. LLM agents simply extend this to the temporal dimension—errors accumulate across reasoning steps rather than network layers.
The Robotics Parallel: Like sensor fusion, LLM agents must integrate information from multiple sources (context, tools, memory) while maintaining coherent world models. Thrun et al. (2005) demonstrated that without explicit uncertainty tracking, integrated information becomes unreliable exponentially fast.
The Unique Challenge: Unlike numerical computation (where errors are well-defined) or robotics (where errors are measurable), LLM semantic errors are often undetectable until propagation makes them catastrophic. A hallucinated fact looks identical to a real fact until it causes downstream failures.
The error propagation in LLM agents follows well-established mathematical principles, but manifests in ways that make traditional solutions challenging:
From Numerical Analysis: Higham (2002) in "Accuracy and Stability of Numerical Algorithms" proved that error propagation follows condition number mathematics. For LLM agents, the "condition number" is effectively the semantic sensitivity of each reasoning step to input uncertainty.
From Information Theory: Shannon (1948) established that information transmission through noisy channels degrades predictably. LLM reasoning chains are essentially semantic channels where each step introduces noise, but unlike digital channels, we lack error-correcting codes for meaning.
From Control Theory: Åström and Murray (2021) in "Feedback Systems: An Introduction for Scientists and Engineers" showed that open-loop systems (like current LLM agents) are inherently unstable over multiple iterations, while closed-loop systems with feedback can maintain stability.
The mathematics predicted exactly what we're observing: sequential systems without error correction mechanisms will fail predictably as chain length increases.
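As a toy illustration of that mathematics, treat each reasoning step as amplifying its input uncertainty by a condition-number-like sensitivity factor; the factor of 1.3 below is an arbitrary illustrative value, not a measured property of any model.

```python
# With per-step sensitivity kappa > 1 and no correction,
# uncertainty grows geometrically with chain length.
def propagated_error(initial_error: float, kappas: list[float]) -> float:
    error = initial_error
    for kappa in kappas:
        error *= kappa
    return error

print(propagated_error(0.01, [1.3] * 10))  # ~0.138: a 1% error becomes ~14%
```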
Before accepting the doom-and-gloom narrative, we should examine why some researchers believe LLMs might escape the traditional error propagation trap:
Emergent Error Correction: Brown et al. (2020) in the GPT-3 paper suggested that large language models exhibit "emergent capabilities" that might include self-correction. Some researchers argue that sufficiently large models can detect and correct their own errors through their training on self-consistent text.
Semantic Robustness: Hendrycks et al. (2021) in "Measuring Massive Multitask Language Understanding" found that large models show surprising robustness to input perturbations. This suggests that semantic reasoning might be more fault-tolerant than numerical computation.
Context-Driven Recovery: Wei et al. (2022) in "Chain-of-Thought Prompting" showed that models can sometimes recover from early errors when provided with sufficient context. The argument: semantic systems might have self-healing properties that numerical systems lack.
The Scale Hypothesis: Kaplan et al. (2020) in "Scaling Laws for Neural Language Models" suggested that error rates decrease predictably with model size. If true, sufficiently large models might achieve error rates low enough to make propagation manageable.
However, empirical evidence suggests these optimistic views don't hold under systematic analysis:
Emergent Correction is Inconsistent: Kadavath et al. (2022) in "Language Models (Mostly) Know What They Know" found that while models can sometimes self-correct, this ability is unpredictable and doesn't scale systematically with task complexity.
Semantic Robustness Has Limits: Ribeiro et al. (2020) in "Beyond Accuracy: Behavioral Testing of NLP Models" showed that apparent robustness often masks brittleness to specific types of semantic perturbations—exactly the kind that propagate through reasoning chains.
Context Recovery Requires Perfect Context: The self-healing properties depend on maintaining perfect contextual information, but Liu et al. (2023) in "Lost in the Middle" demonstrated that long-context reasoning degrades significantly as context length increases.
Scale Doesn't Solve Systemic Issues: Ganguli et al. (2022) in "Predictability and Surprise in Large Generative Models" found that while individual error rates decrease with scale, systemic issues like hallucination and reasoning failures persist even in the largest models.
The theoretical debates matter less than empirical evidence. Recent systematic studies provide clear data on error propagation in LLM systems:
Wei et al. (2022) in "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" found that reasoning accuracy degrades significantly in multi-step problems: in their analysis of GPT-3 on arithmetic word problems, accuracy fell steadily as the number of required reasoning steps grew.
This follows the exponential decay predicted by classical error propagation theory.
Press et al. (2023) in "Measuring and Narrowing the Compositionality Gap in Language Models" provided the most comprehensive analysis to date. Their key finding: "Performance degradation follows a power law as the number of composition steps increases." This matches exactly what Wilkinson (1963) predicted for sequential computational systems.
Huang et al. (2023) in "A Survey on Hallucination in Large Language Models" documented the mechanism: factual errors propagate through reasoning chains with 73% probability of causing downstream failures. Critically, they found that error detection decreases as chain length increases—the system becomes less capable of recognizing its own mistakes precisely when it's making more of them.
The convergence is striking:
Computer Science Theory (1960s-1990s): Sequential computation without error correction is inherently unstable. Mathematical proof exists.
Robotics Practice (1980s-2010s): Physical systems confirm the theory. Engineering solutions developed through necessity.
LLM Empirics (2020s): Semantic reasoning systems exhibit identical patterns. The mathematics still holds.
All three fields arrived at the same conclusion through different paths: systems that chain operations without explicit error correction will fail predictably as chain length increases.
The mathematics of error propagation are well-established in control theory. Kalman (1960) laid the groundwork in "A New Approach to Linear Filtering and Prediction Problems," showing how errors accumulate in dynamic systems.
For LLM agents, Dziri et al. (2023) in "Faith and Fate: Limits of Transformers on Compositionality" provided concrete measurements. They found that if each step in an agent workflow has a 90% accuracy rate, a 10-step process has only 35% reliability (0.9^10 = 0.35).
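The compounding arithmetic is worth seeing directly; the function below simply restates that exponential decay.

```python
# Reliability of a chain of independent steps: accuracy ** n_steps.
def chain_reliability(step_accuracy: float, n_steps: int) -> float:
    return step_accuracy ** n_steps

print(round(chain_reliability(0.90, 10), 2))  # 0.35 -- Dziri et al.'s example
print(round(chain_reliability(0.99, 20), 2))  # 0.82 -- even 99% steps decay
```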
More concerning, Gao et al. (2023) in "Retrieval-Augmented Generation for AI-Generated Content: A Survey" showed that errors don't just multiply—they accelerate: in their study of multi-step reasoning, error rates grew faster with each additional step.
This matches exactly what robotics engineers discovered in the 1980s with manipulator arms and mobile robots.
This isn't theoretical. Shinn et al. (2023) in "Reflexion: Language Agents with Verbal Reinforcement Learning" documented systematic failures in production-like environments.
Yao et al. (2023) in "Tree of Thoughts: Deliberate Problem Solving with Large Language Models" showed similar patterns across multiple domains, concluding that "the reliability of multi-step reasoning degrades faster than previously assumed."
The reluctance to address error propagation in LLM agents stems from several sources:
Historical Perspective: Computer science developed error analysis because early systems failed catastrophically without it. Robotics adopted these principles because physical failures are impossible to ignore. LLM agents produce plausible-sounding failures that can be dismissed as "edge cases" or "prompt engineering problems."
Economic Incentives: The current AI boom rewards rapid deployment over systematic reliability. Christensen (1997) in "The Innovator's Dilemma" predicted this pattern: disruptive technologies initially prioritize capability over reliability, often to their eventual detriment.
Complexity Illusion: The sophistication of modern LLMs masks the brittleness of multi-step systems built on top of them. This is analogous to what Perrow (1984) called "normal accidents" in "Normal Accidents: Living with High-Risk Technologies"—complex systems fail in ways that seem impossible until they happen.
Domain Transfer Resistance: Each field (computer science, robotics, AI) tends to believe its problems are unique. The mathematical foundations are identical, but the surface differences create cognitive barriers to knowledge transfer.
The solution isn't to abandon LLM agents, but to apply the hard-won lessons from computer science theory and robotics practice:
From Numerical Analysis: Implement semantic condition numbers—metrics that quantify how sensitive each reasoning step is to input uncertainty. Demmel (1997) showed how to compute these for numerical algorithms; we need equivalent measures for semantic reasoning.
From Robotics: Deploy closed-loop verification systems. Instead of open-loop agent workflows, implement verification steps that validate outputs before proceeding. Siciliano and Khatib (2016) in "Springer Handbook of Robotics" provide extensive frameworks for fault-tolerant control that could be adapted to semantic reasoning.
From Information Theory: Develop semantic error-correcting codes. MacKay (2003) in "Information Theory, Inference, and Learning Algorithms" showed how redundancy can correct transmission errors. LLM agents need similar redundancy mechanisms for reasoning errors.
Specific engineering solutions follow directly from these lessons: quantify each step's sensitivity, verify outputs before they propagate, and add redundancy to catch reasoning errors (one sketch of the verification loop follows below).
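As one sketch of what that closed loop might look like in code, the wrapper below validates each step's output before it becomes the next step's input; `verify` and the step functions are hypothetical hooks, not an established agent API.

```python
def run_chain(steps, state, verify, max_retries=2):
    """Run steps in sequence, but only accept verified outputs."""
    for step in steps:
        for _ in range(max_retries + 1):
            candidate = step(state)
            # `verify` could run unit tests, consistency checks,
            # or a second model acting as a critic.
            if verify(state, candidate):
                state = candidate
                break
        else:
            raise RuntimeError(f"{step.__name__} failed verification")
    return state
```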
The False Dichotomy: The current debate often presents a false choice between "LLM agents are magic and will solve everything" versus "LLM agents are hopeless and will always fail." The reality is that they're engineering systems subject to well-understood mathematical constraints. We can build reliable systems if we apply the same rigor that computer science and robotics developed over decades.
As LLM agents move from demos to production systems handling financial transactions, medical decisions, and infrastructure management, the cost of compound errors grows exponentially. We're not just building cool demos—we're building systems that could cause real harm when they fail.
The robotics community learned to design for reliability because physical failures were impossible to ignore. The AI community needs to learn the same lesson before our invisible failures become visible disasters.
Error propagation isn't an unsolvable problem—it's an engineering challenge that robotics has already addressed. The question is whether the AI community will learn from these lessons or repeat the same mistakes at scale.
The next time you see a demo of an LLM agent completing a complex multi-step task, ask the hard question: "What's the failure rate when this runs 1,000 times in production?"
Because in the world of compound errors, being impressive once means nothing if you're unreliable twice.
Sarah runs a small organic farm outside Portland. She spends her mornings testing soil pH and her evenings calculating whether she can afford health insurance. Despite growing food that nourishes her community and stewarding land that sequesters carbon¹, she watches cryptocurrency speculators make more in a day than she earns in a year. Her neighbor Dave flips houses for profit, contributing nothing to local food security, but his "net worth" dwarfs hers.
This isn't just unfair—it's economically irrational. Our monetary system rewards financial extraction over value creation, speculation over stewardship. As economist Kate Raworth demonstrates in Doughnut Economics², current GDP measurements fail to capture ecological and social value creation, leading to systematic undervaluation of regenerative practices.
The dollar trying to measure Sarah's farm value, Portland's housing market, and global semiconductor trade exemplifies what systems theorist Donella Meadows called "policy resistance"³—when systems generate the opposite of intended outcomes. One currency cannot effectively represent multiple, often conflicting forms of value.
Research by the New Economics Foundation shows that local food systems generate $1.90 in local economic activity for every dollar spent, compared to $1.15 for conventional food retail⁴. Yet financial markets systematically undervalue these multiplier effects because they occur outside monetized exchange systems.
Sarah's farm creates what economists call "positive externalities"—benefits not captured in market prices. Soil carbon sequestration, watershed protection, biodiversity conservation, and community resilience don't appear on balance sheets, despite their measurable economic value⁵.
Recent advances in blockchain technology and smart contracts enable what computer scientist Silvio Micali terms "Algorand consensus"⁶—decentralized systems that maintain integrity without central authorities. Applied to supply chains, these technologies could create what we might call "value-differentiated exchange systems."
Consider Maria's coffee import business. Currently, she pays farmers commodity prices that fluctuate with global speculation, often disconnected from production costs or quality. The Fairtrade Foundation has documented how price volatility forces farmers into unsustainable practices⁷.
A nested currency system could separate different value streams:
- Labor tokens maintaining stable purchasing power for farmer compensation
- Environmental tokens rewarding measurable sustainability practices
- Community tokens funding local infrastructure and education
- Quality tokens recognizing superior products and craftsmanship
Each system maintains internal stability while enabling seamless conversion through automated market makers, similar to those used in decentralized finance protocols⁸.
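For concreteness, here is the constant-product formula those protocols use, applied to a hypothetical labor-token/environmental-token pool; the reserve sizes and 0.3% fee are illustrative assumptions.

```python
# Constant-product AMM: a pool holding reserves (x, y) keeps x * y fixed,
# so swapping dx of one token returns dy = y - k / (x + dx), minus the fee.
def swap(reserve_in: float, reserve_out: float, amount_in: float,
         fee: float = 0.003) -> float:
    amount_in_after_fee = amount_in * (1 - fee)
    k = reserve_in * reserve_out
    new_reserve_out = k / (reserve_in + amount_in_after_fee)
    return reserve_out - new_reserve_out

# Converting 100 labor tokens into environmental tokens via a shared pool:
print(swap(reserve_in=10_000, reserve_out=5_000, amount_in=100))  # ~49.3
```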
The primary risk is speculation capture—wealthy actors accumulating tokens designed for community circulation. Behavioral economist Richard Thaler's work on "nudge theory"⁹ suggests design solutions that make speculation unattractive while preserving legitimate use.
Freicoin, launched in 2012, implemented demurrage (holding costs) that discouraged hoarding while maintaining transaction utility¹⁰. Estonia's e-Residency program demonstrates how digital identity verification can restrict token ownership to legitimate participants¹¹.
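Demurrage itself is simple to express in code; the 5% annual rate below is illustrative rather than Freicoin's exact parameter.

```python
# A balance decays at a fixed annual rate, so hoarding is costly
# while everyday transacting is barely affected.
def balance_after(principal: float, days_held: int,
                  annual_rate: float = 0.05) -> float:
    daily_factor = (1 - annual_rate) ** (1 / 365)
    return principal * daily_factor ** days_held

print(balance_after(1_000, 365))  # 950.0 -- a year of idle holding
print(balance_after(1_000, 30))   # ~995.8 -- spending quickly costs little
```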
These concepts aren't theoretical. Multiple real-world examples demonstrate viability:
Local Currencies:
- Ithaca Hours has circulated over $100,000 since 1991, supporting 900+ local businesses¹²
- BerkShares has facilitated millions in regional commerce with 400+ participating businesses¹³
- Mountain Hours and Bay Bucks demonstrate scalability across different regional contexts
Supply Chain Applications:
- Walmart's blockchain food tracking reduces contamination response time from weeks to seconds¹⁴
- Provenance helps brands verify sustainability claims throughout supply chains¹⁵
- Fair Trade USA's blockchain pilot tracks premium payments directly to farmers¹⁶
Environmental Markets:
- California's cap-and-trade program has generated over $17 billion for climate investments¹⁷
- Nori creates marketplaces where farmers earn $15+ per ton of CO2 sequestered through regenerative practices¹⁸
Research by MIT's Community Innovators Lab suggests successful alternative currency adoption requires addressing specific friction points rather than wholesale system replacement¹⁹.
Phase 1: Identify Pain Points Visit local farmers markets, credit unions, and community land trusts. Document specific challenges: seasonal cash flow variations, supply chain opacity, difficulty accessing capital for sustainable practices.
Phase 2: Build Minimal Viable Products Create simple digital tools addressing identified problems. A CSA management app with automated seasonal pricing adjustments. A supply chain tracker providing transparency that 73% of consumers report wanting²⁰.
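As one hypothetical shape the CSA tool could take, a seasonal-pricing helper might look like the sketch below; the factors are invented for illustration, not derived from farm data.

```python
# Smooth a farmer's cash flow: discount shares sold in lean months,
# price peak-season shares closer to market rate.
SEASONAL_FACTOR = {
    "winter": 0.85,
    "spring": 0.95,
    "summer": 1.10,
    "fall": 1.00,
}

def share_price(base_price: float, season: str) -> float:
    return round(base_price * SEASONAL_FACTOR[season], 2)

print(share_price(600, "winter"))  # 510.0
print(share_price(600, "summer"))  # 660.0
```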
Phase 3: Test Interoperability Connect successful local implementations. Enable value transfer between different regional systems while preserving local control and priorities.
The deeper promise extends beyond technology to what economist John Fullerton calls "regenerative capitalism"²¹—economic systems that enhance rather than degrade the conditions for life. When soil stewardship, community building, and ecological restoration generate appropriate economic returns, practitioners like Sarah don't choose between values and viability.
Current pilot programs demonstrate feasibility. The question isn't whether nested currency systems can work, but who will scale them first.
1. Paustian, K., et al. (2016). Climate-smart soils. Nature, 532(7597), 49-57.
2. Raworth, K. (2017). Doughnut Economics: Seven Ways to Think Like a 21st-Century Economist. Chelsea Green Publishing.
3. Meadows, D. (2008). Thinking in Systems: A Primer. Chelsea Green Publishing.
4. New Economics Foundation. (2005). Plugging the Leaks: Making the most of every pound that enters your local economy.
5. Costanza, R., et al. (2017). Twenty years of ecosystem services: How far have we come and how far do we still need to go? Ecosystem Services, 28, 1-16.
6. Micali, S. (2017). ALGORAND: the efficient and democratic ledger. arXiv preprint arXiv:1607.01341.
7. Fairtrade Foundation. (2018). Driving Income Security for Cocoa Farmers.
8. Adams, H., et al. (2018). Uniswap v2 core. Technical whitepaper.
9. Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.
10. Gesell, S. (1916). The Natural Economic Order. Free-Economy Publishing.
11. Korjus, K. (2014). Estonia's digital revolution. Government Technology, 27(8), 14-17.
12. Glover, P. (2014). Ithaca Hours: Local Currency That Works. Ithaca Hours.
13. Berkshire Regional Planning Commission. (2020). BerkShares Impact Report.
14. Kamath, R. (2018). Food traceability on blockchain: Walmart's pork and mango pilots with IBM. Journal of the British Blockchain Association, 1(1), 1-12.
15. Westerkamp, M., et al. (2018). Tracing manufacturing processes using blockchain-based token compositions. Digital Communications and Networks, 6(2), 167-176.
16. Kshetri, N. (2018). Blockchain's roles in meeting key supply chain management objectives. International Journal of Information Management, 39, 80-89.
17. California Air Resources Board. (2020). Cap-and-Trade Program Summary.
18. Sanderman, J., et al. (2017). Soil carbon debt of 12,000 years of human land use. Proceedings of the National Academy of Sciences, 114(36), 9575-9580.
19. MIT Community Innovators Lab. (2019). Alternative Currency Design Principles.
20. Nielsen. (2015). The Sustainability Imperative: New Insights on Consumer Expectations.
21. Fullerton, J. (2015). Regenerative Capitalism: How Universal Principles and Patterns Will Shape Our New Economy. Capital Institute.
Here's the uncomfortable truth: political influence and actual knowledge often exist on opposite ends of the spectrum. The people making decisions frequently aren't the ones who best understand the issues or their consequences. Those with the deepest expertise—frontline workers, subject matter experts, affected communities—typically have the least say in solutions.
Something interesting is happening with AI. Both sides of the power-knowledge equation now have access to a sophisticated thinking partner that exists outside traditional hierarchies.
For those with power but limited domain knowledge, an LLM is a thinking partner that lets them probe unfamiliar territory and stress-test assumptions without filtering everything through the hierarchy below them.
While personal strategies matter, the deeper solution requires systemic changes: creating institutions that naturally surface dissenting views, protect truth-tellers, and align incentives so those in power benefit from seeking challenging perspectives.
The goal isn't to eliminate hierarchy—it's to create better information flow between those who know and those who decide.
The power-knowledge gap isn't going away, but it doesn't have to be permanent. LLMs are creating new opportunities for both sides to bridge this divide more effectively. Those with power can systematically seek out knowledge. Those with knowledge can find strategic ways to influence power.
The organizations that figure this out will make better decisions. The ones that don't will keep making predictable mistakes, wondering why good ideas never seem to win.
In technology, curiosity leads us to ask, "How does this work?" instead of assuming we already know. When directed toward other people, this same curiosity transforms how we approach human differences.
Instead of making assumptions about someone based on surface characteristics or preconceived notions, curiosity prompts us to wonder: "What experiences shaped this person? What might I not understand about their perspective? What could I learn from their unique journey?"
This shift from assumption to inquiry is transformative. It moves us from judgment to genuine discovery.
When we encounter someone with different political views, cultural backgrounds, or life choices, curiosity leads us to ask questions rather than make declarations. It creates space for understanding rather than debate. It opens doors rather than builds walls.
The curious mind approaches human difference with the same excitement it brings to a new technology: "This is interesting! I wonder how this perspective works and what I might learn from it."
This curiosity-driven approach to human difference doesn't mean abandoning our own values. Rather, it means approaching others with the humility to recognize that our understanding is incomplete and with the genuine interest to learn more.
Just as curiosity helps tech professionals learn from technical failures, it can transform our interpersonal missteps into opportunities for deeper connection.
When a conversation goes poorly or a relationship hits a roadblock, the curious mind doesn't just feel bad or assign blame. It wonders: "What happened there? What did I miss? What can I learn about this person or about myself from this difficulty?"
This curiosity-driven approach to interpersonal challenges creates resilient relationships. Rather than seeing conflicts as evidence that a connection isn't viable, we see them as interesting data points that reveal something important about the other person's needs, values, or boundaries.
We can approach our own emotional reactions with the same curiosity: "That's interesting—why did I respond so strongly to what they said? What might that reveal about my own values or unexamined assumptions?"
This approach transforms our failures of understanding from sources of shame or frustration into opportunities for growth and deeper connection. Each misunderstanding becomes a doorway to greater empathy rather than a wall between people.
Just as technical curiosity creates connections between different domains of knowledge, human curiosity builds bridges across different life experiences and perspectives.
When we cultivate genuine curiosity about people unlike ourselves—those from different generations, cultural backgrounds, socioeconomic circumstances, or belief systems—we create mental networks that can recognize common humanity across apparent divides.
This bridge-building capacity is increasingly crucial in our polarized world. The person who has curiously explored many different human perspectives can see connections and possibilities for common ground that others miss. They become translators between different worldviews, helping each side understand the legitimate concerns and values of the other.
These curiosity-built bridges don't erase important differences or paper over real conflicts. Instead, they create the conditions where different perspectives can interact productively rather than destructively. They make space for the creative tension that drives social innovation.
The curious person becomes invaluable in diverse teams, communities, and organizations precisely because they can connect seemingly disconnected human experiences and find pathways for collaboration that others cannot see.
The most powerful effect of human curiosity is what we might call the "compassion explosion"—the exponential growth in our capacity to understand and care for others that happens when we've curiously explored many different human experiences.
Just as technical curiosity creates combinatorial insights, human curiosity creates combinatorial compassion. Each new perspective we genuinely explore doesn't just add linearly to our understanding—it multiplies it by creating new connections with everything we've previously learned about human experience.
This explains why the most effective bridge-builders, peacemakers, and community leaders often have unusually diverse human connections. Their curiosity has taken them across many different human boundaries, and the interaction between these different perspectives creates a rich mental model of human experience that allows them to connect with almost anyone.
In our complex global society, this capacity for combinatorial compassion is essential. Our biggest challenges—from climate change to economic inequality to technological disruption—require unprecedented collaboration across different perspectives. Only the curious mind can build the bridges these collaborations require.
How do we develop this superpower of human curiosity? Here are some practical approaches:
Practice question-first conversations. When meeting someone new or discussing sensitive topics, challenge yourself to ask three genuine questions before sharing your own perspective.
Seek out "curiosity frontiers." Identify groups or perspectives you know little about but could learn from. Find respectful ways to explore these different experiences.
Notice judgment. When you catch yourself making quick judgments about others, pause and replace the judgment with a question: "I wonder why they see things that way?"
Consume diverse narratives. Read books, watch films, and listen to podcasts featuring perspectives significantly different from your own. Approach them with genuine curiosity rather than evaluation.
Practice "perspective taking." Regularly challenge yourself to imagine complex issues from viewpoints you don't share. The goal isn't to agree but to understand.
Create diverse spaces. Build environments—physical or virtual—where different perspectives can interact regularly in psychologically safe ways.
Imagine a world where we approached human difference with the same curiosity tech innovators bring to new technologies.
We would see conflicts not as battles to be won but as interesting problems to be understood. We would approach social challenges with the humble recognition that our current understanding is incomplete. We would treat each new human perspective as a potential source of insight rather than a threat to our existing beliefs.
This curiosity-driven approach to human connection wouldn't eliminate disagreement or conflict. But it would transform how we engage with those inevitable human differences—moving us from polarization to productive tension, from demonization to discovery, from monologue to dialogue.
The greatest challenges we face as a species—from climate change to poverty to technological disruption—are too complex for any single perspective to solve alone. They require the combinatorial creativity that only emerges when diverse viewpoints connect through bridges of mutual understanding.
And those bridges are built, one conversation at a time, by people who have cultivated the superpower of curiosity about others.
When we direct our curiosity toward other human beings, we don't just build better technology—we build a better world.
Have you ever wondered why some people seem to navigate technology with such ease?
It's not because they're geniuses or because they were born with a keyboard in their hands. Their secret weapon is much simpler: curiosity.
Curiosity is what drives someone to install a new app just to see how it works. It's what makes them volunteer for the project no one else understands. It's that inner voice that says, "I wonder what would happen if..." when everyone else is saying, "Let's stick with what we know."
In the tech world, curiosity is the fundamental difference between those who merely use technology and those who shape it.
I've noticed that the most successful tech professionals aren't necessarily those with the highest IQs or the most prestigious degrees. They're the ones who approach new technologies with genuine interest rather than apprehension. While others are groaning about having to learn something new, they're thinking, "This could be interesting. Let me see how it works."
This willingness to explore—to be a beginner again and again—isn't about natural courage. It's about cultivating curiosity as a habit. As many tech innovators have noted, the key is getting comfortable with not knowing, trusting that your curiosity will guide you to understanding.
Think about learning to cook or play an instrument. The first attempts are always rough. But curiosity pushes you to try again, to experiment, to wonder "what if I tried it this way instead?"
That's the first superpower of curiosity: it transforms the uncomfortable into the intriguing.
Try This: Identify one piece of technology you've been avoiding or postponing learning. Approach it with pure curiosity this week—not with the pressure to master it, but with the simple question: "I wonder how this works?" Notice how this mindset feels different from "I have to learn this."
Here's something fascinating about curious people in tech: they have a completely different relationship with failure than most of us.
For many people, technical failures feel like personal failures. But the curious mind sees them differently—as interesting data points, as puzzles to be solved.
This is why some forward-thinking tech professionals actually document their failures. Not to punish themselves, but because they're genuinely curious about what went wrong and why. Each error becomes a case study driven by questions like: "That's interesting—why did it break that way?" or "What does this failure teach me about how this system actually works?"
When a curious person's code crashes or their design fails usability testing, they don't just feel bad and move on. Their curiosity kicks in: "Why did this happen? What assumptions did I make that weren't true? How does this change my understanding?"
Children learning to walk embody this curious approach to failure. Each fall isn't demoralizing—it's information. They adjust, try again, fall differently, and their curiosity about walking propels them forward despite hundreds of failures.
That's the second superpower of curiosity: it transforms failures from disappointments into discoveries.
Try This: The next time something goes wrong with technology you're using, pause before finding the quickest fix. Get curious instead. Ask: "Why exactly did this happen? What does this tell me about how this really works?" Write down what you discover. You're building your curiosity muscle.
As you follow your curiosity across different technologies and domains, something remarkable begins to happen in your brain: it starts connecting dots between seemingly unrelated areas.
This is where curiosity truly becomes a superpower.
The tech industry is full of examples. The person who explored both design and programming out of curiosity suddenly sees user interface solutions that neither pure designers nor pure programmers would imagine. The professional who followed their curiosity from marketing into data analysis brings insights about customer behavior that transform product development.
These connections aren't random—they're the natural result of a curious mind exploring diverse territories. When you're curious about many things, your brain naturally looks for patterns, similarities, and relationships between them.
I've observed this in collaborative tech environments: the most valuable insights often come from someone saying, "This reminds me of something I encountered in a completely different context." That's not coincidence—it's curiosity bearing fruit.
Unlike specialized expertise, which goes deep but narrow, curiosity creates a web of understanding that spans disciplines. This broad network of knowledge becomes invaluable when tackling complex problems that don't fit neatly into a single specialty.
That's the third superpower of curiosity: it builds bridges where others see separate islands.
Try This: Consider a technology challenge you're facing. Now think about a completely different domain you're curious about (could be gardening, music, cooking, sports—anything). Ask yourself: "Are there any principles or approaches from that area that might apply to my tech challenge?" Let your curiosity connect worlds that don't usually meet.
Now here's where curiosity becomes truly explosive in its impact.
When you've followed your curiosity in multiple directions and built numerous mental connections, you reach a tipping point. Your understanding doesn't just add up—it multiplies.
Mathematically, if you have curiosity-driven knowledge in five different areas, that doesn't give you 5 units of knowledge. It potentially gives you 120 different orderings (that's 5 factorial) in which those insights can chain together and intersect in unexpected ways.
This explains why the most innovative solutions in tech often come from people with unusual combinations of interests. Their curiosity has taken them into diverse territories, and the interaction between these different knowledge areas creates possibilities that specialists simply cannot see.
In tech companies, you can witness this when a stubborn problem finally yields to someone who says, "You know, this reminds me of something I explored in a completely different field." Their curiosity-driven explorations across domains created the perfect mental toolkit for that specific challenge.
Each new area your curiosity leads you to explore doesn't just add linearly to your capabilities—it multiplies them by creating new combinations with everything you've previously discovered.
That's the fourth and most powerful superpower of curiosity: it creates exponential rather than linear growth in your ability to solve problems.
Try This: List all the different areas of technology you've explored out of curiosity, even briefly. Don't just list work skills—include hobbies, interests, and side explorations. Now consider how many potential combinations exist between these different areas. That diverse curiosity-driven background is your unique advantage.
The secret to technological intuition isn't innate brilliance. It isn't memorizing specifications or mastering every programming language.
It's curiosity—consistent, genuine curiosity about how things work.
Curiosity is what makes you say yes to new experiences when others hesitate. It's what helps you see failures as fascinating data rather than discouraging setbacks. It's what creates connections between different domains in your thinking. And ultimately, it's what creates the combinatorial explosion of insights that looks like tech brilliance to outside observers.
The most powerful aspect of this superpower? Anyone can develop it. Curiosity isn't fixed at birth—it's a habit you can cultivate, a muscle you can strengthen.
The next time you encounter someone who seems to have an almost magical ability with technology, look past the surface impression. What you're really seeing is the compound interest of curiosity—years of wondering, exploring, connecting, and discovering.
And the best news? You can start building your curiosity superpower today. Just follow that little voice that says, "I wonder..."
Your future self will thank you when your understanding of technology doesn't just grow—it explodes. 💥
The distinction is subtle but crucial. Throughout human history, our greatest achievements and resilience have come through collective intelligence—people thinking together, challenging each other, providing emotional support, and creating shared meaning. These social processes aren't just nice-to-have features; they're the foundation of what makes us human.
The "human + machine" paradigm isolates individuals with technology, while the "human + human + machine" approach preserves the social fabric that has been essential to human flourishing.
What makes our current AI trajectory particularly concerning is the false conviction people develop when interacting with AI systems. We've established rigorous safety standards and public scrutiny for self-driving cars because the risk is obvious: one malfunction could cause immediate, visible harm.
Yet we're not applying the same scrutiny to AI systems that influence our information, decisions, and values.
Unlike the dramatic crash of a self-driving vehicle, AI systems like large language models operate through what I call "death by a thousand paper cuts": a fundamentally different harm model, one that is gradual, cumulative, and hard to attribute.
Consider how we interact with these systems: A physician relies on an AI diagnostic tool without consulting colleagues. A judge reviews an algorithm's sentencing recommendation without community input. A student crafts essays with AI assistance rather than through peer review and discussion.
The errors or biases in these interactions may not be immediately catastrophic like a car accident, but their cumulative effect on healthcare outcomes, justice, and education could be equally devastating over time.
Just because error attribution is harder doesn't mean companies should avoid accountability. In fact, the moral and value decay potential rivals or exceeds that of more visible technologies. We're rightfully concerned about physical safety on our roads—shouldn't we be equally vigilant about the health of our information ecosystems and social institutions?
What we're seeing now is a subtle but profound shift: we increasingly consult machines where we once consulted each other.
The risk isn't just about getting bad information—it's about atrophying our social thinking muscles. When we outsource thinking to machines rather than engaging with other humans, we lose the productive friction that generates new ideas, the emotional connection that builds trust, and the shared context that creates meaning.
This connects to a broader problem in organizational decision-making: overindexing on data at the expense of human judgment.
The solution isn't rejecting technology but reframing its role. We need systems that bring people together around the technology instead of isolating each of us with it.
As leaders, we must ask not just "How can AI make us more efficient?" but "How can AI strengthen rather than erode the human connections that make our organizations and society function?"
The technologies that will truly advance humanity aren't those that isolate us with machines, but those that enhance our ability to think, create, and solve problems together.
At some point in their career, every individual in the technology industry struggles to answer the question, “How do I maximize my impact?” If you feel stuck in your career, I hope this provides some clarity.
Every job I've taken up has been based on the potential for learning and the potential for impact. I started my career in a roughly 300-person engineering team. You get started on the job and learn the ropes in your first few months. After a few months, you start feeling that luck is a big part of who gets to work on the most interesting products. Do you wonder what it takes to work on the most interesting projects? How do you determine what work is high-impact and what is not?
The secret to maximizing impact is to maximize context. Context is the set of invisible rules that guide individual and collective action in an organization. Three things are key in building context to maximize impact:
A story about the future state of the business
An understanding of the incentives of others
Networks of people and teams
The ideal future state of any business is to have happier paying customers, higher profit, and more innovation. In the early days of Explorer.ai, our initial goal was to be more innovative at solving mapping problems for self-driving car customers. We spoke to over 90% of the companies in that space. A key driver of continued interaction was that the customers resonated with our stories as individuals.
You need to be good at telling your story to many diverse people. The only way to get better at it is to practice. The more revolutionary your story, the wider the support it can garner. Think like the CEO of a profit-making enterprise: if you are presented with two ideas, one that increases profit margin by 5% and another by 40%, which one will capture your initial attention?
The stories you craft about yourself, your projects and your teams show your team and peers how big you are thinking. This ability to keep thinking big requires courage to believe in bold ideas. To keep your story believable you need to keep executing towards the big idea.
Let's say you want to increase the profit margin of the products you are selling. Here are a few ways to raise it by 5% or by 40% within one year:
You can increase the price: possible in the 5% case, tough to execute in the 40% case.
You can increase the number of customers: for 5%, you might be able to make it part of your sales quota goals; depending on your product, 40% might be hard.
You can reduce the cost: 5% seems achievable in comparison to 40%.
You can see that each approach can plausibly achieve a 5% change. Even if you do all three at the 5% scope, you are still 25% short of the 40% goal (see the quick sketch below). This forces you to talk to more people, and those conversations might lead to the launch of a new product offering. Change the objective and the ways to get there, and I am sure you can find similar examples in different companies. One side effect of a bigger goal is that you will need to work with others in your organization to hit it. The 5% goal could have been achieved with a smaller group; the 40% goal cannot.
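Here is that back-of-the-envelope arithmetic spelled out, a minimal sketch in Python that treats the three levers as simply additive, the way the example above does:

```python
# Stacking all three 5% levers still leaves a big gap to the 40% goal.
levers = {
    "increase price": 0.05,
    "add customers": 0.05,
    "reduce cost": 0.05,
}

goal = 0.40
achieved = sum(levers.values())  # 0.15
shortfall = goal - achieved      # 0.25, the "25% short" above

print(f"achieved {achieved:.0%}, still {shortfall:.0%} short of {goal:.0%}")
```

The gap is what forces the wider collaboration: no single lever, and no single team, closes it.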
You cannot be only a storyteller in your organization. You need to take action in the direction of increasing profits by 40%. Every milestone you hit towards your fabled destination will keep transforming your fiction into fact.
Early in my career, I struggled to understand why people acted the way they did. The actions of others often felt misaligned with the company's goals and objectives. People would say one thing in a meeting and then do something very different. This caused me immense frustration. Then I learned about intrinsic incentives.
There are two main kinds of incentives at play in a company. One, extrinsic incentives: bonuses, promotions and other obvious rewards. Two, intrinsic incentives: the self-driven ones, often rooted in an individual's internal principles or values. A lot of people struggle to articulate their intrinsic incentives for working, which makes understanding co-workers tough.
One shortcut to understanding the incentives of others is to observe the actions they take. In the example of increasing profits, you will encounter different people: some support your cause, some are against it, and some are still figuring out whether your ideas are worth engaging with. It takes collaboration over multiple months to begin to understand other people and teams. Be aware of which working relationships click and which ones don't. The ones that don't will require a lot more work.
Each team leader or individual you work with will have different incentives, so avoid generalization. Focus on understanding what makes people tick. If someone loves working with data, ask them to do the financial analysis to understand the current sources of profit. If someone likes being organized, ask them to run the meetings. People feel empowered when their work aligns with their intrinsic incentives. A lot of people are not aware of their own intrinsic incentives, which can make the process of understanding others harder. Be kind to yourself through the process.
In understanding the incentives of others, you will experience a wide range of feelings. These feelings can be classified into over fifty labels, including relaxed, supported, helpless, warm, angry, and humiliated; here's a more comprehensive list from Hoffman Lab for future reference. Having this vocabulary will help you understand your own feelings in the situation.
Thinking about what drives people beyond financial incentives will allow you to work with a wider range of them. The more people you can collaborate with, the more enjoyable your experience of working with others will be.
Crafting your story is not enough. You need to start delivering value to the company. There are three levels at which you can deliver value. Each level is progressively harder in the short term but compounds in the long term.
Level 1: Ask for a problem that you think matches your skills. Do it yourself.
Level 2: Ask for a problem that you think matches your team’s skills. Work with a teammate to get it done.
Level 3: Ask for a problem that you think matches the skills within your entire company. Work with the different parts of the company to get it done.
Solving a level 1 task will teach you how to work with the tools, processes, and ecosystem within your company. Don't skip this level; that knowledge is valuable when you operate on level 2 and level 3 tasks. Level 1 tasks are especially valuable in larger organizations: they expose the various gatekeepers and set your expectations for how long it takes your organization to do certain basic actions.
Level 2 tasks are great for getting to know people. This is a good time to set up one-on-one meetings with your collaborators. In these one-on-ones, you can share progress on your level 1 task and get feedback; you will always be enlightened by the suggestions. These meetings are also a great place to start understanding the incentives of your team, and they force you to test your understanding of the company so far. Share your view of why a certain level 2 task matters to your team; the feedback from your teammates will clarify your understanding of how your team works.
Level 3 tasks are the toughest to make progress on. Succeeding at them will teach you how your company functions and what the incentives of different teams are. Level 3 tasks are typically not assigned to you; you define them and get them done. The skills you gained doing level 1 and 2 tasks help here: you will need to convince a large group of stakeholders of what they need to do and why. The timeline of such a task is often a few months to a few years, depending on the size of your company. Accomplishing it will require you to build strong networks within the company and understand business priorities better. Success will also cement your place as a valuable employee.
What are the principles of being effective in leveling up?
Ruthless prioritization against a timeline.
Cut complaints and maximize action.
Be open-minded to feedback.
Have a plan A, plan B, and plan C. If needed, plan D, plan E, plan F, and so on.
Developing the context of different teams and individuals helps you bypass a lot of steps for your next big idea.
Attempts to maximize context will expose you to business-aligned priorities. This enables you to get a head start on identifying and solving high-impact problems for your company. Often there are high-impact low-effort business problems staring at you. You only need to zoom out a little.
Organizations are constantly changing and evolving, and so are the people inside them. Context-building is a muscle worth strengthening in a rapidly changing world, and maximizing context frees up a lot of your time. I hope you can use this insight to make the impact you wish for. The greater your impact, the happier you will be at work. A personal inspiration to challenge my circumstances at work comes from this quote by Daisaku Ikeda: “If you’re passive, you’ll feel trapped and unhappy in even the freest of environments. But if you take an active approach and challenge your circumstances, you will be free, no matter how confining your situation may actually be.”
What do you do about rabbit holes that you keep discovering? Working on interesting problems yields an endless list of rabbit holes to explore. Be curious and follow some rabbit holes. They help you develop a broader context that is often useful at a later date.
Treating privilege as black and white leads to one of two outcomes: either the privileged feel bad about their privilege, or they abuse it. Neither position helps. We could instead ask ourselves: how can I use my privilege to help others? We gain privilege because of our gender, race, caste, language, location, and family. Privilege does exist in society; let's figure out how to use it instead of denying it. Privileged folks don't use their voices enough to speak up for others. The key to using your privilege is courage. It's a tough internal battle to see your privilege. It's even tougher to help someone else gain the same privilege as you. It often takes years, if not centuries, for social structures to change.
The Harvard Business Review has a great article on using our privilege. Using our privilege doesn't apply only in our professional lives: if we want to leave the world a better place than we found it, we need the courage to use our privilege to help others. We often think that money brings privilege, and so that donating our money is how we lend our privilege.
Is donating money enough to lend your privilege? Imagine that after you are born, your parents give you a monthly allowance of $1000 and leave you to figure everything out on your own. Do you think you would survive beyond a few months or years? Lending our privilege is a lot more than making a financial contribution. Money as a resource is useful, but often not enough. Why do we struggle to understand our privilege and use it for the better?
Each one of us has our own struggles. They include finances, health, relationships, and understanding our purpose in life. We each struggle with some of these at different points in our lives. I've struggled over the years to maintain an exercise routine and a healthy weight. I struggled a little during my first job search, but since then I've been lucky to find good jobs. Over time your privileges multiply, and you attribute them to luck and hard work.
A lot of privileges have contributed to my life: the ability to read, write and speak English; being male; having access to a computer and the internet early in life; access to food and clean drinking water. The list is endless. It's easy to take these things for granted, but they should not be. A lot of human beings made a conscious decision to lend their privilege to me: my parents, my teachers, my peers, and many other amazing people. Paying your privilege forward requires you to be aware of it and then lend it to others.
There was one incident that had a disproportionate impact on me. It led me to think about my privilege and bias. I've had the good fortune of interviewing a wide variety of people in my career. I once interviewed a woman for an engineering internship. Due to the circumstances, our interview loop didn't have any women on the panel. During the debrief, my team's instant reaction was that this person was a 'no hire'. Then we started looking at the written notes and noticed something unusual: the candidate had given detailed answers on topics they were confident about, and on topics where they had less certainty, they didn't take a leap of faith or guess. As a team, we asked each other, 'Are we biased?'
I self-reflected and decided to talk to my wife about it. She shared her experiences interviewing men and interviewing women, and they sounded all too similar to what my team had just been through. I could have ignored that single incident and let it go. Instead, I spoke to many women at the workplace over the next few years. Experiences of men sounding more confident than people of other genders were all too common, and the consequences often show up in promotions and project opportunities. I started using my position of privilege to speak up on behalf of the women not in the room, and to ask for specifics from the men making assumptions about non-men. I attempted to speak the truth and ask specific questions. Specific questions are the enemy of bias; they make it break down.
One of my favorite examples of using privilege comes from the Buddha. It's called the parable of the poisonous arrow.
One day, a new follower of the Buddha asked him a series of metaphysical questions. The Buddha replied in the form of a parable about a man who had been shot by a poisonous arrow. Although the man's friends and relatives tried to get a surgeon to heal him, he refused to have the arrow pulled out until he knew who had shot it, his caste, name, height, where he came from, what kind of bow had been used, what it was made of, who feathered the arrow and with what kind of feather. Before all these answers could be found, the man had died. The Buddha employed this parable to demonstrate the meaninglessness of being obsessed with abstract speculation.
The Buddha teaches through this parable the importance of using situational privilege. When a healthy person sees someone shot by a poisonous arrow, they had better take action and remove the arrow; overthinking will kill the person. We can take action using our privilege.
My privilege has let me take more risks and helped me break through the biases of others. I don't succeed often. If you have any kind of privilege, use it to help someone else; it makes our world better. The only thing stopping you is yourself. Here are three steps you can take towards lending your privilege:
1. Identify a privilege you have
2. Identify someone who doesn’t have that privilege. Talk to them about it. Ask them how to identify it in your daily life.
3. Be on the lookout and use your privilege when appropriate.
Human history has long awaited the time when the energy of hope and creativity will arise from among the most downtrodden and oppressed. When people who have experienced such abuses become empowered and take their place at the heart of international society, and their welfare becomes the focus of new ideas and new thinking, our world will be immeasurably enriched―both in a material and a spiritual sense. - Daisaku Ikeda
Lending our privilege can help empower our fellow human beings and create a better world. A world that we are proud to inhabit.
To startup or not to startup? This is the million-dollar question that I asked myself every couple of weeks through my 20s. How do I decide? What are the criteria? Will the idea be good enough? Am I good enough? Will I be able to execute? So many questions and no answers. Every day, potential entrepreneurs wake up feeling, "I am not good enough." Building the confidence to take that plunge is tough. Here's a little insight into my journey.
In the competitive job market of India, a job in your field of study after graduation is a privilege. I had an internship in the final semester of college and no job lined up. I wanted to work at a startup; my definition of a startup was a company with few employees. I got an offer from a firm that had under 20 employees in 2011. The highlight of the job was that I got to work with computers and didn't have to badge in and out.
My company was building a product that needed someone to go and showcase it, and I volunteered. I learned later that I had assumed the role of a part-time sales engineer. I once told a customer, "You should understand our product. It's not our fault. You are at fault." I was proud of what I had said and done. In reality, I had failed. All the effort my team and I had put in over the past few months to get the product ready to showcase felt wasted. I'm grateful to my company for giving me space to learn a lesson and not firing me for it. In a close-knit team, it's heartbreaking to see your work not convert into revenue. I later learned that this is the hard reality of sales.
I arrived in Silicon Valley in early 2013 to study Software Engineering. I felt that I could assemble a computer by gathering parts from companies along the US-101. It is a well-funded, state-of-the-art technology playground. A tiny fraction of its companies become household names worldwide; most either die in oblivion or get acquired. The other big thing for computer programmers here is the hackathon, a marathon of hacking.
Put a group of technologists in a room for 12-24 hours and feed them pizza. You lose sleep and consume sugar, caffeine, and carbs. You fuse your brain with the computers to build interesting things; it's the closest to the singularity you can get. I built an app to control your music player with your brain waves (mood). I met my future co-founder while building a party assistant that showed attendees a dancing skeleton of themselves. I was able to make a computer do interesting things, and this led me to my first job in the USA.
Click-clack-click-clack on the keyboard in front of a computer was a large part of my job. I found myself immersed in writing code for customers I would never see in my life, but they paid money to my company. That money, after exchanging hands, resulted in me getting paid, which in turn let me pay my student loans and bills. A big part of my learning was how potential customer value turned into revenue for the company. This led me to challenge myself a step further.
Starting a company was not an overnight decision. I needed confidence, and I played out fallback options in my head. First, I had the confidence that I could find a job if the startup didn't work out. I also needed to sustain myself for a certain time frame without a salary. I had never built a new product and sold it to customers. I did not understand my company's market (automotive technology). I had no experience in hiring and managing people, and no idea how to fundraise. To do something outside your comfort zone, you need to understand your comfort zone well. With this information, I clarified the risk I was taking with my family and co-founders. A shared understanding of your personal and professional risks with your founding team is crucial; it helps you make decisions when conditions are not favorable. I was fortunate to build that trust early, and it serves me well to this day.
Building confidence to take on something ambitious is iterative. You start with none, take action, and then you get some. Confidence is not something you have when you are doing something for the first time, and the big hairy ambitious goals we strive for are all done for the first time. Each small step you take gives you the confidence to do it better. The key is to build on your prior confidence and keep going for your ambitious goals.
What about starting a company? Ask yourself the following questions:
How do you sell a product to someone else?
How does your favorite company make money?
What's your process to build something that will make money?
It's important to assess the facts on your own. Every person associated with a startup is taking some risk, but not the same one. The VC is taking a financial risk; the founder is taking a time and money risk; early team members are taking career and financial risks. There is only so much risk you can understand upfront. The best way to learn is to take the plunge. I want to leave you with my favorite quote on courage:
No matter how wonderful our dreams, how noble our ideals, or how high our hopes, ultimately we need courage to make them a reality. Without action, it’s as if they never existed. - Daisaku Ikeda
I scribbled this short stanza in my diary this morning. I often think about what the truth is. How do we tell truth and falsehood apart? We all have our own lenses for looking at the truth; your truth and my truth are not the same. Truth often lies in understanding the context of the other. Falsehood stems from assuming the context of the world. To understand others' truths, you need to understand their history and their hopes. For yourself, feel the present. That is your truth.
Do you have a tough time starting new things? Do you fear failure? Starting new things is hard. Starting new things while working with other humans, even tougher. Starting new things well is a superpower worth cultivating.
Starting things you consider hard can be overwhelming, and starting a company is one of them. If we are able to start things with a group, we can achieve things we could never achieve by ourselves. This is rewarding. Do it well and teach it to others; it buys you lifelong access to people with whom you can start new projects.
Over the past fifteen years, I've studied at two universities and worked at five companies. The number of existing employees when I joined has ranged from 0 to 2 million. I've had to start things with a variety of people and a range of prior art at each organization. I've been fortunate to do work spanning roles, technologies, people and cultures. I started a new job last week and was reflecting on how I've evolved my process of starting.
Early in my career, if someone asked me to make an app, I would make an app. Write a Python script to do something? I'll do it. I worked assuming my manager knew the priority of everything. I repeated this process for a couple of months, then started wanting a greater return on the investment of my time. How do I get more impact with less input? Isn't that the whole point of technology and productivity? I started asking leaders and peers about my work. Why are we writing this app? What is the purpose of this script for the customer? How much is the customer paying for it? Every question led to a useful lesson and taught me how to think about the situation through a different lens. Do things well first, then ask questions.
In my next job, I started getting things done. I learned that the focus of the company was to enable sales, so I started helping sales teams understand the technology. This helped us land new deals and made our existing customers happier. The size of the organization (~1500) meant that I had to stick to my technical focus and teach others about it. Learning the business context and teaching the technology put me on a growth path.
I jumped into starting Explorer.ai; I wanted to control my own destiny. A self-driving startup meant competing with multi-billion dollar investments. Problems of fundraising, product and hiring blew up in my face. I lacked experience in every area and had no clue how to make decisions. Things turned out okay: we made hard decisions based on our shared values, and in retrospect, those implicit shared values made things work out. We got acquired. It taught us that no one understands reality completely; we all need to do our best to make a difference. People put in their best based on the stories they tell each other, and stories emerge from the values we hold as a group. Shared values, though implicit, kept us together.
My next job was at the acquiring company. Joining a new company after an exit is tough because of the difference in values. I found a lot of early success due to my understanding of business reality. Then I ran into a roadblock: many people in the company saw reality through a different lens. As time progressed, it became harder to achieve a shared understanding of reality. I realized that my values will never align completely with those of my employer. Understanding the values exhibited by a group takes time, and you need quite a few data points to understand the extent of the disparity. The resulting lack of collective action, born of different values, made me unhappy.
"When we care for others our own strength to live increases. When we help people expand their state of life, our lives also expand. Actions to benefit others are not separate from actions to benefit oneself. Our lives and the lives of others are ultimately inseparable." - Daisaku Ikeda
I started a new job last week. A big part of my decision was the alignment of values among the people I met during the interviews. I am spending time understanding the values of the team; it will help me drive action based on a shared understanding of reality. I care about creating value with others, and I resonate with Daisaku Ikeda's view of helping others, quoted above.
If you are part of an amazing team, appreciate them. If not, keep searching for that team and do great things. Life is too short to not do amazing things with other humans.
Imagine you are the founder of an early-stage startup. You run into a situation where you don't have enough cash to run payroll next month. As a leader, you can share this information with your team members, not share it until they ask, or do nothing. Your team members put a lot on the line to join you and don't share much of the financial upside if things do go well. You owe them transparency. Doing nothing is taking the path of least emotional resistance, and it erodes trust in the leadership. I've met startup employees who would never again work for a founder they had worked with in the past. These are the consequences of being a bad leader.
Telling your team you don't have the money to continue paying them can feel scary. Sharing this is hard, but it's a win-win proposition. It helps your team build confidence in a leadership that shares hard truths, and the emotional process you undergo will make sharing hard truths easier for you. A side effect is that your team will reshuffle: if someone believes in your company's mission, they will double down on your company; if they have doubts, they will leave. This helps cement a culture of openness and transparency in your founding team.
Let's say you decide not to share anything about the poor financial situation. You get lucky: you find the paying customer you need to close your next round of funding. Everyone on your team is happy, and your team size doubles. A few months later, the same customer is unhappy with your product. What do you do? There were no consequences for withholding important information last time, so you don't share anything again, this time with double the team size. You have now cemented a culture of not sharing hard truths with the team. Looking up to you, your leadership team does the same thing with their teams. Over time, no one in your organization is sharing hard truths with anyone else, and those who do look like outsiders. The growing information gap around hard truths leads to an ineffective organization, with everyone second-guessing their leadership, peers and managers.
Transparency is about treating people right. Leadership is about decision-making under ambiguity, and values guide your thinking in ambiguous situations. The value of transparency helps people share their true opinions. Leading with transparency is not easy in a large number of organizations, so let's challenge ourselves to build organizations of greater transparency. Here's a quote that has served me well in my own struggles with transparency: "Rise to the challenges that life presents you. You can't develop genuine character and ability by sidestepping adversity and struggle." - Daisaku Ikeda. A culture of transparency is hard to build, but anyone can start creating one. A good starting point is to write down the decisions you make and how you made them. People will notice how you take the messy glue of human emotions and transform it into a great culture.
I first ran into one-on-one meetings in the famed High Output Management by Andy Grove. A one-on-one is a meeting in which a manager meets with their direct report. If you are a knowledge worker, you have these meetings either as the direct report or as the manager. Running a good one-on-one meeting is one of the most tangible things you can do to grow as a leader in your organization.
In my first job, I would always wait for my manager to reach out and ask me questions. It took me a few years to realize how limited in scope my one-on-one meetings were: I would talk about compensation and vacation, but I rarely spoke about my career and never about how I felt about different situations. My first management role was at my own company, Explorer.ai. There I started doing one-on-ones with the intent to understand what my team wanted. We discussed their jobs, their careers, their immigration challenges and many more topics. My role was to guide them on their journey. Helping my team and myself through one-on-ones killed two birds with one stone.
To help your direct report, the first step is to cultivate trust. To build trust, one needs to work on credibility, reliability, authenticity and self-interest. The fastest way to build credibility is to ask questions in your one-on-one meetings. Simple things like being on time, not canceling meetings without reason, and following up on promises build reliability. Sharing relevant context and being direct with unpleasant information reflects authenticity. Self-interest shows up when you misrepresent your direct report; it happens outside your one-on-ones, and people are good at catching it. Start building trust in your next one-on-one.
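Raimondi's article [3] distills this into an equation. Rendering it from memory here, so treat the exact terms as an approximation rather than a verbatim citation: Trust = (Credibility + Reliability + Authenticity) / Perceived Self-Interest. The framing is useful because the numerator is built inside the one-on-one, while the denominator is judged by how you behave outside it.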
When you go into your next one-on-one meeting, think about how you can grow trust. It’s the bedrock of a healthy professional relationship. If something is uncomfortable for you, learn how to deal with it to benefit your direct report or team. One of my direct reports shared with me, “I was a little apprehensive about the regular one on ones. Now, I find them very insightful, to understand the direction and how our team fits into the bigger picture.” One-on-ones get easier the more you practice. Building strong working relationships takes time. Enjoy the messy process of your team and your growth.
[1] Andy Grove, High Output Management
[2] Daisaku Ikeda, New Human Revolution Vol. 8
[3] Anne Raimondi, Use This Equation to Determine, Diagnose, and Repair Trust
One non-negotiable idea as a Buddhist is that each person is capable and enlightened. “A great human revolution in just a single individual will help achieve a change in the destiny of a nation and, further, can even enable a change in the destiny of all humankind.” - Daisaku Ikeda, The New Human Revolution. I’ve chanted Nam Myoho Renge Kyo over a large part of my life. It's been a great way for self-introspection and challenging my fears.
My writer friend Manu Pillai introduced me to the podcast The Seen and the Unseen. The title of the show alone was interesting enough to give it a shot. The conversations provided practical examples of cause-effect relationships, applying economic reasoning and probabilistic thinking. It helped that the guests and the host cared about each other and the subject they were discussing, and that they had lived experience of the subject at hand.
My trust in Amit Varma, built through his podcast, made his course The Art of Clear Writing a must sign-up. I started logging in at 5:30 AM PT every Saturday of August 2020. All good instructors make you uncomfortable in a good way, and Amit was no exception: he forced us to build a writing habit. He once shared the following feedback: “I liked the way it began, with quick action and a sense of this lively girl in a precarious world. But then the piece became overwrought. One sign of that happening is when you have sentences that don't add anything new to the story, which happened a bit in the second half. From painting a picture, as the first half did, the piece became a sentimental lament, and that didn't work for me.”
Amit's final class came with a recommendation to write 200 words every day. It took me a few weeks to get started, but here I am, finished with two notebooks of daily journal entries.
Here are a few journal entries from them:
I am a curious person who is trying to understand myself and the world around me. In modern society separating the two is close to impossible. We are more dependent on each other than we were at any other time in history. This curiosity combined with a process of writing and chanting daily has taken me to the next step.
We are all on a journey to gain clarity in our life. Board the train of curiosity and you will enjoy the boredom. With consistency, you also will gain some clarity.
I wanted to thank my family, clear writing community, friends and peers who pushed me to get this off the ground. You know who you are.