June 25, 2025

Inheriting the Weird: What AI Innovation Actually Demands

Ali Madad

Everyone loves a demo.

But keeping an AI product alive after the applause fades—that’s the real challenge. The excitement quickly transforms into something slower, quieter, and messier. You're no longer chasing "wow" moments; you're pursuing real, stable, functional systems. And that’s where many AI projects stall—not because the tech itself isn't powerful, but because they’re suddenly burdened by a unique kind of inherited complexity.

This piece is about the uncomfortable tension between quick AI experiments and the deep, slow-moving engineering needed to build stable products. It’s also about what happens when organizations try to apply traditional software approaches to problems AI has uniquely reshaped.

The Two Speeds of AI Innovation

AI development inherently operates at two speeds:

  • Fast, experimental, opportunistic: You prototype a copilot, agent, or persona generator in hours. It feels agile and empowering—a sandbox for quick tests and exciting results.
  • Slow, heavy, deliberate: Then reality hits. To scale these prototypes into stable, secure, and interconnected products, traditional software engineering kicks in—state management, security, orchestration, UX. Every feature demands significant engineering effort.

In short, complexity doesn't just add up; it multiplies. Small, seemingly straightforward ideas (like "chat with personas") balloon into sprawling problems around memory, data management, workflows, and interface design. Stakeholders, initially thrilled, soon ask an uncomfortable question:

“When will this actually feel done?”

This is the moment organizations realize they've inherited something unexpected—a deeply conceptual debt, not merely unfinished code. It’s a strange, uncomfortable feeling of uncertainty that many teams instinctively resist.

The Trap of Traditional Approaches

This was exactly the crossroads I found myself at with a recent client. Initially, they chose a path toward building something robust—an AI platform envisioned as heavy machinery, carefully engineered, deeply integrated, with numerous complex features defined from the outset.

It sounded strategically prudent. But it quickly became clear the team was getting bogged down in a morass of decisions. Each small feature spiraled into weeks of complex engineering. Instead of delivering rapid iterations, the project came to embody a broader organizational inertia: unclear expectations, deferred decisions, and excitement without ownership.

The root cause? The traditional software engineering approach wasn't a match for the unique nature of AI—where everything shifts under your feet, where complexity compounds exponentially, and where rigid planning can kill innovation.

A Modest Proposal: From Heavy Machinery to Strategic Playgrounds

My recommendation was simple but counterintuitive:

Move at the pace of AI itself—fast, loose, experimental.
Create strategic playgrounds rather than heavy machines.

Instead of attempting to scale the product holistically, we embraced modularity:

  • Persona generation as a self-contained module.
  • Document search optimized independently.
  • Source pulling handled in isolation.

Breaking the product into clear, independently valuable pieces made integration an advantage to pursue rather than a requirement to satisfy. This strategy didn't eliminate complexity, but it contained it, allowing teams to iterate quickly, experiment confidently, and avoid getting stuck.
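
To make "independently valuable pieces" concrete, here is a minimal sketch of what that modular split can look like, assuming a Python codebase. Every name in it (Persona, PersonaGenerator, DocumentSearch, SourcePuller, chat_with_persona) is a hypothetical illustration, not the client's actual system.

```python
# A minimal sketch of the modular split described above. All names here
# are hypothetical illustrations, not the client's actual code.
from dataclasses import dataclass


@dataclass
class Persona:
    name: str
    traits: list[str]


class PersonaGenerator:
    """Self-contained module: creates personas, knows nothing about search."""
    def generate(self, brief: str) -> Persona:
        return Persona(name=f"Persona for {brief!r}", traits=["curious"])


class DocumentSearch:
    """Optimized independently: swap retrieval strategies without touching personas."""
    def search(self, query: str, top_k: int = 5) -> list[str]:
        return [f"doc about {query}"][:top_k]


class SourcePuller:
    """Handled in isolation: fetches and normalizes external sources."""
    def pull(self, url: str) -> str:
        return f"raw text from {url}"


def chat_with_persona(personas: PersonaGenerator, search: DocumentSearch,
                      brief: str, question: str) -> str:
    """Integration is composition, not a prerequisite: each module ships
    and iterates on its own, and wiring them together stays optional."""
    persona = personas.generate(brief)
    context = search.search(question)
    return f"[{persona.name}] answers {question!r} using {len(context)} source(s)"


if __name__ == "__main__":
    print(chat_with_persona(PersonaGenerator(), DocumentSearch(),
                            brief="skeptical CFO", question="roadmap risk"))
```

The point isn't the specific classes; it's the narrow surface area. A team can release one module internally this week, rework another's internals next month, and only wire them together when the combination earns its keep.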

This wasn’t just a technical or architectural shift; it was cultural and strategic too. It meant adopting new organizational habits:

  • Releasing quickly, even internally.
  • Explicitly defining features in practical, real-world terms.
  • Accepting the inherent uncertainty and imperfection of generative AI—and designing explicitly for that.

One team member summed it up neatly:

“Let them know the work was done. Then give them a Google Doc.”

In other words: clarity, simplicity, pragmatism.

Coming Full Circle: Accepting Discomfort as Strategy

The client initially resisted this approach. After all, stakeholders often feel uncomfortable without detailed roadmaps and predefined endpoints. But after months spent wrestling with complexity, they began to understand and embrace the reality of AI innovation—the need for dual speeds:

  • Guided and strategic at the architectural and infrastructural level.
  • Fast and opportunistic at the exploratory and feature-development level.

This acceptance didn't come easily. As one stakeholder openly admitted in a moment of clarity:

“We’re trying to allow more room for rudimentary exploration—to feel how hypotheses work, experiment with new interaction strengths and weaknesses. It’s counter to stakeholders needing to know exactly what they can do. We have to play defense a little while we figure out the next iteration.”

This statement captured the shift perfectly. The stakeholders, once uncomfortable, eventually recognized that this two-speed approach—guided yet exploratory, structured yet fluid—wasn’t a compromise but an essential strategy. The path forward wasn't merely technical or architectural, but deeply cultural, changing how they approached problems, decisions, and uncertainty itself.

Embracing the Weird: A New Blueprint for AI Innovation

AI innovation demands that we embrace complexity rather than avoid it. It demands that teams not merely tolerate uncertainty but strategically leverage it. And perhaps most importantly, it requires recognizing that innovation is inherently messy, experimental, and uncomfortable.

The future of successful AI products won't look like traditional software. It won't always feel comfortable or predictable. But it will be purposeful, meaningful, and truly innovative—if we’re brave enough to inherit and master the weird.
