May 19, 2025
Inheriting the Weird: What AI Innovation Actually Demands

Ali Madad
Everyone loves a demo.
But keeping an AI product alive after the first round of applause is a different kind of challenge. The work becomes quieter, slower, and more complex. You move from making something impressive to making something real. This is where a lot of AI initiatives stall—not because the tech isn’t powerful, but because what’s required next starts to look like classic software development. And AI software is… not like other software.
This post is a reflection on that moment. When the team is excited. When stakeholders are asking “What’s next?” And when you realize you’ve inherited a unique kind of tech debt—not just unfinished features, but unclear expectations, brittle abstractions, and the open-ended nature of building on evolving ground.
The Two Speeds of AI Software
There’s a tension at the heart of AI work today:
- Speed 1: Flashy Tools. You can prototype an agent, mock up a copilot, or run a persona generator in hours. These are the TikToks of software—fast, punchy, often brilliant.
- Speed 2: Real Systems. The moment you try to build for persistence, for UX, for interconnection—you hit the wall. State, security, roles, workflows, orchestration. Classic problems, but shaped by the unique nature of AI.
One stakeholder said it best:
It’s about output that feels done.
That’s where things break. What felt like a finished product turns out to be a compelling prototype. Now you need a data lake. A vector DB. A consistent document format. Maybe even… a timeline.
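For what it's worth, "a consistent document format" can start as something as small as a shared envelope that every tool writes into. A minimal sketch in Python, with illustrative field names rather than any actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# A hypothetical envelope for every artifact the tools produce or ingest.
# One shared shape means persona briefs, search results, and pulled
# sources can all land in the same store and render in the same UI.
@dataclass
class Document:
    id: str                   # stable identifier, e.g. a UUID
    kind: str                 # "persona" | "brief" | "source" | ...
    title: str
    body: str                 # canonical text content
    source: str | None = None  # where it came from, if pulled
    metadata: dict = field(default_factory=dict)  # model, prompt version, etc.
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```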
From Crawl to Walk (With a Deliberate Pace)
We framed the roadmap as Crawl → Walk → Run. “Crawl” got us an internal tool that worked well enough to generate excitement. “Walk” is where things got harder. We weren’t blocked by code—we were blocked by decisions.
I don’t want to work on a thing that just… drags. I want a thing that’s out there.
Everyone agreed. But aligning on what “out there” meant took longer than expected. The product became a metaphor for broader organizational habits: deferred decisions, shifting definitions of success, excitement without ownership.
We started to articulate a new framing:
- Output-focused: What are people actually doing with this?
- Composable: Can each piece live on its own? (Persona generation ≠ Document search ≠ Source pulling; see the sketch after this list)
- Agentic: Could small agents manage their own scope, roles, even PRs?
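To make "composable" concrete, here's a minimal sketch (hypothetical names, not our codebase): each capability exposes the same narrow contract, so it can ship alone and be chained later if we ever want to.

```python
from typing import Protocol

class Capability(Protocol):
    """Hypothetical contract: one narrow entry point per tool."""
    def run(self, request: str) -> list[dict]: ...

class PersonaGeneration:
    def run(self, request: str) -> list[dict]:
        return []  # placeholder: owns its own prompts, storage, evaluation

class DocumentSearch:
    def run(self, request: str) -> list[dict]:
        return []  # placeholder: owns its own index and ranking

# Composition is optional, not a prerequisite: each capability is
# useful alone, and chaining is just a loop.
def pipeline(request: str, steps: list[Capability]) -> list[dict]:
    results: list[dict] = []
    for step in steps:
        results.extend(step.run(request))
    return results
```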
Inherited Tech Debt: The Unique Kind
This wasn’t tech debt in the traditional sense—spaghetti code or missing tests. This was conceptual debt. A vague feature like “chat with personas” sounds simple, until it touches memory, UX design, chat routing, persistent storage, and more.
We realized:
- Some features sound small but are deep.
- Others sound impressive but are shallow.
You don’t have to raise your own cow to serve Wagyu—but you do need a plan for how you’ll plate it.
That quote came up in a discussion about data sourcing. We could plug into proprietary datasets. But do we need to? Will it scale? Who’s maintaining it?
AI software multiplies these questions. Models change. APIs drift. Outputs are probabilistic. You inherit every unique edge case of the LLM—and then you have to make it feel reliable to users.
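You can't stop the drift, but you can make it visible. One small habit, sketched below with illustrative names: pin the model to a dated release and stamp every output with the exact model and prompt version that produced it, so when behavior shifts you can tell what changed.

```python
import hashlib
from datetime import datetime, timezone

MODEL = "gpt-4o-2024-08-06"          # a pinned, dated release, not a floating alias
PROMPT_VERSION = "persona-brief/v3"  # hypothetical prompt identifier

def stamp_output(prompt: str, output: str) -> dict:
    """Wrap raw model output with the provenance needed to debug drift later."""
    return {
        "output": output,
        "model": MODEL,
        "prompt_version": PROMPT_VERSION,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
```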
Survey Insights: What People Want vs. What They’ll Use
Our internal survey results were telling:
- People wanted guardrails, security, compliance—classic asks from legal and strategy teams.
- But what actually created value was speed, polish, and clarity. Tools that got them 90% of the way to a brief, a deck, or a report.
There's an irony here: the things teams insist are necessary, guardrails and compliance chief among them, tend to be built last. And each department wants something different from the same product, so balancing those expectations matters as much as any single feature if the tool is going to be widely adopted.
Departmental priorities from the survey included:
- Strategy: Focused on data sourcing and ensuring robust, compliant data sets.
- Product Management: Prioritized seamless workflow integration and user experience.
- Technology: Concentrated on governance, security, and system reliability.
- Creative: Emphasized depth and quality in content generation.
- New Business: Valued generative documentation to accelerate proposal and pitch creation.
They’re excited—but they don’t know what the product is.
So we leaned into simplicity. One participant suggested: “Just let me email an agent and get a deck back.” That’s not a feature. That’s a use case.
A Modest Proposal: Don’t Scale. Modularize.
Instead of scaling fast, we started splitting.
- Let persona generation own its domain.
- Let document search handle its scope.
- Let source pulling manage its shared space.
Each piece gets to be “shiny and useful” on its own. Integration is optional, not a prerequisite. That sidesteps the classic failure mode: one integrated system straining to satisfy too many needs at once.
We also began embedding agentic workflows: little agents to triage tasks, route PRs, test flows (one is sketched below). Not because it’s flashy, but because it buys us time.
The bottleneck isn’t the dev team. It’s product decisions.
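For concreteness, here's the rough shape of one of those triage agents, with a stand-in `classify` where a real model call would go. The point is the narrow scope and the human fallback, not the plumbing.

```python
# A deliberately small agent: one job, explicit routes, a human fallback.
ROUTES = {
    "bug": "engineering-queue",
    "content": "creative-queue",
    "data": "strategy-queue",
}

def classify(task: str) -> str:
    """Stand-in for a model call that returns one label from ROUTES."""
    raise NotImplementedError("wire up your LLM client here")

def triage(task: str) -> str:
    label = classify(task)
    # Anything the model mislabels goes to a person, not to the wrong queue.
    return ROUTES.get(label, "needs-human-review")
```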
Moving Forward: Real AI Work is Cultural
The mistake is thinking this is about code. It’s about habits:
- The habit of releasing early and often, even internally.
- The habit of writing down what a feature actually means.
- The habit of accepting that generative AI will always be a little wrong—and designing for that.
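What "designing for that" can look like in code, as a sketch with a stand-in `generate` call: define cheaply what "done" means, retry once, and flag anything still rough for a person rather than shipping it silently.

```python
def generate(prompt: str) -> str:
    """Stand-in for your model call."""
    raise NotImplementedError

def looks_done(draft: str) -> bool:
    # A cheap, explicit check for "output that feels done":
    return len(draft) > 200 and "TODO" not in draft

def produce(prompt: str) -> tuple[str, bool]:
    """Return (draft, needs_review); never pretend a rough draft is final."""
    draft = ""
    for _ in range(2):  # one retry, not an infinite loop
        draft = generate(prompt)
        if looks_done(draft):
            return draft, False
    return draft, True  # still rough: send to a human for review
```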
As one team member put it:
Let them know the work was done. Then give them a Google Doc.
That’s it. Not perfection. Not AI-as-godmode. Just usable systems with a clear output and a tight loop from input to impact.
Embracing the Opportunity in AI Innovation
Sustaining AI innovation means holding two truths at once:
- AI makes certain kinds of software unusually easy to build.
- But turning those things into reliable, team-usable tools remains hard, deliberate work, and that’s where the reward is.
This journey requires a hybrid mindset—equal parts engineering, product strategy, and speculative design. It demands fast loops, strong filters, and flexible systems. And it invites us to embrace the complexity and uniqueness of AI as an opportunity for growth and discovery.
Navigating this path is not just about clearing obstacles; it’s about noticing what becomes possible once the tools actually work. With deliberate effort, we can build AI tools that are not only impressive in a demo but genuinely useful day to day.
The future belongs to the teams willing to keep moving.
Get in Touch
Want to learn more about how we can help your organization navigate the AI-native era?