
Cost Discipline for LLM Apps: reducing token waste without hurting answer quality

I worked on smoothing the handoff between data engineering and AI teams—standardizing feature contracts, embedding validation, and adding lightweight integration tests.
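A feature contract can be as small as a shared mapping of expected names to types that both teams validate against. The sketch below is a minimal illustration of that idea; the feature names and types are hypothetical, not taken from any real pipeline.

```python
# Minimal sketch of a feature contract shared between a data-engineering
# producer and an AI-team consumer. All field names are illustrative.
from typing import Any

# The "contract": expected feature names and their Python types.
FEATURE_CONTRACT: dict[str, type] = {
    "user_id": str,
    "session_length_s": float,
    "device_type": str,
}

def validate_row(row: dict[str, Any]) -> list[str]:
    """Return a list of contract violations for one feature row."""
    errors = []
    for name, expected_type in FEATURE_CONTRACT.items():
        if name not in row:
            errors.append(f"missing feature: {name}")
        elif not isinstance(row[name], expected_type):
            errors.append(
                f"{name}: expected {expected_type.__name__}, "
                f"got {type(row[name]).__name__}"
            )
    return errors
```

Because both sides import the same contract, a lightweight integration test is just "every sample row validates cleanly," which makes ownership of a breakage obvious.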

The friction I kept seeing was simple: we could ship quickly and still lose reliability whenever ownership stayed fuzzy.

Instead of adding more moving parts, I tested a short feedback loop with measurable quality gates.
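A measurable quality gate can be expressed as a small threshold table that every change must clear before it ships; the loop stays short because the check is automatic. This is a hedged sketch only, and the metric names and thresholds are assumptions for illustration.

```python
# Hedged sketch of a measurable quality gate in a short feedback loop.
# Metric names and thresholds are illustrative, not from the post.
QUALITY_GATES = {
    "answer_accuracy": 0.90,    # minimum acceptable eval accuracy
    "schema_valid_rate": 0.99,  # share of outputs passing validation
}

def passes_gates(metrics: dict[str, float]) -> bool:
    """A change ships only if every gated metric meets its threshold."""
    return all(
        metrics.get(name, 0.0) >= threshold
        for name, threshold in QUALITY_GATES.items()
    )
```

Running this on each iteration turns "did quality hold?" from a debate into a yes/no answer.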

March for me has been about tightening execution after an idea-heavy February.

What I changed today

  • I clarified ownership for one high-impact surface so escalations are faster.
  • I cut one source of rework by tightening upstream validation.
  • I reduced unnecessary variability by standardizing one recurring pattern.

What changed my thinking

Delivery speed held while ambiguity dropped, and that is a real win for working teams. Most of the gain came from making ownership and boundaries unmistakably clear.

Tomorrow’s focus

Tomorrow I want to verify this pattern under a busier workload before I call it stable.

Michael John Peña

Senior Data Engineer based in Sydney. Writing about data, cloud, and technology.