If you're in FinOps, you've probably gotten pretty good at the core playbook to reduce cloud costs: negotiate committed use discounts, right-size VMs, clean up idle resources, and build reporting that keeps everyone honest. These are important. They work. A solid implementation can cut 20 to 40 percent off the infrastructure bill.
But here's the problem. The fastest-growing portion of cloud services cost doesn't respond to any of that.
The cost center is shifting
The cloud used to be mostly compute and storage. You rented VMs, you bought disks, you negotiated rates. The levers were clear. Now? The bill is increasingly dominated by managed services: AI inference, observability platforms, databases, data pipelines, queues, streaming systems. These services meter every event, request, token, row read, GB scanned, or log line ingested.
And the growth curves are steep. LLM usage is exploding. Observability cost overruns are common as teams scale, and the hidden costs of data pipelines compound because pipelines scale automatically whether you want them to or not.
In this world, cost is no longer a static line item that finance can negotiate. It's an emergent property of the application itself.
The limits of infrastructure-focused optimization
Rate optimization and right-sizing absolutely matter. But they do nothing to correct cost drivers embedded in the code.
Consider: You can negotiate the best discount in the world on your GPUs. If your application makes ten unnecessary AI calls per request, the bill still climbs. You can squeeze EC2 down to exactly the right size. If your logs or S3 usage explode because of a noisy code path, the gains disappear.
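To make that second scenario concrete, here is a back-of-the-envelope estimate of what one noisy log line costs at scale. All volumes and the per-GB ingestion price are illustrative assumptions, not real quotes:

```python
# Back-of-the-envelope estimate of log ingestion cost from one noisy
# code path. All numbers below are illustrative assumptions.

def monthly_log_cost(requests_per_day: int,
                     log_lines_per_request: int,
                     bytes_per_line: int,
                     price_per_gb_ingested: float) -> float:
    """Estimated monthly log-ingestion cost for a single code path."""
    bytes_per_day = requests_per_day * log_lines_per_request * bytes_per_line
    gb_per_month = bytes_per_day * 30 / 1e9
    return gb_per_month * price_per_gb_ingested

# A debug statement that dumps a ~2 KB payload on every request,
# at 5M requests/day and a hypothetical $0.50/GB ingestion rate:
cost = monthly_log_cost(
    requests_per_day=5_000_000,
    log_lines_per_request=1,
    bytes_per_line=2_000,
    price_per_gb_ingested=0.50,
)
print(f"${cost:,.2f}/month")  # one log line, one code path
```

No discount on compute touches that number; only deleting or down-sampling the log line does.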
The interventions that FinOps teams have perfected simply don't reach into the application layer. You can't discount your way out of inefficient code.
Cost-Saving Potential by Strategy
| Cost Type | Rate Optimization | Right-Sizing | Code Optimization |
|---|---|---|---|
| Compute (EC2, VMs, K8s) | High | High | Medium |
| Serverless Compute | Low | Medium | High |
| Databases | Medium | Medium | High |
| Object Storage | Low | Low | Medium |
| Logs / Metrics / Traces | Low | Low | High |
| Queues / Event Streams | Low | Medium | Medium |
| AI Inference | Low | Low | High |
| Managed SaaS-like Services | Low | Low | High |
Look at that rightmost column. For the service categories growing fastest—AI, observability, managed data services—the primary optimization lever is the code itself.
What is Application Cost Engineering?
This is why Application Cost Engineering has become the missing discipline in cloud cost management.
Application Cost Engineering focuses on giving developers insight into the cost impact of their code and enabling them to make smarter decisions before things hit production. It means:
- Catching the expensive log line during development
- Quantifying the cost of an AI prompt inside a pull request
- Showing the downstream effects of a loop that calls DynamoDB too often
- Flagging patterns that lead to runaway token usage
- Enabling automated cost remediation before problems reach production
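The DynamoDB bullet above can be sketched as a simple estimate. The on-demand read price is an assumption here, and the scenario (a handler looping over items versus reading them together) is hypothetical:

```python
# Illustrative estimate of how a per-item read loop drives DynamoDB
# on-demand cost. The price below is an assumption, not a quote.

PRICE_PER_MILLION_READS = 0.25  # hypothetical on-demand read-request rate

def monthly_read_cost(requests_per_day: int, reads_per_request: int) -> float:
    reads_per_month = requests_per_day * reads_per_request * 30
    return reads_per_month / 1_000_000 * PRICE_PER_MILLION_READS

# A handler that loops over 40 small items, issuing one GetItem each,
# versus the same items read via a single Query over one partition
# (which is billed on aggregate size, so small items cost far fewer
# read units together than they do one at a time):
looped  = monthly_read_cost(requests_per_day=2_000_000, reads_per_request=40)
batched = monthly_read_cost(requests_per_day=2_000_000, reads_per_request=4)
print(f"loop: ${looped:,.2f}/mo, query: ${batched:,.2f}/mo")
```

The loop and the query return the same data; only the code shape differs. That difference is invisible to rate negotiation and obvious in a pull request.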
It treats cost the same way engineering teams already treat performance, reliability, and security: something to model early, validate during development, and monitor continuously in production.
Why this matters now
A few reasons this is urgent:
AI usage is unpredictable and easy to overuse without visibility. When you're paying per million tokens, a single misconfigured prompt can burn through budget fast.
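As a worked example (the model price and traffic figures are hypothetical): suppose a prompt template accidentally inlines a large document on every call.

```python
# Hypothetical token-cost burn from one misconfigured prompt.
# The $/1M-token price and the traffic figures are assumptions.

def daily_prompt_cost(calls_per_day: int,
                      input_tokens_per_call: int,
                      price_per_million_tokens: float) -> float:
    tokens = calls_per_day * input_tokens_per_call
    return tokens / 1_000_000 * price_per_million_tokens

# Intended prompt: ~500 tokens. Misconfigured prompt that also inlines
# a ~7,500-token document on every call, at 200k calls/day and a
# hypothetical $3.00 per 1M input tokens:
intended = daily_prompt_cost(200_000, 500, 3.00)
actual   = daily_prompt_cost(200_000, 8_000, 3.00)
print(f"intended ${intended:,.2f}/day vs actual ${actual:,.2f}/day")
```

A 16x cost difference from one template change, with identical infrastructure underneath.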
Managed services represent a growing share of total cloud costs. Managed database spend optimization alone has become a discipline—and the trend line isn't slowing down.
Developer velocity is higher than ever. The cost feedback loop has to shift left or you're always playing catch-up.
Teams are under pressure to improve gross margin, not just infra hygiene. The CFO cares about cloud unit economics for SaaS—cost per feature, cost per customer—not just whether you're using reserved instances.
You can't optimize what you can't see. And right now, no cloud cost optimizer can see into the code.
Complementary, not competing
When evaluating FinOps vs. Application Cost Engineering, it's not an either-or proposition. Application Cost Engineering doesn't replace FinOps. It extends it.
Rate optimization, committed use discounts, and right-sizing remain essential for infrastructure costs. The companies that get this right will continue using those levers aggressively. But they'll also recognize that a growing portion of their spend originates in application decisions that no amount of rate negotiation can fix.
The future of cloud cost management requires visibility into both layers:
- Infrastructure layer — where traditional FinOps operates
- Application layer — where code paths determine managed service consumption
Teams that only optimize infrastructure will find those cloud computing cost savings flattening out while bills from AI, logs, and data services continue to compound. Teams that address both layers will have a structural cost advantage.
What this means for FinOps teams
If you're running a FinOps practice, here's the practical implication: you need to figure out how to turn cloud cost insights into engineering action.
This doesn't mean FinOps teams need to become software engineers. It means bridging the FinOps-Engineering gap:
- Building relationships with engineering teams around shared cost visibility
- Advocating for cost instrumentation in application telemetry
- Pushing for cost considerations in architecture reviews
- Creating continuous cost control loops that connect billing data to the code paths driving it
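As a minimal sketch of what "cost instrumentation in application telemetry" can look like: tag each unit of work with an estimated cost so billing data can later be joined back to code paths. Everything here (the price table, event names, and the `/search` path) is a hypothetical placeholder:

```python
# Minimal sketch of cost instrumentation: estimate the cost of each
# metered call a code path makes and emit it alongside normal telemetry.
# Price table and event names are hypothetical placeholders.

from collections import defaultdict

PRICES = {  # illustrative unit prices, not real quotes
    "dynamodb.read": 0.25 / 1_000_000,    # per read request
    "llm.input_token": 3.00 / 1_000_000,  # per input token
    "logs.byte": 0.50 / 1e9,              # per byte ingested
}

class CostMeter:
    """Accumulates estimated cost per code path (e.g. per endpoint)."""
    def __init__(self):
        self.by_path = defaultdict(float)

    def record(self, path: str, event: str, units: int) -> None:
        self.by_path[path] += units * PRICES[event]

meter = CostMeter()
# A request handler reports the metered calls it made:
meter.record("/search", "dynamodb.read", 40)
meter.record("/search", "llm.input_token", 8_000)
meter.record("/search", "logs.byte", 2_000)
print({path: round(cost, 6) for path, cost in meter.by_path.items()})
```

In a real system these estimates would ride along as span or metric attributes, so a FinOps dashboard can answer "which endpoint drives this line item" instead of stopping at the service level.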
The next wave of cloud cost management isn't about better dashboards or more sophisticated rate negotiations. It's about giving developers the information they need to make cost-informed decisions throughout the software development lifecycle.
Rate optimization and right-sizing will always be part of the formula. They're just no longer the whole story.
Ready to explore Application Cost Engineering? Learn how Frugal connects cloud spend to application code