2025 was the year teams stopped just rightsizing infrastructure and started digging into how their applications actually spend. We've seen a lot of bills this year. Some made us wince. Here are the cost traps that kept showing up.
What makes the list: how easy it is to stumble into, how much it hurts when you do, and how often we're seeing it in the wild.
Without further ado, 2025's Top Ten Cost Traps:
10. GCP Storage: Operations Imbalance
Feel like your storage strategy is basically write once, read never? You're probably looking at long-term storage options to reduce costs—Coldline, say. Makes sense.
One catch: while Coldline saves on storage, its Class A (write) operations cost more than Standard's. If your objects are small, the cost of operations can far outweigh the storage costs. We've seen buckets where 80% of the bill was operations, not storage.
The fix: Consider batching small objects and reading from the batch for those occasional reads. It's not intuitive, but it gets you to a cost-effective solution.
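A back-of-the-envelope sketch of how the batching math plays out. The prices here are illustrative placeholders, not current GCS rates, and the batch size is made up:

```python
# Illustrative GCS list prices (USD), not current rates -- check the pricing page.
STORAGE_PER_GB_MONTH = {"standard": 0.020, "coldline": 0.004}
CLASS_A_PER_10K_OPS  = {"standard": 0.05,  "coldline": 0.10}  # writes et al.

def monthly_cost(storage_class, num_objects, object_kb, writes):
    gb = num_objects * object_kb / 1_000_000
    storage = gb * STORAGE_PER_GB_MONTH[storage_class]
    ops = writes / 10_000 * CLASS_A_PER_10K_OPS[storage_class]
    return storage, ops

# 1M small 10 KB objects, each written once this month.
storage, ops = monthly_cost("coldline", 1_000_000, 10, 1_000_000)
print(f"unbatched: storage ${storage:.2f}, operations ${ops:.2f}")

# Same bytes batched 1,000 objects per archive: 1,000 writes instead of 1M.
storage_b, ops_b = monthly_cost("coldline", 1_000, 10_000, 1_000)
print(f"batched:   storage ${storage_b:.2f}, operations ${ops_b:.2f}")
```

The storage cost is identical either way; batching just collapses a million Class A operations into a thousand.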
Related: The Frugal Approach to GCS Storage Costs
9. Anthropic: Busted Prompt Cache
With cached input tokens discounted by 90%, prompt caching is one of the best efficiency levers available. Which means when it breaks, it breaks hard.
Cache invalidations happen silently. Modifications to tools. Changes in the tool_choice parameter. Variable content early in the prompt where the static bits should be. Any of these can blow out your bill pretty quickly while everything looks fine from your application's perspective.
The fix: Monitor your cache hit rates. Watch for the sneaky invalidators: tool modifications, parameter changes, and anything dynamic creeping into the prefix.
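One way to reason about it: treat everything up to the cache breakpoint as a single blob that must match byte-for-byte. The hash below is a mental model, not Anthropic's actual implementation:

```python
import hashlib
import json

def cache_prefix_key(system, tools, tool_choice):
    """Toy stand-in for the cached prefix: everything up to the cache
    breakpoint, hashed. Any byte-level change means a cache miss."""
    blob = json.dumps({"system": system, "tools": tools,
                       "tool_choice": tool_choice}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

base = cache_prefix_key("You are a helpful bot.", [{"name": "search"}], "auto")
# Same inputs, same prefix: cache hit.
assert cache_prefix_key("You are a helpful bot.", [{"name": "search"}], "auto") == base
# Changing tool_choice alone silently busts the cache.
assert cache_prefix_key("You are a helpful bot.", [{"name": "search"}], "none") != base
```

For the real hit rate, the API's response usage block reports cache read and cache creation token counts, which you can track over time to catch a busted cache early.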
Related: What Invalidates the Cache (Anthropic Docs)
8. OpenAI: Model Past Its Use-By Date
A nice win we saw this year. An application moved from an early version of GPT-4o to the latest GPT-5 mini. Same quality of inference. Nearly 10x cost reduction.
The AI model landscape moves fast. What was cutting-edge six months ago might be overkill today. Those newer, cheaper models aren't just cheaper—they're often better at the specific tasks you need.
The fix: Periodically audit which models you're using and whether cheaper alternatives have caught up. They probably have.
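A quick way to size the opportunity is a spend comparison. Model names and per-token prices below are made-up placeholders; real prices change often, so check the provider's pricing page:

```python
# Illustrative per-1M-token prices (USD) for hypothetical model names.
PRICES = {"older-large-model": {"in": 2.50, "out": 10.00},
          "newer-mini-model":  {"in": 0.25, "out": 2.00}}

def monthly_spend(model, input_mtok, output_mtok):
    """Spend for a month, given millions of input and output tokens."""
    p = PRICES[model]
    return input_mtok * p["in"] + output_mtok * p["out"]

old = monthly_spend("older-large-model", 500, 50)  # 500M in, 50M out tokens
new = monthly_spend("newer-mini-model", 500, 50)
print(f"older: ${old:,.0f}/mo  newer: ${new:,.0f}/mo  savings: {1 - new / old:.0%}")
```

Pair the comparison with a quality check on your own eval set before switching; the savings only count if the cheaper model holds up on your tasks.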
Related: The Frugal Approach to OpenAI API Costs
7. Datadog Log Management: Structured Log Bloat
Use structured logging, they said. Enable higher precision searches on your logs. Create better dashboards and alerts. Standardize across services. All sounds good.
Until somebody adds something huge into the template, and suddenly every log message is checking two large suitcases for every flight. That request context object with the full user session? That field with the long list of oauth scopes? Now it's in every single message.
The fix: Audit your structured log templates. Look for fields that repeat large content across messages. Ask whether each field earns its keep.
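One simple audit, assuming your logs are JSON lines: rank fields by total serialized bytes across messages. A sketch with made-up records:

```python
import json
from collections import Counter

def field_bytes(log_lines):
    """Rank structured-log fields by total serialized bytes across messages."""
    totals = Counter()
    for line in log_lines:
        for field, value in json.loads(line).items():
            totals[field] += len(json.dumps(value))
    return totals.most_common()

# Made-up records: a bloated session object repeated in every message.
logs = [json.dumps({"msg": "request ok", "status": 200,
                    "session": {"user": "u1", "oauth_scopes": ["scope"] * 40}})] * 1000
ranked = field_bytes(logs)
for field, nbytes in ranked:
    print(f"{field:>8}: {nbytes:,} bytes")
```

When one field dominates the ranking and carries the same content in every message, that's the suitcase to stop checking.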
Related: Stop Paying Premium Prices for Repeated Log Content
6. GCP Cloud Logging: Overlogging
Some log messages are more valuable than others. If you could put a price tag on every log message, you'd find some crazy mismatches between cost and value.
The cost impact isn't always huge on its own. But it's a quiet, pervasive waste that is pretty much universal. Those status messages that maybe were useful to somebody once, but definitely aren't worth the 20% of your logging bill they're driving.
The fix: Profile your log volume by message type. Find the high-volume, low-value messages. Turn them down or off.
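A profiling sketch: group log bytes by message text and look at each message's share of total volume. The log lines here are invented:

```python
from collections import Counter

def volume_share_by_message(log_lines):
    """Group log bytes by message text to surface high-volume, low-value noise."""
    volume = Counter()
    for line in log_lines:
        volume[line.split(" -- ")[0]] += len(line)
    total = sum(volume.values())
    return [(msg, nbytes / total) for msg, nbytes in volume.most_common()]

# Made-up mix: a chatty heartbeat drowning out the messages that matter.
logs = ["heartbeat ok -- instance=i-123"] * 800 + ["payment failed -- order=o-42"] * 5
ranked = volume_share_by_message(logs)
for msg, share in ranked:
    print(f"{share:6.1%}  {msg}")
```

In GCP you can run the same analysis with Logs Explorer queries or log-based metrics instead of pulling the lines down; the point is to put a volume share next to each message type.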
Related: The Frugal Approach to GCP Cloud Logging Costs
5. GCP Pub/Sub: Subscriber Sprawl
We saw some applications in 2025 that were way overspending on GCP Pub/Sub. The pattern: many applications hanging off one topic, relying heavily on filtering in the subscriber.
The problem is you pay for messages to be delivered to subscribers. When you filter in the subscriber, those messages still get delivered—and billed—before they get filtered out. Twenty subscribers filtering for different things means paying for delivery twenty times, even if each subscriber only wants 5% of the messages.
The fix: Push filtering closer to the publisher. Consider multiple topics instead of one mega-topic with subscriber-side filtering. The architecture change pays for itself.
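A minimal sketch of publisher-side routing, with hypothetical topic names. The routing table is the point, not the names:

```python
# Hypothetical per-category topics replacing one mega-topic.
TOPIC_FOR_EVENT = {
    "order.created": "orders-events",
    "order.shipped": "orders-events",
    "user.login":    "auth-events",
}
DEFAULT_TOPIC = "misc-events"

def route(event_type):
    """Pick a per-category topic at publish time instead of fanning every
    message into one mega-topic that twenty subscribers filter client-side."""
    return TOPIC_FOR_EVENT.get(event_type, DEFAULT_TOPIC)

# With google-cloud-pubsub, the publish call would then look something like:
#   topic_path = publisher.topic_path(project_id, route(event["type"]))
#   publisher.publish(topic_path, data)
print(route("user.login"))
```

Each subscriber then attaches only to the topic it actually wants, so you stop paying to deliver messages that get thrown away on arrival.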
Related: GCP Pub/Sub Pricing
4. AWS S3: Storing Expanded
I have a down sleeping bag that comes with a stuff sack so you can pack it small when traveling, and a larger bag to store it all fluffed up for the rest of the time.
Your data isn't a sleeping bag. You can store it compressed and it will be fine.
We keep seeing uncompressed JSON, XML, and text files sitting in S3 buckets. Gzip typically achieves 70-90% compression on text-based data. That's 70-90% off your storage bill for adding a compress step to your pipeline.
The fix: Compress before you store. Decompress when you read. Your data doesn't need to breathe.
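A sketch of the compress-before-store step using Python's standard gzip module; the boto3 call in the comment is illustrative, with a hypothetical bucket and key:

```python
import gzip
import json

# Made-up, repetitive JSON payload -- exactly the kind that compresses well.
payload = json.dumps([{"id": i, "status": "ok", "region": "us-east-1"}
                      for i in range(5_000)]).encode()
compressed = gzip.compress(payload)
print(f"raw: {len(payload):,} B  gzip: {len(compressed):,} B  "
      f"saved: {1 - len(compressed) / len(payload):.0%}")

# With boto3 the upload would look something like:
#   s3.put_object(Bucket="my-bucket", Key="data.json.gz",
#                 Body=compressed, ContentEncoding="gzip")
assert gzip.decompress(compressed) == payload  # round-trips losslessly
```

The savings compound: compressed objects also cost less to transfer and to transition between storage classes.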
Related: The Frugal Approach to AWS S3 Storage Costs
3. AWS CloudWatch: Metrics Gone Wild
Uh oh, somebody added the customer ID to the metric.
Custom metrics in CloudWatch are priced per unique metric. Add a high-cardinality dimension—customer ID, user ID, request ID—and your metric count explodes. What was one metric becomes ten thousand metrics. What was a few dollars becomes a few thousand dollars.
The really fun part is that CloudWatch doesn't warn you. You just get the bill.
The fix: Audit your custom metric dimensions. High-cardinality dimensions don't belong in metrics—they belong in logs. Use CloudWatch Logs Insights for queries that need that granularity.
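The explosion is just multiplication: the billable metric count is the product of your dimension cardinalities. The per-metric price below is an illustrative placeholder:

```python
from math import prod

# Illustrative per-metric monthly price; check current CloudWatch pricing.
PRICE_PER_METRIC_MONTH = 0.30

def metric_count(dimension_cardinalities):
    """Each unique dimension-value combination is a separate billable metric."""
    return prod(dimension_cardinalities)

low = metric_count([5, 3])            # service x region: fine
high = metric_count([5, 3, 10_000])   # service x region x customer_id: boom
print(f"{low} metrics (~${low * PRICE_PER_METRIC_MONTH:.2f}/mo) vs "
      f"{high:,} metrics (~${high * PRICE_PER_METRIC_MONTH:,.2f}/mo)")
```

One extra dimension with ten thousand values turns fifteen metrics into a hundred and fifty thousand, which is why customer IDs belong in logs, not metric dimensions.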
Related: CloudWatch Pricing
2. GCP Cloud Logging: Overlogging (Again)
Yes, overlogging makes the list twice. That's how pervasive it is.
At number 6, we talked about low-value log messages. At number 2, we're talking about the sheer volume problem. Applications logging at DEBUG level in production. Every HTTP request logged with full headers and body. Every database query logged with parameters.
When you're paying per GB ingested, verbose logging is expensive logging.
The fix: Set appropriate log levels for production. Sample high-volume log sources. Use log exclusion filters for known-noisy sources. Question every log statement: is this worth paying for?
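Sampling can be as simple as a per-level keep rate applied at the emit site. A minimal sketch; the rates are examples, not recommendations:

```python
import random

def should_log(level, sample_rates, rng=random.random):
    """Keep everything by default; sample the levels listed in sample_rates."""
    return rng() < sample_rates.get(level, 1.0)

rates = {"DEBUG": 0.0, "INFO": 0.1}  # drop DEBUG, keep ~10% of INFO in prod
kept = sum(should_log("INFO", rates) for _ in range(100_000))
print(f"kept {kept:,} of 100,000 INFO messages (~10% expected)")
```

In GCP specifically, exclusion filters on the log sink get you the same effect without code changes, though you still pay nothing only for what never gets ingested.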
Related: The Frugal Approach to GCP Cloud Logging Costs
1. Not Looking at the Bill
The number one cost trap of 2025: not looking at the bill in the first place.
We've talked to teams spending six figures monthly on cloud services who couldn't tell you what their top three cost drivers were. The bill exists. It has answers. But nobody's looking at it with the application context needed to understand what the numbers mean.
Cost traps thrive in the dark. Every trap on this list is findable if you know where to look. The teams that avoid these traps aren't luckier or smarter—they've built the habit of attributing costs to code and questioning what they find.
The fix: Look at the bill. Regularly. With your application architecture in mind. The bill will tell you which of these traps you've fallen into and which ones you've avoided.
Closing Thoughts
Every cost trap on this list has the same root cause: a decision that made sense in isolation but nobody checked the bill afterward. High-cardinality metrics? Useful for debugging. Subscriber-side filtering? Simple to implement. Verbose logging? Helpful for troubleshooting.
The trap springs when these reasonable decisions meet AWS and Google Cloud usage-based pricing at scale. The same pattern plays out with Datadog, OpenAI, Anthropic, and other managed services.
The fix is always the same two steps: attribute costs to code, then optimize what matters. The teams avoiding these traps in 2025 aren't doing anything magic. They're just looking at their bills with application context and asking "why is this costing what it costs?"
Avoid Cost Traps with Frugal
Want help finding cost traps like these in your own applications? Sign up for Early Access to Frugal. Frugal attributes cloud and API costs to your code, finds inefficient usage, and helps you fix it. From runaway logging costs to Frugal AI optimization, we're here to help you stay out of cost traps in 2026.