
6 FinOps Mistakes We’ve Seen with Clients – and How You Can Avoid Them

FinOps in AWS isn’t about cutting corners — it’s about making smart, informed decisions. In this blog, we share real-world cases that show how small tweaks can lead to big savings without compromising quality.

FinOps in AWS: Practical Insights Without Cutting Costs at All Costs

FinOps isn’t a goal in itself, nor a standalone job, and definitely not just about cutting costs. It’s a mindset. If you think about how you use cloud resources, you can often realize structural savings without sacrificing quality or stability.

In this blog, we share real-life examples and insights about AWS cost management. No generic theory or AI-generated fluff, but actual cases where thousands of euros were saved per month simply by taking a closer look at usage, configuration, and behavior.

Many organizations see FinOps as “cutting costs.” But in practice, the biggest waste usually comes from mistakes that go unnoticed until the invoice suddenly spikes.

We’ll walk you through 6 concrete cases where small missteps led to major waste. No finger-pointing — just insights and fixes that actually work.

Mistake 1: Dismissing cost anomalies as accounting noise

When costs suddenly deviate from the average, there’s usually a reason. These deviations are rarely just accounting issues — more often, they’re symptoms of something technical that isn’t working optimally. Here are a few real examples we encountered with clients.

🔹 Example 1: Cache misconfiguration on one app → $2000–$2500 saved per month

One of our clients noticed an unusually high load on their CDN (Content Delivery Network – a kind of middle layer that brings assets like images and webpages closer to users, reducing the load on your application). Upon investigation, it turned out the caching logic was poorly configured. All requests bypassed the cache and went straight to the containers. The result? More containers, more data transfer, and higher costs.

An added issue: Access logs were written separately by each container. During peak times, a single request could be logged up to 50 times.

After fixing the caching setup, monthly costs dropped by $2000 to $2500. And this was just one out of 20 applications in the project.
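
Whether your own setup suffers from this is cheap to verify before it shows up on the invoice. A minimal sketch, assuming the CDN is CloudFront with its optional “additional metrics” enabled (the distribution ID is a placeholder):

```python
import boto3
from datetime import datetime, timedelta, timezone

# CloudFront metrics live in us-east-1, regardless of where your origins run.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def cache_hit_rate(distribution_id: str, hours: int = 24) -> float:
    """Average cache hit rate (%) over the last `hours` hours."""
    now = datetime.now(timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/CloudFront",
        MetricName="CacheHitRate",  # requires 'additional metrics' on the distribution
        Dimensions=[
            {"Name": "DistributionId", "Value": distribution_id},
            {"Name": "Region", "Value": "Global"},
        ],
        StartTime=now - timedelta(hours=hours),
        EndTime=now,
        Period=3600,
        Statistics=["Average"],
    )
    points = resp["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0

# A hit rate near zero means requests are bypassing the cache entirely.
print(f"Cache hit rate: {cache_hit_rate('EDFDVBD6EXAMPLE'):.1f}%")
```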

🔹 Example 2: Backup costs from $650 → $250 per month

This client originally had incremental backups every 12 hours, alongside daily full backups for point-in-time recovery (PITR). The full backups were retained for three days.

They later adjusted the strategy: incremental backups were reduced to once every 24 hours, and full backups retained for just one day. This backup approach better matched the actual usage and risk profile of the application — instead of blindly following the default setup used for other databases in the project.

Lesson learned: not every workload needs the same retention, frequency, or AZ-redundancy. Standard setups aren’t always the cheapest or most effective.
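
What such a right-sized plan can look like depends on your stack; here’s a minimal sketch using AWS Backup with the one-day retention from the example. Plan, rule, and vault names are placeholders, and your own service’s retention limits may differ:

```python
import boto3

backup = boto3.client("backup")

# One daily backup rule with one-day retention, instead of the heavier default.
response = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "low-risk-db-daily",
        "Rules": [
            {
                "RuleName": "daily-short-retention",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 3 * * ? *)",  # once per day, 03:00 UTC
                "Lifecycle": {"DeleteAfterDays": 1},         # keep for one day only
            }
        ],
    }
)
print(response["BackupPlanId"])
```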

🔹 Example 3: Config costs as a red flag

High AWS Config costs or sudden CloudWatch spikes often point to something deeper: a container in an infinite loop, a service stuck in restart mode, or overly aggressive autoscaling.

💬 What seems “normal” to you might not actually be good. One of the setups above ran like that for over a year before anyone noticed, and even then only by accident.
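
Spotting this kind of red flag doesn’t require a FinOps tool; a quick Cost Explorer query per service is often enough. A sketch (the dates are placeholders):

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Daily AWS Config spend for one month; a sudden jump here is worth
# investigating before anything else. The End date is exclusive.
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-06-01", "End": "2025-07-01"},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    Filter={"Dimensions": {"Key": "SERVICE", "Values": ["AWS Config"]}},
)
for day in resp["ResultsByTime"]:
    amount = float(day["Total"]["UnblendedCost"]["Amount"])
    print(day["TimePeriod"]["Start"], f"${amount:.2f}")
```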

Mistake 2: Enabling heavy or unnecessary configurations without thinking

AWS is powerful — but also greedy if left unchecked. Many features are enabled by default, even when they don’t bring value.

🔹 ECS Container Insights: $1500 saved

A client with a $75,000/month AWS bill had ECS Container Insights enabled on every cluster — without any real benefit. By turning it on only where needed, they saved $1500/month.
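
Turning it off is a one-liner per cluster. A sketch, assuming a hypothetical allowlist of clusters that genuinely need Insights:

```python
import boto3

ecs = boto3.client("ecs")

KEEP_ENABLED = {"prod-main"}  # hypothetical allowlist of clusters that use it

# Disable Container Insights everywhere else.
for arn in ecs.list_clusters()["clusterArns"]:
    name = arn.split("/")[-1]
    if name not in KEEP_ENABLED:
        ecs.update_cluster_settings(
            cluster=name,
            settings=[{"name": "containerInsights", "value": "disabled"}],
        )
        print(f"Disabled Container Insights on {name}")
```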

🔹 Multi-AZ DBs: double the cost, not always necessary

Multi-AZ databases are great for business-critical systems, but often overkill for internal tools, test environments, or dev setups. We only enable it by default for truly critical workloads.
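
Finding the Multi-AZ instances that may not need it is straightforward. This sketch assumes a hypothetical “prod-” naming convention; adapt the check to your own tags:

```python
import boto3

rds = boto3.client("rds")

# Flag Multi-AZ instances that don't look production-like.
for db in rds.describe_db_instances()["DBInstances"]:
    name = db["DBInstanceIdentifier"]
    if db["MultiAZ"] and not name.startswith("prod-"):
        print(f"{name}: Multi-AZ enabled, is this really needed?")
        # Disabling it is a single modification (applied at the next window):
        # rds.modify_db_instance(DBInstanceIdentifier=name, MultiAZ=False)
```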

🔹 Scheduled Fargate tasks: more expensive than always-on

One client spending $20,000/month was paying an extra $2500/month in AWS Config costs driven by scheduled Fargate tasks alone: each task run creates and tears down resources like network interfaces, and Config records every one of those changes. These tasks ran every 1, 2, or 5 minutes, meaning they were effectively always on, just with extra overhead. A regular service would’ve been cheaper and more stable.
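
The arithmetic behind that conclusion fits in a few lines. All numbers below are illustrative placeholders, not actual Fargate pricing:

```python
# Hypothetical comparison: a task scheduled every minute vs. an always-on service.
VCPU_PER_HOUR = 0.04   # placeholder rate per vCPU-hour
GB_PER_HOUR = 0.004    # placeholder rate per GB-hour

runs_per_hour = 60           # scheduled every minute
seconds_per_run = 50         # ~45s of work plus startup overhead
busy_fraction = runs_per_hour * seconds_per_run / 3600

hourly_scheduled = busy_fraction * (VCPU_PER_HOUR + GB_PER_HOUR)
hourly_service = 1.0 * (VCPU_PER_HOUR + GB_PER_HOUR)

print(f"Scheduled task is running {busy_fraction:.0%} of the time")  # ~83%
print(f"Scheduled: ${hourly_scheduled:.4f}/h vs always-on: ${hourly_service:.4f}/h")
# Add the per-run overhead (image pulls, AWS Config items from ENI churn,
# EventBridge rules) and the "part-time" task quickly costs more.
```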

❗ For every architecture decision, ask yourself: what’s the goal, how critical is it, and which service fits best? Not everything needs to be ‘enterprise-grade’.

Mistake 3: Cleaning up only the old or forgotten resources

Cleaning up in AWS isn’t just about unused or forgotten resources. Active ones sometimes need tidying up too.

🔹 Infrastructure as Code (IaC)

Tools like Terraform or Pulumi track everything they deploy in state, so tearing an environment down also removes everything it created. No more “forgotten test environments” or lingering S3 buckets.
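
As a minimal illustration (Pulumi with Python; the bucket name is a placeholder): every resource defined this way lives in the stack’s state, so `pulumi destroy` removes it together with the rest of the environment.

```python
import pulumi
import pulumi_aws as aws

# A test bucket defined in code: it exists exactly as long as the stack does.
# `pulumi destroy` tears it down together with the rest of the environment.
bucket = aws.s3.Bucket("test-env-assets", force_destroy=True)

pulumi.export("bucket_name", bucket.id)
```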

🔹 Lifecycle policies

One client hadn’t cleaned up ECR images in years — some were three years old. After implementing lifecycle policies (everything older than 90 days with no active references was removed), costs dropped by over $1000/month.

Mistake 4: Using Savings Plans for workloads that don’t run full-time

Not everything needs to run 24/7. You can easily shut down non-prod environments in the evenings and on weekends using Lambda or scheduling tools.
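
The Lambda side of that schedule can be very small. A sketch that stops EC2 instances carrying a hypothetical environment=dev tag; you’d trigger it from an EventBridge schedule in the evening and mirror it with a start function in the morning:

```python
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    """Stop all running instances tagged environment=dev (placeholder tag)."""
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:environment", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        inst["InstanceId"]
        for reservation in resp["Reservations"]
        for inst in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}
```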

But be careful with Savings Plans:

💬 If you commit to a one-year plan to save 30%, you’re still paying the remaining 70%, even when your environment is off. Turn it off half the time and you may end up paying more than without a plan.

➡️ Use Savings Plans only for continuously running workloads. For part-time use, apply other cost-saving tactics.
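
The break-even is easy to compute yourself. A back-of-the-envelope sketch with made-up numbers:

```python
# Is a Savings Plan worth it for a part-time workload?
# All numbers are illustrative placeholders.
on_demand_per_month = 1000.0  # cost if running 24/7 on demand
sp_discount = 0.30            # the plan saves 30%
sp_per_month = on_demand_per_month * (1 - sp_discount)  # $700 commitment, always due

for uptime in (1.0, 0.75, 0.5):
    on_demand = on_demand_per_month * uptime
    better = "Savings Plan" if sp_per_month < on_demand else "on-demand"
    print(f"uptime {uptime:.0%}: on-demand ${on_demand:.0f} vs plan ${sp_per_month:.0f} -> {better}")
# At 50% uptime, on-demand costs $500 while the plan still charges $700.
```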

Mistake 5: Dragging along outdated architectures

Newer AWS services, features, and instance generations are often cheaper and more efficient than what you adopted years ago.

  • Clean up legacy architectures where possible.
  • Review your stack regularly: could Redis be replaced by Valkey? Should you switch to Graviton instances? (A scan like the one sketched below can surface candidates.)

Small changes can lead to significant structural savings.
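
To make that regular review concrete: the sketch below flags RDS instances that don’t appear to run on Graviton, using a naive instance-class check (Graviton families carry a “g”, like r6g or m7g). Treat the output as candidates, not a verdict:

```python
import boto3

rds = boto3.client("rds")

def family(instance_class: str) -> str:
    # "db.r6i.large" -> "r6i"
    return instance_class.split(".")[1]

for db in rds.describe_db_instances()["DBInstances"]:
    fam = family(db["DBInstanceClass"])
    # Graviton families end in 'g' plus an optional suffix, e.g. r6g, m7g, r6gd.
    if "g" not in fam[2:]:
        print(f"{db['DBInstanceIdentifier']}: {db['DBInstanceClass']} "
              f"(check if a Graviton equivalent exists)")
```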

Mistake 6: Not actively monitoring costs or not assigning ownership

Many companies see their AWS bill each month — but don’t really know what’s driving it. You don’t need to be a FinOps expert to get your costs under control. A few simple practices can already make a big difference.

Monitor your costs monthly. Don’t just look at the total — dive into the details. Which services are driving costs? Which accounts? Which environments?

➡️ TIP: Compare to previous months. A small deviation today could become an expensive trend tomorrow.
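
That comparison is a single Cost Explorer call. A sketch that flags services growing more than 20% month over month (the dates are placeholders):

```python
import boto3

ce = boto3.client("ce")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-05-01", "End": "2025-07-01"},  # two full months
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

previous, current = resp["ResultsByTime"]
prev = {g["Keys"][0]: float(g["Metrics"]["UnblendedCost"]["Amount"])
        for g in previous["Groups"]}

# Print services whose cost grew by more than 20% month over month.
for g in current["Groups"]:
    service = g["Keys"][0]
    cost = float(g["Metrics"]["UnblendedCost"]["Amount"])
    before = prev.get(service, 0.0)
    if before and cost / before > 1.2:
        print(f"{service}: ${before:.0f} -> ${cost:.0f} (+{(cost / before - 1):.0%})")
```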

Set budgets and alerts on key services. Don’t wait for the bill to find out something went off the rails.
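
A sketch of such a budget with a forecast-based alert (the amount and e-mail address are placeholders):

```python
import boto3

budgets = boto3.client("budgets")
account_id = boto3.client("sts").get_caller_identity()["Account"]

# A monthly cost budget that alerts when forecasted spend passes 80% of it.
budgets.create_budget(
    AccountId=account_id,
    Budget={
        "BudgetName": "monthly-total",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "FORECASTED",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "team@example.com"}
            ],
        }
    ],
)
```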

Tag consistently and smartly. Every workload, resource, and environment should have clear tags. Without them, there’s no visibility or accountability. You need to know who’s responsible for which costs. (Except for things like network traffic — that still tends to be a gray area.)
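
Checking tag coverage can also be automated. A sketch using the Resource Groups Tagging API, with a hypothetical set of required tags:

```python
import boto3

tagging = boto3.client("resourcegroupstaggingapi")

REQUIRED = {"owner", "environment"}  # hypothetical tagging policy

# Walk all taggable resources in the region and flag missing required tags.
paginator = tagging.get_paginator("get_resources")
for page in paginator.paginate():
    for resource in page["ResourceTagMappingList"]:
        tags = {t["Key"] for t in resource.get("Tags", [])}
        missing = REQUIRED - tags
        if missing:
            print(f"{resource['ResourceARN']} is missing tags: {sorted(missing)}")
```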

💬 Knowing your costs isn’t the same as understanding them. Knowing why you’re paying is where the real difference starts.

FinOps = Common Sense

FinOps isn’t a straitjacket. You don’t have to cut everything. Some things need to cost money — like security, compliance, or monitoring.

But if:

– your configuration is thoughtful,
– you clean up regularly,
– and you dare to challenge what seems “normal,”

...then you create space to focus on what truly matters — without unnecessary waste.

Need help?

Want us to review your setup or identify your quick wins? Let’s talk — sometimes one session can save you thousands of dollars each month.

Safi Bouziane
AWS Cloud Engineer, Bulls-i