✍️ By Abhishek Kumar | #FirstCrazyDeveloper
🧭 The Hidden Cost of “Default is Fine”
When teams build quickly in Azure, they pick the fastest path to “it works.”
✅ Azure App Service → Premium SKU
✅ Azure Function → Elastic Premium Plan
✅ Logic App → Recurring trigger every 5 minutes
✅ Cosmos DB → Fixed 10,000 RU/s “just to be safe”
Everything seems perfect — until the FinOps team starts asking questions.
The problem isn’t what you deployed.
It’s that you never went back to check the defaults once the system stabilized.
🧭 Why Understanding This Is Important
When we build solutions in Azure, we’re encouraged to move fast.
We pick PaaS services (Platform as a Service) because they’re:
- Easy to deploy
- Fully managed by Microsoft
- Auto-scalable and secure by design
But there’s a hidden trade-off:
PaaS services are optimized for reliability and performance — not cost efficiency.
Most developers and architects never revisit their initial configurations.
The result? You end up paying for performance buffers and redundant capacity that you never use.
Understanding how Azure PaaS pricing really works helps you:
- ⚙️ Build smarter architectures aligned with workload patterns.
- 💡 Apply FinOps principles (Visibility → Optimization → Automation).
- 💸 Reduce costs sustainably without impacting performance or speed.
🚦 Why PaaS Is Rarely the Cheapest by Default
Let’s break down why this happens technically before we jump into examples.
| PaaS Service | Why Cost Rises by Default | Optimization Approach |
|---|---|---|
| App Service | Always-on compute, even when idle | Use Basic tier or scale down |
| Function App | Elastic Premium allocates reserved instances | Use Consumption plan or autoscale |
| Logic App | Time-based triggers run 24×7 | Switch to event-based triggers |
| Cosmos DB | Manual RU/s fixed for peak | Enable autoscale or serverless mode |
Now, let’s deep dive into each with real-world cases and automation examples 👇
Azure App Service: The Idle Premium Problem
💡 Why It Matters
Azure App Service Plans define the compute your web apps use.
If you’re running Premium v2/v3, you’re reserving compute all day — whether you use it or not.
Each plan is backed by dedicated VMs under the hood. Even idle apps cost the same.
🧩 Real-world Example
Scenario:
A marketing portal for a B2B product runs on a P2V3 plan (210 EUR/month).
Traffic: only 30 users weekly.
Root Cause:
The team used “Premium” during testing for performance.
After go-live, no one downgraded it to Basic B1 (10 EUR/month) or Shared plan (4 EUR/month).
Impact:
Wasted ~200 EUR/month for 0 performance gain.
🧠 Optimization Techniques
- Use Azure Advisor to detect low-utilization App Services.
- Combine apps under one plan (same region + SKU).
- Use Auto-Scale rules or Automation Account Runbook to scale down during off-hours.
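Those checks can also be codified. Here is a minimal Python sketch of a downgrade filter you might run over metrics exported from Azure Monitor (the function name and thresholds are illustrative assumptions, not an Azure API):

```python
def is_downgrade_candidate(sku_tier: str, avg_cpu_percent: float, weekly_requests: int) -> bool:
    """Flag plans paying for Premium compute with negligible traffic.

    Thresholds are illustrative; tune them to your workload baselines.
    """
    low_cpu = avg_cpu_percent < 5.0
    low_traffic = weekly_requests < 1_000
    return sku_tier.startswith("Premium") and low_cpu and low_traffic

# The marketing-portal scenario above: Premium tier, ~30 users per week
print(is_downgrade_candidate("PremiumV3", avg_cpu_percent=1.2, weekly_requests=30))       # True
print(is_downgrade_candidate("PremiumV3", avg_cpu_percent=45.0, weekly_requests=50_000))  # False
```

Feed it the averages from your App Service metrics and scale down anything it flags.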
⚙️ Implementation Examples
✅ C# – Scale Down Premium App Service Automatically
using Azure;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.AppService;
// Authenticate using Managed Identity or Azure CLI
var client = new ArmClient(new DefaultAzureCredential());
var subscription = await client.GetDefaultSubscriptionAsync();
// SKU changes go through a full create-or-update, so walk resource groups
// to reach each plan's parent collection
await foreach (var resourceGroup in subscription.GetResourceGroups().GetAllAsync())
{
    await foreach (var plan in resourceGroup.GetAppServicePlans().GetAllAsync())
    {
        if (plan.Data.Sku.Name.StartsWith("P"))
        {
            Console.WriteLine($"Scaling down {plan.Data.Name} to Basic...");
            var data = plan.Data;
            data.Sku.Name = "B1";
            data.Sku.Tier = "Basic";
            await resourceGroup.GetAppServicePlans()
                .CreateOrUpdateAsync(WaitUntil.Completed, data.Name, data);
        }
    }
}
✅ Python – Downgrade Premium Plans to the Shared Tier
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

credential = DefaultAzureCredential()
client = WebSiteManagementClient(credential, "YOUR_SUBSCRIPTION_ID")

for plan in client.app_service_plans.list():
    if plan.sku.tier == "Premium":
        # The plan model does not expose its resource group directly;
        # parse it from the resource ID
        resource_group = plan.id.split("/")[4]
        print(f"Moving {plan.name} to the Shared (D1) tier...")
        plan.sku.name = "D1"
        plan.sku.tier = "Shared"
        client.app_service_plans.begin_create_or_update(resource_group, plan.name, plan)
💰 Result: 80–90% cost reduction on low-traffic web apps.
Function Apps: Elastic Premium ≠ Elastic Cost
💡 Why It Matters
Developers often pick Elastic Premium to avoid “cold starts.”
But this plan pre-warms instances even with 0 invocations.
Result: Reserved compute → fixed monthly cost.
🧩 Real-world Example
Scenario:
A Function App that syncs user data once every 30 minutes runs on Elastic Premium (EP1).
Actual monthly executions: only ~1,500.
Issue:
Minimum charge for EP1 = ~150 EUR/month.
Consumption plan for same workload = <1 EUR/month.
🧠 Optimization Techniques
- Use Consumption Plan for event-driven or batch workloads.
- Use Queue/Service Bus triggers to handle concurrency.
- Enable Always On only if strictly required (e.g., for webhooks).
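To see the break-even concretely, here is a rough Python comparison for this workload (the unit prices are illustrative assumptions, not official list prices, and the monthly free grant is ignored):

```python
def consumption_cost_eur(executions: int, avg_gb: float = 0.5, avg_seconds: float = 1.0,
                         per_million_exec_eur: float = 0.17,
                         per_gb_second_eur: float = 0.000014) -> float:
    """Approximate monthly cost on the Consumption plan (free grant ignored)."""
    execution_cost = executions / 1_000_000 * per_million_exec_eur
    compute_cost = executions * avg_gb * avg_seconds * per_gb_second_eur
    return execution_cost + compute_cost

EP1_FLAT_EUR = 150  # approximate EP1 minimum monthly charge quoted above

monthly_executions = 1_500  # one sync every 30 minutes
print(f"Consumption: ~{consumption_cost_eur(monthly_executions):.3f} EUR "
      f"vs EP1: {EP1_FLAT_EUR} EUR")
```

Even at ten times the assumed rates, the Consumption plan stays orders of magnitude cheaper for a workload this small.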
⚙️ Implementation
✅ Check Function Plan via Azure CLI
az functionapp plan list --query "[].{Name:name, SKU:sku.name}" -o table
✅ Move to Consumption Plan via Bicep
resource appServicePlan 'Microsoft.Web/serverfarms@2022-03-01' = {
  name: 'consumption-plan'
  location: resourceGroup().location
  sku: {
    name: 'Y1'
    tier: 'Dynamic'
  }
}

resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
  name: 'my-func-app'
  location: resourceGroup().location
  kind: 'functionapp'
  properties: {
    serverFarmId: appServicePlan.id
  }
}
Logic Apps: The “Every 5-Minute” Trap
💡 Why It Matters
Every Logic App execution incurs cost — even when there’s nothing to process.
Polling every 5 minutes means 288 runs/day (about 8,640 runs/month), even if the source system updates once daily.
🧩 Real-world Example
Scenario:
A Logic App polls SharePoint for new items every 5 minutes.
Each run costs ~0.00025 EUR.
That's 288 runs/day * 30 days * 0.00025 ≈ 2 EUR/month just for the trigger, and every billed action in the workflow multiplies that, all spent processing nothing.
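A quick Python sanity check of the polling arithmetic (the per-action price matches the figure above; the number of billed actions per run is an illustrative assumption):

```python
POLL_INTERVAL_MINUTES = 5
COST_PER_ACTION_EUR = 0.00025  # approximate per-execution price used above
ACTIONS_PER_RUN = 4            # illustrative assumption: trigger plus a few connector actions

# 5-minute polling -> 288 trigger fires per day, 8,640 per month
runs_per_month = (24 * 60 // POLL_INTERVAL_MINUTES) * 30
monthly_cost_eur = runs_per_month * ACTIONS_PER_RUN * COST_PER_ACTION_EUR
print(f"{runs_per_month} runs/month -> ~{monthly_cost_eur:.2f} EUR/month")
# → 8640 runs/month -> ~8.64 EUR/month
```

The absolute numbers look small for one workflow, but multiply by dozens of Logic Apps per subscription and the idle-polling bill becomes very visible.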
🧠 Optimization Techniques
- Replace recurrence triggers with Event Grid or Service Bus events.
- If polling is unavoidable, increase frequency to every hour or day.
- If you must loop, gate Until loops on change detection so they exit as soon as there is nothing new to process.
⚙️ Implementation Example
Event-Based Trigger (Recommended)
{
  "triggers": {
    "When_a_resource_event_occurs": {
      "type": "Microsoft.EventGrid.Subscription",
      "eventType": "Microsoft.Storage.BlobCreated"
    }
  }
}
💡 Cost Impact
✅ ~288 → ~15 executions/day
✅ Saved ~95% of the Logic App cost
Cosmos DB: The RU/s Overprovisioning Trap
💡 Why It Matters
Cosmos DB throughput is measured in Request Units per second (RU/s).
Setting manual 10,000 RU/s means you’re billed for that every second, even with 1% usage.
🧩 Real-world Example
Scenario:
An IoT telemetry archive used 10,000 RU/s fixed.
Average consumption: 700 RU/s.
After switching to Autoscale (max 4,000 RU/s) → 60% monthly saving.
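The waste in this scenario is easy to quantify. A minimal Python sketch (manual throughput is billed per 100 RU/s provisioned, per hour; the hourly rate here is an illustrative assumption, so check your region's pricing):

```python
PROVISIONED_RU = 10_000          # fixed manual throughput from the scenario
AVG_USED_RU = 700                # average actual consumption
RATE_PER_100RU_HOUR_EUR = 0.008  # illustrative rate, not an official price
HOURS_PER_MONTH = 730

# You pay for every provisioned RU/s, every hour, regardless of usage
monthly_cost_eur = PROVISIONED_RU / 100 * RATE_PER_100RU_HOUR_EUR * HOURS_PER_MONTH
utilization = AVG_USED_RU / PROVISIONED_RU
print(f"~{monthly_cost_eur:.0f} EUR/month at {utilization:.0%} average utilization")
# → ~584 EUR/month at 7% average utilization
```

Running at single-digit utilization on manual throughput is the clearest autoscale (or serverless) candidate there is.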
🧠 Optimization Techniques
- Use Autoscale mode (maxThroughput) instead of manual RU/s.
- Use Serverless for low-traffic or dev environments.
- Enable TTL (Time to Live) for data lifecycle management.
⚙️ Implementation
✅ Python – Convert Manual to Autoscale
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient

credential = DefaultAzureCredential()
client = CosmosDBManagementClient(credential, "YOUR_SUBSCRIPTION_ID")

# Read the database-level throughput settings to see whether autoscale is on
throughput = client.sql_resources.get_sql_database_throughput("rg-name", "account-name", "db-name")
if throughput.resource.autoscale_settings is None:
    print("Migrating database to autoscale throughput...")
    client.sql_resources.begin_migrate_sql_database_to_autoscale("rg-name", "account-name", "db-name").result()
✅ C# – Adjust RUs Based on Usage
using Azure;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.CosmosDB;
using Azure.ResourceManager.CosmosDB.Models;
// Sketch based on the Azure.ResourceManager.CosmosDB SDK; exact model and
// extension-method names can vary between SDK versions
var client = new ArmClient(new DefaultAzureCredential());
var throughputId = CosmosDBSqlDatabaseThroughputSettingResource.CreateResourceIdentifier(
    "YOUR_SUBSCRIPTION_ID", "rg-name", "account-name", "db-name");
var throughputSetting = client.GetCosmosDBSqlDatabaseThroughputSettingResource(throughputId);
var throughput = await throughputSetting.GetAsync();
if (throughput.Value.Data.Resource.Throughput > 4000)
{
    Console.WriteLine("Scaling down to 4000 RU/s");
    var update = new ThroughputSettingsUpdateData(
        throughput.Value.Data.Location,
        new ThroughputSettingsResourceInfo { Throughput = 4000 });
    await throughputSetting.CreateOrUpdateAsync(WaitUntil.Completed, update);
}
🧩 Azure FinOps Cycle: Engineering Meets Cost Awareness
FinOps isn’t just a finance process — it’s an engineering discipline.
🔁 The FinOps Cycle
- Observe → Use Azure Cost Analysis, Monitor, and Advisor
- Analyze → Identify idle or over-provisioned resources
- Act → Automate scaling, optimize tiers, consolidate plans
- Review → Repeat quarterly
⚙️ Automate FinOps Checks with Python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, "YOUR_SUBSCRIPTION_ID")

for group in client.resource_groups.list():
    print(f"Scanning resources in {group.name}")
    for res in client.resources.list_by_resource_group(group.name):
        # Resource types are namespaced, e.g. Microsoft.Web/sites (App Service),
        # Microsoft.Web/serverfarms (plans), Microsoft.DocumentDB/databaseAccounts (Cosmos DB)
        if any(x in res.type.lower() for x in ["microsoft.web/", "microsoft.documentdb/"]):
            print(f"Found candidate for optimization: {res.name} - {res.type}")
You can integrate this script with:
- Azure Logic Apps for scheduling
- Power BI dashboards for reporting
- Azure DevOps pipelines for automated reviews
🧠 Abhishek’s Take
“Cloud optimization isn’t about cutting corners — it’s about smart allocation.
Azure’s PaaS services are powerful accelerators, but they come with performance-first defaults.
Revisit your configurations regularly; what was safe during development may be overkill once production stabilizes. A truly mature Azure Architect doesn't just deploy services; they tune the cloud for performance, reliability, and cost efficiency.”
🏁 Final Takeaways
- PaaS services trade cost efficiency for ease of use.
- Defaults are designed for maximum reliability, not minimal cost.
- Regular FinOps reviews are as important as security or performance audits.
- Automate optimization with SDKs, Bicep, or Azure Policy.