AWS Cost Optimisation: Complete Guide for Australian Businesses
Master AWS cost optimisation with practical strategies for reducing cloud spend while maintaining performance, reliability, and security for your Australian business.
CloudPoint Team
AWS costs can quickly spiral out of control without proper management. For Australian businesses, where every dollar counts, effective cost optimisation is essential for maintaining a sustainable cloud footprint while delivering business value.
The AWS Cost Challenge
Common cost pitfalls:
- Zombie Resources: Forgotten instances running 24/7
- Oversized Instances: More capacity than needed
- Inefficient Storage: Old snapshots, unused volumes
- Data Transfer: Unnecessary cross-region or internet traffic
- Lack of Visibility: Not knowing where money is spent
- Auto-Scaling Misconfig: Scaling up but never down
- Development Waste: Prod-sized resources in dev/test
These issues compound quickly - a $100/month waste becomes $1,200/year, and across multiple services and accounts, costs balloon.
Cost Optimisation Framework
Effective cost optimisation follows four principles:
1. Visibility
You can’t optimise what you can’t measure.
Cost Allocation Tags:
{
  "Environment": "production",
  "Application": "web-app",
  "Owner": "engineering-team",
  "CostCenter": "product-development",
  "Project": "customer-portal"
}
Apply consistently across all resources for granular cost tracking.
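Tags can be applied in the console, through infrastructure-as-code, or with a few lines of boto3. A minimal sketch, using a placeholder instance ID (remember to also activate the tag keys as cost allocation tags in the Billing console before they appear in cost reports):
# tag_resources.py - apply the cost allocation tags above (illustrative only)
import boto3

ec2 = boto3.client('ec2')

ec2.create_tags(
    Resources=['i-0123456789abcdef0'],  # placeholder instance ID
    Tags=[
        {'Key': 'Environment', 'Value': 'production'},
        {'Key': 'Application', 'Value': 'web-app'},
        {'Key': 'Owner', 'Value': 'engineering-team'},
        {'Key': 'CostCenter', 'Value': 'product-development'},
        {'Key': 'Project', 'Value': 'customer-portal'}
    ]
)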
AWS Cost Explorer:
- View costs by service, account, tag, region
- Identify trends and anomalies
- Forecast future spending
- Download detailed cost data
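The same data is available programmatically through the Cost Explorer API, which is handy for feeding dashboards or regular reports. A minimal boto3 sketch (the dates are placeholders; the End date is exclusive):
# cost_by_service.py - cost for a period grouped by service (illustrative)
import boto3

ce = boto3.client('ce')

response = ce.get_cost_and_usage(
    TimePeriod={'Start': '2024-11-01', 'End': '2024-11-30'},
    Granularity='MONTHLY',
    Metrics=['UnblendedCost'],
    GroupBy=[{'Type': 'DIMENSION', 'Key': 'SERVICE'}]
)

for group in response['ResultsByTime'][0]['Groups']:
    service = group['Keys'][0]
    amount = float(group['Metrics']['UnblendedCost']['Amount'])
    print(f'{service}: ${amount:,.2f}')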
Cost and Usage Reports (CUR): The most granular billing data AWS provides, including:
- Hourly resource usage
- Pricing details
- Resource IDs
- Tags
Store the reports in S3 and analyse them with Athena or QuickSight.
2. Right-Sizing
Match resources to actual requirements.
Compute Right-Sizing: Use AWS Compute Optimizer recommendations:
- Analyzes CloudWatch metrics
- Recommends instance type changes
- Projects cost savings
- Minimal performance impact
Storage Right-Sizing:
- S3 Intelligent-Tiering
- EBS volume resizing
- RDS instance optimisation
- Delete unused volumes/snapshots
3. Purchasing Options
Use the right pricing model for each workload.
On-Demand: Pay per second or hour, no commitment
- Good for: Unpredictable workloads, testing, short-term
- Most expensive
- Maximum flexibility
Reserved Instances: 1 or 3-year commitment
- 30-75% discount vs on-demand
- Good for: Steady-state workloads
- Regional or zonal
- Standard or convertible
Savings Plans: 1 or 3-year commitment
- Similar discounts to RIs
- More flexible (apply across instance families)
- Compute or EC2 Savings Plans
- Good for: Dynamic workloads with consistent spend
Spot Instances: Use spare EC2 capacity at the current Spot price (no bidding required)
- Up to 90% discount vs on-demand
- Can be interrupted
- Good for: Batch jobs, stateless apps, fault-tolerant workloads
4. Automation
Manual cost management doesn’t scale.
Automated Schedules: Start/stop resources based on usage patterns
- Dev/test environments off nights and weekends
- Batch processing on-demand
- Auto Scaling based on load
Automated Cleanup:
- Delete old snapshots
- Remove unattached volumes
- Clean up unused elastic IPs
- Decommission zombie resources
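As a sketch of what automated cleanup can look like, the snippet below lists snapshots older than 90 days and leaves the actual delete call commented out until you have verified the candidates (the 90-day cutoff is an assumption; align it with your retention policy):
# snapshot_cleanup.py - report snapshots older than 90 days (illustrative)
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client('ec2')
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

paginator = ec2.get_paginator('describe_snapshots')
for page in paginator.paginate(OwnerIds=['self']):
    for snapshot in page['Snapshots']:
        if snapshot['StartTime'] < cutoff:
            print(f"Deletion candidate: {snapshot['SnapshotId']} ({snapshot['StartTime']:%Y-%m-%d})")
            # Uncomment once verified - and exclude snapshots that back AMIs you still need
            # ec2.delete_snapshot(SnapshotId=snapshot['SnapshotId'])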
Quick Wins: Immediate Cost Reductions
1. Delete Unused Resources
Find and Remove:
# Unattached EBS volumes
aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[*].{ID:VolumeId,Size:Size,Type:VolumeType}' \
  --output table

# Unused elastic IPs
aws ec2 describe-addresses \
  --query 'Addresses[?AssociationId==null].{IP:PublicIp}' \
  --output table

# Old snapshots (adjust the date to roughly 90 days ago)
aws ec2 describe-snapshots \
  --owner-ids self \
  --query 'Snapshots[?StartTime<=`2024-08-01`].{ID:SnapshotId,Date:StartTime}' \
  --output table
Typical Savings: 10-15% of EC2/EBS costs
2. Right-Size Oversized Instances
Use AWS Compute Optimizer:
aws compute-optimizer get-ec2-instance-recommendations \
  --query 'instanceRecommendations[*].[currentInstanceType,recommendationOptions[0].instanceType,recommendationOptions[0].savingsOpportunity.estimatedMonthlySavings.value]' \
  --output table
Implement recommendations with minimal performance impact.
Typical Savings: 20-30% of EC2 costs
3. Implement S3 Lifecycle Policies
Move infrequently accessed data to cheaper storage:
{
  "Rules": [{
    "Id": "Archive old data",
    "Status": "Enabled",
    "Transitions": [
      {
        "Days": 30,
        "StorageClass": "STANDARD_IA"
      },
      {
        "Days": 90,
        "StorageClass": "GLACIER"
      },
      {
        "Days": 365,
        "StorageClass": "DEEP_ARCHIVE"
      }
    ],
    "Expiration": {
      "Days": 2555
    }
  }]
}
Typical Savings: 40-60% of S3 costs
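If you manage buckets programmatically, the same policy can be applied with boto3. A minimal sketch (the bucket name is a placeholder; the rule mirrors the JSON above):
# apply_lifecycle.py - attach the lifecycle policy above to a bucket (illustrative)
import boto3

s3 = boto3.client('s3')

s3.put_bucket_lifecycle_configuration(
    Bucket='my-bucket',  # placeholder bucket name
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'Archive old data',
            'Status': 'Enabled',
            'Filter': {'Prefix': ''},  # apply to the whole bucket
            'Transitions': [
                {'Days': 30, 'StorageClass': 'STANDARD_IA'},
                {'Days': 90, 'StorageClass': 'GLACIER'},
                {'Days': 365, 'StorageClass': 'DEEP_ARCHIVE'}
            ],
            'Expiration': {'Days': 2555}
        }]
    }
)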
4. Enable S3 Intelligent-Tiering
Automatic cost optimisation for S3:
aws s3api put-bucket-intelligent-tiering-configuration \
  --bucket my-bucket \
  --id EntireBucket \
  --intelligent-tiering-configuration '{
    "Id": "EntireBucket",
    "Status": "Enabled",
    "Tierings": [
      {
        "Days": 90,
        "AccessTier": "ARCHIVE_ACCESS"
      },
      {
        "Days": 180,
        "AccessTier": "DEEP_ARCHIVE_ACCESS"
      }
    ]
  }'
Typical Savings: 30-50% of S3 costs with no operational overhead
5. Stop Dev/Test Environments
Schedule instances to run only during business hours:
# lambda_function.py - Instance Scheduler
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client('ec2')

# Lambda clocks run in UTC, so convert to AEST (UTC+10) before checking
# business hours. Swap in a proper timezone library if you need to
# account for daylight saving (AEDT).
AEST = timezone(timedelta(hours=10))

def lambda_handler(event, context):
    # Check whether the current time is within business hours (AEST)
    current_hour = datetime.now(AEST).hour

    # Find instances tagged for scheduling
    instances = ec2.describe_instances(
        Filters=[
            {'Name': 'tag:Schedule', 'Values': ['business-hours']},
            {'Name': 'instance-state-name', 'Values': ['running', 'stopped']}
        ]
    )

    for reservation in instances['Reservations']:
        for instance in reservation['Instances']:
            instance_id = instance['InstanceId']
            state = instance['State']['Name']

            # Start if business hours (8 AM - 6 PM) and currently stopped
            if 8 <= current_hour < 18 and state == 'stopped':
                ec2.start_instances(InstanceIds=[instance_id])
            # Stop if outside business hours and currently running
            elif (current_hour < 8 or current_hour >= 18) and state == 'running':
                ec2.stop_instances(InstanceIds=[instance_id])
Typical Savings: 65-75% on scheduled resources
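The scheduler also needs something to invoke it regularly. One option is an hourly EventBridge rule; a hedged sketch, assuming the function is named instance-scheduler and the ARN shown is a placeholder:
# schedule_trigger.py - invoke the scheduler Lambda every hour (illustrative)
import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

rule = events.put_rule(
    Name='instance-scheduler-hourly',
    ScheduleExpression='rate(1 hour)',
    State='ENABLED'
)

# Allow EventBridge to invoke the function, then attach it as a target
lambda_client.add_permission(
    FunctionName='instance-scheduler',
    StatementId='AllowEventBridgeInvoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn']
)

events.put_targets(
    Rule='instance-scheduler-hourly',
    Targets=[{
        'Id': 'instance-scheduler',
        'Arn': 'arn:aws:lambda:ap-southeast-2:123456789012:function:instance-scheduler'  # placeholder ARN
    }]
)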
6. Use NAT Gateway Alternatives
NAT Gateways cost ~$0.059/hour + data transfer.
Alternatives:
- NAT Instances: Lower hourly cost, requires management
- VPC Endpoints: Free for S3/DynamoDB, eliminates NAT Gateway for these services
- Consolidate: One NAT Gateway per AZ instead of per subnet
Typical Savings: $500-2,000/month per NAT Gateway eliminated
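For the VPC Endpoints option above, a gateway endpoint for S3 takes a single API call and has no hourly charge. A minimal boto3 sketch (the VPC and route table IDs are placeholders):
# s3_gateway_endpoint.py - route S3 traffic privately, bypassing the NAT Gateway (illustrative)
import boto3

ec2 = boto3.client('ec2', region_name='ap-southeast-2')

ec2.create_vpc_endpoint(
    VpcId='vpc-0123456789abcdef0',           # placeholder VPC ID
    ServiceName='com.amazonaws.ap-southeast-2.s3',
    VpcEndpointType='Gateway',
    RouteTableIds=['rtb-0123456789abcdef0']  # placeholder route table ID
)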
Service-Specific Optimisation
EC2 Cost Optimisation
1. Use Graviton Instances
AWS Graviton (ARM-based) offers 20-40% better price-performance:
- t4g, m7g, c7g, r7g families
- Works with Amazon Linux 2, Ubuntu, and others
2. Implement Auto Scaling
Scale based on demand:
- CPU utilization
- Request count
- Custom metrics
- Scheduled scaling
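For example, a target tracking policy holds average CPU near a chosen value and scales in automatically when demand drops. A minimal sketch (the Auto Scaling group name and the 50% target are placeholders):
# cpu_target_tracking.py - scale an ASG to hold ~50% average CPU (illustrative)
import boto3

autoscaling = boto3.client('autoscaling')

autoscaling.put_scaling_policy(
    AutoScalingGroupName='web-app-asg',  # placeholder ASG name
    PolicyName='cpu-target-tracking',
    PolicyType='TargetTrackingScaling',
    TargetTrackingConfiguration={
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'ASGAverageCPUUtilization'
        },
        'TargetValue': 50.0
    }
)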
3. Use Spot for Fault-Tolerant Workloads
# ECS service capacity provider strategy mixing Fargate Spot and On-Demand
capacityProviderStrategy:
  - capacityProvider: FARGATE_SPOT
    weight: 3
    base: 0
  - capacityProvider: FARGATE
    weight: 1
    base: 2
Mix of Spot and On-Demand for resilience.
RDS Cost Optimisation
1. Right-Size Instances
Start smaller, scale up if needed:
- Monitor CPU, memory, disk I/O
- Use Performance Insights
- Test performance with smaller instances
2. Use Aurora Serverless v2
For variable workloads:
- Scales automatically
- Pay per ACU (Aurora Capacity Unit)
- No compute charge while paused at zero capacity (storage is still billed)
- Sub-second scaling
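Moving an existing Aurora cluster toward Serverless v2 is largely a matter of setting a capacity range; new instances in the cluster then use the db.serverless instance class. A hedged sketch (the cluster identifier and capacity bounds are placeholders):
# aurora_serverless_v2.py - set a Serverless v2 capacity range on a cluster (illustrative)
import boto3

rds = boto3.client('rds')

rds.modify_db_cluster(
    DBClusterIdentifier='reporting-cluster',  # placeholder cluster identifier
    ServerlessV2ScalingConfiguration={
        'MinCapacity': 0.5,
        'MaxCapacity': 8.0
    }
)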
3. Implement Reserved Instances
For production databases:
- 35-60% discount
- 1 or 3-year terms
- Size flexibility within instance family
4. Optimize Storage
- Delete old automated backups
- Reduce backup retention period
- Use snapshot lifecycle policies
- Consider Aurora I/O-Optimized for high I/O workloads
S3 Cost Optimisation
1. Storage Class Optimisation
STANDARD: $0.023/GB/month
STANDARD_IA: $0.0125/GB/month (50% savings)
GLACIER: $0.004/GB/month (83% savings)
GLACIER_DEEP_ARCHIVE: $0.00099/GB/month (96% savings)
2. Request Optimisation
- Batch operations instead of individual requests
- Use CloudFront for frequently accessed objects
- Implement caching headers
3. Data Transfer Optimisation
- Use CloudFront instead of direct S3 access
- Access from same region
- Use VPC Endpoint (free data transfer)
- Compress before storing
Lambda Cost Optimisation
1. Optimize Memory Allocation
Test different memory settings:
- More memory = faster execution = potentially cheaper
- Use Lambda Power Tuning
2. Reduce Execution Time
- Optimize code
- Reuse connections
- Lazy load dependencies
- Use Provisioned Concurrency wisely
3. Architecture Optimisation
- Consider Step Functions for orchestration
- Use EventBridge instead of polling
- Implement caching (ElastiCache, DAX)
Data Transfer Cost Optimisation
Data transfer is often overlooked but expensive.
Costs (from Sydney region):
- Within same AZ: Free
- Between AZs: $0.01/GB
- Between regions: $0.09/GB
- To internet: $0.114/GB (first 10 TB)
Optimisation strategies:
- Architect for locality: Keep data close to compute
- Use CloudFront: Cheaper data transfer to internet
- VPC Endpoints: Free transfer to S3/DynamoDB
- Compression: Reduce data transferred
- PrivateLink: Keep traffic on AWS network
Reserved Capacity and Savings Plans
When to Use Reserved Instances
Good for:
- Steady-state workloads
- Production databases
- Baseline compute capacity
- Predictable usage
Not good for:
- Variable workloads
- Short-term projects
- Uncertain requirements
Savings Plans vs Reserved Instances
Savings Plans:
- Pros: Flexible across instance types, more forgiving
- Cons: Requires consistent spend commitment
Reserved Instances:
- Pros: Highest discount, can resell on marketplace
- Cons: Less flexible, specific instance type
Recommendation: Savings Plans for most workloads, RIs for specific long-term database instances.
Coverage Targets
Conservative: 60-70% reserved capacity
Moderate: 70-80% reserved capacity
Aggressive: 80-90% reserved capacity
Leave 10-30% as on-demand for growth and flexibility.
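You can track where you sit against these targets with the Cost Explorer API. A minimal sketch for Savings Plans coverage (the dates are placeholders; get_reservation_coverage works similarly for Reserved Instances):
# sp_coverage.py - report Savings Plans coverage for a period (illustrative)
import boto3

ce = boto3.client('ce')

coverage = ce.get_savings_plans_coverage(
    TimePeriod={'Start': '2024-10-01', 'End': '2024-10-31'},
    Granularity='MONTHLY'
)

for item in coverage['SavingsPlansCoverages']:
    print(f"Savings Plans coverage: {item['Coverage']['CoveragePercentage']}%")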
Cost Governance and Controls
Budgets and Alerts
aws budgets create-budget \
  --account-id 123456789012 \
  --budget '{
    "BudgetName": "Monthly-EC2-Budget",
    "BudgetLimit": {
      "Amount": "1000",
      "Unit": "USD"
    },
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST",
    "CostFilters": {
      "Service": ["Amazon Elastic Compute Cloud - Compute"]
    }
  }' \
  --notifications-with-subscribers '[
    {
      "Notification": {
        "NotificationType": "ACTUAL",
        "ComparisonOperator": "GREATER_THAN",
        "Threshold": 80
      },
      "Subscribers": [
        {
          "SubscriptionType": "EMAIL",
          "Address": "team@example.com"
        }
      ]
    }
  ]'
Set budgets for:
- Overall account spending
- Individual services
- Projects/applications
- Teams/cost centers
- Environments (prod, dev, test)
Cost Anomaly Detection
AWS automatically identifies unusual spending:
aws ce create-anomaly-monitor \
  --anomaly-monitor '{
    "MonitorName": "ProductionCostMonitor",
    "MonitorType": "DIMENSIONAL",
    "MonitorDimension": "SERVICE"
  }'
Alerts are sent when spending deviates from normal patterns; the recipients, frequency, and dollar threshold are configured through an anomaly subscription attached to the monitor.
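A hedged boto3 sketch of such a subscription (the monitor ARN, email address, and $100 threshold are placeholders):
# anomaly_subscription.py - attach alert recipients to an anomaly monitor (illustrative)
import boto3

ce = boto3.client('ce')

ce.create_anomaly_subscription(
    AnomalySubscription={
        'SubscriptionName': 'production-cost-alerts',
        'MonitorArnList': ['arn:aws:ce::123456789012:anomalymonitor/EXAMPLE'],  # placeholder ARN
        'Subscribers': [{'Type': 'EMAIL', 'Address': 'team@example.com'}],
        'Frequency': 'DAILY',
        'Threshold': 100.0  # alert on anomalies with impact above $100
    }
)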
Service Quotas and Limits
Prevent runaway costs by setting service limits:
- Maximum number of EC2 instances
- Maximum RDS storage
- Lambda concurrent executions
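As one example of the Lambda case, reserved concurrency caps how many copies of a function can run at once, which bounds its worst-case cost. A minimal sketch (the function name and limit are placeholders):
# cap_lambda_concurrency.py - limit a function to 50 concurrent executions (illustrative)
import boto3

lambda_client = boto3.client('lambda')

lambda_client.put_function_concurrency(
    FunctionName='event-processor',  # placeholder function name
    ReservedConcurrentExecutions=50
)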
Cost Optimisation Culture
Technology alone isn’t enough - build a cost-conscious culture:
1. Shared Responsibility
Make teams responsible for their costs:
- Cost allocation tags by team
- Regular cost reviews
- Budgets per team
- Cost visibility in dashboards
2. Incentive Alignment
Reward cost optimisation:
- Celebrate cost reductions
- Include in performance reviews
- Share savings across organization
3. Education and Training
Ensure teams understand:
- AWS pricing models
- Cost optimisation techniques
- Available tools and resources
- Impact of architectural decisions
4. Regular Reviews
Weekly: Review anomalies and unexpected spikes
Monthly: Analyze trends, implement quick wins
Quarterly: Assess Reserved Instance/Savings Plan coverage
Annually: Comprehensive cost optimisation review
Cost Optimisation Checklist
Immediate Actions:
- Enable cost allocation tags
- Set up AWS Budgets
- Delete unused resources
- Implement S3 lifecycle policies
- Enable Cost Anomaly Detection
Short-Term (30 days):
- Right-size oversized instances
- Schedule dev/test environments
- Implement Auto Scaling
- Review and optimise data transfer
- Enable S3 Intelligent-Tiering
Medium-Term (90 days):
- Purchase Reserved Instances/Savings Plans
- Implement cost dashboards
- Establish FinOps processes
- Optimize storage classes
- Review and optimise databases
Ongoing:
- Monthly cost reviews
- Quarterly RI/SP coverage analysis
- Continuous right-sizing
- Regular zombie resource cleanup
- Architecture reviews for cost efficiency
Conclusion
AWS cost optimisation is not a one-time project but an ongoing practice. By implementing visibility, right-sizing, purchasing optimisation, and automation, Australian businesses can significantly reduce cloud spending while maintaining or improving performance and reliability.
Start with quick wins - delete unused resources, right-size instances, implement lifecycle policies - then build toward a mature FinOps practice with automation, governance, and a cost-conscious culture.
CloudPoint specialises in AWS cost optimisation for Australian businesses. We can analyze your environment, identify savings opportunities, and implement sustainable cost management practices. Contact us for a complimentary cost optimisation assessment.
Ready to Reduce Your AWS Costs?
CloudPoint identifies waste and implements optimisations that deliver immediate savings—without impacting performance. Get in touch to get your cost analysis.