Generate a curriculum for the AWS Solutions Architect - Associate certification. The graph should map out the core services, covering IAM, EC2, S3, VPC, and RDS, and be structured as a study guide.
This curriculum focuses on developing a structured study guide for the AWS Solutions Architect - Associate (SAA-C03) certification, emphasizing core services like IAM, EC2, S3, VPC, and RDS. The SAA-C03 exam, updated for 2025, validates the ability to design secure, scalable, reliable, and cost-optimized architectures on AWS. The study guide details the core concepts and key exam topics for each specified service.
Key Facts:
- The AWS Solutions Architect - Associate (SAA-C03) exam, updated for 2025, requires understanding of designing secure, reliable, high-performing, and cost-optimized architectures.
- Identity and Access Management (IAM) is foundational for security, governing authentication and authorization through users, groups, roles, and policies.
- Amazon EC2 provides secure, resizable compute capacity with various instance types, launch options, and security mechanisms like Security Groups.
- Amazon S3 is a highly durable and scalable object storage service featuring diverse storage classes, versioning, lifecycle policies, and strong consistency.
- Amazon VPC offers a logically isolated section of the AWS cloud, allowing control over networking components such as subnets, route tables, Internet Gateways, NAT Gateways, and Network ACLs.
- Amazon RDS is a fully managed service for relational databases supporting multiple engines with features like Multi-AZ deployments and Read Replicas.
Amazon Elastic Compute Cloud (EC2)
Amazon EC2 provides secure and resizable compute capacity in the cloud, offering a wide range of instance types, operating systems, and purchase models. It's a foundational service for hosting applications and managing virtual servers.
Key Facts:
- EC2 provides secure, resizable compute capacity in the cloud.
- Instances are virtual servers launched from Amazon Machine Images (AMIs).
- Various instance types are optimized for general purpose, compute, memory, or storage.
- Cost optimization can be achieved through On-Demand, Reserved Instances, Spot Instances, and Savings Plans.
- Security Groups act as stateful virtual firewalls controlling instance traffic.
EC2 Cost Optimization
EC2 Cost Optimization involves leveraging various pricing models and tools to minimize the expense of running EC2 instances while meeting performance requirements. It includes strategies like using On-Demand, Reserved Instances, Spot Instances, and Savings Plans.
Key Facts:
- On-Demand Instances provide flexibility for unpredictable workloads without commitment, charged per second.
- Reserved Instances (RIs) offer significant discounts for 1 or 3-year commitments, suitable for steady-state workloads.
- Spot Instances use spare EC2 capacity at discounts of up to 90%; instances can be interrupted with a two-minute warning, making them ideal for fault-tolerant applications.
- Savings Plans provide flexible, commitment-based discounts across various compute services, including EC2.
- AWS Cost Explorer is a tool to analyze usage patterns and identify opportunities for cost optimization.
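As a rough illustration of how these purchase options compare, the arithmetic can be sketched in a few lines. The hourly rate and discount percentages below are hypothetical placeholders, not actual AWS pricing:

```python
# Back-of-envelope monthly cost comparison across EC2 purchase options.
# The $0.10/hr rate and the discount factors are illustrative only.
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate: float, utilization: float = 1.0) -> float:
    """Approximate monthly cost for one instance at a given utilization."""
    return hourly_rate * HOURS_PER_MONTH * utilization

on_demand = monthly_cost(0.10)          # pay-as-you-go, no commitment
reserved = monthly_cost(0.10 * 0.60)    # e.g. ~40% discount for a 1-year RI
spot = monthly_cost(0.10 * 0.30)        # e.g. ~70% discount, interruptible

print(f"On-Demand: ${on_demand:.2f}/mo")
print(f"Reserved:  ${reserved:.2f}/mo")
print(f"Spot:      ${spot:.2f}/mo")
```

The same arithmetic generalizes to Savings Plans, which trade an hourly spend commitment for a discounted rate across compute services.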
EC2 Instance Types and Use Cases
EC2 instance types categorize virtual servers (instances) by their optimized resource profiles, enabling users to select the most suitable configuration for specific application needs. Optimization areas include compute, memory, storage, and accelerated computing.
Key Facts:
- EC2 instances are grouped into families like General Purpose, Compute Optimized, Memory Optimized, Accelerated Computing, and Storage Optimized.
- General Purpose Instances (e.g., T-series, M-series) offer a balance of resources for common workloads like web servers and development environments.
- Compute Optimized Instances (e.g., C-series) are designed for high-performance processors, ideal for HPC, batch processing, and machine learning.
- Memory Optimized Instances (e.g., R-series, X-series) provide a high memory-to-vCPU ratio for in-memory databases and real-time analytics.
- Accelerated Computing Instances (e.g., P-series, G-series) utilize GPUs or FPGAs for tasks like machine learning training and video rendering.
Network Access Control Lists (NACLs)
Network Access Control Lists (NACLs) provide an optional, stateless layer of security at the subnet level within a Virtual Private Cloud (VPC). Unlike Security Groups, NACLs can explicitly allow or deny traffic and process rules in a numbered order.
Key Facts:
- NACLs are stateless, requiring separate rules for both inbound and outbound traffic.
- They operate at the subnet level, applying security policies to all instances within a subnet.
- NACLs support both 'allow' and 'deny' rules, processing them in numbered order (lowest number first).
- Every subnet must be associated with a NACL; the default NACL allows all inbound and outbound traffic, whereas a newly created custom NACL denies all traffic until rules are added.
- Best practices include leaving gaps between rule numbers for future additions and using separate NACLs for different security zones.
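The numbered-order, first-match evaluation described above can be sketched in a few lines. This is a simplified model covering only port ranges, not protocols or source CIDRs:

```python
# Minimal simulation of NACL rule evaluation: rules are checked in
# ascending rule-number order and the first match decides; if nothing
# matches, the implicit "*" rule denies the traffic.
def evaluate_nacl(rules, port):
    """rules: list of (rule_number, (low_port, high_port), action) tuples."""
    for number, (low, high), action in sorted(rules):
        if low <= port <= high:
            return action  # first matching rule wins
    return "deny"  # implicit deny (the "*" rule)

inbound_rules = [
    (100, (443, 443), "allow"),    # HTTPS
    (200, (22, 22), "deny"),       # block SSH
    (300, (1024, 65535), "allow"), # ephemeral ports for return traffic
]

print(evaluate_nacl(inbound_rules, 443))  # allow
print(evaluate_nacl(inbound_rules, 22))   # deny
print(evaluate_nacl(inbound_rules, 80))   # deny (no match -> implicit deny)
```

Note how the gap between rule numbers 100 and 200 leaves room for future rules, matching the best practice above.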
Security Groups
Security Groups act as stateful virtual firewalls that control inbound and outbound traffic for individual EC2 instances. They operate at the instance level and support only explicit 'allow' rules, denying all other traffic by default.
Key Facts:
- Security Groups are stateful, meaning return traffic for an allowed inbound request is automatically permitted.
- They operate at the instance level, providing fine-grained traffic control for specific EC2 instances.
- By default, all inbound traffic is denied, and all outbound traffic is allowed.
- Security Groups only support 'allow' rules; anything not explicitly allowed is denied.
- Multiple security groups can be attached to a single EC2 instance.
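The allow-only, any-group-matches behavior can be modeled similarly. This sketch ignores statefulness and source CIDRs and checks only protocol and port:

```python
# Security groups support only "allow" rules; traffic is permitted if ANY
# attached group allows it, and implicitly denied otherwise.
def evaluate_security_groups(groups, port, protocol="tcp"):
    """groups: list of groups, each a list of (protocol, low_port, high_port)."""
    for group in groups:
        for proto, low, high in group:
            if proto == protocol and low <= port <= high:
                return "allow"
    return "deny"  # nothing matched: implicit deny

# Two groups attached to the same instance: their rules are unioned.
web_sg = [("tcp", 80, 80), ("tcp", 443, 443)]
admin_sg = [("tcp", 22, 22)]

print(evaluate_security_groups([web_sg, admin_sg], 443))   # allow
print(evaluate_security_groups([web_sg, admin_sg], 22))    # allow
print(evaluate_security_groups([web_sg, admin_sg], 3306))  # deny
```

Because the groups are evaluated as a union, attaching an additional group can only widen access, never narrow it.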
Amazon Relational Database Service (RDS)
Amazon RDS is a fully managed service that simplifies the setup, operation, and scaling of relational databases in the cloud. It supports various database engines and offers features for high availability, disaster recovery, and performance optimization.
Key Facts:
- RDS is a fully managed service for relational databases, handling administrative tasks like patching and backups.
- It supports multiple database engines including MySQL, PostgreSQL, Oracle, SQL Server, and Amazon Aurora.
- Multi-AZ deployments provide high availability and disaster recovery through synchronous replication.
- Read Replicas use asynchronous replication to scale read operations and improve performance.
- RDS supports encryption at rest and in transit.
Backup and Restore Strategies
Amazon RDS provides comprehensive backup and restore capabilities, including automated backups and manual snapshots, to ensure data protection and recovery. Automated backups enable point-in-time recovery, while manual snapshots offer flexible retention for specific needs.
Key Facts:
- Automated backups are enabled by default and include daily full snapshots and continuous transaction log backups.
- Automated backup retention can be configured from 1 to 35 days for point-in-time recovery (PITR).
- Manual snapshots can be created at any time and are retained indefinitely until deleted.
- Point-in-Time Recovery (PITR) allows restoration to any second within the backup retention window.
- Backups are essential for data protection and recovery in case of data loss or corruption.
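The retention-window arithmetic behind PITR can be illustrated with a small helper; the dates below are hypothetical:

```python
from datetime import datetime, timedelta

# Check whether a requested point-in-time restore falls inside the
# automated-backup retention window (1-35 days in RDS).
def restorable(requested: datetime, now: datetime, retention_days: int) -> bool:
    if not 1 <= retention_days <= 35:
        raise ValueError("RDS retention must be between 1 and 35 days")
    earliest = now - timedelta(days=retention_days)
    return earliest <= requested <= now

now = datetime(2025, 6, 15, 12, 0)
print(restorable(datetime(2025, 6, 10), now, retention_days=7))  # True
print(restorable(datetime(2025, 5, 1), now, retention_days=7))   # False
```

Restores outside this window must come from a manual snapshot, which is retained until explicitly deleted.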
High Availability and Disaster Recovery
Amazon RDS offers robust solutions for high availability and disaster recovery through features like Multi-AZ Deployments and Read Replicas. Multi-AZ ensures data durability and minimal downtime via synchronous replication and automatic failover, while Read Replicas enhance read scalability and can serve as disaster recovery options across regions.
Key Facts:
- Multi-AZ deployments provide high availability and disaster recovery through synchronous replication to a standby instance.
- If the primary instance in a Multi-AZ deployment fails, RDS automatically fails over to the standby instance.
- Read Replicas improve read scalability by creating asynchronous copies of the primary database.
- Read Replicas can be provisioned within the same AZ, across AZs, or across regions.
- Multi-AZ ensures data durability and minimal downtime, while Read Replicas offload read-intensive workloads.
Monitoring Amazon RDS Performance
Monitoring Amazon RDS performance is crucial for maintaining optimal database health and identifying potential issues. RDS publishes metrics to Amazon CloudWatch, while Enhanced Monitoring and Performance Insights provide deeper OS-level and database-load visibility.
Key Facts:
- RDS publishes instance metrics to Amazon CloudWatch, and Enhanced Monitoring adds OS-level metrics at finer granularity.
- RDS Performance Insights provides deeper visibility into database load and wait events.
- Monitored metrics include CPU, memory, file system, and disk I/O.
- Performance monitoring helps identify and resolve performance bottlenecks.
- Proactive monitoring ensures optimal database operation and resource utilization.
Scalability
Amazon RDS provides "push-button scalability" for compute, memory, and storage resources, allowing users to easily adjust capacity based on demand. Storage can typically be increased without downtime, while changing the instance class usually involves a brief interruption, so capacity changes should be planned around fluctuating workloads.
Key Facts:
- RDS offers "push-button scalability" for compute, memory, and storage.
- Resources can be scaled up or down based on demand.
- Storage can be increased without downtime.
- Scalability helps in handling fluctuating workloads.
- Read Replicas use asynchronous replication to scale read operations for performance improvement.
Security in Amazon RDS
Security in Amazon RDS is paramount, encompassing encryption at rest and in transit, as well as granular access control through AWS IAM. Encryption applies to the database instance, backups, snapshots, and Read Replicas, ensuring data protection throughout its lifecycle.
Key Facts:
- Amazon RDS supports encryption at rest using AWS Key Management Service (KMS).
- Encryption in transit is provided using SSL/TLS.
- Encryption applies to the database instance, backups, snapshots, and Read Replicas.
- RDS integrates with AWS Identity and Access Management (IAM) for granular access control.
- Security measures protect sensitive data from unauthorized access and breaches.
Supported Database Engines
Amazon RDS supports a diverse range of popular database engines, including open-source options like MySQL and PostgreSQL, and commercial databases such as Oracle and SQL Server. The selection of an engine is crucial and depends on specific application requirements and use cases, with MySQL and PostgreSQL often favored for new projects due to their community support and cost-effectiveness.
Key Facts:
- Amazon RDS supports MySQL, PostgreSQL, Oracle, SQL Server, MariaDB, and Amazon Aurora.
- Engine choice depends on specific use cases and application requirements.
- MySQL and PostgreSQL are often recommended for new users due to open-source nature, cost-effectiveness, and broad community support.
- Amazon Aurora is an AWS-native relational database offering higher performance for demanding OLTP workloads.
- Oracle and SQL Server offer advanced features but may have higher licensing costs.
Amazon Simple Storage Service (S3)
Amazon S3 is a highly durable, scalable, and cost-effective object storage service, fundamental for storing and retrieving any amount of data from anywhere on the web. It offers various storage classes and robust data management features.
Key Facts:
- S3 is a highly durable, scalable, and cost-effective object storage service.
- Data is stored in Buckets, which contain Objects identified by unique keys.
- Different Storage Classes (e.g., Standard, Intelligent-Tiering, Glacier) balance cost, performance, and retrieval time.
- Versioning stores multiple object versions for rollback and protection against accidental changes.
- S3 offers strong read-after-write consistency for all operations.
S3 Access Control
S3 Access Control mechanisms, primarily Bucket Policies and Access Control Lists (ACLs), govern who can access data stored in S3 buckets and what actions they can perform. While ACLs provide granular object-level permissions, Bucket Policies are the recommended modern approach for comprehensive, bucket-level access management.
Key Facts:
- Bucket Policies are the recommended method for controlling access at the S3 bucket level.
- Access Control Lists (ACLs) provide granular permissions for individual objects but are considered legacy.
- Bucket Policies support complex rules including cross-account access and IP-based restrictions.
- Setting S3 Object Ownership to 'Bucket owner enforced' disables ACLs for a bucket.
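As an illustration of the bucket-policy document format, here is a commonly cited pattern that denies any request not made over HTTPS. The bucket name is a placeholder, and the element names (Version, Statement, Effect, Principal, Action, Resource, Condition) follow the standard policy grammar:

```python
import json

# Sketch of an S3 bucket policy denying non-HTTPS requests.
# "example-bucket" is a hypothetical name, not a real resource.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-bucket",       # the bucket itself
            "arn:aws:s3:::example-bucket/*",     # every object in it
        ],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
print(json.dumps(policy, indent=2))
```

Because this is a Deny statement, it overrides any Allow elsewhere, which is what makes it an effective guardrail.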
S3 Buckets and Objects
Amazon S3 organizes data into top-level containers called 'Buckets'; within these buckets, data is stored as 'Objects,' each uniquely identified by a key. Although key prefixes can mimic folders, the namespace is flat, and this bucket/key structure is the basis for all data storage and retrieval operations in S3.
Key Facts:
- S3 stores data in Buckets, which are top-level containers.
- Objects are the fundamental entities stored in S3, representing data files.
- Each Object within a bucket is identified by a unique key.
- S3 provides strong read-after-write consistency for all operations, ensuring data integrity.
S3 Lifecycle Policies
S3 Lifecycle Policies are rule-based automations that enable efficient management of objects within S3 buckets. They facilitate cost optimization and compliance by defining rules for transitioning data between storage classes and for object expiration, automating data management tasks over time.
Key Facts:
- S3 Lifecycle Policies automate the transition of objects between different storage classes.
- They are crucial for cost optimization by moving less frequently accessed data to cheaper storage.
- Lifecycle policies can define rules for the permanent deletion of objects after a specified period.
- They help ensure compliance with data retention requirements by automating expiration.
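The age-based transition logic of a lifecycle configuration can be sketched as follows. The day thresholds and class names mirror typical rules but are illustrative, not a real configuration:

```python
# Sketch of how a lifecycle configuration maps object age to a storage
# class: transitions fire at configured day thresholds, and objects past
# the expiration threshold are deleted. Thresholds here are hypothetical.
LIFECYCLE_RULES = [
    (30, "STANDARD_IA"),    # after 30 days: Infrequent Access
    (90, "GLACIER"),        # after 90 days: archive
    (365, "DEEP_ARCHIVE"),  # after 1 year: deep archive
]
EXPIRE_AFTER_DAYS = 2555    # ~7 years, e.g. for a compliance retention rule

def storage_class_for_age(age_days: int) -> str:
    """Return the storage class an object of this age would occupy."""
    if age_days >= EXPIRE_AFTER_DAYS:
        return "EXPIRED"
    current = "STANDARD"
    for threshold, storage_class in LIFECYCLE_RULES:
        if age_days >= threshold:
            current = storage_class  # latest threshold passed wins
    return current

print(storage_class_for_age(10))   # STANDARD
print(storage_class_for_age(45))   # STANDARD_IA
print(storage_class_for_age(400))  # DEEP_ARCHIVE
```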
S3 Storage Classes
S3 Storage Classes offer a range of options to optimize for cost, performance, and data retrieval times, catering to different access patterns and durability requirements. Understanding these classes is crucial for effective cost management and performance tuning in S3.
Key Facts:
- S3 Standard is for frequently accessed data requiring low latency and high throughput.
- S3 Intelligent-Tiering automatically optimizes costs for data with changing access patterns.
- S3 Glacier Deep Archive is the most cost-effective for long-term archiving with retrieval times typically around 12 hours.
- S3 One Zone-Infrequent Access is a more economical option for infrequently accessed, recreatable data, storing it in a single Availability Zone.
Amazon Virtual Private Cloud (VPC)
Amazon VPC allows users to provision a logically isolated section of the AWS cloud, providing full control over their virtual networking environment. This includes defining IP address ranges, creating subnets, configuring route tables, and setting up network gateways.
Key Facts:
- VPC provides a logically isolated section of the AWS cloud for private networks.
- Subnets divide a VPC and can be public (internet-facing) or private (internal).
- Internet Gateways enable communication between public subnets and the internet.
- NAT Gateways allow private subnets to access the internet without direct exposure.
- Network ACLs (NACLs) provide a stateless, subnet-level firewall, complementing stateful Security Groups.
High Availability Best Practices
High Availability Best Practices for Amazon VPC involve designing resilient architectures to prevent single points of failure and ensure continuous operation. Key strategies include multi-AZ deployments, separating public and private subnets, and implementing redundancy for critical components.
Key Facts:
- Multi-AZ deployment distributes resources across multiple Availability Zones to prevent single points of failure.
- Separation of public and private subnets enhances security and isolation for different application tiers.
- Redundancy through elements like NAT Gateways and Elastic Load Balancing contributes to fault tolerance.
- Monitoring and auditing with VPC Flow Logs and CloudWatch Logs are crucial for identifying vulnerabilities and ensuring compliance.
Internet Gateways (IGW) and NAT Gateways
Internet Gateways and NAT Gateways are crucial components that manage internet connectivity within a VPC. An Internet Gateway enables direct communication between public subnets and the internet, while a NAT Gateway allows instances in private subnets to initiate outbound internet traffic without direct exposure to inbound connections.
Key Facts:
- An Internet Gateway allows communication between instances in public subnets and the internet; only one can be attached to a VPC at a time.
- NAT Gateways enable instances in private subnets to initiate outbound traffic to the internet.
- NAT Gateways are managed services that prevent unsolicited inbound connections to private subnets.
- Deploying NAT Gateways per Availability Zone is a high availability best practice.
Route Tables
Route Tables are essential for directing network traffic within a VPC, defining rules for how data packets travel between subnets, to the internet via gateways, or to other connected networks. Customizing route tables allows for fine-grained control over traffic flow and network segmentation.
Key Facts:
- Route tables control where network traffic from subnets or gateways is directed.
- They define rules for routing traffic between subnets, to the internet, or to other connected networks.
- Customizing route tables enhances security and traffic management within a VPC.
- Each subnet in a VPC must be associated with a route table.
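Route selection follows longest-prefix matching: the most specific route that covers the destination wins. This can be modeled with Python's standard ipaddress module; the routes shown are a typical private-subnet table, assumed for illustration:

```python
import ipaddress

# Longest-prefix route resolution, as a route table does it: among all
# routes covering the destination, the most specific (longest prefix) wins.
def resolve_route(routes, destination):
    """routes: list of (cidr, target); returns the target of the best match."""
    dest = ipaddress.ip_address(destination)
    best = None
    for cidr, target in routes:
        net = ipaddress.ip_network(cidr)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1] if best else None

# A typical private-subnet route table: VPC-local traffic stays "local",
# everything else goes out via a NAT gateway.
private_subnet_routes = [
    ("10.0.0.0/16", "local"),
    ("0.0.0.0/0", "nat-gateway"),
]
print(resolve_route(private_subnet_routes, "10.0.5.20"))  # local
print(resolve_route(private_subnet_routes, "8.8.8.8"))    # nat-gateway
```

A public subnet's table would differ only in its default route, pointing at the Internet Gateway instead of the NAT gateway.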
Security Groups and Network ACLs (NACLs)
Security Groups and Network ACLs (NACLs) are distinct but complementary firewall mechanisms in AWS VPC for controlling network traffic. Security Groups operate at the instance level as stateful firewalls, while NACLs function at the subnet level as stateless firewalls, providing layered security.
Key Facts:
- Security Groups are stateful, instance-level firewalls that support only 'allow' rules.
- Network ACLs (NACLs) are stateless, subnet-level firewalls that support both 'allow' and 'deny' rules.
- For inbound traffic, NACLs are evaluated first at the subnet boundary, then Security Groups at the instance; outbound traffic is checked in the reverse order.
- NACLs provide a second layer of defense complementing Security Groups.
VPC and Subnets
VPC and Subnets represent the fundamental building blocks of an isolated network environment within AWS. A VPC is a dedicated virtual network for an AWS account, while subnets divide this VPC into smaller IP address ranges, each residing within a single Availability Zone and designated as either public or private.
Key Facts:
- A VPC is a logically isolated section of the AWS cloud where users can launch AWS resources.
- Subnets are ranges of IP addresses within a VPC that must reside within a single Availability Zone.
- Subnets can be public (internet-facing) or private (internal) to control resource accessibility.
- Users define custom IP address ranges for VPCs and subnets using CIDR blocks.
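Carving a VPC CIDR block into subnets is easy to experiment with using Python's ipaddress module. The /16 VPC and /24 subnet sizes here are illustrative:

```python
import ipaddress

# Divide a /16 VPC CIDR into /24 subnets (256 of them, 251 usable hosts
# each after AWS reserves 5 addresses per subnet).
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

public_a, public_b = subnets[0], subnets[1]       # e.g. one per AZ
private_a, private_b = subnets[10], subnets[11]   # gap left for growth

print(public_a)      # 10.0.0.0/24
print(private_a)     # 10.0.10.0/24
print(len(subnets))  # 256
```

Planning subnet sizes up front matters because a subnet's CIDR cannot be changed after creation.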
VPC Peering, VPC Endpoints, and Transit Gateway
VPC Peering, VPC Endpoints, and Transit Gateway are advanced networking features that enable secure and efficient communication within and across VPCs, and with other AWS services. VPC Peering connects two VPCs, VPC Endpoints provide private access to AWS services, and Transit Gateway simplifies complex network architectures by acting as a central router.
Key Facts:
- VPC Peering allows traffic routing between two VPCs as if they were on the same network.
- VPC Endpoints provide private and secure access to AWS services without traversing the internet.
- Transit Gateway simplifies interconnectivity for multiple VPCs and on-premises networks.
- Non-overlapping CIDR blocks are crucial for VPC peering to work effectively.
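The non-overlap requirement can be checked programmatically before attempting a peering connection. A minimal sketch using the stdlib ipaddress module:

```python
import ipaddress

# VPC peering requires non-overlapping CIDR blocks; overlapping ranges
# would make routes between the peers ambiguous.
def can_peer(cidr_a: str, cidr_b: str) -> bool:
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return not a.overlaps(b)

print(can_peer("10.0.0.0/16", "10.1.0.0/16"))  # True  (disjoint ranges)
print(can_peer("10.0.0.0/16", "10.0.1.0/24"))  # False (second sits inside first)
```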
AWS Solutions Architect - Associate Certification Overview
This section introduces the AWS Solutions Architect - Associate (SAA-C03) certification, outlining its purpose, target audience, and exam format. It emphasizes the foundational knowledge required for designing scalable, reliable, and secure cloud solutions on AWS, reflecting the 2025 exam updates.
Key Facts:
- The SAA-C03 exam validates the ability to design secure, reliable, high-performing, and cost-optimized architectures on AWS.
- The exam is intended for individuals with at least one year of hands-on experience designing cloud solutions.
- It heavily emphasizes security-related scenarios and core cloud concepts.
- The exam format includes multiple-choice and multiple-response questions.
- A passing score of 720 out of 1000 is required.
SAA-C03 Exam Domain: Design Cost-Optimized Architectures
This module covers the domain centered on designing cost-optimized architectures within AWS. It focuses on selecting cost-effective solutions, optimizing resource utilization, and implementing strategies to minimize cloud expenditure.
Key Facts:
- This domain represents 20% of the SAA-C03 exam.
- It focuses on selecting cost-effective solutions.
- Key aspects include optimizing storage tiers and implementing lifecycle policies.
- Utilizing tools like AWS Cost Explorer is important for cost management.
- Understanding the financial implications of architectural choices is crucial.
SAA-C03 Exam Domain: Design High-Performing Architectures
This module focuses on the domain of designing high-performing architectures, assessing the ability to design efficient and scalable solutions on AWS. It includes selecting appropriate compute, storage, networking, and database services for specific workload requirements.
Key Facts:
- This domain makes up 24% of the SAA-C03 exam.
- It assesses the ability to design efficient and scalable architectures.
- Topics include choosing appropriate compute, storage, networking, and database services.
- Optimizing performance for various workloads is a central theme.
- Understanding the performance characteristics of different AWS services is essential.
SAA-C03 Exam Domain: Design Resilient Architectures
This module explores the domain dedicated to designing resilient architectures on AWS, which involves building systems capable of withstanding failures and remaining operational. It covers strategies and services for high availability and disaster recovery.
Key Facts:
- This domain comprises 26% of the SAA-C03 exam.
- It involves building systems that can withstand failures and remain operational.
- Key topics include multi-AZ patterns, load balancing, and auto-scaling.
- Disaster recovery strategies are a crucial component of this domain.
- The goal is to ensure continuous availability and fault tolerance for AWS applications.
SAA-C03 Exam Domain: Design Secure Architectures
This module focuses on the largest domain of the SAA-C03 exam, which covers designing secure architectures on AWS. It emphasizes ensuring data safety and secure application access using various AWS services and security controls.
Key Facts:
- This domain accounts for 30% of the SAA-C03 exam content.
- It focuses on ensuring data safety and secure application access.
- Key services include IAM, VPC security controls, AWS WAF, and AWS Secrets Manager.
- The domain heavily emphasizes security-related scenarios.
- Understanding how to apply security best practices within AWS is critical.
SAA-C03 Exam Format and Details
This module details the practical aspects of the SAA-C03 examination, including its structure, question types, time limits, and scoring. Understanding these logistical elements is crucial for effective exam preparation and strategy.
Key Facts:
- The exam consists of 65 questions, including multiple-choice and multiple-response types.
- Candidates are allotted 130 minutes to complete the exam.
- A scaled score of 720 out of 1000 is required to pass.
- The exam can be taken at a testing center or through online proctoring.
- The exam costs 150 USD.
SAA-C03 Exam Purpose and Audience
This module introduces the core purpose and target audience for the AWS Certified Solutions Architect - Associate (SAA-C03) certification, highlighting its role in validating the ability to design solutions using AWS technologies based on the AWS Well-Architected Framework. It specifies the recommended experience level and background for candidates.
Key Facts:
- The SAA-C03 exam validates the ability to design solutions using AWS technologies based on the AWS Well-Architected Framework.
- It focuses on designing cost and performance-optimized solutions.
- Candidates should have at least one year of hands-on experience designing cloud solutions using AWS services.
- The certification is a highly sought-after credential in the cloud industry.
- Familiarity with basic programming concepts can be an advantage, though deep coding experience isn't required.
SAA-C03 Preparation and Difficulty
This module addresses the difficulty level of the SAA-C03 exam and outlines effective preparation strategies. It provides guidance on leveraging official AWS resources, hands-on experience, and practice materials for success.
Key Facts:
- The SAA-C03 exam is considered challenging, with a reported success rate of around 28% on the first attempt.
- Effective preparation involves reviewing the official AWS Exam Guide, Documentation, and Whitepapers.
- Gaining hands-on experience with AWS services is crucial for success.
- Utilizing high-quality training courses and practice questions is recommended.
- A strong background in general IT concepts (networking, storage, OS) and prior AWS experience can ease the difficulty.
Identity and Access Management (IAM)
Identity and Access Management (IAM) is a global AWS service fundamental for controlling authentication and authorization across all AWS services. It defines who can access what resources within the AWS environment through users, groups, roles, and policies.
Key Facts:
- IAM is a global service governing authentication and authorization for all AWS services.
- It utilizes Users, Groups, Roles, and Policies to manage permissions.
- Policies are JSON documents defining permissions and can be attached to users, groups, or roles.
- The Principle of Least Privilege is a core concept in IAM design.
- Key exam topics include policy evaluation order, cross-account access, and Multi-Factor Authentication (MFA).
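The policy evaluation order (explicit deny beats any allow; the default is implicit deny) can be sketched as a simplified model. Here fnmatch stands in for IAM's actual wildcard-matching rules, and the statements are hypothetical:

```python
from fnmatch import fnmatch

# Simplified sketch of IAM policy evaluation: an explicit Deny always
# wins, an explicit Allow permits, anything unmatched is implicitly denied.
def evaluate(statements, action, resource):
    allowed = False
    for stmt in statements:
        action_match = any(fnmatch(action, a) for a in stmt["Action"])
        resource_match = any(fnmatch(resource, r) for r in stmt["Resource"])
        if action_match and resource_match:
            if stmt["Effect"] == "Deny":
                return "ExplicitDeny"  # deny overrides every allow
            allowed = True
    return "Allow" if allowed else "ImplicitDeny"

statements = [
    {"Effect": "Allow", "Action": ["s3:*"], "Resource": ["arn:aws:s3:::data/*"]},
    {"Effect": "Deny", "Action": ["s3:DeleteObject"], "Resource": ["*"]},
]
print(evaluate(statements, "s3:GetObject", "arn:aws:s3:::data/report.csv"))    # Allow
print(evaluate(statements, "s3:DeleteObject", "arn:aws:s3:::data/report.csv")) # ExplicitDeny
print(evaluate(statements, "ec2:StartInstances", "arn:aws:ec2:::i-123"))       # ImplicitDeny
```

This ordering is a frequent exam scenario: a broad Allow combined with a targeted Deny still results in denial for the targeted action.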
AWS Security Token Service (STS)
AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for IAM users or for users that you authenticate (federated users). STS plays a crucial role in enhancing security by providing temporary credentials, reducing the reliance on long-term access keys.
Key Facts:
- AWS STS provides temporary security credentials, including an Access Key ID, Secret Access Key, and a Session Token.
- Temporary credentials issued by STS have a configurable validity duration and automatically expire.
- STS is fundamental for secure cross-account access, allowing resources in one account to be accessed by another.
- It enables federated authentication, integrating AWS with external identity providers (IdPs) like Active Directory.
- Common use cases include CI/CD pipelines, mobile/web applications, and enhanced security with MFA.
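The expire-and-refresh pattern for temporary credentials can be sketched as a small helper; the timestamps and the five-minute safety margin are illustrative:

```python
from datetime import datetime, timedelta, timezone

# STS credentials carry an expiration timestamp; callers should request
# fresh credentials shortly before it passes rather than after.
def needs_refresh(expiration: datetime, now: datetime,
                  margin_minutes: int = 5) -> bool:
    """True if the credentials expire within the safety margin."""
    return expiration - now <= timedelta(minutes=margin_minutes)

now = datetime(2025, 6, 15, 12, 0, tzinfo=timezone.utc)
print(needs_refresh(now + timedelta(hours=1), now))    # False
print(needs_refresh(now + timedelta(minutes=3), now))  # True
```

AWS SDKs perform this refresh automatically when a role is configured; the check above only makes the underlying behavior visible.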
IAM Policies
IAM Policies are JSON documents that define permissions, specifying what actions are allowed or denied on specific AWS resources. They are the core mechanism for controlling authorization within AWS and can be attached to Users, Groups, or Roles.
Key Facts:
- IAM Policies are JSON documents that explicitly define permissions for AWS actions and resources.
- Policies use 'Effect' (Allow/Deny), 'Action', and 'Resource' elements to specify permissions.
- The 'Principle of Least Privilege' is a fundamental best practice for designing IAM policies.
- Policies can be AWS-managed or customer-managed, with conditions for granular control (e.g., MFA requirement).
- Every action in AWS is implicitly denied unless explicitly allowed by a policy.
IAM Policy Structure and Best Practices
Understanding the structure of IAM policies as JSON documents and adhering to best practices is critical for implementing secure and manageable access control in AWS. This includes leveraging elements like Effect, Action, and Resource, alongside applying principles such as least privilege and using managed policies.
Key Facts:
- IAM policies are JSON documents containing elements like `Version`, `Statement`, `Effect`, `Action`, and `Resource`.
- `Effect` specifies whether an action is `Allow`ed or `Deny`ed, with an implicit deny as the default.
- `Action` defines specific AWS service API calls, while `Resource` specifies the target AWS ARN.
- Best practices include the Principle of Least Privilege, using AWS managed policies as a starting point, and implementing conditions.
- Regular monitoring, auditing, and policy validation are essential for maintaining a secure IAM posture.
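A minimal least-privilege policy document can be assembled as plain JSON. The bucket name and the read-only action list here are illustrative; the element names follow the standard policy grammar described above:

```python
import json

# Build a minimal read-only S3 policy for one bucket.
# "example-reports" is a hypothetical bucket name.
def read_only_s3_policy(bucket: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",     # ListBucket targets the bucket
                f"arn:aws:s3:::{bucket}/*",   # GetObject targets objects
            ],
        }],
    }

print(json.dumps(read_only_s3_policy("example-reports"), indent=2))
```

Note that the bucket ARN and the object ARN are both needed: ListBucket applies to the bucket itself, while GetObject applies to the objects within it, a distinction the exam likes to probe.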
IAM Users, Groups, and Roles
IAM Users, Groups, and Roles are fundamental components within AWS Identity and Access Management that define individual identities, collections of identities, and temporary permission sets, respectively. Understanding their distinct purposes is crucial for effective access management and implementing the principle of least privilege.
Key Facts:
- IAM Users are individual identities with long-term credentials for direct, continuous access by a person or application.
- IAM Groups are collections of Users, inheriting permissions assigned to the group, simplifying management for similar job functions.
- IAM Roles are designed for temporary, specific access, assumed by trusted entities to gain temporary security credentials.
- Roles are highly recommended for cross-account access and automated workflows to minimize long-term credential use.
- Users have permanent credentials (passwords, access keys), while Roles provide temporary credentials.