Week 2 — Networking & Cloud Building Blocks

Essentials for Cloud Projects with AI in the mix

In partnership with

Hi Inner Circle, 

Welcome to Week 2.

Last week we got our basics in place — Git, Linux, billing, and the mental models for cloud.

This week, we go deeper into the backbone of the cloud: networking and core services. Think of it as learning the “roads, locks, and maps” of the cloud. Once you understand these, building on top becomes much easier…

Networking Basics

Your cloud journey is nothing without understanding how systems talk. 5 things you should know:

  1. IP addresses & DNS → Every machine has an address; DNS is your internet’s phonebook (see the sketch after this list).

  2. Subnets & routing → Decide how traffic flows inside your environment.

  3. Firewalls & ports → Control who gets in and out.

  4. Load Balancers → Distribute traffic across multiple machines.

  5. VPCs (Virtual Private Clouds) → Your own isolated network inside the cloud.

Why it matters: Every cloud role — DevOps, Security, or Data — needs networking fluency. Without it, debugging becomes guesswork.
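To make the DNS and ports ideas concrete, here's a minimal Python sketch (standard library only) that resolves a hostname and then checks whether a TCP port accepts connections. The hostname and port are placeholders; swap in something you actually run.

```python
import socket

# Placeholder host and port; swap in your own service.
HOST = "example.com"
PORT = 443

# DNS: resolve the hostname to an IP address (the "phonebook" lookup).
ip_address = socket.gethostbyname(HOST)
print(f"{HOST} resolves to {ip_address}")

# Firewalls & ports: try to open a TCP connection to see if the port is reachable.
try:
    with socket.create_connection((ip_address, PORT), timeout=3):
        print(f"Port {PORT} on {HOST} is open and accepting connections")
except OSError as err:
    print(f"Port {PORT} on {HOST} is not reachable: {err}")
```

Checks like these turn "it's broken" into "DNS resolves fine, but the firewall is blocking port 443."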

Compute Services

The machines running your workloads. Know these three categories:

  • Virtual Machines (VMs): EC2 (AWS), Compute Engine (GCP), Azure VMs.

  • Containers: Lightweight, portable, faster to deploy (Docker, Kubernetes).

  • Serverless: Functions that run on demand (AWS Lambda, Cloud Functions); see the handler sketch after the comparison below.

Virtual Machines (VMs)

  • Provisioning & Management: You provision a full instance, install the OS, and manage all software updates and patching.

  • Use Cases: Legacy applications, custom or specific OS requirements, workloads that need full control over the infrastructure.

Containers

  • Provisioning & Management: You provision an orchestration service (like Kubernetes) and manage the container images and dependencies. The host OS is managed by the provider.

  • Use Cases: Microservices architectures, CI/CD pipelines, consistent development and production environments.

Serverless Functions

  • Provisioning & Management: You just upload your code. The provider handles provisioning of underlying compute resources, OS, and scaling.

  • Use Cases: Event-driven APIs, intermittent tasks, scheduled jobs, and simple functions that don't require persistent resources.
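To show what "you just upload your code" means in practice, here is a minimal AWS Lambda-style handler in Python. The function body and the event fields are illustrative assumptions; in a real deployment you would upload this to Lambda and wire it to a trigger such as API Gateway, and the exact event shape would depend on that trigger.

```python
import json


def lambda_handler(event, context):
    """Minimal AWS Lambda handler: no servers to provision or patch.

    'event' carries the trigger payload and 'context' carries runtime metadata.
    The "name" field here is a made-up example, not a standard event attribute.
    """
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```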

Why it matters: Picking the wrong compute option = wasted cost and poor performance.

Storage & Databases

Think of this as your cloud’s memory.

  • Object Storage: S3, GCS, Blob — for files and media.

  • Block Storage: Disks attached to VMs.

  • File Storage: Shared file systems.

  • Databases: Relational (Postgres, MySQL) vs NoSQL (MongoDB, DynamoDB).

Object Storage

  • Provisioning & Management: You create a bucket and upload objects via an API. The provider manages all scaling, durability, and security.

  • Use Cases: Static websites, data lakes, backups, archives, media storage.

Block Storage

  • Provisioning & Management: You create a volume and attach it to a single compute instance (VM). You are responsible for file systems and data on the volume.

  • Use Cases: Boot volumes for VMs, hosting databases, and applications that need high-speed, low-latency disk access.

File Storage

  • Provisioning & Management: You create a shared file system that can be mounted by multiple VMs simultaneously. The provider manages the underlying infrastructure.

  • Use Cases: "Lift and shift" of legacy applications, shared content repositories, and general-purpose shared storage for a team.

How to think about databases and their use cases:

Relational (SQL)

  • Provisioning & Management: You create a database instance and define a fixed schema (tables, columns, relationships) before you add data.

  • Use Cases: E-commerce platforms, financial applications, and any system that requires structured, transactional data with strict data integrity (ACID compliance).

NoSQL

  • Provisioning & Management: You create a database and add data without a fixed schema. This is a flexible, highly scalable option for unstructured data.

  • Use Cases: User profiles, IoT data, real-time analytics, social media feeds, and content management systems.
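A small sketch of the schema difference, using SQLite as a stand-in for a managed relational database and DynamoDB via boto3 for NoSQL. The table names are arbitrary, and the DynamoDB table ("users", with a "user_id" partition key) is assumed to already exist.

```python
import sqlite3

import boto3

# Relational: the schema (tables, columns, types, constraints) is defined up front.
conn = sqlite3.connect("shop.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS orders (
           id INTEGER PRIMARY KEY,
           customer_email TEXT NOT NULL,
           total_cents INTEGER NOT NULL
       )"""
)
conn.execute(
    "INSERT INTO orders (customer_email, total_cents) VALUES (?, ?)",
    ("ada@example.com", 4999),
)
conn.commit()

# NoSQL: items in the same table can carry different attributes; no fixed schema.
# Assumes a DynamoDB table named "users" with a "user_id" partition key already exists.
users = boto3.resource("dynamodb").Table("users")
users.put_item(Item={"user_id": "u-123", "email": "ada@example.com"})
users.put_item(Item={"user_id": "u-456", "displayName": "Grace", "interests": ["IoT", "analytics"]})
```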

Why it matters: Storing logs, application data, or backups in the wrong place leads to scaling nightmares later.

Mini Project: Build a photo-sharing app → Images go into S3 (or GCS) and metadata into a SQL database.
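Here's a minimal sketch of that mini project, assuming AWS credentials, an S3 bucket you already own, and SQLite standing in for a managed SQL database; the bucket name and file path are placeholders.

```python
import sqlite3
import uuid
from datetime import datetime, timezone

import boto3

BUCKET = "my-photo-app-bucket"   # placeholder: an existing S3 bucket you own
PHOTO_PATH = "cat.jpg"           # placeholder: a local image file

# 1) The image itself goes to object storage.
key = f"photos/{uuid.uuid4()}.jpg"
boto3.client("s3").upload_file(PHOTO_PATH, BUCKET, key)

# 2) The metadata goes to a relational database (SQLite here as a stand-in).
conn = sqlite3.connect("photos.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS photos (
           id INTEGER PRIMARY KEY,
           s3_key TEXT NOT NULL,
           uploaded_at TEXT NOT NULL,
           caption TEXT
       )"""
)
conn.execute(
    "INSERT INTO photos (s3_key, uploaded_at, caption) VALUES (?, ?, ?)",
    (key, datetime.now(timezone.utc).isoformat(), "first upload"),
)
conn.commit()
```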

Identity & Access Management (IAM)

The lock and key of the cloud.

Authentication vs Authorization

Authentication

  • What It Is: The process of verifying who a user or service is.

  • How It Works: Verifies identity using credentials like passwords, API keys, or multi-factor authentication (MFA).

Authorization

  • What It Is: The process of determining what a verified user or service is allowed to do.

  • How It Works: Grants permissions to an authenticated identity via roles, policies, and resource-based access controls.

Why it matters: Most cloud breaches happen not from fancy hacks, but from weak IAM setups.

Example: giving developers full admin access when they only need read-only. Why create extra trouble? :)
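For contrast, here is a hedged boto3 sketch of the least-privilege version: create the user and attach AWS's managed ReadOnlyAccess policy instead of admin rights. The user name is a placeholder, and in a real account you would usually grant access via groups or roles rather than to individual users.

```python
import boto3

iam = boto3.client("iam")

# Create a user for a developer who only needs to inspect resources.
iam.create_user(UserName="dev-readonly-example")  # placeholder name

# Attach AWS's managed read-only policy instead of AdministratorAccess.
iam.attach_user_policy(
    UserName="dev-readonly-example",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)
```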

Monitoring & Logging Basics

You can’t fix what you can’t see. 3 things to focus on:

  1. Metrics: CPU, memory, network usage.

  2. Logs: Every request and error gets recorded.

  3. Alerts: Be notified before users complain.

Tracing: While metrics tell you what's happening (e.g., "CPU is at 90%") and logs tell you why (e.g., "Request to database failed"), tracing shows you the end-to-end flow of a single request as it moves through a distributed system. It stitches together a timeline of events, helping you pinpoint latency and errors across multiple microservices.
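To tie metrics and alerts together, here's a boto3 sketch that creates a CloudWatch alarm on one EC2 instance's CPU so you hear about problems before users do. The instance ID and the SNS topic ARN are placeholders, and the topic is assumed to already exist.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU on one instance stays above 80% for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example-vm",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    # Placeholder SNS topic that notifies you; it must already exist.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```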

Why it matters: Cloud infra without monitoring is like flying blind. 

The AI Angle (What’s Different in 2025)

Five years ago, learning networking and IAM was enough. Today, with AI-native workloads everywhere, the same foundations apply in new ways:

  • Networking for AI: Training models requires high-bandwidth, low-latency interconnects. Think AWS EFA or Google Cloud’s HPC networking.

  • Compute for AI: Beyond CPUs, you’ll need to understand GPUs and TPUs. Picking the right accelerator = huge cost savings.

  • Storage for AI: Vector databases and data lakes are now as common as SQL. AI apps rely on embedding stores just like they do on object storage.

  • IAM for AI: Service accounts now authenticate AI pipelines. Least privilege matters even more when models interact with multiple datasets.

  • Monitoring for AI: You don’t just track CPU — you track GPU utilization, model latency, and drift.

Why it matters: If you want to stand out, don’t just say “I know VPCs and IAM.” Show how you can apply them to AI/ML pipelines.

Action Items for Week 2

  • Create your first VPC and subnet in AWS/GCP/Azure/OC (see the sketch after this list for the AWS version).

  • Deploy a test app via both containers and serverless.

  • Create IAM users with restricted roles.

  • Set up a basic monitoring alert on a cloud VM.
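For that first action item, here's a hedged boto3 sketch of the AWS version (the console, CLI, or Terraform work just as well): create a VPC, carve out one subnet, and tag it. The CIDR ranges, names, and availability zone are arbitrary choices, and you would still add a route table, internet gateway, and security groups before running real workloads.

```python
import boto3

ec2 = boto3.client("ec2")

# 1) The VPC: your own isolated address space inside the cloud.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "week2-vpc"}])

# 2) One subnet inside it: a slice of the VPC's address range in one availability zone.
subnet = ec2.create_subnet(
    VpcId=vpc_id,
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="us-east-1a",  # placeholder AZ
)
print("Created", vpc_id, "with subnet", subnet["Subnet"]["SubnetId"])
```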

Conclusion

That’s your TL;DR for Week 2. Networking, compute, storage, IAM, and monitoring aren’t optional — they are the backbone of every cloud system.

But in 2025, the twist is clear: the same foundations now power AI-native platforms. So this week, don’t just practice the basics.

Ask yourself: how would these choices change if I were running an AI workload? That perspective alone will set you apart from most beginners.

Next week, we’ll jump into automation and Infrastructure as Code — the shift from clicking in the console to managing cloud like a true engineer.

-V

Go from AI overwhelmed to AI savvy professional

AI keeps coming up at work, but you still don't get it?

That's exactly why 1M+ professionals working at Google, Meta, and OpenAI read Superhuman AI daily.

Here's what you get:

  • Daily AI news that matters for your career - Filtered from 1000s of sources so you know what affects your industry.

  • Step-by-step tutorials you can use immediately - Real prompts and workflows that solve actual business problems.

  • New AI tools tested and reviewed - We try everything to deliver tools that drive real results.

  • All in just 3 minutes a day