A Playbook for Side Projects


The following article describes how I like to build scalable, low-cost online services. This approach is perfect for spinning up small business ideas and exploring their viability without an ongoing commitment.

Overview

  • DynamoDB — simple data models, design for access patterns, use TTL for cleanup
  • Lambdas — split by resource profile, version tracking, health checks, debugging flags
  • Asset handling — process locally, sync to S3, reference via CDN
  • CDK — environment names in resources, context-based switching, safe removal policies
  • Admin dashboard — faster than cloud consoles for day-to-day operations

Serverless

Going serverless is an obvious choice when cost and scale are factors. To fully realise the benefits, I’d highly recommend sticking to pure Lambdas and DynamoDB.

Working with DynamoDB

DynamoDB’s pricing model rewards simplicity. Complex data models with heavy indexing and frequent scans will erode your cost advantage, so keep the design simple and model your tables around the access patterns you actually need - this might just require a different mindset.

It’s also worth using TTL for automatic cleanup: let DynamoDB expire old records rather than running your own cleanup jobs.
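
A sketch of both points, assuming the AWS SDK for JavaScript v3 and a table keyed on pk with TTL enabled on an expiresAt attribute (table and attribute names are illustrative):

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand, QueryCommand } from "@aws-sdk/lib-dynamodb";

const db = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export async function saveRequest(requestId: string): Promise<void> {
  // Write a record with a TTL attribute (epoch seconds); DynamoDB removes it
  // automatically once the timestamp passes, so no cleanup job is needed.
  await db.send(new PutCommand({
    TableName: "business-photos-requests-dev",   // illustrative name
    Item: {
      pk: `REQUEST#${requestId}`,
      status: "pending",
      expiresAt: Math.floor(Date.now() / 1000) + 30 * 24 * 60 * 60,   // 30 days
    },
  }));
}

export async function getRequest(requestId: string) {
  // A targeted Query on the partition key - matches the access pattern and
  // avoids the cost of a table Scan.
  const result = await db.send(new QueryCommand({
    TableName: "business-photos-requests-dev",
    KeyConditionExpression: "pk = :pk",
    ExpressionAttributeValues: { ":pk": `REQUEST#${requestId}` },
  }));
  return result.Items;
}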

Working with Lambdas

When considering how to organise your Lambdas (one Lambda per endpoint, or a single Lambda with internal routing?), a hybrid approach often works best. For example:

/api/status     → cheap function (128MB, 5s timeout)
/api/generate   → heavy function, backend invocation (1GB, 60s timeout)
/api/admin/*    → collection of functions (512MB)

In the above:

Separate Lambdas suit endpoints with different resource profiles—a generation job needing 1GB RAM shouldn’t share infrastructure with a lightweight status check. Independent scaling, fine-grained permissions, and isolated deployments are additional benefits.
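
In CDK, these resource profiles map directly onto function properties. A minimal sketch, assuming TypeScript Lambdas bundled with NodejsFunction (entry paths and names are illustrative):

import { Duration, Stack, StackProps } from "aws-cdk-lib";
import { Runtime } from "aws-cdk-lib/aws-lambda";
import { NodejsFunction } from "aws-cdk-lib/aws-lambda-nodejs";
import { Construct } from "constructs";

export class ApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Lightweight status check: minimal memory, short timeout.
    new NodejsFunction(this, "StatusFn", {
      entry: "lambdas/status.ts",
      runtime: Runtime.NODEJS_20_X,
      memorySize: 128,
      timeout: Duration.seconds(5),
    });

    // Heavy generation job: more memory, longer timeout, deployed and
    // scaled independently of everything else.
    new NodejsFunction(this, "GenerateFn", {
      entry: "lambdas/generate.ts",
      runtime: Runtime.NODEJS_20_X,
      memorySize: 1024,
      timeout: Duration.seconds(60),
    });
  }
}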

Consolidated Lambdas suit clusters of related endpoints with similar resource needs. A single warm instance serves all routes, reducing cold starts. Frameworks like Express or Hono handle the routing.
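
For the consolidated case, a sketch of an admin Lambda using Hono’s AWS Lambda adapter (routes are illustrative):

import { Hono } from "hono";
import { handle } from "hono/aws-lambda";

const app = new Hono().basePath("/api/admin");

// Related, similarly-sized endpoints share one function and one warm instance.
app.get("/requests", (c) => c.json({ items: [] }));
app.get("/queue", (c) => c.json({ pending: 0, failed: 0 }));
app.post("/requests/:id/retry", (c) => c.json({ ok: true }));

export const handler = handle(app);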

Asset Handling

Web frameworks typically bundle assets into the build. This becomes awkward when:

  • Backend lambdas need access to the same assets
  • Assets need transformation (resizing, format conversion)
  • You want to update assets without redeploying code

Separate the Pipeline

Keep original assets in version control under a resources/ directory. A processing script transforms them (resize, compress, generate thumbnails) into a local folder that mirrors the S3 structure (e.g. s3-dist). A separate sync command uploads changes to S3. The webapp and lambdas both reference assets via S3/CDN URLs.

source files → local S3 folder → S3 → CDN

This two-step approach (process locally, then sync) keeps iteration fast during development and makes the upload step predictable.

Processing Script

The script walks the source directory and applies transformations as needed:

  • Resize images to web-appropriate dimensions
  • Generate thumbnails or preview variants
  • Compress and convert formats (e.g., PNG → WebP)
  • Skip files that haven’t changed (based on mtime or hash)

Tools like ImageMagick or Sharp (Node.js) handle the image operations. The script itself can be a simple shell script or Node script—whatever fits the project.
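
For example, a minimal sketch of a Node processing script using Sharp, mirroring the source tree into s3-dist (sizes and suffixes are illustrative; the skip-if-unchanged check is omitted for brevity):

import { promises as fs } from "node:fs";
import path from "node:path";
import sharp from "sharp";

const SRC = "resources";
const OUT = "s3-dist";

async function processDir(dir: string): Promise<void> {
  for (const entry of await fs.readdir(dir, { withFileTypes: true })) {
    const srcPath = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      await processDir(srcPath);
      continue;
    }
    if (!/\.(png|jpe?g)$/i.test(entry.name)) continue;

    // Mirror the source folder structure under s3-dist/.
    const outDir = path.join(OUT, path.relative(SRC, dir));
    await fs.mkdir(outDir, { recursive: true });
    const base = entry.name.replace(/\.[^.]+$/, "");

    // Web-sized variant plus a thumbnail, both recompressed.
    await sharp(srcPath)
      .resize({ width: 1600, withoutEnlargement: true })
      .jpeg({ quality: 80 })
      .toFile(path.join(outDir, `${base}-example.jpg`));
    await sharp(srcPath)
      .resize({ width: 320 })
      .jpeg({ quality: 70 })
      .toFile(path.join(outDir, `${base}-thumb.jpg`));
  }
}

processDir(SRC).catch((err) => { console.error(err); process.exit(1); });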

Naming Consistency

Maintain identical folder structures across all stages:

resources/product/poses/image.jpg          # source (full size)

  SCRIPT

s3-dist/product/poses/image-example.jpg    # local replica (processed)
s3-dist/product/poses/image-thumb.jpg

  SYNC

s3://bucket/product/poses/image-example.jpg
s3://bucket/product/poses/image-thumb.jpg

This keeps sync scripts trivial. When paths match, there’s less to think about.
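
With matching paths, the sync step can be a single AWS CLI command (bucket name is illustrative):

aws s3 sync ./s3-dist s3://business-photos-assets-dev --delete

The --delete flag removes objects from the bucket that no longer exist locally; drop it if you prefer additive-only uploads.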

CDK Deployment

CDK provides type-safe infrastructure definitions, but a few patterns are worth following to prevent common mistakes.

Environment in Resource Names

Always include the environment in deployed resource names:

business-photos-requests-dev
business-photos-requests-prod

This avoids cross-environment contamination and makes AWS console navigation clearer.

Context-Based Environment Switching

Use CDK context rather than separate stacks for environments:

cdk deploy --context environment=dev
cdk deploy --context environment=prod

This keeps stack definitions DRY while enabling environment-specific configuration.

Removal Policies

Dev resources should use DESTROY with auto-deletion enabled. Production resources should use RETAIN. Accidentally destroying production data should be structurally difficult.
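
Putting the three patterns above together, a sketch of a stack that reads the environment from context, bakes it into resource names, and switches removal policies accordingly (resource names are illustrative):

import { RemovalPolicy, Stack, StackProps } from "aws-cdk-lib";
import { AttributeType, BillingMode, Table } from "aws-cdk-lib/aws-dynamodb";
import { Bucket } from "aws-cdk-lib/aws-s3";
import { Construct } from "constructs";

export class AppStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // cdk deploy --context environment=dev|prod (defaults to dev).
    const environment: string = this.node.tryGetContext("environment") ?? "dev";
    const isProd = environment === "prod";

    // Environment baked into the name, e.g. business-photos-requests-dev.
    new Table(this, "RequestsTable", {
      tableName: `business-photos-requests-${environment}`,
      partitionKey: { name: "pk", type: AttributeType.STRING },
      billingMode: BillingMode.PAY_PER_REQUEST,
      timeToLiveAttribute: "expiresAt",
      removalPolicy: isProd ? RemovalPolicy.RETAIN : RemovalPolicy.DESTROY,
    });

    // Dev buckets clean themselves up on stack deletion; prod buckets are retained.
    new Bucket(this, "AssetsBucket", {
      bucketName: `business-photos-assets-${environment}`,
      removalPolicy: isProd ? RemovalPolicy.RETAIN : RemovalPolicy.DESTROY,
      autoDeleteObjects: !isProd,
    });
  }
}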

Admin Dashboard

Although it might seem overkill for a small project, I like to create a simple admin page to debug the inner workings of the system. With the advent of coding assistants it is quick to build, and often faster to use than navigating cloud consoles.

What to Include

  • Data browser: View and filter records from your database
  • Queue status: Messages pending, in-flight, failed
  • Log viewer: Recent logs without leaving the app
  • Manual triggers: Invoke backend operations with custom inputs
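
As an example of the last point, a sketch of a manual-trigger route that invokes a backend Lambda with a custom payload (function and route names are illustrative):

import { Hono } from "hono";
import { handle } from "hono/aws-lambda";
import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({});
const admin = new Hono();

// POST a JSON body from the dashboard and forward it to the generation Lambda.
admin.post("/api/admin/trigger/generate", async (c) => {
  const payload = await c.req.json();
  const result = await lambda.send(new InvokeCommand({
    FunctionName: "business-photos-generate-dev",   // illustrative name
    Payload: Buffer.from(JSON.stringify(payload)),
  }));
  return c.json({ invoked: true, statusCode: result.StatusCode });
});

export const handler = handle(admin);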

Additional Tips

For further advice on how to set up your system: