Managing the LocalStack Lifecycle


LocalStack is great for local AWS development, but it has a habit of accumulating invisible state that survives partial teardowns. Stale Lambda functions, phantom event source mappings, zombie queues — they lurk inside the container and cause confusing failures the next time you deploy. I tried being surgical about it (delete each resource in reverse order) and eventually gave up. Now I destroy the entire container every time.

Destroy-First Deploys

The deploy script’s first line calls the destroy script:

"$SCRIPT_DIR/destroy.sh" 2>/dev/null || true

The 2>/dev/null || true matters — deploy must not fail if there’s nothing to destroy (first run, already-clean state). This makes the deploy script fully idempotent: run it any number of times, from any starting state, and you get the same result.

The alternative — checking whether the container exists and conditionally cleaning up — introduces exactly the kind of state-dependent branching that causes problems. Just always destroy. It takes a few seconds.
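
For context, here is a minimal sketch of what the top of such a deploy script might look like; the set options, the SCRIPT_DIR resolution, and the compose call at the end are my own framing, not the project's actual script:

#!/usr/bin/env bash
# deploy.sh: sketch of the destroy-first pattern (assumes destroy.sh sits next to this script)
set -euo pipefail

# Resolve this script's directory so it works from any working directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Always tear down first; never fail if there's nothing to tear down
"$SCRIPT_DIR/destroy.sh" 2>/dev/null || true

# ...then bring LocalStack up and provision everything from scratch
docker compose up -d localstack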

Name-Based Container Cleanup

The naive approach to teardown is docker compose down localstack. The problem is that Docker Compose can lose track of containers. This happens more often than you’d expect: someone changes the Compose project name, an interrupted up leaves an orphan, or the .docker state drifts between restarts. When Compose doesn’t know about a container, docker compose down silently does nothing while the container keeps running.

The fix is a two-tier cleanup:

# Tier 1: Ask Compose to clean up what it knows about
if docker compose ps localstack --format "{{.Name}}" 2>/dev/null | grep -q localstack; then
    docker compose stop localstack
    docker compose rm -f localstack
fi

# Tier 2: Find anything Compose missed by container name
ORPHAN_CONTAINERS=$(docker ps -a --filter "name=my-localstack" --format "{{.Names}}" 2>/dev/null || true)
if [ -n "$ORPHAN_CONTAINERS" ]; then
    echo "$ORPHAN_CONTAINERS" | xargs -r docker rm -f 2>/dev/null || true
fi

A few things to note:

  • Use docker ps -a, not docker ps. A stopped container no longer binds its ports, but it keeps its name and its old state, and that name collides with the container the next deploy tries to create. Without -a the filter misses it.
  • Filter by name, not by Compose project. --filter "name=my-localstack" catches containers regardless of which Compose project created them. Give your container a distinctive, grep-friendly name in docker-compose.yml; it becomes your cleanup handle (see the snippet after this list).
  • xargs -r stops xargs from running docker rm with no arguments. GNU xargs on Linux needs the flag; BSD xargs on macOS already skips the command on empty input and accepts -r for compatibility, so being explicit keeps the script portable.
  • || true on every cleanup command. The destroy script should never fail. If a container is already gone, that’s fine.
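
As a concrete example, here is roughly what that distinctive name looks like in docker-compose.yml; the service name, container name, and port are placeholders chosen to match the filter above, not the real project's file:

services:
  localstack:
    image: localstack/localstack
    container_name: my-localstack   # grep-friendly handle for docker ps filters
    ports:
      - "4566:4566"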

Wait for the Port

Removing a container doesn’t instantly free its port. Docker needs a moment to release the network binding, and on macOS with Docker Desktop there’s an additional virtualisation layer that can delay it. If the deploy script immediately starts a new container, you’ll get “address already in use.”

for i in {1..10}; do
    if ! lsof -i :4566 >/dev/null 2>&1; then
        break
    fi
    sleep 1
done

# If something is still holding the port, force-kill it
if lsof -i :4566 >/dev/null 2>&1; then
    lsof -ti :4566 | xargs kill -9 2>/dev/null || true
fi

The detail that’s easy to miss: lsof -ti (with -t) outputs only PIDs, which is what xargs kill needs. Without -t, lsof outputs a table with headers, and piping that to kill does nothing useful.

The polling loop handles the normal case (port freed in 1–2 seconds). The force-kill is a fallback for when Docker Desktop’s VM gets stuck, which happens occasionally after macOS sleep/wake cycles.
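
If the same wait is needed in more than one script, the loop and the fallback fold naturally into a small function. A sketch, assuming bash and an available lsof; the function name is mine:

# Wait up to ~10s for a TCP port to be free, force-killing the holder as a last resort
wait_for_free_port() {
    local port="$1"
    for _ in {1..10}; do
        lsof -i ":$port" >/dev/null 2>&1 || return 0   # nothing is listening; we're done
        sleep 1
    done
    # Still held: -t prints bare PIDs, which is exactly what xargs kill needs
    lsof -ti ":$port" | xargs kill -9 2>/dev/null || true
}

wait_for_free_port 4566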

Docker Desktop Socket on macOS

If your team uses Docker Desktop, the Docker socket isn’t always where scripts expect it:

if [ -S "$HOME/.docker/run/docker.sock" ]; then
    export DOCKER_HOST="unix://$HOME/.docker/run/docker.sock"
fi

Docker Desktop creates its socket at ~/.docker/run/docker.sock rather than the traditional /var/run/docker.sock. Without this check, docker commands look for a daemon that isn't listening there and fail with confusing "Cannot connect to the Docker daemon" errors. The -S test checks that the path exists and is a socket file. Put this at the top of both your deploy and destroy scripts.
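
One way to avoid copy-pasting that block is a small shared helper that both scripts source. A sketch, assuming a file named common.sh next to the scripts; the docker info sanity check is my addition, not part of the original setup:

# common.sh: sourced by both deploy.sh and destroy.sh
# Prefer Docker Desktop's per-user socket when it exists
if [ -S "$HOME/.docker/run/docker.sock" ]; then
    export DOCKER_HOST="unix://$HOME/.docker/run/docker.sock"
fi

# Fail fast with a readable message if the daemon is unreachable
if ! docker info >/dev/null 2>&1; then
    echo "Docker daemon is not reachable; is Docker Desktop running?" >&2
    exit 1
fi

Each script then starts with source "$SCRIPT_DIR/common.sh" and picks up the same DOCKER_HOST.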

Why Not a Reverse Teardown?

The obvious alternative is a destroy script that mirrors the deploy: delete each Lambda, each queue, each EventBridge rule, each IAM role. Problems:

  • It drifts. Every time you add a resource to the deploy script, you have to remember to add the reverse to the destroy script. Someone always forgets.
  • It misses ad-hoc resources. During debugging, you create resources manually. The reverse script doesn’t know about these.
  • LocalStack state is inconsistent. Deleting a Lambda doesn’t always clean up its event source mapping, so the next deploy fails with a duplicate mapping error. Killing the container sidesteps all of this.

The only downside is that deploy takes a few seconds longer because it provisions from scratch. For a local dev tool, that’s an easy trade-off.

The core principle: treat LocalStack as disposable infrastructure, not as a service to maintain. Destroy the whole container. Don’t try to be clever about partial cleanup. Name your containers so you can find them outside of Compose. Wait for ports to actually be free. Put the destroy at the start of every deploy so nobody has to think about state.