Managing the LocalStack Lifecycle
LocalStack is great for local AWS development, but it has a habit of accumulating invisible state that survives partial teardowns. Stale Lambda functions, phantom event source mappings, zombie queues — they lurk inside the container and cause confusing failures the next time you deploy. I tried being surgical about it (delete each resource in reverse order) and eventually gave up. Now I destroy the entire container every time.
Destroy-First Deploys
The deploy script’s first line calls the destroy script:
"$SCRIPT_DIR/destroy.sh" 2>/dev/null || true
The 2>/dev/null || true matters — deploy must not fail if there’s nothing to destroy (first run, already-clean state). This makes the deploy script fully idempotent: run it any number of times, from any starting state, and you get the same result.
The alternative — checking whether the container exists and conditionally cleaning up — introduces exactly the kind of state-dependent branching that causes problems. Just always destroy. It takes a few seconds.
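In context, the top of the deploy script looks something like this — a minimal sketch; only the destroy call is taken from the text above, the rest is ordinary bash boilerplate:

#!/usr/bin/env bash
set -euo pipefail

# Resolve the directory this script lives in, so destroy.sh is found
# no matter where the script is invoked from.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Always start from a clean slate; ignore errors when there is nothing to destroy.
"$SCRIPT_DIR/destroy.sh" 2>/dev/null || true

# ...start the container and provision resources from scratch below...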
Name-Based Container Cleanup
The naive approach to teardown is docker compose down localstack. The problem is that Docker Compose can lose track of containers. This happens more often than you’d expect: someone changes the Compose project name, an interrupted up leaves an orphan, or the .docker state drifts between restarts. When Compose doesn’t know about a container, docker compose down silently does nothing while the container keeps running.
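You can see the divergence directly by comparing what Compose reports with what Docker actually has (my-localstack here is the explicit container name used throughout the cleanup below):

# What Compose believes is running for this project
docker compose ps localstack
# What Docker actually has, regardless of Compose project
docker ps -a --filter "name=my-localstack"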
The fix is a two-tier cleanup:
# Tier 1: Ask Compose to clean up what it knows about
if docker compose ps localstack --format "{{.Name}}" 2>/dev/null | grep -q localstack; then
  docker compose stop localstack
  docker compose rm -f localstack
fi

# Tier 2: Find anything Compose missed by container name
ORPHAN_CONTAINERS=$(docker ps -a --filter "name=my-localstack" --format "{{.Names}}" 2>/dev/null || true)
if [ -n "$ORPHAN_CONTAINERS" ]; then
  docker ps -a --filter "name=my-localstack" --format "{{.ID}}" | xargs -r docker rm -f 2>/dev/null || true
fi
A few things to note:
- Use docker ps -a, not docker ps. Stopped containers still hold port bindings and old state. Without -a you'll miss them.
- Filter by name, not by Compose project. --filter "name=my-localstack" catches containers regardless of which Compose project created them. Give your container a distinctive, grep-friendly name in docker-compose.yml; it becomes your cleanup handle (see the snippet after this list).
- xargs -r prevents docker rm from running with no arguments. On macOS the no-run-if-empty behaviour is the default, but it's good practice to be explicit.
- || true on every cleanup command. The destroy script should never fail. If a container is already gone, that's fine.
Wait for the Port
Removing a container doesn’t instantly free its port. Docker needs a moment to release the network binding, and on macOS with Docker Desktop there’s an additional virtualisation layer that can delay it. If the deploy script immediately starts a new container, you’ll get “address already in use.”
for i in {1..10}; do
  if ! lsof -i :4566 >/dev/null 2>&1; then
    break
  fi
  if [ "$i" -eq 10 ]; then
    echo "Warning: port 4566 is still in use by another process"
  fi
  sleep 1
done
The polling loop handles the normal case (port freed in 1–2 seconds). If the port is still busy after 10 seconds, warn and let the deploy script decide what to do.
Don’t force-kill the port holder. It’s tempting to add lsof -ti :4566 | xargs kill -9 as a fallback, but if another project’s LocalStack container is using the same port, this kills Docker Desktop’s port forwarding process and crashes the Docker backend entirely. You end up with a Docker Desktop that shows containers in the UI but can’t actually communicate with them — and won’t shut down cleanly. The safer approach is to check port availability in the deploy script and exit with a clear message:
if lsof -i :4566 >/dev/null 2>&1; then
  echo "Error: port 4566 is already in use"
  exit 1
fi
Why Not a Reverse Teardown?
The obvious alternative is a destroy script that mirrors the deploy: delete each Lambda, each queue, each EventBridge rule, each IAM role (a sketch of what that looks like follows the list). Problems:
- It drifts. Every time you add a resource to the deploy script, you have to remember to add the reverse to the destroy script. Someone always forgets.
- It misses ad-hoc resources. During debugging, you create resources manually. The reverse script doesn’t know about these.
- LocalStack state is inconsistent. Deleting a Lambda doesn’t always clean up its event source mapping, so the next deploy fails with a duplicate mapping error. Killing the container sidesteps all of this.
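To make the drift concrete, here is a sketch of what a mirror-image destroy tends to look like. It uses awslocal (LocalStack's aws CLI wrapper); the function names are the ones from the health-check output later on this page, and the queue name is made up for illustration:

# Every resource added to deploy.sh needs a matching line here, in the right
# order -- this is exactly the script that drifts and gets forgotten.
for fn in my-app-local-orchestration my-app-local-fetch-ingest \
          my-app-local-push-ingest my-app-local-recovery; do
  # Event source mappings have to go first, or the next deploy can hit duplicates
  for uuid in $(awslocal lambda list-event-source-mappings \
      --function-name "$fn" --query 'EventSourceMappings[].UUID' --output text); do
    awslocal lambda delete-event-source-mapping --uuid "$uuid"
  done
  awslocal lambda delete-function --function-name "$fn"
done
# Hypothetical queue name -- one more thing to keep in sync with deploy.sh
awslocal sqs delete-queue --queue-url "$(awslocal sqs get-queue-url \
    --queue-name my-app-local-queue --query QueueUrl --output text)"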
The only downside is that deploy takes a few seconds longer because it provisions from scratch. For a local dev tool, that’s an easy trade-off.
Verify After Deploy
Once lambdas are deployed, run a health check to catch misconfigurations early — broken env vars, bundling issues, or missing dependencies. Call it at the end of the deploy script so failures are obvious:
./test-lambda-health.sh || true
The output gives you an immediate summary of what’s working and what isn’t:
✓ my-app-local-orchestration: OK (v14)
✓ my-app-local-fetch-ingest: OK (v17)
✗ my-app-local-push-ingest: UNHEALTHY (status: error)
✓ my-app-local-recovery: OK (v13)
Results: 3 passed, 0 version mismatch, 1 failed (4 total)
See Lambda Tips for the health check pattern and a full verification script.
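The full script lives in Lambda Tips; the core of it is nothing more than invoking each function and inspecting the response. A minimal sketch of one check, assuming the handler returns a JSON body with a status field (that contract is an assumption, adjust it to whatever your handlers actually return):

# Invoke one function and inspect its response payload.
# Assumes the handler returns something like {"status": "ok", "version": "..."}.
awslocal lambda invoke --function-name my-app-local-orchestration /tmp/health.json >/dev/null
if grep -q '"status": *"ok"' /tmp/health.json; then
  echo "✓ my-app-local-orchestration: OK"
else
  echo "✗ my-app-local-orchestration: UNHEALTHY"
fi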
The core principle: treat LocalStack as disposable infrastructure, not as a service to maintain. Destroy the whole container. Don’t try to be clever about partial cleanup. Name your containers so you can find them outside of Compose. Wait for ports to actually be free. Put the destroy at the start of every deploy so nobody has to think about state.