Running out of disk space always happens at the worst time. The solution is not to delete blindly; it is to clean with visibility.

Guided practical case

Real environment: a dev laptop with a 256 GB disk, Docker, local builds, and screen recordings. In two weeks, 90 GB were consumed without anyone noticing exactly why.

Objective: recover space without breaking the environment.

Step 1: diagnosis

# overall usage of each mounted filesystem
df -h
# size of every top-level entry in your home directory, smallest first
du -sh ~/* 2>/dev/null | sort -h
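
If you prefer to keep this check in code (for instance, to reuse it in the report further down), the same ranking can be done with Node's fs module alone. A minimal sketch in TypeScript; entrySize is an illustrative helper name, not a standard API:

import { readdirSync, lstatSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

// Size of a file, or recursive size of a directory.
function entrySize(path: string): number {
  try {
    const stat = lstatSync(path); // lstat: do not follow symlinks
    if (!stat.isDirectory()) return stat.size;
    return readdirSync(path)
      .map((name) => entrySize(join(path, name)))
      .reduce((a, b) => a + b, 0);
  } catch {
    return 0; // permission denied, broken entries, etc. count as zero
  }
}

// Rank the top-level entries of the home directory by size, largest first.
const home = homedir();
const top = readdirSync(home)
  .map((name) => ({ name, bytes: entrySize(join(home, name)) }))
  .sort((a, b) => b.bytes - a.bytes)
  .slice(0, 10);

for (const { name, bytes } of top) {
  console.log(`${(bytes / 1e9).toFixed(2)} GB  ${name}`);
}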

Step 2: common sources of consumption

  • package manager caches (npm, apt, brew…),
  • old Docker images, containers, and volumes,
  • giant temporary files (downloads, disk images, recordings),
  • local logs.

Useful commands for Docker:

docker system df
docker system prune -af --volumes

Careful with the second one: -af --volumes removes all stopped containers, all unused networks and images, and unused volumes as well; exactly which volumes qualify depends on your Docker version. Run docker system df first and check that nothing you need lives only in a prunable volume.
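
If you run this regularly, a small wrapper that shows the accounting and asks before deleting anything is a cheap safety net. A minimal sketch; the prompt and the confirmation flow are illustrative, not a Docker feature:

import { execSync } from "node:child_process";
import { createInterface } from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";

async function main(): Promise<void> {
  // Show Docker's disk accounting before anything destructive happens.
  console.log(execSync("docker system df", { encoding: "utf8" }));

  // Ask for explicit confirmation; the default is to do nothing.
  const rl = createInterface({ input, output });
  const answer = await rl.question("Delete ALL unused images, containers, networks, and volumes? (y/N) ");
  rl.close();

  if (answer.trim().toLowerCase() === "y") {
    execSync("docker system prune -af --volumes", { stdio: "inherit" });
  } else {
    console.log("Aborted; nothing was deleted.");
  }
}

main();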

Step 3: controlled cleaning

Clean gradually and validate after each block: free one source of space, then check that builds, containers, and day-to-day tools still work before touching the next one, as sketched below.
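
A minimal sketch of that loop, with placeholder cleanup and check commands; swap in whatever matters on your machine:

import { execSync } from "node:child_process";

// Each block pairs one cleanup action with a check that the environment still works.
// These commands are examples, not a prescription.
const blocks = [
  { cleanup: "npm cache clean --force", check: "npm --version" },
  { cleanup: "docker image prune -f", check: "docker info" },
];

for (const { cleanup, check } of blocks) {
  console.log(`Cleaning: ${cleanup}`);
  execSync(cleanup, { stdio: "inherit" });
  try {
    execSync(check, { stdio: "ignore" });
    console.log(`OK: "${check}" still works, moving on.`);
  } catch {
    console.error(`STOP: "${check}" failed after "${cleanup}". Investigate before continuing.`);
    process.exit(1);
  }
}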

Report in TypeScript (optional)

If you want to automate weekly visibility:

import { execSync } from "node:child_process";

// Run a shell command and return its trimmed stdout.
function run(cmd: string): string {
  return execSync(cmd, { encoding: "utf8" }).trim();
}

const report = {
  // overall usage of the root filesystem
  disk: run("df -h /"),
  // the ten largest top-level entries in the home directory
  homeTop: run("du -sh ~/* 2>/dev/null | sort -h | tail -n 10"),
  // Docker's own accounting; "|| true" keeps the report alive when Docker is absent
  docker: run("docker system df || true"),
};

console.log(JSON.stringify(report, null, 2));
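
To make it truly weekly, any scheduler works; for example, a cron entry such as 0 9 * * 1 npx tsx disk-report.ts (assuming the script is saved as disk-report.ts and the tsx package is available) prints the report every Monday morning.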

Prevention

  • monthly maintenance,
  • space alerts (a minimal sketch follows this list),
  • a retention policy for build artifacts and recordings.
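
For the space alerts, a few lines are enough: read the usage percentage of the root filesystem and complain above a threshold. A minimal sketch, assuming POSIX df -P output where the fifth column is the capacity used; the 80% threshold is arbitrary:

import { execSync } from "node:child_process";

const THRESHOLD = 80; // percent; pick what fits your disk

// df -P prints a header line, then one line for the root filesystem;
// in POSIX output the fifth column is the capacity used, e.g. "73%".
const out = execSync("df -P /", { encoding: "utf8" });
const fields = out.trim().split("\n")[1].split(/\s+/);
const usedPercent = parseInt(fields[4], 10);

if (usedPercent >= THRESHOLD) {
  console.error(`Disk alert: / is at ${usedPercent}% (threshold ${THRESHOLD}%).`);
  process.exit(1);
} else {
  console.log(`Disk OK: / is at ${usedPercent}%.`);
}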

Happy reading! ☕
