Working with Docker on Mac: Core Docker Concepts & Docker Desktop

Docker is an open‑source platform for packaging applications and their dependencies into lightweight, portable units called containers, enabling consistent behavior across development, testing, and production environments (Docker, Wikipedia). Its core component, the Docker Engine, powers container execution, while Docker Desktop—a user‑friendly application tailored for macOS—bundles the Docker Engine, Docker CLI, Docker Compose, Kubernetes integration, and a visual Dashboard into a seamless package (Docker Documentation).

On macOS, Docker Desktop simplifies container workflows by leveraging native virtualization (HyperKit on older Intel releases, Apple’s Virtualization framework on newer versions), eliminating the need for cumbersome third-party VMs like VirtualBox (The Mac Observer, Medium).

Installation is straightforward: simply download the appropriate .dmg installer (Intel or Apple Silicon), drag Docker into the Applications folder, and proceed through setup—granting permissions and accepting licensing as needed (LogicCore Digital Blog). Once up and running, you can verify your setup via commands like:

docker --version
docker run hello-world
docker compose version

These commands confirm successful installation and provide instant access to Docker’s ecosystem on your Mac.


Commands Executed on the Local System:

# Docker CLI not yet installed (or not on PATH):
➜ docker --version
zsh: command not found: docker

➜ docker ps
# check the exact container name:
➜ docker ps --format "table {{.Names}}\t{{.Image}}"

# rebuild the containers
➜ docker-compose down
➜ docker-compose up --build

# Error: target webpacker: failed to solve: error getting credentials - err: exec: "docker-credential-desktop": executable file not found in $PATH, out: ``

➜ cat ~/.docker/config.json # remove "credsStore": "desktop"
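
# a sketch of doing that edit from the shell (assumes jq is installed; keeps a backup):
➜ cp ~/.docker/config.json ~/.docker/config.json.bak
➜ jq 'del(.credsStore)' ~/.docker/config.json.bak > ~/.docker/config.json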

# Remove all containers and images
➜ docker container prune -f
➜ docker image prune -f
➜ docker ps -a # check containers
➜ docker images # check images
➜ docker-compose up --build -d # build again

# postgres (note: docker exec takes a *container* name or ID, not an image name)
➜ docker exec -it container-name psql -U username -d database_name

# rails console
➜ docker exec -it container-name rails c

# get into the docker container shell
➜ docker exec -it container-name bash

🐳 1. Difference between docker compose up --build and docker compose up

docker compose up

  • Just starts the containers using the existing images.
  • If the image for a service doesn’t exist locally, Docker will pull it from the registry (e.g., Docker Hub).
  • It will not rebuild your image unless you explicitly tell it to.

docker compose up --build

  • Forces Docker to rebuild the images from the Dockerfile before starting the containers.
  • Useful when you’ve changed:
    • The Dockerfile
    • Files copied into the image
    • Dependencies
  • This ensures your running containers reflect your latest code and build instructions.

📌 Example:

docker compose up         # Use existing images (fast startup)
docker compose up --build # Rebuild images before starting

If you changed your app code and your Docker setup uses bind mounts (volumes), you usually don’t need --build unless the image itself changed.
If you changed the Dockerfile, then you need --build.
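
For example, in a typical dev setup (a sketch; main-app is just a placeholder service name):

docker compose up -d --build      # after changing the Dockerfile or dependency manifests
docker compose restart main-app   # after changing bind-mounted app code (no rebuild needed)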


🖥 2. Why we use Docker Desktop & can we use Docker without it?

Docker Desktop is basically a GUI + background service that makes Docker easy to run on macOS and Windows.
It includes:

  • Docker Engine (runs containers)
  • Docker CLI
  • Docker Compose
  • Kubernetes (optional)
  • Settings & resource controls (CPU, RAM)
  • Networking setup
  • A UI to view containers, images, logs, etc.

Why needed on macOS & Windows?

  • Docker requires Linux kernel features like cgroups & namespaces.
  • macOS and Windows don’t have these natively, so Docker Desktop runs a lightweight Linux VM behind the scenes (using HyperKit, WSL2, etc.).
  • Without Docker Desktop, you’d need to set up that Linux VM manually, install Docker inside it, and configure networking — which is more complex.

Can you use Docker without Docker Desktop?
Yes, but:

  • On macOS/Windows — you’d have to:
    • Install a Linux VM manually (VirtualBox, VMware, UTM, etc.)
    • SSH into it
    • Install Docker Engine
    • Expose ports and share files manually
  • On Linux — you don’t need Docker Desktop at all; you can install Docker Engine directly via sudo apt install docker.io (a fuller install sketch follows this list)
  • For Windows, Microsoft has Docker on WSL2 which can run without the Docker Desktop GUI, but requires WSL2 setup.
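
For reference, installing Docker Engine on a Debian/Ubuntu VM looks roughly like this (a sketch; package names and service management vary by distro):

sudo apt-get update
sudo apt-get install -y docker.io       # Debian/Ubuntu package for Docker Engine
sudo systemctl enable --now docker      # start the daemon and enable it at boot
sudo usermod -aG docker "$USER"         # optional: run docker without sudo (re-login to apply)
docker run hello-world                  # smoke test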

💡 In short:

  • Use --build when you change something in the image definition.
  • Docker Desktop = easiest way to run Docker on macOS/Windows.
  • You can skip Docker Desktop, but then you must manually set up a Linux VM with Docker.

🧩 1. Core Docker Concepts

| Term | What it is | Key analogy |
|---|---|---|
| Image | A read-only blueprint (template) that defines what your app needs to run (OS base, packages, configs, your code). Built from a Dockerfile. | Like a recipe for a dish |
| Container | A running instance of an image. Containers are isolated processes, not full OSes. | Like a meal prepared from the recipe |
| Volume | Persistent storage for containers. Survives container restarts or deletions. | Like a pantry/fridge where food stays even after cooking is done |
| Docker Compose | A YAML-based tool to define & run multi-container apps. Lets you describe services, networks, and volumes in one file and start them all at once. | Like a restaurant order sheet for multiple dishes at once |
| Network | Virtual network that containers use to talk to each other or the outside world. | Like a kitchen intercom system |
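
A quick sketch tying these concepts together on the CLI (app-net, app-data, and db are placeholder names):

docker network create app-net                   # Network: containers attached to it can reach each other
docker volume create app-data                   # Volume: survives container removal
docker run -d --name db --network app-net \
  -v app-data:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret postgres:16       # Container: a running instance of the postgres Image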

2. Kubernetes in simple words

Kubernetes (K8s) is a container orchestration system. It’s what you use when you have many containers across many machines and you need to manage them automatically.

What it does:

  • Deploy containers on a cluster of machines
  • Restart them if they crash
  • Scale up/down automatically
  • Load balance traffic between them
  • Handle configuration and secrets
  • Do rolling updates with zero downtime

📌 Analogy
If Docker Compose is like cooking multiple dishes at home, Kubernetes is like running a huge automated kitchen in a restaurant chain — you don’t manually turn on each stove; the system manages resources and staff.


🍏 3. How Docker Works on macOS

Your assumption is right — Docker needs Linux kernel features (cgroups, namespaces, etc.), and macOS doesn’t have them.

So on macOS:

  • Docker Desktop runs a lightweight Linux virtual machine under the hood, using HyperKit (older Intel versions) or Apple’s Virtualization framework (newer versions).
  • That VM runs the Docker Engine.
  • Your docker CLI in macOS talks to that VM over a socket.
  • All containers run inside that Linux VM, not directly on macOS.

Workflow:

Mac Terminal → Docker CLI → Linux VM in background → Container runs inside VM
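
You can observe this split yourself (a sketch; exact output varies by Docker Desktop version):

uname -s                                      # the host reports Darwin (macOS)
docker info --format '{{.OperatingSystem}}'   # the engine reports the Linux VM, e.g. "Docker Desktop"
docker run --rm alpine uname -s               # inside a container: Linux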


4. Hardware Needs for Docker on macOS

Yes, VMs can be heavy, but Docker’s VM for macOS is minimal — not like a full Windows or Ubuntu desktop VM.

Typical Docker Desktop VM:

  • Base OS: Tiny Linux distro (Alpine or LinuxKit)
  • Memory: Usually 2–4 GB (configurable)
  • CPU: 2–4 virtual cores (configurable)
  • Disk: ~1–2 GB base, plus images & volumes you pull

Recommended host machine for smooth Docker use on macOS:

  • RAM: At least 8 GB (16 GB is comfy)
  • CPU: Modern dual-core or quad-core
  • Disk: SSD (fast read/write for images & volumes)

💡 Reason it’s lighter than “normal” VMs:
Docker doesn’t need a full OS with GUI in its VM — just the kernel & minimal services to run containers.
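
To check what Docker is actually consuming on your machine (output varies):

docker system df           # disk used by images, containers, volumes, and build cache
docker stats --no-stream   # one-shot CPU/memory snapshot per running container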


Quick Recap Table:

| Term | Purpose | Persistent? |
|---|---|---|
| Image | App blueprint | Yes (stored on disk) |
| Container | Running app from image | No (filesystem changes are lost when the container is removed, unless kept in a volume) |
| Volume | Data storage for containers | Yes |
| Compose | Multi-container management | Yes (config file) |
| Kubernetes | Cluster-level orchestration | N/A |

Quickest way to see per-request Rails logs in Docker

  • Run app logs:
docker compose logs -f --tail=200 main-app
  • Run Sidekiq logs:
docker compose logs -f --tail=200 sidekiq
  • Filter for a single request by its request ID (see below):
docker compose logs -f main-app | rg 'request_id=YOUR_ID'

Ensure logs are emitted to STDOUT (so Docker can collect them)

Your images already set RAILS_LOG_TO_STDOUT=true and the app routes logs to STDOUT:

if ENV["RAILS_LOG_TO_STDOUT"].present?
  logger           = ActiveSupport::Logger.new(STDOUT)
  logger.formatter = config.log_formatter
  config.log_tags  = [:subdomain, :uuid]
  config.logger    = ActiveSupport::TaggedLogging.new(logger)
end

So the docker compose logs commands above are the right way. Tailing log files inside the container usually isn’t needed, but if you want to:

docker compose exec main-app bash -lc 'tail -f log/development.log'

Trace a single request end-to-end

  • Get the request ID from the response (Rails sets X-Request-Id):
REQ=$(curl -s -D - http://localhost:3001/your/path -o /dev/null | awk '/X-Request-Id/ {print $2}' | tr -d '\r')
docker compose logs -f main-app | rg "$REQ"
  • Your app tags logs with a UUID per request; switching to the canonical tag makes grepping easier. Optionally change tags to:
# in `website-v1/config/application.rb`
config.log_tags = [:request_id]

Make logs easier to read (optional, but highly recommended)

  • Add concise, per-request logging (1 line per request) with JSON output for easy grep/parse:
# Gemfile
gem 'lograge'

# config/initializers/lograge.rb
Rails.application.configure do
  config.lograge.enabled = true
  config.lograge.formatter = Lograge::Formatters::Json.new
  config.lograge.custom_options = lambda do |event|
    {
      request_id: event.payload[:request_id],
      params: event.payload[:params].except('controller','action'),
      user_id: event.payload[:user_id],
      time: Time.now.utc
    }
  end
end

Then:

bundle install
docker compose restart main-app
docker compose logs -f main-app

Now you can grep easily:

docker compose logs -f --no-log-prefix main-app | jq -r 'select(.request_id=="'"$REQ"'")'   # --no-log-prefix keeps lines as raw JSON for jq

Tuning

  • In development, you already have config.log_level = :debug. For production, consider :info instead of :warn to get request lines:
# config/environments/production.rb
config.log_level = :info
  • Sidekiq logs are a separate stream (service sidekiq); use its logs for background job tracing.
  • Noise reduction: you already have config.assets.quiet = true in development.

If you want, I can switch to :request_id tagging and add lograge with JSON in the repo now.

  • Made no code edits; provided commands to stream logs from main-app and sidekiq.
  • Confirmed logs are routed to STDOUT via RAILS_LOG_TO_STDOUT and TaggedLogging in application.rb.
  • Offered optional improvements (switch to :request_id, add lograge JSON) and how to grep a single request by ID.

The Container ⛴️ Revolution: How Docker 🐳 Transformed Dev & What’s Ahead

Containerization has reshaped the way we build, ship, and run applications. From simplifying dependency management to enabling microservices architectures at scale, containers and orchestration platforms like Kubernetes have become cornerstones of modern DevOps. In this post, we’ll explore:

  1. What are containers, and what is containerization?
  2. A brief history and evolution of Kubernetes.
  3. Docker: what it is, how it works, and why it matters.
  4. When Docker emerged and the context before and after.
  5. The impact on the development lifecycle.
  6. Docker’s relevance in 2025 and beyond.
  7. Use cases: when to use—or avoid—Docker.
  8. Feature evolution in Docker.
  9. The future of Docker and containerization in web development.
  10. Where Do Developers Get Containers? How Do They Find the Right Ones?
  11. What Is Docker Hub? How Do You Use It?
  12. Can You Use Docker Without Docker Hub?

1. What Are Containers ⛴️ and What Is Containerization?

  • Containers are lightweight, standalone units that package an application’s code along with its dependencies (libraries, system tools, runtime) into a single image.
  • Containerization is the process of creating, deploying, and running applications within these containers.
  • Unlike virtual machines, containers share the host OS kernel and isolate applications at the process level, resulting in minimal overhead, rapid startup times, and consistent behavior across environments.

Key benefits of containerization

  • Portability: “Build once, run anywhere” consistency across dev, test, and prod.
  • Efficiency: Higher density—hundreds of containers can run on a single host.
  • Isolation: Separate dependencies and runtime environments per app.
  • Scalability: Containers can be replicated and orchestrated quickly.

2. ☸️ Kubernetes: A Brief History and Evolution

  • Origins (2014–2015): Google open-sourced Kubernetes in 2014, drawing on lessons from its internal Borg system, and donated it to the newly formed Cloud Native Computing Foundation (CNCF); Kubernetes 1.0 was released in July 2015.
  • Key milestones:
    • 2016–2017: Rapid ecosystem growth—Helm (package manager), StatefulSets, DaemonSets.
    • 2018–2019: CRDs (Custom Resource Definitions) and Operators enabled richer automation.
    • 2020–2022: Focus on security (Pod Security Admission), multi-cluster federation (KubeFed), and edge computing.
    • 2023–2025: Simplified configurations (Kustomize built-in), tighter GitOps integrations, serverless frameworks (Knative).

Why Kubernetes matters

  • Automated scheduling & self-healing: Pods restart on failure; replicas ensure availability.
  • Declarative model: Desired state is continuously reconciled.
  • Extensibility: CRDs and Operators let you automate almost any workflow.
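
A small taste of that declarative, self-healing model (a sketch; assumes a running cluster and kubectl):

kubectl create deployment web --image=nginx --replicas=3   # declare the desired state: 3 replicas
kubectl delete pod -l app=web                              # kill the pods...
kubectl get pods -l app=web                                # ...and watch Kubernetes recreate them
kubectl scale deployment web --replicas=5                  # reconcile toward a new desired state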

3. What Is Docker 🐳 and How It Works?

  • Docker is an open-source platform introduced in 2013 that automates container creation, distribution, and execution.
  • Core components:
    • Dockerfile: Text file defining how to build your image (base image, dependencies, commands).
    • Docker Engine: Runtime that builds, runs, and manages containers.
    • Docker Hub (and other registries): Repositories for sharing images.

How Docker works

  1. Build: docker build reads a Dockerfile, producing a layered image.
  2. Ship: docker push uploads the image to a registry.
  3. Run: docker run instantiates a container from the image, leveraging Linux kernel features (namespaces, cgroups) for isolation.
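
The whole cycle in commands (a sketch; myusername/myapp is a placeholder image name):

docker build -t myusername/myapp:1.0 .            # Build: read ./Dockerfile into a layered image
docker push myusername/myapp:1.0                  # Ship: upload to a registry (after docker login)
docker run -d -p 8080:8080 myusername/myapp:1.0   # Run: start an isolated container from the image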

4. When Did Docker Emerge, and Why?

  • Launch: March 13, 2013, with the first open-source release of Docker 0.1 by dotCloud (now Docker, Inc.).
  • Why Docker?
    • Prior container tooling (LXC) was fragmented and complex.
    • Developers needed a standardized, user-friendly workflow for packaging apps.
    • Docker introduced a simple CLI and robust image layering, and quickly attracted a vibrant community ecosystem.

5. Scenario Before and After Docker: Shifting the Development Lifecycle ♻️

| Aspect | Before Docker | After Docker |
|---|---|---|
| Environment parity | “It works on my machine” frustrations. | Identical containers in dev, test, prod. |
| Dependency hell | Manual installs; conflicts between apps. | Encapsulated in image layers; side-by-side. |
| CI/CD pipelines | Custom scripts per environment. | Standard docker build / docker run steps. |
| Scaling | VM spin-ups with heavy resource use. | Rapid container spin-up, minimal overhead. |
| Isolation | Weaker isolation; port conflicts. | Namespace and cgroup isolation per container. |

Docker transformed workflows by making builds deterministic, tests repeatable, and deployments faster—key enablers of continuous delivery and microservices.


6. Should We Use Docker in 2025?

Absolutely—Docker (and its underlying container technologies) remains foundational in 2025:

  • Cloud-native architectures place containers at their core.
  • Serverless platforms often run functions inside containers (AWS Lambda, Azure Functions).
  • Edge deployments leverage containers for lightweight, consistent runtimes.
  • Developer expectations: Instant local environments via docker-compose.

However, the ecosystem has matured, and alternatives like Podman (daemonless) and lightweight sandboxing (Firecracker VMs) also coexist.
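
Podman in particular aims for CLI compatibility, so trying it is often a one-liner (a sketch; assumes Podman is installed):

podman run -d -p 8080:80 nginx   # same flags as docker run
alias docker=podman              # a common shim for existing scripts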


7. When to Use or Not Use Docker

| Use Case | Docker Fits Well? | Notes |
|---|---|---|
| Microservices / APIs | ✔ Yes | Individual services packaged and scaled independently. |
| Monolithic apps | ✔ Generally beneficial | Simplifies env setup; the added container overhead is usually minimal. |
| High-load, high-latency apps | ✔ Yes, with orchestration (K8s) | Autoscaling, rolling updates, resource quotas critical. |
| Simple frontend-only apps | ✔ Yes | Serve static assets via a lightweight Nginx container. |
| Legacy desktop-style apps | ⚠️ Maybe | Might add unnecessary complexity if no cloud target. |

Key considerations

  • Use Docker for consistent environments, CI/CD integration, and horizontal scaling.
  • Avoid Docker when low latency on bare metal is paramount, or where container overhead cannot be tolerated (e.g., certain HPC workloads).

8. Is Docker Evolving? Key Feature Highlights 🧩

Docker continues to innovate:

  • Rootless mode (runs without root privileges) for enhanced security.
  • BuildKit improvements for faster, cache-efficient image builds.
  • Docker Extensions for community-driven tooling inside the Docker Desktop UI.
  • Improved Windows support with Windows containers and WSL2 integrations.
  • OCI compliance: Better compatibility with other runtimes (runc, crun) and registries.

9. Is Docker Needed for Future Web Development? What’s Next?

  • Containerization as standard: Even if Docker itself evolves or gives way to new runtimes, the model of packaging apps in isolated, immutable units is here to stay.
  • Serverless + containers: The blending of function-as-a-service and container workloads will deepen.
  • Edge computing: Tiny, specialized containers will power IoT and edge gateways.
  • Security focus: Sandboxing (gVisor, Firecracker) and supply-chain scanning will become default.

While tooling names may shift, the core paradigm—lightweight, reproducible application environments—remains indispensable.


10. Where Do Developers Get Containers? How Do They Find the Right Ones?

Developers get containers in the form of Docker images, which are blueprints for running containers.

These images can come from:

  • Docker Hub (most popular)
  • Private registries like GitHub Container Registry, AWS ECR, Google Container Registry, etc.
  • Custom-built images using Dockerfile

When looking for the right image, developers usually:

  • Search Docker Hub or other registries (e.g., redis, nginx, node, postgres)
  • Use official images, which are verified and maintained by Docker or vendors
  • Use community images, but carefully—check for:
    • Dockerfile transparency
    • Recent updates
    • Number of pulls and stars
    • Trust status (verified publisher)

Example search:
If you want Redis:

docker search redis
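
To narrow results to official images only (a sketch):

docker search --filter is-official=true redis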


11. What Is Docker Hub? How Do You Use It?

Docker Hub is Docker’s official cloud-based registry service where:

  • Developers publish, store, share, and distribute container images.
  • It hosts both public (free and open) and private (restricted access) repositories.

Key Features:

  • Official images (e.g., python, mysql, ubuntu)
  • User & org accounts
  • Web UI for managing repositories
  • Pull/push image support
  • Image tags (e.g., node:18-alpine)

Basic usage:

🔍 Find an image
You can search on https://hub.docker.com or via CLI:

docker search nginx

📥 Pull an image

docker pull nginx:latest

▶️ Run a container from it

docker run -d -p 80:80 nginx

📤 Push your image

  1. Log in:
docker login

  2. Tag and push:
docker tag myapp myusername/myapp:1.0
docker push myusername/myapp:1.0


12. Can You Use Docker Without Docker Hub?

Yes, absolutely!
You don’t have to use Docker Hub if you prefer alternatives or need a private environment.

Alternatives:

  • Private Docker Registry: host your own with the registry:2 image (a push example follows this list):
docker run -d -p 5000:5000 --name registry registry:2
  • GitHub Container Registry (GHCR)
  • Amazon ECR, Google GCR, Azure ACR
  • Harbor – open-source enterprise container registry
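
Pushing to a self-hosted registry works just like Docker Hub, with the registry host prefixed to the image name (a sketch, assuming the registry:2 container above is running on localhost:5000):

docker tag myapp localhost:5000/myapp:1.0   # prefix the image name with the registry host
docker push localhost:5000/myapp:1.0        # goes to your registry, not Docker Hub
docker pull localhost:5000/myapp:1.0        # pull it back from any host that can reach the registry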

Use case examples:

  • Enterprise teams: often use private registries for security and control.
  • CI/CD pipelines: use cloud provider registries like ECR or GCR for tighter cloud integration.
  • Offline deployments: air-gapped environments use custom registries or local tarball image transfers.

✅ Summary

| Question | Answer |
|---|---|
| Where do devs get containers? | From Docker Hub, private registries, or by building their own images. |
| What is Docker Hub? | A public registry for discovering, pulling, and sharing Docker images. |
| Can Docker work without Docker Hub? | Yes—via self-hosted registries or cloud provider registries. |

Conclusion

From Docker’s debut in 2013 to Kubernetes’ rise in 2015 and beyond, containerization has fundamentally altered software delivery. In 2025, containers are ubiquitous: in microservices, CI/CD, serverless platforms, and edge computing. Understanding when—and why—to use Docker (or its successors) is critical for modern developers. As the ecosystem evolves, containerization principles will underpin the next generation of web and cloud-native applications.

Happy Dockerizing! 🚀