
Building PortRadar: How I Used AI to Ship a Developer Tool in Hours

Last week I wanted a small terminal tool. This week I have a working project with Docker integration, a TUI interface, and a blog post about the experience.

Here's how I built PortRadar, and why this workflow represents something bigger than just "AI writing code."

The Origin Story

It started with a frustration. I had a Docker container running on my machine, something wasn't working, and I needed to figure out which port it was using. So I opened Terminal and typed the usual sequence:

ss -tulpn | grep 5432
# Nothing
docker ps
# Oh, it's the postgres container
docker port postgres
# 5432->5432

Three commands. Two different contexts. And I realized: this is a solvable problem. Every developer has this workflow. What if there was a single view that showed both?

I mentioned this to my AI assistant in passing. The response was immediate: "Want me to help you build it?"

Starting Small (But Thinking Product)

Before writing any code, I made a deliberate choice: keep it niche.

There are dozens of process monitors, port scanners, and Docker GUIs. Building "another one" is pointless. But building one that specifically solves the "host + Docker in one view" problem? That's specific enough to be useful, different enough to be interesting.

The best side projects solve YOUR problems first. If it's useful to you, it might be useful to others. If it's only useful to others, you probably won't finish it.

This is something I believe strongly: product-market fit starts with personal pain. I needed this tool. So I would actually use it. That meant I wouldn't abandon it halfway through.

The Implementation

Phase 1: The Foundation

We started with host port scanning. The obvious approach would be calling ss or netstat, but those require elevated permissions on most systems, and parsing their output is fragile.

Instead, we went lower-level: reading directly from /proc/net/tcp. This is Linux's native interface for network information. No permissions needed. No external dependencies.

# /proc/net/tcp format:
# sl  local_address rem_address   st tx_queue rx_queue ...
# 0: 0100007F:9609 00000000:0000 0A 00000000:00000000 ...

# local_address is IP:PORT in hex
# state 0A = LISTEN
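
The hex decoding is the fiddly part: the IP is stored as four little-endian hex bytes, while the port is ordinary big-endian hex. A minimal sketch of a decoder (a hypothetical helper, not PortRadar's actual code):

```javascript
// Decode a /proc/net/tcp local_address field like "0100007F:9609".
// IP bytes are little-endian hex; the port is plain hex.
function parseAddr(field) {
  const [ipHex, portHex] = field.split(':');
  // "0100007F" -> ["01","00","00","7F"] -> reversed -> 127.0.0.1
  const bytes = ipHex.match(/../g).reverse();
  const ip = bytes.map((b) => parseInt(b, 16)).join('.');
  return { ip, port: parseInt(portHex, 16) };
}

console.log(parseAddr('0100007F:9609')); // { ip: '127.0.0.1', port: 38409 }
```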

The tricky part was mapping ports to process names. Each socket has an inode number, and that inode shows up as a socket:[inode] symlink in the owning process's /proc/<pid>/fd directory. We wrote a scanner that:

  1. Reads /proc/net/tcp and /proc/net/udp
  2. Extracts hex port numbers and converts to decimal
  3. Finds listening ports (state = 0A for TCP)
  4. Iterates through each process's /proc/<pid>/fd entries looking for matching socket inodes
  5. Reads /proc/<pid>/comm to get the process name
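
Steps 1–3 above amount to splitting each row on whitespace, keeping rows whose state column is 0A, and converting the hex port. A rough sketch with a sample line inlined (illustrative names, not PortRadar's actual code):

```javascript
// Filter listening TCP sockets from /proc/net/tcp-style content.
// Column 3 is the state (0A = LISTEN); column 9 is the socket inode.
function listeningSockets(procNetTcp) {
  return procNetTcp
    .trim()
    .split('\n')
    .slice(1) // drop the header row
    .map((line) => line.trim().split(/\s+/))
    .filter((cols) => cols[3] === '0A')
    .map((cols) => ({
      port: parseInt(cols[1].split(':')[1], 16), // hex port -> decimal
      inode: cols[9],
    }));
}

const sample = `  sl  local_address rem_address   st tx_queue rx_queue tr tm->when retrnsmt   uid  timeout inode
   0: 0100007F:9609 00000000:0000 0A 00000000:00000000 00:00000000 00000000  1000        0 12345 1`;

console.log(listeningSockets(sample)); // [ { port: 38409, inode: '12345' } ]
```

From there, step 4 is a scan over /proc/<pid>/fd symlinks looking for `socket:[12345]`, and step 5 reads that pid's comm file.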

It worked. The first run found six ports on my machine, including the OpenClaw gateway ports I didn't even know were running.

Phase 2: Docker Integration

Adding Docker was conceptually simple but had a subtle challenge: Docker's port mapping works differently from host ports. A container might listen on port 80 internally but only publish to host port 8080. Or not publish at all.

We needed to show both: the container's internal port and the host port it publishes to, if any.

The Docker Engine API makes this straightforward. Using the dockerode library:

const Docker = require('dockerode');
const docker = new Docker(); // defaults to the local Docker socket

const containers = await docker.listContainers();
for (const c of containers) {
  for (const p of c.Ports) {
    // p.PrivatePort: container-internal port
    // p.PublicPort: host port, present only if published
    // p.Type: 'tcp' or 'udp'
  }
}

The result: a unified table where you see host processes on top, Docker containers below, all sorted by port number. One screen. Zero context switching.
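
Building that table is conceptually just two sorted sections stacked on top of each other. A sketch with made-up row shapes (the real tool feeds this from the /proc scanner and the Docker API):

```javascript
// Merge host-process rows and Docker rows into one table:
// host section first, Docker section below, each sorted by port.
function unifiedTable(hostPorts, dockerPorts) {
  const byPort = (a, b) => a.port - b.port;
  return [
    ...hostPorts.map((h) => ({ source: 'host', ...h })).sort(byPort),
    ...dockerPorts.map((d) => ({ source: 'docker', ...d })).sort(byPort),
  ];
}

const table = unifiedTable(
  [{ port: 38409, name: 'gateway' }, { port: 22, name: 'sshd' }],
  [{ port: 8080, name: 'nginx' }, { port: 5432, name: 'postgres' }],
);
// -> ports in order: 22, 38409, 5432, 8080
```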

Working With AI: The Real Story

Here's what I want to be honest about: this wasn't "AI did everything and I watched."

It was collaborative. I provided the domain knowledge ("I want this specific workflow"), the constraints ("needs to work without sudo"), and the product sense ("keep it simple, focus on the core"). The AI provided implementation speed, boilerplate handling, and pattern recognition.

The meta-skill isn't prompting. It's knowing what to ask for.

I didn't say "build a port scanner." I said: "I want to see host processes and Docker containers in one view, filterable, searchable, with the ability to kill processes." That's a product spec, not a code request.

What AI Did Well

Where I Had to Guide

The Polish Phase (Which We Skipped)

In our original plan, there were 4 phases:

  1. Host port scanner ✓
  2. Docker integration ✓
  3. UI polish (skipped)
  4. Deploy and blog (now)

We skipped Phase 3. The TUI was functional. It had colors, filtering, search, kill/stop actions. Was it perfect? No. But it was useful.

Ship at 80%, polish at 100%. Not the other way around.

What This Signals

What's Next

PortRadar is live on my portfolio. I've used it almost daily since building it, and it has already caught two port conflicts that would have taken longer to debug by hand.

Future improvements: column sorting, process tree visualization, macOS compatibility, historical port usage logging. But those are nice-to-haves. The core is done.


PortRadar source code: github.com/gbose01/portradar