This isn't a methodology. It's a habit. Before I trust a metric, I want to understand how it was collected. Before I accept an assumption, I want to see the evidence. Before I advise on a system, I want to have touched it. The patterns below aren't techniques I invented. They're instincts developed by getting burned every time I skipped this step.

Pattern 01

Read the Logs

When something isn't working and nobody can explain why, the answer is almost always in the data that nobody is looking at. Not the dashboard. The raw data. Log files, transaction records, error traces. The layer below the summary where the actual behavior lives.

This requires a willingness to get technical enough to know what you're looking at, even if your role doesn't demand it. An advisor who can read a log file is an advisor who can't be misdirected by a status report.

In practice

A VoIP platform was failing its proof test with a major 911 call center. The team had been testing for days and couldn't find the issue. I sat down with the testers and went through the log files line by line. The routing logic was mishandling a specific case. I filed the bug, worked with engineering to get it fixed immediately, and we completed the call successfully. It was the first VoIP call automatically routed to a 911 call center in the United States.

The issue was visible in the logs the entire time. Nobody had looked at that level of detail.
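The habit itself can be small. A minimal sketch of the idea, assuming a hypothetical log format with a ROUTE_FAIL marker and a trailing reason code (the real platform's logs were not this tidy):

```shell
#!/bin/sh
# Illustrative only: the log format, the ROUTE_FAIL marker, and the
# reason-code field are assumptions, not the actual platform's output.
# Pull every routing failure out of a raw log and group by reason code,
# so a single failing case stands out instead of vanishing into a
# dashboard-level success rate.
summarize_failures() {
    grep 'ROUTE_FAIL' "$1" \
        | awk '{print $NF}' \
        | sort | uniq -c | sort -rn
}

# Usage: summarize_failures routing.log
```

Nothing sophisticated, just a willingness to look one layer below the summary.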

Pattern 02

Use Your Own Product

There is no substitute for firsthand experience with the thing you're responsible for. Not a demo. Not a walkthrough from engineering. Actual use, under real conditions, with the same constraints your customers face. The gap between what the team thinks the experience is and what the experience actually is can be enormous.

This extends to knowing your competition at the same depth. If you can't articulate why your product is better from direct experience with both, you're selling on faith.

In practice

At a consumer voice service provider, I set up a lab in the office running every major competitor's service so the entire team could benchmark quality firsthand. I also installed our own service at home and tested it at random times throughout the week, varying connection speeds, running concurrent downloads, and stress-testing call quality under real conditions.

Separately, at a wireless broadband company, I drove around with our hotspot devices measuring connectivity and documenting the actual user experience on real roads in real coverage areas. The performance data from those drives directly informed what we communicated to customers and what we escalated to engineering.
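Stripped to its essentials, that kind of field measurement is just a loop that samples and timestamps. A rough sketch, assuming a placeholder probe host and standard ping output; the real drive tests involved far more than latency:

```shell
#!/bin/sh
# Illustrative sketch: record timestamped latency samples so real-world
# performance can be compared against what the spec sheet claims.
# The probe host, interval, and log file below are placeholders.
probe_latency() {
    host="$1"
    # Ping once and extract the round-trip time in ms from a line
    # like "... time=12.3 ms"; print "epoch-seconds rtt-or-FAIL".
    rtt=$(ping -c 1 "$host" 2>/dev/null \
        | awk -F'time=' '/time=/{split($2, a, " "); print a[1]}')
    printf '%s %s\n' "$(date +%s)" "${rtt:-FAIL}"
}

# Example: one sample per minute, appended to a log for later analysis.
# while true; do probe_latency example.net >> drive_test.log; sleep 60; done
```

The point isn't the tooling. It's that the numbers come from the same conditions a customer would actually experience.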

Pattern 03

Be the Last Person Standing

Sometimes ground-truth validation isn't a choice. It's a survival requirement. When the team shrinks, the documentation is incomplete, and the system still needs to run, the person who understands the system at the deepest level becomes the most valuable person in the room.

The willingness to go that deep, not because someone asked but because the situation demanded it, builds a kind of credibility that no title or certification can replicate. When I advise on systems today, that advice is informed by having kept systems alive with my own hands.

In practice

I was supporting a hosted transaction automation platform that required significant manual configuration and monitoring. The system wasn't built for the operational demands being placed on it. When the company went through a downsizing, I was the last person left keeping it running.

I worked with an engineer to understand what was possible, taught myself shell scripting, and built a menu-driven set of management scripts that simplified configuration, monitoring, and maintenance. Those scripts were eventually productized and sold as a management package for $150K. What started as a necessity became a product because the ground-truth understanding of the system's actual operational needs was deep enough to build something useful on top of it.
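The shape of those scripts was simple: a menu up front, a dispatcher behind it, and the operational knowledge encoded in each branch. A sketch of that shape only; every command and path in the original is replaced here with a labeled placeholder:

```shell
#!/bin/sh
# Illustrative sketch of a menu-driven management script. The option
# bodies are placeholders, not the original platform's commands.
show_menu() {
    cat <<'EOF'
1) Show service status
2) Tail the transaction log
3) Reload configuration
q) Quit
EOF
}

run_choice() {
    case "$1" in
        1) echo "status: placeholder check" ;;       # e.g. check the daemon is up
        2) echo "tail: placeholder log view" ;;      # e.g. follow the raw log
        3) echo "reload: placeholder config push" ;; # e.g. signal a config reload
        q) return 1 ;;
        *) echo "unknown option: $1" ;;
    esac
}

# Main loop (commented out so the sketch stays non-interactive):
# while show_menu; do
#     printf '> '; read -r choice
#     run_choice "$choice" || break
# done
```

The value wasn't in the shell syntax. It was that every menu option captured something about how the system actually behaved under load, learned by running it.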

Strategy built on secondhand information is strategy built on sand. The willingness to go to the source is what separates advice that holds from advice that sounds good in a slide deck.

This principle isn't about distrusting your team. It's about respecting the complexity of the systems you're responsible for. Metrics are summaries. Summaries are lossy. The person who regularly touches the raw material develops an intuition that the person reading the dashboard never will.

Every time I've been right about something that surprised other people in the room, it was because I'd gone one level deeper than anyone expected me to. That's not brilliance. It's just showing up where the information actually lives.