How to Eat an Elephant: High-Level Estimation

At the start of every new year, I find myself thinking less about goals and more about how I want to approach the work ahead. One lesson that keeps returning to me—quietly but insistently—is about estimation, uncertainty, and what it really means to take on something that feels impossibly large.

Several years ago, I was asked to help lead a project to move 53 applications from on-prem infrastructure to AWS. On paper, it sounded straightforward. In practice, it was anything but. These applications had grown over time, written in different languages—Python, a lot of Perl, some Java—each with its own history and quirks. None were containerized. None were cloud-native. And while we had ideas about what might eventually be retired or modernized, we also knew that many of those decisions could only be made once we got closer to the work.

We weren’t just moving software. We were learning AWS. We were learning containerization. And we were setting a stretch goal to be one of the first teams to run on Kubernetes. I remember sitting with that ask and feeling the weight of it. Not fear exactly, but respect for how big it really was. The kind of respect that makes you pause before pretending you have answers.

We started by slowing down. Before timelines or promises, we built an application inventory. We documented what each system did, where it lived, what it was written in, and what we thought might change over time. It wasn’t glamorous work, but it gave us something essential: a shared understanding of what was actually on the table. It didn’t reduce the size of the elephant, but at least we could see it clearly.
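If it helps to picture what an inventory like that can look like, here is a minimal sketch in Python. The application names and fields are hypothetical, not the actual columns we used; the point is only that each entry captures just enough structure to support comparison later.

```python
from dataclasses import dataclass, field

@dataclass
class AppRecord:
    """One row of the application inventory: just enough to compare apps later."""
    name: str
    language: str              # e.g. "Python", "Perl", "Java"
    hosting: str               # where it lives today, e.g. "on-prem VM"
    containerized: bool        # none of ours were at the start
    likely_future: str         # "migrate", "modernize", "retire", or "unknown"
    notes: list[str] = field(default_factory=list)  # quirks, assumptions, open questions

# Hypothetical entries, purely for illustration.
inventory = [
    AppRecord("course-catalog", "Perl", "on-prem VM", False, "migrate",
              ["heavy cron usage", "owns its own database schema"]),
    AppRecord("grade-export", "Python", "on-prem VM", False, "modernize",
              ["tight coupling to LDAP"]),
]

# A shared, written-down inventory is what made later comparisons possible.
for app in inventory:
    print(f"{app.name:15} {app.language:7} future: {app.likely_future}")
```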

At one point, I found myself in a room with my developers, inventory spread out, trying to make the problem smaller. We chose one application—one that felt fairly average—and talked it through in detail. What would it take to containerize? How much refactoring might be involved? What would testing look like? How would we deploy it safely? How would we know it was performant for our users? There were plenty of “it depends” moments and a lot of uncertainty, but there was also something else happening: alignment. By the end of that conversation, we didn’t have a date, but we did have a shared sense of effort. That one application became our reference point.

From there, everything else was comparison. Is this harder than that one? Easier? About the same? And when we talked about difficulty, we weren’t just talking about code. We were talking about complexity in the fullest sense—development, testing, deployment, performance, operational risk. Whenever something felt too big or too fuzzy, it told us we didn’t understand it well enough yet. Sometimes that meant splitting the work. Sometimes it meant accepting that learning would come first.
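For readers who like to see the mechanics, here is a rough sketch of how that kind of comparison can be turned into a number. The reference effort, the multipliers, and the application names are illustrative assumptions, not the figures we actually used; the useful part is the habit of sizing everything relative to one well-understood reference and flagging anything still too fuzzy to size.

```python
# A rough sketch of comparison-based sizing. The reference effort and the
# multipliers below are illustrative assumptions, not the figures we used.

REFERENCE_EFFORT_WEEKS = 8  # the "fairly average" app we talked through in detail

# Each app is sized only relative to the reference: easier, about the same, or harder.
RELATIVE_SIZE = {
    "easier": 0.5,
    "same": 1.0,
    "harder": 2.0,
    "unknown": None,  # too fuzzy: split the work or learn more before sizing it
}

# Hypothetical applications, purely for illustration.
apps = {
    "grade-export": "easier",
    "course-catalog": "harder",
    "legacy-reporting": "unknown",
}

total = 0.0
for name, size in apps.items():
    multiplier = RELATIVE_SIZE[size]
    if multiplier is None:
        print(f"{name}: not sized yet, we don't understand it well enough")
        continue
    effort = multiplier * REFERENCE_EFFORT_WEEKS
    total += effort
    print(f"{name}: ~{effort:.0f} weeks ({size} than the reference)")

print(f"Sized work so far: ~{total:.0f} weeks, plus everything still marked unknown")
```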

We made a habit of writing down our assumptions and risks alongside each estimate. At the time, it felt like a small thing. Later, those notes became anchors—reminders of what we thought we knew and where we expected surprises. They mattered far more than the numbers ever did.

Eventually, I sat down with my operations partner to compare notes. I still remember him saying, almost apologetically, that he’d never estimated anything like this before. I had a bit of an advantage there. Earlier in my career, I’d done consulting work where large, high-level estimates were part of the job. I knew how uncomfortable it could feel to name something you knew wasn’t precise—and how important it was to do it anyway.

We laid our perspectives side by side and talked through what could happen in parallel, where dependencies lived, and where learning curves would slow us down. What we came away with wasn’t certainty. It was clarity. Our best estimate landed at just over three years. Not as a promise. Just as an honest sense of scale.

In the end, the work took almost four years. And if that sounds like a miss, I’d argue it wasn’t. During that time, other priorities came in. Fall Rush demanded focus to protect student experiences. Escalations happened. Staff changed. Learning took time. This team wasn’t working in isolation—we were supporting performance, upgrades, and production systems all along the way. Despite all of that, we moved every application successfully, and our users never felt the impact. That doesn’t happen by accident.

What made it sustainable was how we treated each step. We celebrated milestones. We talked openly about what worked and what didn’t. We ran retros and actually changed how we operated. We templated repeatable work. We grouped similar efforts so learning compounded. More experienced engineers tackled the harder problems first, then shared patterns so others could grow into the work. Over time, even as the work itself got more complex, it also got smoother.

People often ask me how to estimate something that big. I usually answer with a question of my own: How do you eat an elephant? One thoughtful bite at a time. Estimation, for me, isn’t about being right. It’s about being useful. It’s about creating enough shared understanding to decide whether the journey is worth taking, and then trusting the team to learn their way forward.

That’s what estimation gave us: not certainty, but alignment. Sometimes, that’s exactly what you need to begin. When we treat estimation as an act of learning rather than a demand for certainty, we give teams room to grow.

Susan Dratwa

I’m Susie Dratwa, a tech leader who believes that kindness scales. I explore what happens when you lead with empathy and build with intention, and I write about Agile, technology, servant leadership, and systems thinking.

https://kindness-2-scale.com