From Pi to Pattern – A Personal Geometry Shift (2/4)

Bounding boxes and the promise of visual logic

In my previous post, everything started with a circle and a square.
Not just any square, but a bounding box. A simple container around the circle.

What immediately intrigued me was the reciprocity between them.

The square's side defines the circle's diameter.
The radius is half that side.
And when the circle fits perfectly inside, it reflects something back: a fixed proportional relationship.

It wasn’t just a one-way definition.
It was a feedback loop. A kind of geometric dialogue.
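That two-way dialogue can be sketched in a few lines. This is a minimal illustration in Python; the function names are my own, not from any library:

```python
# The square fixes the circle; the circle recovers the square.

def circle_from_box(side):
    """Given the bounding box's side, derive the inscribed circle."""
    diameter = side       # the side is the diameter
    radius = side / 2     # the radius is half the side
    return diameter, radius

def box_from_circle(radius):
    """Given the circle's radius, recover the bounding box's side."""
    return 2 * radius

d, r = circle_from_box(10)   # d = 10, r = 5
side = box_from_circle(r)    # side = 10: the loop closes
```

Either shape fully determines the other, which is the feedback loop in miniature.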

And beyond the mathematics, there was something else that mattered.

The box could scale.

Not only numerically, but visually.

It made calculations simpler, yes.
But more importantly, it made them visible.

And in a digital world, visibility is not a luxury. It’s foundational.

That’s when a new question emerged:

If this structure scales cleanly, in space, in code, and on screen…
what else might it allow us to do?


What I thought I was building

At first, I believed I was working on a very small problem.

That stubborn remainder you encounter when you try to wrap a radius around a circle's circumference: six copies fit, and a little over a quarter of one is always left over.
The part that π captures elegantly, but that never quite settles into a rational or visual form.

I was interested in closure.
In describing the circle not as an endless approximation, but as a complete presence.

Because in practice, a circle has a definite perimeter.
A definite area.
It occupies space fully and concretely.

So I leaned on a familiar insight from classical geometry:
a perfectly inscribed circle always occupies the same proportion of its square — π/4, just over 78.5% of the square's area.

That relationship gave me a shortcut.

Instead of recalculating curves over and over again, I could scale from the square.
The container carried the information.
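The shortcut itself is one line of arithmetic. A hedged sketch, assuming only the classical π/4 ratio; `circle_area_from_box` is my own name for it:

```python
import math

def circle_area_from_box(side):
    """Area of the circle inscribed in a square of the given side."""
    return (math.pi / 4) * side ** 2

# Scaling the square scales the circle with it; the proportion never moves.
for side in (1, 10, 100):
    ratio = circle_area_from_box(side) / side ** 2   # always pi/4
```

No curve is ever recalculated; the container carries the information at every scale.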

It was elegant.
It was useful.

And it was not new.

Yet something about it wouldn’t let go.


When pixels entered the picture

The real shift came when I started looking at this idea through a digital lens.

In digital environments, shapes are not continuous.
They are composed of discrete elements. Pixels. Units. On or off.

At first glance, that feels like a limitation.

But I began to see it differently.

Because discreteness allows counting.
And counting allows comparison.

Instead of describing shapes purely through formulas, it became possible to look at how much space they actually occupy within a frame.

Not theoretically.
But measurably.

Suddenly, proportion wasn’t just something you calculated.
It was something you could observe.

That realization quietly changed the role of the model for me.

GRM was no longer just a geometric curiosity.
It started to feel like a way for digital systems to reason about visual structure without needing to resolve everything into abstract constants.

A way to describe form through occupation.
And deviation through difference.
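Occupation and deviation can both be measured directly. The sketch below rasterizes an inscribed circle on an n-by-n grid, counts the pixels it covers, and compares that measured occupancy with the theoretical π/4; the pixel-center sampling rule and the function name are my assumptions, not anything prescribed by GRM:

```python
import math

def measured_occupancy(n):
    """Fraction of an n x n frame covered by its inscribed circle."""
    r = n / 2
    cx = cy = (n - 1) / 2            # center of the pixel grid
    inside = sum(
        1
        for y in range(n)
        for x in range(n)
        if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2   # pixel center in circle
    )
    return inside / (n * n)

occ = measured_occupancy(512)
deviation = occ - math.pi / 4        # difference read as information, not error
```

The count converges toward π/4 as the grid grows finer, and the remaining difference is exactly the kind of deviation the model treats as signal.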


A shift I didn’t see coming

That’s when I noticed something unexpected.

The question had shifted.

It was no longer:
“How can I calculate shapes more efficiently?”

It had become:
“How can a system see structure before it interprets it?”

GRM began to feel less like a model and more like a lens.
A way of organizing visual information before labels, before names, before assumptions.

Not to eliminate deviation.
But to understand it.

Not as error.
But as information.

And if that line of thinking holds, then the implications go far beyond circles and squares.

They touch on how we reason about images.
How we interpret variation.
And how digital systems might one day distinguish between resemblance and structure.

That’s where this path leads next.

Blog 3: What I Actually Found
How GRM became a way of seeing, not just a way of calculating.