What I actually found
From Pi to pattern – A personal geometry shift (3/4)
Seeing differently
I thought I was designing a way to measure. Something small, contained, elegant: a ratio to simplify the relationship between circle and square. I had found that a circle perfectly inscribed in a square occupies 78.54% of its area. That seemed useful. It helped me scale. It helped me calculate.
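That 78.54% isn't an empirical measurement; it falls straight out of the classic area formulas. A quick sketch of the arithmetic:

```python
import math

# A circle of radius r inscribed in a square of side 2r:
#   circle area / square area = (pi * r**2) / (2 * r)**2 = pi / 4
ratio_2d = math.pi / 4
print(round(ratio_2d, 4))  # 0.7854, i.e. 78.54%
```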
But the more I worked with this ratio, the less it felt like a number.
It began to feel like a language.
A language of visual structure. One that describes not just how big something is, but how it behaves. How it deviates. How it fits. Or doesn’t.
That’s when I realized: I hadn’t built a tool. I had uncovered a new way of seeing geometric structure, not as fixed formulas, but as relational logic.
From measurement to structure
In the world of geometry, we often focus on exactness. Length, angle, radius, formula. But in the digital world, forms aren’t continuous. They’re composed of pixels. Bits. Finite points.
You can’t apply irrational numbers in a pixel-based system and expect perfect continuity. That’s not how digital reality works.
So what does work? Structure. Proportion. Pattern.
GRM started revealing forms not by naming them, but by measuring their structural consistency within a square. A form with a ratio of 0.7854 was likely a perfect circle. 0.76? Maybe an ellipse. 0.80? Something slightly inflated. The point is: we didn’t have to label it. We could see its relationship.
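To make that concrete, here is a minimal sketch of the idea in Python. The function names, grid size, and tolerance are mine, chosen for illustration; this is not code from any GRM implementation. It rasterizes a circle inside its bounding square, counts the filled pixels, and reads the fill ratio against the expected π/4:

```python
import math

def fill_ratio(n=400):
    """Fraction of an n-by-n pixel square covered by the inscribed circle."""
    r = n / 2
    inside = sum(
        1
        for y in range(n)
        for x in range(n)
        if (x + 0.5 - r) ** 2 + (y + 0.5 - r) ** 2 <= r * r
    )
    return inside / (n * n)

def classify(ratio, tol=0.01):
    """Hypothetical GRM-style reading: compare a fill ratio to pi/4."""
    expected = math.pi / 4
    if abs(ratio - expected) <= tol:
        return "likely a circle"
    return "deflated (e.g. an ellipse)" if ratio < expected else "inflated"

print(classify(fill_ratio()))  # likely a circle
print(classify(0.76))          # deflated (e.g. an ellipse)
print(classify(0.80))          # inflated
```

Note that the classifier never names the shape from features; it only measures how the form relates to its square, and in which direction it deviates.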
And that changes everything.
GRM as a way of seeing structure
When I started to view the GRM not as a tool but as a new way of seeing geometric structure, I began to see new possibilities. Not just in 2D, but in 3D. And beyond.
If a circle fits into a square at 78.54%, then a sphere fits into a cube at 52.36%. That’s not just an interesting coincidence. That’s a dimensionally consistent principle.
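Both numbers come from the same family of formulas. Using the standard volume formula for the unit n-ball (this generalization is my sketch, not part of the original GRM material), the inscribed-ball-to-cube ratio can be computed for any dimension:

```python
import math

def ball_to_cube_ratio(n):
    """Volume of the unit n-ball divided by its bounding n-cube of side 2.

    Uses the standard formula V_n = pi**(n/2) / Gamma(n/2 + 1).
    """
    return (math.pi ** (n / 2) / math.gamma(n / 2 + 1)) / 2 ** n

print(round(ball_to_cube_ratio(2), 4))  # 0.7854  (circle in square: pi/4)
print(round(ball_to_cube_ratio(3), 4))  # 0.5236  (sphere in cube: pi/6)
```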
Suddenly, the GRM isn’t locked to shape or dimension. It’s scale-independent. It can be applied to patterns in motion, in volume, in composite forms. In design systems, in machine vision, in virtual environments.
In short: GRM can offer structure before recognition. That’s a game-changer.
From prediction to understanding
Digital systems today often rely on recognition: Is this a circle? Is that a dog? Does this match a template?
But what if we taught AI not just to recognize, but to expect? If we could encode what shapes, or concepts, or structures, generally look like, we could also measure how far a given instance deviates from that expectation. And more than that: we might be able to classify the deviation itself.
What if AI could say: ‘this almost fits the expected form, but here is where it slips’, and even why?
But recognition is expensive. It takes processing power, trial and error, and millions of examples.
What if the system could see first? Could understand the structure before deciding what it is?
That’s what GRM might offer. It brings proportional expectation into the process. Not after, but before. It allows us to scale, compare, and interpret, not just detect.
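One way to picture “structure before recognition” is a cheap structural gate in front of an expensive recognizer. The sketch below is hypothetical (the labels, expected ratios, and tolerance are mine): only candidates whose measured fill ratio is structurally plausible would ever reach the heavy recognition step.

```python
import math

# Illustrative expected area ratios for a form inside its bounding square.
EXPECTED = {"circle": math.pi / 4, "square": 1.0}

def plausible_labels(fill_ratio, tol=0.03):
    """Keep only labels whose expected ratio is within tol of the measured one."""
    return [name for name, r in EXPECTED.items() if abs(fill_ratio - r) <= tol]

# The ratio narrows the search space before any recognizer runs.
print(plausible_labels(0.79))  # ['circle']
print(plausible_labels(0.99))  # ['square']
print(plausible_labels(0.60))  # []  -> nothing structurally plausible
```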
Imagine what that could mean for CAD software: forms that align by inherent structure. For the metaverse: environments that render by ratio, not coordinate. For AI: systems that reduce their search space not by guessing, but by understanding.
It might sound ambitious. But every shift begins with seeing differently.
That’s what I actually found.
And if we can teach systems to work with structure, not just labels, then we might be ready to embed GRM where it belongs: in the code itself.
→ Blog 4: Why I Still Believe in It
From pixel logic to system-level efficiency, how GRM might reduce iterations and enhance AI comprehension.