# Weekend Project: Voronoi Mosaics

My employer, Groq, gave us a three-day weekend for Juneteenth, so I decided to use the time for a fun weekend programming project. Here, I’ll explain what I did, why I did it, and share some intriguing results!

I’ve always thought Voronoi diagrams are oddly beautiful. A simple idea, choosing the regions of a plane closer to one reference point than to any other, leads to patterns that feel simple and geometric yet mysterious and enigmatic at the same time. While easy to define, Voronoi cells can have striking and complex shapes, making them a great tool for mixing mathematics with artistic endeavors.

I set out to explore these structures in an applied way, by turning an arbitrary picture into a mosaic of Voronoi cells. The results? Mixed. The tool produces an output image composed of solid-colored Voronoi cells derived from the input image. While it isn’t perfect and there’s certainly room for refinement, I’m excited by both the initial results and the ideas I have for further progress.

In the upcoming sections, I will delve into the magic of Voronoi diagrams, the intricacies of image representations, and the complex decision-making involved in the image transformation process.

Whether you’re a programmer intrigued by the mix of computational geometry and image processing, a mathematician fascinated by applications of Voronoi diagrams, or simply someone who appreciates this blend of art and technology, I invite you to explore, or even contribute to, this fascinating project with me. The tool is an open exploration into how to combine art, mathematics, and programming to create unique and visually compelling images. I hope you find something here that entertains, intrigues, or perhaps inspires your own future projects.

You can find the latest code for this project at https://github.com/cdsmith/voronoizer

# The Voronoi diagram

A Voronoi diagram, named after the Russian mathematician Georgy Voronoi, is a partition of the plane into regions based on distance to a set of reference points. These points are also sometimes referred to as “sites” or “seeds”. Each region, known as a Voronoi cell, contains all points that are closer to their assigned reference point than to any other. A simple definition, to be sure, but the beauty of the Voronoi diagram lies in the intricate complexity that emerges from this simple rule.
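To make that definition concrete, here’s a tiny self-contained Haskell sketch (not the project’s code) that assigns a query point to a cell by brute force, choosing the site at minimum squared distance:

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

type Site = (Double, Double)

-- Squared Euclidean distance; the square root is unnecessary for comparisons.
squareDist :: Site -> Site -> Double
squareDist (x1, y1) (x2, y2) = (x1 - x2) ^ (2 :: Int) + (y1 - y2) ^ (2 :: Int)

-- The Voronoi cell containing p is the one belonging to its nearest site.
nearestSite :: [Site] -> Site -> Site
nearestSite sites p = minimumBy (comparing (squareDist p)) sites
```

Coloring every pixel by `nearestSite` against a fixed site list paints exactly the partition described above; the interesting part, as we’ll see, is choosing the sites.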

While Voronoi diagrams can be profoundly useful for various modeling tasks — such as identifying ideal locations for new businesses based on geographic proximity — I am more interested in the mathematical structure and aesthetic appeal of the construction. In the spirit of mathematician G. H. Hardy, who famously asserted that the beauty of mathematics is intrinsic and not merely a byproduct of its usefulness (indeed, he might have facetiously said *in spite of* its usefulness), I will now set aside those practical uses and explore the patterns and artistic expression that arise from a set of reference points and their Voronoi diagram.

So why are these diagrams suited to image representation?

- **Simplicity:** At its core, a Voronoi decomposition stems from a set of significant locations within an image, elegantly capturing structure and shape with a mere set of points. What could be simpler?
- **Efficiency:** A Voronoi diagram forms cells that can adapt their shape to follow the structure of the image, a feature not shared by techniques like uniform tiling. This leads to a more accurate and efficient representation of the image.
- **Aesthetic appeal:** The unpredictable yet structured shapes in a Voronoi diagram contribute a dynamic variety and surprise, adding their own captivating artistic elements to the image.

Through this project, I have found a renewed appreciation for the interplay of structure and randomness that characterizes Voronoi diagrams. It’s a testament to the power of simple rules to create captivating complexity.

# Choosing Reference Points

When it comes to creating a Voronoi mosaic, the selection of reference points is the essence of the task. While the allure of this art form lies in the peculiar shapes that appear in the Voronoi diagram, it’s your choice of reference points that brings that into being. Choosing them haphazardly will result in a Voronoi diagram, to be sure, but careful placement of these points dictates the level of detail, form, complexity, and aesthetics of the final result, making this the heart and soul of the endeavor.

So how do we select these pivotal points? Striking a balance between simplicity and fidelity is key. Choose too many points, and the cells shrink, causing the mosaic structure to disappear. Opt for too few, and the essence of the original image is lost. A smart placement of reference points resolves this tension, giving rise to fascinating results.

After experimenting with numerous strategies, I settled on a divide-and-conquer approach as a way to choose an effective distribution of reference points.

The overarching code is intuitive and predictable.

```haskell
chooseReferencePoints ::
  DivisionParams -> Grid CIELab -> BoundingBox -> Set Point
chooseReferencePoints params colors box
  | shouldStop params colors box = Set.singleton (centerPoint box)
  | otherwise =
      foldMap
        (chooseReferencePoints params colors)
        (divide params colors box)
```

There are some user-tunable parameters represented by `DivisionParams`, and `Grid CIELab` is a representation of the source image, which we’ll get back to later. But the main elements of this code are as follows.

- The algorithm selects a set of reference points within a bounding box — an axis-aligned rectangle that limits each branch of the divide-and-conquer algorithm.
- The `shouldStop` function determines if a portion of the image is simple enough to represent with a single reference point. If so, a reference point is placed at the center of the bounding box.
- Otherwise, the `divide` function subdivides the bounding box into smaller boxes, recursively selecting reference points within each.

Both of these decisions, embodied by the `shouldStop` and `divide` functions, play vital roles in the process.

## The shouldStop function

Firstly, `shouldStop` serves as the termination condition for the divide-and-conquer method. The code looks like this.

```haskell
shouldStop :: DivisionParams -> Grid CIELab -> BoundingBox -> Bool
shouldStop params colors box =
  boxedArea box < areaThreshold
    || meanSquaredError colors box < errorThreshold params colors box
  where
    areaThreshold = targetArea params / 10
```

The first condition, involving `areaThreshold`, is really more of a safety net than anything else. The parameters include a target area for Voronoi cells, but individual cells can be larger or smaller than the target depending on the level of detail in the image. Nevertheless, if the bounding box gets smaller than 10% of the target area for a cell, it’s time to stop regardless of the image structure. This prevents Voronoi cells from getting unreasonably small.

But the main condition governing termination of the divide-and-conquer involves the mean squared error, or MSE, that would result from approximating a region by its average color.

Actually calculating this mean squared error relative to the average color looks about how you’d expect.

```haskell
meanSquaredError :: Grid CIELab -> BoundingBox -> Float
meanSquaredError colors box =
  sum (colorSquaredError avgColor <$> boxedColors)
    / fromIntegral (length boxedColors)
  where
    boxedColors = atPoint colors <$> boxedPixels box
    avgColor = averageColor boxedColors
```

This uses a helper function to compute the squared error between two colors.

```haskell
colorSquaredError :: CIELab -> CIELab -> Float
colorSquaredError (CIELab l1 a1 b1) (CIELab l2 a2 b2) =
  dl * dl + da * da + db * db
  where
    (dl, da, db) = (l1 - l2, a1 - a2, b1 - b2)
```

The most notable thing about these computations, as I alluded to earlier, is that they involve values of the type `CIELab`, which represents a color in the CIE L*a*b* color space.

While it’s commonplace to represent colors using red, green, and blue components, there are problems with doing so. There are colors that look very similar to the human eye, but have very different RGB coordinates; and conversely, colors that look very different but have similar RGB coordinates. For most applications, this isn’t a big deal, but when we’re very concerned with minimizing color distance, we’d really like to focus on the differences that people see.

This is where the CIE L*a*b* color space comes in. Although not perfect, it’s at least approximately *perceptually uniform*, meaning that the ordinary (“Euclidean”) notion of distance between two colors matches human perception much better than RGB does. Swapping from RGB to a perceptually uniform color space makes the result of averaging and measuring distance between colors far more reliable in representing what the human eye will see.
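For reference, here’s a self-contained sketch of the standard sRGB-to-L*a*b* conversion (D65 white point), which a pipeline like this needs at its input boundary. The function names are mine; the project’s own conversion lives in the repository.

```haskell
-- Sketch of the standard sRGB -> CIE L*a*b* conversion (D65 white point).
-- Inputs are sRGB components in [0, 1]; names here are illustrative only.
srgbToLab :: Double -> Double -> Double -> (Double, Double, Double)
srgbToLab r g b = (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))
  where
    -- Undo the sRGB gamma curve.
    linearize c
      | c <= 0.04045 = c / 12.92
      | otherwise = ((c + 0.055) / 1.055) ** 2.4
    (r', g', b') = (linearize r, linearize g, linearize b)
    -- Linear RGB to CIE XYZ.
    x = 0.4124 * r' + 0.3576 * g' + 0.1805 * b'
    y = 0.2126 * r' + 0.7152 * g' + 0.0722 * b'
    z = 0.0193 * r' + 0.1192 * g' + 0.9505 * b'
    -- Normalize by the D65 reference white.
    (xn, yn, zn) = (0.95047, 1.0, 1.08883)
    f t
      | t > (6 / 29) ** 3 = t ** (1 / 3)
      | otherwise = t / (3 * (6 / 29) ** 2) + 4 / 29
    (fx, fy, fz) = (f (x / xn), f (y / yn), f (z / zn))
```

Once colors are in this space, the plain Euclidean distance computed by `colorSquaredError` approximates perceptual difference.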

Once we’ve computed the MSE, it is compared to a threshold to decide whether to stop and place just one Voronoi reference point in the center of this region, or further subdivide the region to better represent the source image.

To determine the error threshold, we start by computing a target mean-squared-error (MSE) that we’re aiming for. This depends crucially on aspects of the image like noise and contrast, so it cannot be determined theoretically. Instead, it’s measured before the divide-and-conquer algorithm begins, by observing what mean-squared-error values we tend to get from windows of the source image whose size is close to the target Voronoi cell area.
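The post doesn’t show that measurement code, but the idea can be sketched like this. Everything here is a simplified stand-in, not the project’s code: an “image” is just a pixel lookup plus dimensions, and colors are single `Float`s for brevity.

```haskell
-- Hypothetical sketch of estimating a target MSE empirically: tile the
-- image with windows about the size of a target cell and average the
-- per-window MSEs.
windowMSE :: (Int -> Int -> Float) -> (Int, Int, Int, Int) -> Float
windowMSE px (x0, y0, w, h) =
  sum [(px x y - mean) ^ (2 :: Int) | (x, y) <- pts] / n
  where
    pts = [(x, y) | x <- [x0 .. x0 + w - 1], y <- [y0 .. y0 + h - 1]]
    n = fromIntegral (length pts)
    mean = sum [px x y | (x, y) <- pts] / n

estimateTargetMSE :: (Int -> Int -> Float) -> Int -> Int -> Int -> Float
estimateTargetMSE px imgW imgH side =
  sum mses / fromIntegral (length mses)
  where
    mses =
      [ windowMSE px (x, y, side, side)
        | x <- [0, side .. imgW - side],
          y <- [0, side .. imgH - side]
      ]
```

A perfectly flat image yields a target MSE of zero, while a noisy or high-contrast image yields a larger one, which is exactly the adaptivity we want from the threshold.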

Using the observed target MSE as a threshold would look like this:

```haskell
errorThreshold :: DivisionParams -> Grid CIELab -> BoundingBox -> Float
errorThreshold params colors box = targetMSE params
```

That doesn’t work out very well on its own, though. We tend to get too much variation in the sizes of Voronoi cells, and some regions of the image are subdivided regularly all the way down to that safety net area threshold we saw earlier, producing an unappealing sort of uniform-size grid pattern that doesn’t reflect the shape of the image at all.

To fix this, we add a parameter called *uniformity*, which tries to prevent the size of Voronoi cells from varying too much. Remember: we want *some* variation to keep the mosaic interesting and diverse. It’s just important to balance that with pressure in the other direction so it doesn’t get out of hand.

We’ll do this with the target area again. If the region is already smaller than the target area, we want to increase the error threshold to make it less likely to be further subdivided. If it’s larger than the target area, though, we want to decrease the error threshold so it’s more likely to be divided. Here’s what that looks like:

```haskell
errorThreshold :: DivisionParams -> Grid CIELab -> BoundingBox -> Float
errorThreshold params colors box =
  targetMSE params
    * ((1 + targetArea params) / (1 + boxedArea box)) ** uniformity params
```

The uniformity parameter determines how the error threshold adjusts with region size. A uniformity of 0 is equivalent to just using the target MSE as the threshold, as we did originally. Higher uniformities consider the size difference to varying degrees. I’ve found that 0.5 is often a good choice.
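To get a feel for the numbers (with made-up values, not ones from the project), we can pull the size-dependent scaling factor out on its own:

```haskell
-- The size-dependent factor applied to the target MSE, extracted from
-- the errorThreshold formula purely for illustration.
thresholdFactor :: Float -> Float -> Float -> Float
thresholdFactor targetArea boxArea uniformity =
  ((1 + targetArea) / (1 + boxArea)) ** uniformity
```

At uniformity 0.5, `thresholdFactor 400 100 0.5` is about 1.99 and `thresholdFactor 400 1600 0.5` is about 0.50: a box a quarter of the target area gets roughly double the threshold, a box four times the target area gets roughly half, and uniformity 0 always yields a factor of exactly 1.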

There’s a final factor in the error threshold in the current version of the code, which looks at how close the region is to the center of the image. The idea here is that the center of the image is likely to be the focal point, and therefore should be rendered with a greater level of detail, while the edges are more likely background, and it’s okay to lose detail there. You can see how this is done by following the source repository link I shared above.

## The divide function

The next pivotal choice in our divide-and-conquer algorithm involves the division process once we determine the algorithm should not stop.

A straightforward approach such as dividing the region into four quadrants could work, but I found that making the division based on the effect on mean-squared-error (MSE) is more effective. It not only encourages division at natural boundaries into color-uniform pieces, but also enables less grid-like reference point selection, making the resulting Voronoi diagram more aesthetically appealing.

In general, making a region smaller tends to decrease the MSE: each pixel contributes a greater share to the average color, so the average ends up closer to the individual pixels. Conversely, making a region larger tends to increase the MSE. Using this observation, we use a binary search to find a division of the region into two parts such that the MSE is evenly divided between the halves.

First, we choose whether to divide horizontally or vertically. This choice is made by just selecting the longer dimension, keeping the regions more square and less long and skinny.

```haskell
divide :: DivisionParams -> Grid CIELab -> BoundingBox -> [BoundingBox]
divide params colors box
  | boxWidth box > boxHeight box = divideHorizontally params colors box
  | otherwise = divideVertically params colors box
```

The `divideHorizontally` and `divideVertically` functions are symmetrical, so I’ll only show you one of the two.

```haskell
divideHorizontally ::
  DivisionParams -> Grid CIELab -> BoundingBox -> [BoundingBox]
divideHorizontally params colors box = go 0 (boxWidth box)
  where
    go x width
      | width < 4 = [left, right]
      | leftCost < rightCost = go (x + halfWidth) halfWidth
      | otherwise = go x halfWidth
      where
        halfWidth = width `div` 2
        (left, right) = splitAtXOffset box (x + halfWidth)
        leftCost =
          meanSquaredError colors left
            / errorThreshold params colors left
        rightCost =
          meanSquaredError colors right
            / errorThreshold params colors right
```

This is a pretty straightforward binary search. We’re using the same error threshold here as we did earlier, to scale the cost associated with the left and right subregions. Just as varying the threshold with box size helped to avoid extreme decisions about when to stop, it also helps avoid extreme decisions about where to subdivide regions.

## Results

Despite all the axis-aligned rectangles in the divide-and-conquer algorithm, don’t be deceived. The actual Voronoi diagram boundaries don’t align with the axes at all, as you can see here.

This is just the starting line. There’s vast potential for improvement. You might already spot instances where the decomposition could have chosen better. What if the division occurred on the shorter axis, despite producing elongated regions? Perhaps minimizing the sum of MSEs, even if the sides are uneven, would yield better results. Perhaps splits should happen along a diagonal line instead of aligning with the x and y axes. These future possibilities are all part of the journey in refining this method.

At its core, the Voronoi diagram isn’t simply an approximation of the image; it’s a reinterpretation. It breaks down the original image into its most critical regions, both large and small, and recomposes them into a new mosaic. It’s familiar, yet distinct. The balance between capturing enough detail to stay true to the source material while allowing the unique qualities of the Voronoi diagram to shine through is the essence of creating these intriguing mosaics.

# Crafting the Voronoi diagram

The creation of the Voronoi diagram is where our carefully chosen reference points finally come to fruition. We now generate the mosaic by coloring each Voronoi cell with the average of all pixel colors that fall within the cell. This assigns each cell the color that best represents the source image.

To construct the diagram, we’ll need to choose the nearest reference point to each pixel of the image. The efficiency of this operation relies on the use of an appropriate spatial data structure that allows quick location-based queries to locate the nearest reference point. For this purpose, I have employed a kd-tree, which is a commonly used spatial data structure. Although I used the *kdt* package for Haskell, there are many options available in Haskell and most other programming languages.

Nevertheless, efficient doesn’t mean free! To further optimize the process, I calculate the nearest reference point for each pixel once, and store the result in a new grid matching the original image dimensions.

```haskell
nearestNeighbors :: Grid a -> KdTree Float Point -> Grid Point
nearestNeighbors grid tree = generateGrid width height pixel
  where
    width = gridWidth grid
    height = gridHeight grid
    pixel x y = KdTree.nearest tree (Point x y)
```

After finding the nearest neighbors, the next step is to compute the average color of each Voronoi cell. This is done by iterating over all pixels of the source image, summing the colors associated with each reference point, and then dividing each sum by the number of pixels.

```haskell
averageColors ::
  (Ord key, Storable key) =>
  Grid CIELab ->
  Grid key ->
  Map key CIELab
averageColors colors keys =
  toAverage
    <$> Map.fromListWith
      (<>)
      [ (atPoint keys p, trivialSum (atPoint colors p))
        | p <- boxedPixels (wholeImage colors)
      ]
```
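The `SumAndCount` type used here through `trivialSum` and `toAverage` isn’t shown in the post. Here’s a plausible self-contained sketch of what it could look like, with its own `CIELab` definition so it stands alone; the names and details are my guesses, and the real definitions are in the repository.

```haskell
data CIELab = CIELab Float Float Float deriving (Eq, Show)

-- Accumulates a running sum of color components and a pixel count.
data SumAndCount = SumAndCount !Float !Float !Float !Int

instance Semigroup SumAndCount where
  SumAndCount l1 a1 b1 n1 <> SumAndCount l2 a2 b2 n2 =
    SumAndCount (l1 + l2) (a1 + a2) (b1 + b2) (n1 + n2)

instance Monoid SumAndCount where
  mempty = SumAndCount 0 0 0 0

-- Lift a single color into the monoid.
trivialSum :: CIELab -> SumAndCount
trivialSum (CIELab l a b) = SumAndCount l a b 1

-- Recover the average color once all pixels are accumulated.
toAverage :: SumAndCount -> CIELab
toAverage (SumAndCount l a b n) = CIELab (l / k) (a / k) (b / k)
  where
    k = fromIntegral n
```

Because `Map.fromListWith (<>)` combines entries with the same key, each cell’s sum and count accumulate in a single pass over the pixels.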

In this context, `SumAndCount` is a monoid that accumulates the color sum and the count of pixels contributing to that sum. This allows us to calculate the average colors after processing all pixels. With the average colors for each reference point readily available, rendering the Voronoi diagram is a matter of expressing the definition in our chosen programming language.

```haskell
voronoize :: Set Point -> Grid CIELab -> Grid CIELab
voronoize refPoints grid =
  generateGrid (gridWidth grid) (gridHeight grid) $
    \x y -> avgs Map.! gridAt nearestRefPoints x y
  where
    tree = KdTree.buildWithDist pointCoords pointSquareDist (Set.toList refPoints)
    nearestRefPoints = nearestNeighbors grid tree
    avgs = averageColors grid nearestRefPoints
```

In essence, this `voronoize` function takes the original image and the set of reference points, and transforms the image into a Voronoi diagram. Each cell takes on the average color of the pixels nearest to its reference point, thus preserving the overall coloration. While this transformation loses some of the original image’s detail, the resulting Voronoi mosaic retains the key structural and color features, offering an engaging reinterpretation of the original image.

(Original images are all courtesy of DALL-E 2, OpenAI’s neural network based image generation model.)

# The future

That’s the current state of the Voronoizer project, the product of many dedicated weekend hours. The algorithm already creates compelling visuals that offer a unique spin on source images. You can see this transformative tool in action by cloning the repository at https://github.com/cdsmith/voronoizer and following the instructions in the README file.

Still, as with any creative endeavor — and particularly one developed in a couple of days — there’s plenty of room for future improvement.

## Opportunity: Iterative refinement

Originally, I envisioned the Voronoizer project to rely on iterative refinement and an evolutionary programming approach. The concept was to begin with a basic set of reference points — perhaps randomly selected or even an empty set — then iteratively add, delete, or modify points, retaining only the changes that improved the image’s accuracy or utility. Although computational costs and the quest for satisfactory results led me in a different direction, the potential benefits of targeted iterative refinement are apparent:

- Moving points to align cell boundaries with detected edges or color gradients in the source image.
- Merging reference points that are too close together and/or assigned very similar colors.
- Adding new points to divide specific large, high-error Voronoi cells.
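The accept-if-it-improves loop behind those ideas can be sketched abstractly. Here `errorOf` scores a candidate state and each element of `tweaks` proposes one change; both are hypothetical stand-ins, not project code:

```haskell
-- Greedy refinement: apply each proposed tweak in turn, keeping it only
-- if it lowers the error. A real implementation would generate tweaks
-- randomly (add, move, merge points) and recompute error incrementally.
refine :: (s -> Double) -> [s -> s] -> s -> s
refine errorOf tweaks s0 = foldl step s0 tweaks
  where
    step s tweak =
      let s' = tweak s
       in if errorOf s' < errorOf s then s' else s
```

With the state being a set of reference points and the error being the mosaic’s distance from the source image, the three bullet points above are just three families of tweaks.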

## Opportunity: Enhanced quality measurement

Currently, the algorithm heavily leans on minimizing Mean Squared Error (MSE). However, it’s widely recognized that reducing MSE isn’t necessarily the best way to create perceptually similar images. We humans can usually tolerate substituting colors in abstract images like a mosaic. But we’re more sensitive to maintaining similarity in shapes, edges, and other structural information.

Implementing established measures of structural similarity, like the Structural Similarity Index (SSIM) or its variations, could produce better results. In particular, combining iterative refinement with structural similarity measures could really elevate Voronoizer’s output quality by efficiently mimicking recognizable shapes in the source image.

## Opportunity: Algorithmic improvement

Interestingly, although the project’s central goal is to create a Voronoi diagram, it doesn’t *explicitly* construct the Voronoi diagram as a data structure. This approach has sufficed so far due to the adequate performance of the nearest-neighbor query method. However, as the project potentially explores the possibilities of iterative global refinement, it might need to switch gears. Explicit construction of a Voronoi diagram as a data structure would avoid expensive nearest-neighbor queries for every pixel, and make it easier to perform error calculations one cell at a time.

## Opportunity: User experience and customization

At present, Voronoizer is a command-line tool with limited customization. We can significantly enhance its usability with some improvements:

- Allowing users to manually adjust the image by dragging around generated reference points.
- Letting users select vital image regions where they’d like the algorithm to dedicate extra reference points for detail preservation.
- Enabling users to input a limited set of available colors (e.g., if making a physical glass mosaic!), allowing the algorithm to optimize the image with each Voronoi cell assigned a color from that set.

With these potential enhancements, Voronoizer could evolve into an even more exciting and engaging tool. I encourage you to give it a spin. Choose your favorite image and let Voronoizer reimagine it into a mosaic of color and geometry. Please share your results — I’d be thrilled to see them.

Contributions and pull requests to Voronoizer are, of course, welcome. Or, perhaps the project has sparked your own ideas and directions. Either way, I hope sharing my work has inspired or enriched your experiences.