Material Procedural Generation

Hello and welcome to our ongoing series of short posts about the cool technology we’re developing at Primer.

My name is Brennan and I’m the 3D artist responsible for all the materials that ultimately end up on your walls in the app, as well as for the overall pipeline by which those materials get designed, built and published.

In the last post we talked about what the texture maps are and how they get layered up to produce the final material. Today we want to dive into how those textures are made.
Starting with some big news: we’ve moved our entire texture creation pipeline to procedural generation!

In practice, this means we’re developing a backend that can generate any tile in any shape with any attributes from parameters and sliders instead of photos. This vastly speeds up the workflow, and it also gives us extra control and cleaner data to use when publishing the textures.

This post will explain those benefits in more detail, but here’s a short clip of some random examples demonstrating how flexible this system is:

The Old Method

Our initial set of textures, made when we first started this project, was very laborious to produce, so our pursuit was two-fold: maintaining (or increasing) quality while also drastically reducing the turnaround time for bringing clients online.

As described in the previous post, the original method was based on photo manipulation: we’d take the product shots as they existed, arrange them into a grid, reduce highlights, shadows, reflections, random tile variations, and so on, and then convert the result with some programmatic magic into the roughness and normal maps.

This took a lot of time because any change to the layout basically restarted the entire process: if you had already made a rectangular brick tile of a certain style and now wanted a square version, you were fundamentally going back to the grid layout phase, finding new photos, and redoing all of the steps that build up the texture.

So any change in shape, layout, grout, or texture variation was basically an entirely new project. And when a brand comes on board with us, they might have hundreds or thousands of SKUs to create; that would be thousands of hours of labour, which for time reasons alone was untenable.

We could have sped it up by cutting quality, by making faster versions of certain aspects, but that wasn’t acceptable either; the whole point was to maintain quality as much as possible.

The other aspect is that photos actually aren’t that great for creating textures. The conversion process is getting better and better thanks to modern software designed for exactly this in the video game industry, but in the end you’re always converting from imperfect data: even with a great camera and lighting setup, the physics of reflections, angles, autofocus, lens aberration, sensor noise, image compression, and so on mean that you’re always guessing, averaging, and softening to get results.

And, in the first place, the photos we get from clients are promotional. They simply weren’t designed or shot to be perfectly flat or evenly lit, because that isn’t what a marketing image is for; flat is usually bad. If you’re selling a rough tile, you want to show that it’s rough.

So part of the quality improvement effort went into counteracting that: getting better normal data out of the conversions, or automating things so that I (or another human artist) didn’t have to manually correct and decide these things en masse.

The ultimate version of this method is a dedicated scanning box: think of a flatbed scanner, except the light source can be spun around a tile sample so that the camera also captures all of the data in the dimples and roughness of the surface by ‘seeing’ how light and shadow shift with each lamp direction.
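For the curious, the math behind those scanning boxes is classical photometric stereo: several photos of the same surface under known light directions let you solve, per pixel, for the surface normal. Here’s a minimal sketch of the idea in Python (my own illustration under a simple Lambertian assumption, not our actual scanner code):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals from k photos of the same
    surface, each lit from a different known direction.

    images:     array of shape (k, h, w), grayscale intensities
    light_dirs: array of shape (k, 3), unit light direction per photo
    """
    k, h, w = images.shape
    intensities = images.reshape(k, -1)        # (k, h*w)
    # Lambertian model: I = L @ (albedo * n). Solve least squares
    # for g = albedo * n at every pixel simultaneously.
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    g = g.reshape(3, h, w)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-8)     # unit normals per pixel
    return normals, albedo
```

Three light directions are enough to make the system solvable; more photos just make the fit more robust against noise.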

This gives you much better data, but it also means we need a sample of every single tile (and multiple tiles per SKU) in order to scan and develop this cleaner source. Shipping pallets of physical products and setting all this up would also take lead time and resources, while still leaving us with the initial problem of putting images into an inflexible grid for every shape, size, and variation. Better quality, but perhaps even slower than when we started. So that’s no good.


Enter: Procedural Generation

While mulling all this over, a new (or rather old) idea came up: we could generate the textures from scratch instead of using photos. Then we’d have perfect data to build the texture maps from, plus the ability to quickly change parameters like tile shape or grout size and have it simply… make a new, perfect image instantly.

This is, on the whole, a great solution.

But it also requires a sufficiently advanced procedural setup, with enough parameters to produce the results we need: ones that capture the spirit of the reference product’s texture and look.

And, to increase speed, we also needed to codify those many settings into a tidy list of exposed sliders in a sort of dashboard, turning a spaghetti mess of settings, variables, and generation steps into a much cleaner, more user-friendly experience. The ultimate goal is that any artist can operate it, not just me, having internalized the madcap complexity that inevitably results from so many settings linked across multiple levels.
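To give a flavor of what that codification looks like, here’s a hypothetical sketch of such a parameter list in Python. Every name here is illustrative rather than our actual schema, but each dashboard slider ultimately maps onto a typed value with a sensible range like these:

```python
from dataclasses import dataclass

@dataclass
class TileParams:
    # Layout (dimensions in millimetres)
    tile_width: float = 200.0
    tile_height: float = 100.0
    grout_width: float = 3.0
    # Surface
    base_color: tuple = (0.85, 0.82, 0.78)  # linear RGB
    roughness: float = 0.6                   # 0 = gloss, 1 = matte
    color_variation: float = 0.1             # per-tile tint jitter
    pock_density: float = 0.02               # divots per square mm
    # Reproducibility: same seed, same tile; new seed, new tile
    seed: int = 0
```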

So we made this, our work in progress:

Starting from scratch, every single box in here generates or modifies something, moving left to right towards the final maps: all the colors, all the tiles, all the little pocks and divots, the grout, the texture; everything is made and controlled through these gates.
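To make that left-to-right flow concrete, here’s a hypothetical sketch of what the very first ‘box’ might do: build the tile-versus-grout mask purely from layout numbers. Every later box takes maps like this one, plus the parameters, and layers something on top:

```python
import numpy as np

def layout_mask(tile_w_mm, tile_h_mm, grout_mm, px_per_mm=2.0):
    """First 'box' in the chain: a tile-vs-grout mask built purely
    from layout numbers. 1.0 inside a tile, 0.0 in the grout."""
    period_x = tile_w_mm + grout_mm
    period_y = tile_h_mm + grout_mm
    w = int(2 * period_x * px_per_mm)          # render a 2x2 patch of tiles
    h = int(2 * period_y * px_per_mm)
    x = (np.arange(w) / px_per_mm) % period_x  # position within one period
    y = (np.arange(h) / px_per_mm) % period_y
    in_tile_x = x < tile_w_mm
    in_tile_y = y < tile_h_mm
    return (in_tile_y[:, None] & in_tile_x[None, :]).astype(np.float32)

# A rectangle, then the same style as a square: two numbers changed,
# and the layout is instantly rebuilt rather than re-photographed.
rect = layout_mask(tile_w_mm=200, tile_h_mm=100, grout_mm=3)
square = layout_mask(tile_w_mm=100, tile_h_mm=100, grout_mm=3)
```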

The time savings have been immediate and impressive: textures that used to take hours now take minutes, and any additional change takes seconds. Going from a rectangle to a square means entering two dimension numbers, and the texture is immediately rebuilt instead of starting completely over from photos.

Plus, it’s infinite: every tile is unique and every generation is new. We’re never recycling images in multiple places or photoshopping marks from one to the other in an effort to create more randomness. The matte black tile is fundamentally different from the gloss black tile, because we can easily make a new one instead of adjusting only the roughness on a shared set of textures.

Another big thing we gain is displacement.

Remember when I said that photographs don’t make very good normal maps? What’s happening in that conversion process is that we’re trying to guess, from a 2D image with lighting, what the 3D terrain might be. Humans are pretty good at this: we see a shadow and understand concavity, we see a highlight and understand protrusion. The computer is doing something similar to the best of its ability, but it’s only ever a guess, and the results are based on pixel data that has been altered a dozen times before the machine ever sees it.
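To make the ‘guessing’ concrete, here’s roughly what the naive end of that conversion looks like (a simplified sketch, not any particular tool’s algorithm): treat brightness as height and differentiate it. Every artifact baked into those pixels, a specular highlight, a cast shadow, compression noise, gets differentiated right along with the real surface:

```python
import numpy as np

def normals_from_photo(gray, strength=1.0):
    """Naive photo-to-normal conversion: pretend luminance is height,
    then take its gradients. This is why photo-sourced normals are a
    guess: a highlight reads as a bump, a shadow as a dent.

    gray: (h, w) array of pixel values in [0, 1]
    """
    height = gray * strength
    dy, dx = np.gradient(height)                   # slope along each axis
    n = np.dstack([-dx, -dy, np.ones_like(height)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)  # normalize to unit length
    return n                                       # (h, w, 3) unit normals
```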

But when we procedurally generate this data, it’s just ‘real’ inside the process: pure and clean, because the source of it is right here, full fat.

So we can generate really good normal maps, which is great, but with this clean depth data we can also make something even more extreme: a displacement map.

As before, this is a grayscale image from zero to one (black to white), where each pixel’s height is the value of its gray.

So we can do cool things like keeping the grout low (darker values) and the tiles high (brighter values), and when you move your camera around the tile it ‘physically’ projects outward, creating a very convincing depth illusion.

And since this depth is ‘real’, we can also project tiles at different heights, so some tiles stick out more than others, which is a real-life feature of certain tile installations. Now we can demonstrate that in real time, in the app, all because of this additional texture map from this new way of making the textures.
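Putting those last two ideas together, a displacement map for a tiled wall boils down to: grout dark and low, tiles bright and high, plus an optional random offset per tile so some stand proud of their neighbours. A hypothetical sketch of generating one (parameter names are mine, chosen for illustration):

```python
import numpy as np

def displacement_map(tile_mm=100.0, grout_mm=3.0, px_per_mm=2.0,
                     grout_level=0.1, tile_level=0.7,
                     offset_range=0.15, seed=0):
    """Grayscale displacement: grout low (dark), tiles high (bright),
    plus a random per-tile height offset so some tiles stand proud."""
    rng = np.random.default_rng(seed)
    period = tile_mm + grout_mm
    size = int(4 * period * px_per_mm)           # a 4x4 patch of tiles
    coords = np.arange(size) / px_per_mm
    in_tile = (coords % period) < tile_mm        # 1D tile-vs-grout test
    tile_idx = (coords // period).astype(int)    # which tile each pixel is in
    mask = in_tile[:, None] & in_tile[None, :]
    # One random offset per tile, broadcast to every pixel of that tile
    offsets = rng.uniform(0, offset_range, size=(tile_idx.max() + 1,) * 2)
    height = np.where(mask,
                      tile_level + offsets[tile_idx[:, None], tile_idx[None, :]],
                      grout_level)
    return np.clip(height, 0.0, 1.0)             # 0 = lowest, 1 = highest
```

Each pixel’s gray value is its height, so the renderer can push the tile pixels outward while the grout stays recessed, exactly the effect described above.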

As of this writing, displacement is still being worked on at the technical level, so unfortunately you won’t see it if you look in the app right now. But we’re making and saving displacement textures with every new material, so that when the update goes live the whole library will come to life with this new mode.

We are very excited to share this with you all.