If a bank is issuing a new currency, what denominations should it choose to make transactions easiest for the public? This is one example of a type of problem I've become interested in. It is basically about how to divide a scale into discrete intervals, under various constraints. It sounds very abstract, but it has a lot to do with real-world design and engineering.
Let me explain this example. Say that the central bank of a newly formed government needs to issue coins. Given the value of the currency, the coins' face values can reasonably range from 1 up to 100 cents (beyond that, bills will be used). Because it costs the bank to create and circulate each type of coin, there will be some limit on their number: let's say four different types of coin will be made. What should the denominations be?
We must pick the value 1 in order to allow all possible change to be made (woe to the country that has only 2-cent coins and must round every transaction up or down). We can also assume a 100-cent coin as the upper limit. This leaves us with two intermediate denominations to choose.
The Linear Approach
A naive approach would be to divide the range evenly -- that is, keep a uniform interval between each value -- to create 1-, 33-, 66-, and 100-cent coins. This sort of linear distribution is one major choice for establishing a scale (as we have just done). But who would want to make change with this system? You would routinely have pockets full of pennies. (Although it would be good for paying for all those products with 99-cent prices.)
The problem arises from the use of the scale: our coin denominations are not just for measurement, or some other abstraction, but will actually interact with one another; and so what concerns us is how they divide into one another. Therefore, a linear definition of the scale (a difference of 33 between each), which is based on addition, is less appropriate than one based on multiplication.
Instead of adding 33 to each value to get the next, we can multiply by something. There is no nice, even number that will work perfectly: the exact ratio is the cube root of 100, about 4.64, which generates these values (after rounding up): 1, 5, 22, 100.
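To make the arithmetic concrete, here is a minimal sketch (in Python; the variable names are my own) of where that ratio comes from: with four values 1, r, r², r³ = 100, the ratio r must be the cube root of 100.

```python
import math

# With endpoints pinned at 1 and 100 and two values between,
# the common ratio r satisfies r^3 = 100.
ratio = 100 ** (1 / 3)  # ≈ 4.64

# Round the intermediate values up; the endpoints stay fixed.
denominations = [1, math.ceil(ratio), math.ceil(ratio ** 2), 100]
print(denominations)  # [1, 5, 22, 100]
```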
This would be rather nicer for working with on a daily basis: I only have to accept, at most, four pennies in change from a cashier before I will get a 5-cent coin instead. And similarly, I will get at most four of those before a 22-cent coin is given. This solves one major problem of the linear scale: too few values at the low end.
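A quick way to see the difference is to count coins. This sketch (a hypothetical greedy change-maker, largest coin first) averages the number of coins handed over for every amount from 1 to 99 cents under each system:

```python
def greedy_coin_count(amount, denominations):
    """Count coins needed for `amount`, always taking the largest coin first."""
    count = 0
    for coin in sorted(denominations, reverse=True):
        count += amount // coin
        amount %= coin
    return count

linear = [1, 33, 66, 100]
geometric = [1, 5, 22, 100]

avg = lambda denoms: sum(greedy_coin_count(a, denoms) for a in range(1, 100)) / 99
print(round(avg(linear), 1))     # 16.7 coins per transaction
print(round(avg(geometric), 1))  # 5.3 coins per transaction
```

The linear system averages about seventeen coins per transaction, almost all of them pennies; the multiplicative one averages about five.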
A second issue is being able to readily convert between denominations; and in this, both systems would need tweaking. 22 is an awkward value, as nothing goes into it evenly except 1. It would be much easier to round down to 20, so that both 1 and 5 will be factors -- and 20 is itself a factor of 100. This makes things much nicer.
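That divisibility property is easy to state precisely. A sketch (the helper name is mine) checks that each denomination divides the next one up:

```python
def evenly_nested(denominations):
    """True if every denomination divides the next larger one evenly."""
    return all(b % a == 0 for a, b in zip(denominations, denominations[1:]))

print(evenly_nested([1, 5, 22, 100]))  # False: 5 does not go into 22
print(evenly_nested([1, 5, 20, 100]))  # True: 5 goes into 20, 20 into 100
```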
Human psychology and the physicality of the situation matter: the scale is not just an abstraction within the bank -- in fact, for book-keeping, the bank hardly cares at all. In particular, the inability of most people to do quick math helps determine the types of coin we should use. We could imagine a system based on the number 13: with pennies, 13-cent coins, and 169-cent coins (13 times 13). But few people know their 13 times tables inside-and-out; and even though they might improve if they were forced to use this system, it would be slower than many other conceivable ones. Powers of 2, or 3, or 4, or 5, are much easier for us.
We can see all of these factors in historical systems of currency, before metrication imposed greater regularity. Throughout the former Roman empire, denominations related by ratios of 1, 12, and 20 were standard. In England these evolved into the penny, shilling, and pound, often supplemented with a four-penny "groat" (among myriad others, less consistently). This creates a system where small divisors relate the different values. In fact, 12 and 20 have a remarkable number of divisors given how small they are -- 1, 2, 3, 4, and 6 go into 12; 1, 2, 4, 5, and 10 go into 20 -- making them perfect choices for the mental math of currencies. The key metric value of 10 is not as good on that score, with just three divisors (1, 2, 5); but something like 13 is even worse (being prime, it has no divisor except 1).
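These divisor counts can be verified with a short sketch (the function name is my own invention), counting every divisor of a value other than the value itself:

```python
def proper_divisors(n):
    """All divisors of n smaller than n itself."""
    return [d for d in range(1, n) if n % d == 0]

for n in (10, 12, 13, 20):
    print(n, proper_divisors(n))
# 10 [1, 2, 5]
# 12 [1, 2, 3, 4, 6]
# 13 [1]
# 20 [1, 2, 4, 5, 10]
```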
Mathematically, I have talked primarily about the interval between values in a scale and how they can generate the next values, through addition or multiplication. The astute reader will realize that the values themselves can be generated with a simple function. For a constant interval (a linear series), the values come from multiplying n by a constant. For the increasing interval, some value is raised to the power of n, making a geometric series.
Other classes of function can work as well, but as many real-world scenarios can be addressed by just these two, I will restrict this essay to linear and geometric series.
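In code, the two generating functions look like this (a sketch, with names of my own; I let the index n run from 0 in both cases, so the linear series starts at 0 and the geometric one at 1):

```python
def linear_series(step, count):
    """Arithmetic scale: each value is n times a constant step."""
    return [step * n for n in range(count)]

def geometric_series(ratio, count):
    """Geometric scale: each value is a constant ratio raised to n."""
    return [ratio ** n for n in range(count)]

print(linear_series(33, 4))    # [0, 33, 66, 99]
print(geometric_series(2, 6))  # [1, 2, 4, 8, 16, 32]
```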
Some Further Geometric Examples
Because coins must "fit inside" one another, in the mental math of making change, multiplication becomes a better basis for creating a scale. Different sizes of tupperware, or units of liquid measure, have the same qualities. But multiplication can be useful in other circumstances too.
I am a graphic designer by profession, and use various "paintbrushes" to color pixels in Photoshop, where each brush has a size defined by a radius. The default set of brushes follows something like a linear scale, but I have found a geometric progression much more useful: I start with a 1-pixel brush and multiply by the ratio 1.5, creating 2, 3, 5, 8, 12, and so on. This gives me a series good for small details, giant floods, and everything in between, without overwhelming me with too many choices.
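Here is a sketch of how that brush series can be generated (my own reconstruction: multiply the previous rounded size by 1.5 and round half-up to a whole pixel, stopping at some largest useful brush):

```python
def brush_sizes(ratio=1.5, start=1, largest=100):
    """Generate whole-pixel brush sizes by repeated multiplication."""
    sizes = [start]
    while True:
        nxt = int(sizes[-1] * ratio + 0.5)  # round half-up to whole pixels
        if nxt > largest:
            return sizes
        sizes.append(nxt)

print(brush_sizes())  # [1, 2, 3, 5, 8, 12, 18, 27, 41, 62, 93]
```

Eleven brushes cover two orders of magnitude, with the fine gradations clustered at the small-detail end where they matter.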
Any domain with standardization, where a large range of values must be covered with a minimum number of patterns, calls for a geometric progression over a linear one. Imagine you own a foundry that casts pipes. In theory, all diameters of pipes could be useful: one city needs a little more water in its main line, so it orders a 3.1' pipe instead of a 3' one. But in practice, you can only design, tool, efficiently store, and market a limited number of types. If you make everything from tiny gas lines to giant spillways, you will want more small sizes than large ones, because that's where small differences matter more. To save your customers from buying badly oversized pipe, and to save your own business, you will want a geometric progression, not a linear one.
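The advantage can be quantified as worst-case overshoot: the largest gap, relative to the smaller size, between adjacent catalogue entries. A sketch with two made-up five-entry catalogues spanning 1 to 100 (the numbers are illustrative, not real pipe sizes):

```python
def worst_relative_overshoot(sizes):
    """Largest ratio between adjacent catalogue sizes, minus 1."""
    return max(b / a for a, b in zip(sizes, sizes[1:])) - 1

linear = [1, 26, 51, 76, 100]   # even steps of about 25
geometric = [1, 3, 10, 32, 100] # roughly the fourth root of 100 each step

print(round(worst_relative_overshoot(linear), 2))     # 25.0 (2,500% at the small end)
print(round(worst_relative_overshoot(geometric), 2))  # 2.33 (about 233% everywhere)
```

The linear catalogue forces a customer who needs a hair more than the smallest size to buy 26 times what they wanted; the geometric one keeps the worst case uniform across the whole range.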
Is Anything Left for Linearity?
Straightforward, linear progressions are useful (or adequate) in many cases, especially where there is sensitivity to small changes, even around large values. Consider shoe sizes: manufacturers cannot make bespoke shoes, so a limited number of set sizes is made instead. People will tolerate some difference from their true shoe size, but only so much. This tolerance is probably best measured in inches, not as a fraction of the shoe size: if large sizes jumped from 18 to 24, many people would not be able to make do. Thus, in America we divide the range of common values into even steps.
Of course, there are also many cases where the precise values in your series are just not that critical, especially where the items are "fungible" and can be combined or split easily. Imagine you sell paper, and must decide the range of paper products to market. Your large "bales" can be much, much larger than your smaller reams: because a customer can buy multiple reams to reach a desired amount, they do not have to buy the exact amount wanted. This drastically reduces the number of different values you need to produce. Even so, something like a geometric sequence is probably best; in the real world, paper is often sold in units of 25, 500, 1,000, and 5,000.