1. Elaborations

Elaborations have indeed produced better results. Simply having water flow over a surface was not terribly hard to implement, but each sophistication -- even within the limits of a cell-based, time-step-based system -- has paid off. Initially I had water flow according to the landscape alone, without recomputing where water was pooling; accounting for pooled water allowed much longer rivers to develop, as well as lakes (the lakes were all I had expected to gain). Adding even a very crude momentum mechanism -- to prevent immediate back-flow when water is evening out over an area -- eliminated a strange checkerboard artifact.
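To make both elaborations concrete, here is a minimal sketch of the flow rule, under assumptions of my own: a square grid stored as 2D lists, water moving toward the lowest of the four neighbours, and a per-cell "last direction" standing in for the crude momentum hack. None of the names here are from the actual model.

```python
# Minimal sketch of a pooling-aware flow rule with a crude momentum hack.
# Assumptions (not the model's real code): 2D lists, 4-neighbour flow.

def flow_step(terrain, water, last_dir):
    """One time step: move water downhill over terrain + pooled water."""
    rows, cols = len(terrain), len(terrain[0])
    moves = []
    for r in range(rows):
        for c in range(cols):
            if water[r][c] <= 0:
                continue
            # Key elaboration: compare the *water surface*
            # (terrain + pooled depth), not the bare terrain.
            here = terrain[r][c] + water[r][c]
            best, best_h = None, here
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                # Crude momentum: forbid flowing straight back the way
                # the water just came, which kills the checkerboard.
                if last_dir[r][c] == (-dr, -dc):
                    continue
                h = terrain[nr][nc] + water[nr][nc]
                if h < best_h:
                    best, best_h = (dr, dc), h
            if best is not None:
                # Move half the surface difference to avoid oscillation.
                amount = min(water[r][c], (here - best_h) / 2)
                moves.append((r, c, best, amount))
    for r, c, (dr, dc), amount in moves:
        water[r][c] -= amount
        water[r + dr][c + dc] += amount
        last_dir[r + dr][c + dc] = (dr, dc)
```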

2. Parameter Tweaking

Testing and parameter tweaking can improve results substantially. I had added some major components of the model, but was stymied by short rivers. Rather than implement a new system, which I was loath to do, I tested and tweaked extensively. This in fact opened up a whole new space of results, with parameters I had not initially thought would be reasonable. My rivers got longer, and the results were generally more pleasing on complex landforms.
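The tweaking itself was nothing fancy: sweeps over parameter ranges that extended well past what initially seemed reasonable. Something like the sketch below; `run_model`, the parameter names, and the ranges are placeholders, not the model's real interface.

```python
# Hypothetical parameter sweep: try combinations, rank by some score.
import itertools

def sweep(run_model):
    results = []
    for rain, evap, capacity in itertools.product(
        (0.1, 0.5, 1.0, 5.0),   # rainfall per step
        (0.01, 0.1, 0.5),       # evaporation rate
        (1.0, 10.0, 100.0),     # sediment capacity
    ):
        score = run_model(rain=rain, evap=evap, capacity=capacity)
        results.append((score, rain, evap, capacity))
    return sorted(results, reverse=True)
```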

3. Useless Parameters

On the other hand, I also discovered that several of the equations I was using, such as those controlling erosion rates, were needlessly complex; the best results came with settings that effectively disabled them. In many cases, I had assumed I would want to massage the output to a fine degree, perhaps to approximate the real world. But in truth, the basic algorithmic operation produced 95% of the result, and adjustments to that logic were a waste.
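As an illustration (these are not my actual equations): an over-parameterised erosion rate whose winning settings collapse it back to the bare product.

```python
# A heavily parameterised erosion rate, with knobs for shaping the curve.
def erosion_rate_fancy(flow, slope, k=1.0, exponent=1.0,
                       floor=0.0, ceiling=float("inf")):
    rate = k * (flow * slope) ** exponent
    return min(max(rate, floor), ceiling)

# With exponent=1, floor=0, and ceiling=inf -- the settings that actually
# won -- the extra machinery contributes nothing:
def erosion_rate_simple(flow, slope, k=1.0):
    return k * flow * slope
```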

4. Complexity

You can't simulate everything, and a crude hack may be just what you need (the opposite of (1)). Sometimes people talk about "phenomenological" models versus "process" models, but the difference is entirely relative to the scale of output one is interested in: all models are phenomenological to someone, as there is always a deeper level to simulate, if one chooses. So there are always simplifications and glosses. But I think it's a good lesson that a crude hack may be needed even at the scale of interest, if the process is simply too intricate, or too poorly understood, to model easily.

In my case, the difficulty was low-lying ground eroding too easily, down to below sea level. In reality, such behavior is dictated by the way water flows, builds momentum, erodes surfaces, and so on; and also by the underlying geology of the land. My model includes no geology per se; just geography. The over-erosion problem alerted me to some problems with my flow model, which tweaks improved. But the more basic issue is that fundamental lack of geology, which I'm not about to include, at least not in any detail. So I made a hack that simply reduces the rate of erosion at low elevations. It's not perfect, but it works, and it took almost no time instead of weeks.
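The hack amounts to something like the following sketch; the constants and names are illustrative, not taken from the model.

```python
# Sketch of the low-elevation hack: fade erosion out near sea level,
# standing in for the geology the model doesn't have.
SEA_LEVEL = 0.0
FADE_BAND = 50.0  # hypothetical height range over which erosion fades

def eroded_amount(base_rate, elevation):
    """Damp erosion near sea level; full rate above the fade band."""
    if elevation <= SEA_LEVEL:
        return 0.0
    factor = min(1.0, (elevation - SEA_LEVEL) / FADE_BAND)
    return base_rate * factor
```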

5. Taking a Mixed Approach

The conclusion I draw from the above is that one should take a mixed approach to complexity in a model. In many cases, a crude heuristic will get you 90% of the way to your target, in terms of accuracy or detail. But it will usually not give much variation in its output (being very simple, and unresponsive to other aspects of the model). The question is whether you need the variation, or need some part of that missing 10% of detail. If you do, you need to model a more sophisticated process.

Even when contemplating a larger module, though, there are many degrees of complexity, in particular around how the module will interact with other modules, and how adjustable it is. Both of these things can lure one into much greater complexity than is needed. The most important thing is getting a basic algorithm that performs a process something like what is needed. To do this, the algorithm must bring together some key elements of the model, and manipulate them in a mathematically sensible way.

For instance, the erosion equation for river scouring needed to use water volume and water speed, essentially multiplied together. Elaborations on that equation -- like minima and maxima, or different curve adjustments -- produce no substantial effect. But I do not model water speed, so I simply used steepness as a proxy. That was a far more important decision, and I am now running up against the limits of what can be achieved with it.
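In code, the relation is essentially just this (the constant and the function signature are mine):

```python
SCOUR_K = 0.01  # tuning constant (hypothetical value)

def scour(volume, slope):
    """Material removed per step; local slope is the proxy for speed."""
    return SCOUR_K * volume * slope
```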

6. Planning for Addition

Not modeling momentum was a decision I made fairly early, because it represents a quite different way of looking at water in the model: not simply as a value in each tile, but as an object with properties of its own, which move with it. Adding it will require either a rewrite of the whole model (which is not happening) or tacking a layer on and treating it separately -- with all the building and testing that entails.
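The difference in representation is roughly this; both snippets are hypothetical sketches, not the model's actual data structures.

```python
from dataclasses import dataclass

# Current approach: water is just a number per tile.
water_depth = [[0.0] * 100 for _ in range(100)]

# With momentum: water becomes parcels carrying their own state,
# which moves with them between tiles.
@dataclass
class WaterParcel:
    row: int
    col: int
    volume: float
    velocity: tuple  # (dr, dc) momentum carried between steps
```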

I had hoped a crude approximation, lacking momentum, would suffice, but now I'm not sure. It might have been wiser to build a model architecture that could be expanded to include momentum more readily. Such expandability may be important for an essentially exploratory model, which is tested very iteratively: I'm building each piece, and trying to improve results each time, hoping to reach a threshold of realism. But it's hard to predict how complex the model must be to reach that goal.

An architecture that potentially allows much expansion is very helpful. Mine is simple (being grid-based) and therefore makes expansion easy. That's one virtue. But a more complex architecture, one that would ultimately allow more expansion, might prove better if I'm forced to keep adding modules. At the same time, such complexity is more difficult to work with, and slows down the iterative design process. Personal preference, skill, and one's best predictions about the model's possible directions have to guide the balance.
