**Fractal Camouflage**

**Figure 1—Rendered view of two Main Battle Tanks with scene-matched digital camouflage schemes.**

Digital Camouflage permits the numerical embodiment of a camouflage pattern, which can be analyzed, stored, and fed to a manufacturing process that applies the pattern to an article. Mathematical algorithms can generate patterns that optimally match certain environments based on digital imagery (snapshots) for a given palette of available colors. Since environments undergo seasonal changes, and military equipment deploys to wide-ranging theaters of operation, it would be convenient to efficiently change camouflage patterns accordingly. ARS is researching methods and processes that would allow a vehicle, for instance, to maintain the best pattern for its environment, regardless of theater or time of year. Pictured above are two “optimized” digital patterns matched to different backgrounds, as applied to a Main Battle Tank. The pattern on the right is a lower-resolution, limited color pattern in which pixels are clearly visible. The left pattern is a high-resolution pattern with a robust color palette in which individual pixels are not discernible at the display resolution.

Choice of pattern resolution and color palette depends largely on the pattern application process and the number of available colors. Painting would involve larger pixels and fewer colors; printing could involve much smaller pixels and millions of possible colors.

Scene-matched patterns can be generated in numerous ways. Below are examples of scenery images with patterns matched to them based on spatial Fourier transform algorithms developed at ARS.

Figure 2—A desert image (top) and a matched camouflage pattern computed from it (bottom).

Figure 3—A fall woodland image (top) and a matched camouflage pattern computed from it (bottom).

**1.0 CAMOUFLAGE PATTERN GENERATION PROCESS**

Numerous mathematical techniques have been developed to generate camouflage patterns based on real-world scene imagery. Features of scene imagery such as color content, spatial texture, gray-level statistics, pixel correlation and spatial frequency content can all be used to construct artificial images that closely match in one or more respects the input scenery. One such method is based on fractional-dimensioned shapes, or fractals. This page describes fractal aspects of natural imagery and methods employed to generate fractal camouflage patterns from real images.

**1.1 Definition of a Fractal**

A formal mathematical definition for the term fractal exists, but it provides little meaning to the non-mathematician. For the purposes of this application, a fractal is a geometric form that possesses self-similarity across ranges of scale. In other words, when viewed from afar, a fractal will reveal a set of geometric features; and when viewed from up close, a similar set of features will emerge. Consider a leafless tree. From a distance its trunk and main branches are discernible. As one moves closer the secondary branches come into focus and appear very similar in relationship to the primary branches. Move closer still, and the twigs are resolved, which relate to the secondary branches in much the same way as the secondary branches relate to the primary branches, which are similarly related to the trunk. Other naturally-occurring geometries exhibiting self-similarity are river basins, snowflakes, crystals, mountain ranges, blood vessels, cloud formations and coastlines.

Self-similarity comes in varying degrees, and it may apply to a limited or unbounded range of scales. In general, naturally-occurring fractals are self-similar over only a limited range of scale, and their self-similarity is weak, meaning that the statistics that describe the geometry are preserved under changes in scale, not necessarily the geometry itself. The topography of a mountain range is an example. From afar, the coarse protuberances of the main peaks and ridges are evident. A view at smaller scale (say by zooming to 100X) might show only one portion of one of the main mountains. Its topography is by no means a miniature replica of the full range, but it is quite similar in its standard deviation of altitude and autocorrelation length (by which it is implied that each statistic is about 1/100 of its value in the original, when expressed in absolute units [meters, for instance]). Such geometries are classified as random fractals, since they result from stochastic processes, and they are statistically self-similar, to distinguish them from exactly or quasi self-similar fractals.

**1.2 Fractal Dimension**

Consider the classic example of performing a measurement on a fractal curve—in this case the coastline of Britain—offered by Mandelbrot [3]. The method of measurement is to use a ruler of length ε and lay it so that both ends are on the curve defined by the coast, then walk the ruler along the coast by flipping it, always keeping both ends on the curve, and counting the number of steps (*N*) required to circumscribe the island. The resulting estimate of the length of the coastline (*L*) is thus *L* = *N*ε. Of course, the method described involves some error, which is attributed to the length of the ruler, as detail of the curve on a scale smaller than ε cannot be resolved and thereby does not contribute to the length measurement.

With a very large ruler, say 1 km, the measurement will capture the important geographic detail, such as that found on a good map, and will accurately portray the major bays, inlets and peninsulas. With a smaller ruler, perhaps 10 m, finer detail will emerge, like rock formations and boulders, sub-bays, jetties and inlet tributaries. And the resulting measurement *L* = *N*ε will be greater with the 10-m ruler than with the 1-km ruler. Continuing to smaller scales, individual rocks are profiled. Ultimately, single pebbles and grains of sand must be contoured in performing the measurement. And each time the ruler is shortened, the resulting coastline length measurement is greater than the last.

This fact in itself is not surprising, since the same method applied to a regular curve, such as a circle of radius R, will also exhibit increasing *L* with decreasing ε. But for a circle, and similar curves, *L* converges to a definite value (2πR) as the length of the ruler (ε) approaches zero. This requires that the number of steps taken with the ruler (*N*) must increase as 1/ε as the ruler is shortened to vanishing lengths. Curves for which these conditions hold are described by the term *rectifiable*.
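The convergence for a rectifiable curve is easy to check numerically. The sketch below (Python; the helper name is illustrative, not from the source) walks a chord of length ε around a circle of radius *R*, keeping both ends on the curve, and shows *L* = *N*ε approaching 2π*R*:

```python
import math

def chord_walk_length(R, eps):
    """Walk a ruler (chord) of length eps around a circle of radius R,
    keeping both ends on the curve, and return the estimate L = N*eps."""
    theta = 2 * math.asin(eps / (2 * R))   # angle subtended by one chord
    N = math.floor(2 * math.pi / theta)    # whole ruler steps around the circle
    return N * eps

R = 1.0
for eps in (0.5, 0.1, 0.01, 0.001):
    print(eps, chord_walk_length(R, eps))
# L approaches 2*pi*R as eps shrinks, because N grows only as 1/eps
```

Because *N* grows only as 1/ε for this rectifiable curve, the product *N*ε settles toward the true circumference rather than diverging.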

On the other hand, the coastline of Britain has an unbounded length, since the value *L* determined from the method does not converge in the limit of vanishing ε, but becomes infinite. The measurement diverges because, as ε approaches zero, *N* increases faster than 1/ε. In fact, *N* increases at a rate of 1/ε^{D}, where *D* > 1. When this condition holds, *D* greater than unity, the curve is a *fractal*.

Now, to achieve a measurement that does not blow up as ε decreases, the method must be altered. In this case, the number of steps (*N*) is determined the same way, but the resulting measurement is computed as follows:

*L*_{D} = *N*ε^{D}

That is, the number of steps is multiplied by ε^{D}, rather than ε^{1}. Thus, as ε approaches zero and *N* increases as 1/ε^{D}, the product converges to a definite limit. The measurement *L*_{D} is a measure of the curve in the dimension *D*. For any curve, there is a single value of *D* that will produce a finite value of the measure *L*_{D} in the limit of vanishing ruler length ε. That unique value of *D* is known as the *fractal dimension* when it is greater than 1. To find *D*, take the natural log of both sides of the measurement equation and solve for *D* in the limit of vanishing ε, resulting in the following:

*D* = lim_{ε→0} ln *N* / ln(1/ε)

To this point the discussion has been concerned only with curves and measurements of length. Curves for which a measurement in the dimension *D* = 1 converges to a finite limit as the ruler length approaches zero are rectifiable; and those for which convergence requires *D* > 1 are fractal curves. *D* need not be integer valued—in fact, the term fractal was coined from the notion of “fractional dimension.” But fractal geometry is not limited to curves. It can apply to surfaces, volumes, or higher-dimensional sets. For the application under consideration, however, an extension to surfaces is all that is required.
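The limit defining *D* can be approximated numerically by counting occupied grid boxes at several scales and fitting the slope of ln *N* versus ln(1/ε). The sketch below (Python; the helper names are this sketch's own, not from the source) applies a simple box count to the Koch curve, whose exact dimension is ln 4 / ln 3 ≈ 1.2619:

```python
import math

def koch(p, q, depth):
    """Recursively generate points along a Koch curve from complex p to q."""
    if depth == 0:
        return [p]
    a = p + (q - p) / 3
    b = p + (q - p) * 2 / 3
    # apex of the equilateral bump: rotate (b - a) by +60 degrees about a
    c = a + (b - a) * complex(math.cos(math.pi / 3), math.sin(math.pi / 3))
    pts = []
    for s, t in ((p, a), (a, c), (c, b), (b, q)):
        pts.extend(koch(s, t, depth - 1))
    return pts

def box_count(points, eps):
    """Count the eps-sized grid boxes occupied by the point set."""
    return len({(int(z.real // eps), int(z.imag // eps)) for z in points})

points = koch(0j, 1 + 0j, 7) + [1 + 0j]          # 4^7 segments, finely sampled
epsilons = [1 / 2 ** k for k in range(3, 8)]     # 1/8 down to 1/128
xs = [math.log(1 / e) for e in epsilons]
ys = [math.log(box_count(points, e)) for e in epsilons]

# least-squares slope of ln N versus ln(1/eps) approximates D
n = len(xs)
D = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2)
print(round(D, 2))
```

Grid box counting carries some bias at coarse scales, so the fitted slope only approximates the theoretical value, but it lands close to 1.26.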

The previously described mountain range is a good surface to explore. Points on such a surface may be located in a 3-coordinate space, whereby the x and y coordinates specify projection onto the horizontal plane, and the z-coordinate describes height, or altitude. It is desired to measure the surface area of the mountain. As before, a measurement unit is chosen; in this case, it is a square patch of side ε. The minimum number of patches (*N*) required to cover the mountain is determined, and the resulting measurement of area is thus:

*A* = *N*ε^{2}

Again, it is determined that the use of smaller patches results in larger values of *A*. If the mountain were a rectifiable shape, such as a pyramid, the measured area would ultimately converge to a finite limit as ε decreased, and *N* behaved as 1/ε^{2}. But a real mountain, with its ever finer levels of detail, yields an infinite area when measured with an infinitesimally small patch element. In this limit, *N* is increasing faster than 1/ε^{2}. Once more, the solution is to redefine the measurement as

*A*_{D} = *N*ε^{D}

with exponent *D*, and determine the dimension *D* that causes the measurement to converge to a finite value as ε vanishes. It would be found that *D* > 2, and the mountain is considered a fractal surface of dimension *D*.

The mountain surface is illustrative because it is similar to an image. In an image, two coordinates are used to specify location, and the third represents image intensity via the gray level. Color or multispectral images are generally split into several channels, each of which may be treated as a separate surface. And it is true that many images of natural scenes are fractal surfaces, with dimension *D* > 2.

**1.3 Determination of the Fractal Dimension of an Image**

Although the expression defining *D* is simple enough, it cannot be computed practically for most surfaces, so a number of approximate methods exist. Rather than describe those in common use, which can be found elsewhere [1, 3, 4, 5], only the method chosen for this application will be discussed. This consists of estimating the fractal dimension from the form of the Fourier spectrum of the image surface, which is an accurate method commonly used for imagery [1].

Before proceeding, a few points about Fourier transforms and their properties as they relate to digital imagery will be discussed. A transform projects a function from one domain onto another. The Fourier transform maps a function in the space domain (x and y coordinates) onto the frequency domain (*ν*_{x} and *ν*_{y} coordinates). A corresponding inverse transform exists to accomplish the reverse. The frequencies (*ν*) are spatial frequencies, which characterize functions that oscillate in space, rather than temporal frequencies, which characterize functions oscillating in time. Thus, the Fourier transform represents a space-domain function as a sum of oscillatory functions (sines and cosines) each characterized by a frequency (number of oscillations per unit length). The form of the spatial frequency *ν* is

*ν* = 1/*λ* = *ω*/2π = *k*/2π

Here, *λ* is the wavelength of the oscillatory function, *ω* is the angular frequency (radians/length), and *k* is the wavenumber (2*π*/*λ*). The angular frequency and wavenumber are merely other common ways of expressing a frequency. In a 2-dimensional (2-D) image, the oscillatory functions can propagate in any direction in the x-y plane, so the spatial frequencies may be decomposed into x and y components, *ν*_{x} and *ν*_{y}, and the magnitude of a particular component is

|*ν*| = (*ν*_{x}^{2} + *ν*_{y}^{2})^{1/2}

In general, a function requires an infinite number of frequency components in its Fourier transform to represent it exactly. However, in a digital image, which is a discrete sampling of a 2-D image plane, only *m* × *n* samples (pixels) are involved, where *m* and *n* are the vertical and horizontal extents, in pixels, of the captured image. A discrete Fourier transform (DFT) is therefore performed, resulting in *m* × *n* frequency components. The extent of the DFT in the frequency domain is the rectangular box, centered on the origin, bounded by ±*ν*_{x-max} and ±*ν*_{y-max}, where the maximum frequency in either direction is the number of pixels in that direction (*m* or *n*) divided by twice the spatial extent of the image in that direction (width or height). An illustration of the digital arrays containing an image and its DFT appears in Figure 1.

Note: *m* and *n* are the numbers of rows and columns, respectively, in the stored arrays; square pixels are assumed.

**Figure 1. Geometric representation of a digital image and its discrete Fourier transform**

Each box in Figure 1 represents a pixel in the space domain (wherein it contains a grayscale value in the range from 0 to 255) and a frequency component in the DFT (wherein it contains an amplitude and phase value for that component). If the DFT array is indexed with *r* and *c* for rows and columns, then the values take the form

**Equation 1**

*F*_{rc} = *A*_{rc} *e*^{i*φ*_{rc}}

where *A*_{rc} is the amplitude of the *rc* frequency component, *φ*_{rc} is its phase, and *i* is the imaginary unit.

Recall the Euler identity,

*e*^{i*φ*} = cos *φ* + *i* sin *φ*,

and it is confirmed that the DFT represents oscillatory functions (sines and cosines). Since each element of the DFT contains two values, the DFT array can be thought of as two spectra: an amplitude spectrum and a phase spectrum. The amplitude spectrum indicates how strong a particular spatial frequency is represented in the image, relative to other frequencies. The phase spectrum indicates how each frequency component is shifted along its propagation axis (i.e., the placements of the nodes and antinodes), which is determined by the spatial distribution of features in the image.

In many applications involving Fourier transforms, one is interested in the magnitude squared of the amplitude spectrum, which is known as the power spectrum—a term linked to analysis of electrical circuits and signals where power content is proportional to the square of the amplitude of an oscillatory signal. It happens that the power spectrum of a fractal surface follows a particular form, which is simply related to the fractal dimension of the surface [1, 2]. A fractal surface has a power spectrum (*P*) that follows the relationship:

**Equation 2**

*P*(|*ν*|) = *c*|*ν*|^{−*β*}

where *c* is a constant and *β* is an exponent that is related to the fractal dimension of the image surface. Obviously, Equation 2 cannot hold at zero frequency since it would require an infinite power for the static (direct current [DC]) component of the DFT. (For imagery, the DC component of the DFT is the average grayscale level of the image.) And elsewhere, an actual power spectrum will deviate, sometimes significantly, from the smooth surface defined by Equation 2. But fractal surfaces and images of natural scenery will produce power spectra that follow the general form of Equation 2 and exhibit the characteristic |*ν*|^{−β} dependence.

Therefore, an estimate of the fractal dimension of a surface, or image, can be made by determining the two parameters, *c* and *β*, that characterize the power spectrum; the latter is not the fractal dimension exactly, but it is very simply related to it. The parameters *c* and *β* are computed from a least-squares fit of the power spectrum to the right-hand side of Equation 2. It is often simpler, however, to first take the log of both sides of Equation 2, giving:

ln *P*(|*ν*|) = ln *c* − *β* ln|*ν*|

or

**Equation 3**

ln *P*(|*ν*|) = *C* − *β* ln|*ν*|

where the constant ln *c* is replaced by *C*. Using the logarithmic form of Equation 3, the right-hand side defines a line with slope −*β* for scalar *ν*, or a conical surface with the same slope for 2-D vector *ν*. Now, *C* and *β* may be computed by any of a number of methods, taking care to exclude the |*ν*| = 0 component from the fit. In addition, the standard deviation (*σ*) of the power spectrum from the form of Equation 3 is recorded for later use.

As an example, consider the image shown in Figure 2, a grayscale picture from a grassy meadow. Mathematically, this image is a surface, with pixel coordinates in the width and height directions, and an altitude represented by the grayscale level (0 to 255). It is stored as a 2-D array the rows of which correspond to horizontal lines in the image, and columns of which are vertical lines. After taking the DFT of the image and performing the fit according to Equation 3, parameter values are obtained.

**Figure 2. Grayscale image from a meadow scene**

For the surface represented by Figure 2, *C* is determined to be 4.553, *β* is 1.108, and the standard deviation (*σ*) of ln *P*(|*ν*|) with respect to 4.553 − 1.108 ln|*ν*| is 1.309. At this point the objective of deriving a camouflage pattern from the fractal properties of an image may be entertained.
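The fit procedure of this section can be sketched as follows. This is an illustrative NumPy sketch, not the ARS code; the doubly-integrated noise surface stands in for a real image, and the specific parameter values it yields are not those quoted above.

```python
import numpy as np

# stand-in "image": doubly integrated white noise, a rough fractal-like surface
rng = np.random.default_rng(0)
img = rng.standard_normal((128, 128)).cumsum(axis=0).cumsum(axis=1)

F = np.fft.fft2(img)
P = np.abs(F) ** 2                       # power spectrum

# radial spatial-frequency magnitude |nu| for every DFT component
fy = np.fft.fftfreq(img.shape[0])
fx = np.fft.fftfreq(img.shape[1])
nu = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))

# least-squares fit of ln P = C - beta * ln|nu| (Equation 3), DC excluded
mask = nu > 0
M = np.column_stack([np.ones(mask.sum()), -np.log(nu[mask])])
(C, beta), *_ = np.linalg.lstsq(M, np.log(P[mask]), rcond=None)

# residual standard deviation of the spectrum about the fitted form
sigma = np.std(np.log(P[mask]) - M @ np.array([C, beta]))
print(f"C={C:.3f}, beta={beta:.3f}, sigma={sigma:.3f}")
```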

**1.4 Using Fractal Properties to Generate Fractal Camouflage Patterns**

The basic procedure is to generate an artificial, natural-log power spectrum using Equation 3 and the parameters *C* and *β* determined from the least-squares fit. For the DC component of the power spectrum, the DC value from the image power spectrum is adopted without modification. This ensures that the average grayscale level of the pattern will match that of the input image. At this point we have an *m* × *n* array representing the power spectrum of the fractal pattern; call it *P*_{f}.

Since the power spectrum of the image did not follow exactly the form of Equation 2 or Equation 3, but departed with a certain statistical probability, a similar sort of “noise” should be introduced to the artificial spectrum. Using a random number generator, create another *m* × *n* array with random values normally distributed about zero with a standard deviation of *σ*, as computed from the fit procedure. Set the value in the noise array that would correspond to the DC component of the power spectrum to zero. This ensures that no noise is added to the DC component of the power spectrum. Call the array *n*_{f}. To introduce the random noise and convert from a log power spectrum back to an amplitude spectrum, add the noise array to the artificial log power spectrum and convert the sum to an amplitude spectrum (*A*_{f}) according to Equation 4.

**Equation 4**

*A*_{f} = [exp(ln *P*_{f} + *n*_{f})]^{1/2}

The artificial amplitude spectrum (*A*_{f}) thus computed carries the same fractal properties and random character as the original image. Recall, however, that the DFT of the original image comprised both an amplitude spectrum and a phase spectrum, the latter of which has thus far been ignored. Since the phase spectrum encodes much of the detail of spatial feature distribution in an image, the input image’s phase spectrum might be preserved and combined with the artificial amplitude spectrum, as in Equation 1. The resulting array could be inverse transformed (IDFT), resulting in an artificial image with the correct fractal properties and the original image phase spectrum. This image appears in Figure 3.

**Figure 3. Artificial image generated from fractal and random characteristics of the input image (Figure 2), but with the phase spectrum of the original preserved**

Upon studying Figure 3, it is apparent that it closely resembles the input image, Figure 2. In fact, all that we have done to the original image is modify the power spectrum by conforming it to the |*ν*|^{–β} dependence, and added some random noise. And the output resembles a noisy version of the input image. It is not desired, however, to make replicas of the input imagery for use as camouflage patterns. Rather, the fractal and statistical characteristics of the input image should be reflected in the output, which should otherwise differ from the input just as all natural objects differ from one another.

The detailed distribution of image features can be removed from the artificial image by randomizing the phase spectrum attached to the artificial amplitude spectrum. To accomplish this, an *m* × *n* array is generated wherein each element is set to

*e*^{i*g*}

where *g* is a random number uniformly distributed on the interval 0 to 2π. Call this array Φ_{f}. Now perform an element-by-element multiplication of *A*_{f} and Φ_{f} and do an IDFT operation on the result. This yields the artificial image shown in Figure 4, which again has the correct fractal dimension and noise, plus a random distribution of image features.

**Figure 4. Fractal-based image derived from Figure 2 with a completely randomized phase spectrum**

Figure 4 is not strongly textured, at least in comparison to the input image. This is attributable to the low value of *β* (1.108) derived from the scene, which is to say that the scene does not exhibit strong fractal features. Values of *β* approaching 2 and greater are more indicative of fractal geometry and tend to produce a better textural match to the input image. Figure 5, below, has a fractal exponent *β* of 2.063. A fractal pattern generated with the input scene’s statistics accompanies the image.

**Figure 5. Image of rock field with fractal exponent *β* > 2.0 and the resulting fractal pattern**

To this point, a procedure has been developed for creating fractal patterns based on digital imagery. Fractal parameters can be computed for any image, regardless of the image surface’s fractal characteristics, or lack thereof. If the input image is weakly fractal, the resulting pattern will not likely provide a good textural match. If it is strongly fractal, the opposite should hold. Nonetheless, the fractal pattern will exhibit statistical self-similarity, meaning that its texture will remain stable over changes of scale. For camouflage, this means that a target view from long range should reveal coarse features, similar to those of the background at the viewing resolution; and views from closer range should reveal finer detail, just as the background (clutter) does at closer range. This is not the case with traditional “blob” camouflage, which imitates clutter only at long ranges.
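The single-channel procedure developed above can be sketched end to end. This is a hedged NumPy illustration, not the ARS implementation: the synthetic input image and function name are this sketch's own, and because Hermitian symmetry of the randomized spectrum is not enforced, the real part is kept after the inverse transform.

```python
import numpy as np

def fractal_pattern(img, seed=0):
    """Sketch of the grayscale procedure: fit C, beta, sigma from the image
    power spectrum, rebuild a noisy fractal spectrum, randomize the phase."""
    rng = np.random.default_rng(seed)
    F = np.fft.fft2(img)
    P = np.abs(F) ** 2

    fy, fx = np.fft.fftfreq(img.shape[0]), np.fft.fftfreq(img.shape[1])
    nu = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    mask = nu > 0

    # least-squares fit of ln P = C - beta ln|nu| (Equation 3), DC excluded
    M = np.column_stack([np.ones(mask.sum()), -np.log(nu[mask])])
    (C, beta), *_ = np.linalg.lstsq(M, np.log(P[mask]), rcond=None)
    sigma = np.std(np.log(P[mask]) - M @ np.array([C, beta]))

    # artificial log power spectrum plus zero-mean noise of deviation sigma
    lnPf = np.zeros_like(nu)
    lnPf[mask] = C - beta * np.log(nu[mask])
    noise = rng.normal(0.0, sigma, img.shape)
    noise.flat[0] = 0.0                      # no noise on the DC component
    Af = np.exp((lnPf + noise) / 2)          # amplitude = sqrt(power), Equation 4

    # attach a fully randomized phase spectrum and invert
    spec = Af * np.exp(1j * rng.uniform(0, 2 * np.pi, img.shape))
    spec.flat[0] = F.flat[0]                 # keep DC: mean gray level matches input
    # Hermitian symmetry is not enforced, so keep only the real part of the IDFT
    return np.fft.ifft2(spec).real

img = np.random.default_rng(1).standard_normal((64, 64)).cumsum(0).cumsum(1)
pattern = fractal_pattern(img)
print(pattern.shape)
```

Because the DC component is copied from the input spectrum, the pattern's mean gray level matches the input image exactly, as the text requires.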

**1.5 Multi-Channel and Color Pattern Generation**

Color images are encoded in at least three color channels. The most common format is red-green-blue (RGB). A pixel in a color image is specified by three levels (R, G and B) each of which is on a 0–255 scale for 24-bit encoding. Thus a color image results in at least three arrays, each of which can be a fractal surface. An example of the decomposition of color into separate channels is illustrated in Figure 6.

**Figure 6. A color image and its three color channels red, green and blue**

For each RGB channel, the fractal parameters and statistics can be computed and the pattern generation process performed independently. But since this involves randomization of the amplitudes and phases of the Fourier spatial frequencies, independent randomization de-correlates the spatial distribution of features and, thus, the resulting colors, which depend on the relative strengths of the three components. Figure 7 is an example of the consequences of independent randomization, where the normally-distributed noise was added to the logarithmic fractal power spectrum of each of the three color channels independently. Many of the colors appearing in the pattern are outside the gamut of the input image.

**Figure 7. Color image and color fractal pattern generated with independent randomization of the R, G and B components, resulting in many out-of-gamut colors**

The solution to color and feature de-correlation is to perform the randomizations on the fractal power spectra and phase spectra in a way that preserves the relationships between the three color components. This is accomplished by generating a single noise array for the power spectra, and adding it to each of the three color power spectra; and by preserving the original phase spectra of each color component, connecting them with their respective power spectra, and multiplying each by the same random phase array, which preserves the relative phases of the components. The result of dependent randomization is shown in Figure 8, which contains colors within the gamut of the input image.

**Figure 8. Color fractal pattern generated from Figure 7 in which the amplitude and phase spectra of each color component were randomized identically, preserving the relative amplitude and phase relationships of the Fourier spectra of the three components, and producing in-gamut colors**

So the handling of multi-channel or color imagery involves a few more steps than for single-channel inputs, but the key precaution is to apply the same randomization to each channel’s power spectrum, and a single randomization to each channel’s phase spectrum. The pattern is then assembled by performing an IDFT on each channel and combining them for display in a multi-channel pattern. A number of color fractal patterns have been generated from available imagery, and they appear in Figures 9 through 17.
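The shared-randomization idea can be sketched as follows, assuming NumPy and a synthetic RGB array. Function and variable names are illustrative, and scaling one shared unit-variance noise array by each channel's *σ* is this sketch's interpretation of "a single noise array" for the three power spectra.

```python
import numpy as np

def color_fractal_pattern(rgb, seed=0):
    """Sketch of the multi-channel procedure: one shared noise array and one
    shared random phase array are applied to all channels, preserving the
    relative amplitudes and phases (and hence the color gamut)."""
    rng = np.random.default_rng(seed)
    m, n, _ = rgb.shape

    # shared randomizations, generated ONCE and reused for every channel
    shared_noise = rng.normal(size=(m, n))
    shared_noise.flat[0] = 0.0
    shared_phase = np.exp(1j * rng.uniform(0, 2 * np.pi, (m, n)))
    shared_phase.flat[0] = 1.0

    fy, fx = np.fft.fftfreq(m), np.fft.fftfreq(n)
    nu = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    mask = nu > 0
    M = np.column_stack([np.ones(mask.sum()), -np.log(nu[mask])])

    out = np.empty_like(rgb, dtype=float)
    for ch in range(3):
        F = np.fft.fft2(rgb[:, :, ch])
        P = np.abs(F) ** 2
        (C, beta), *_ = np.linalg.lstsq(M, np.log(P[mask]), rcond=None)
        sigma = np.std(np.log(P[mask]) - M @ np.array([C, beta]))

        lnPf = np.zeros_like(nu)
        lnPf[mask] = C - beta * np.log(nu[mask])
        Af = np.exp((lnPf + sigma * shared_noise) / 2)

        # keep each channel's original phase, then apply the SAME random phase
        spec = Af * np.exp(1j * np.angle(F)) * shared_phase
        spec.flat[0] = F.flat[0]             # preserve each channel's mean level
        out[:, :, ch] = np.fft.ifft2(spec).real
    return out

rgb = np.random.default_rng(3).standard_normal((32, 32, 3)).cumsum(0).cumsum(1)
pattern = color_fractal_pattern(rgb)
print(pattern.shape)
```

Because every channel sees the same noise and the same phase rotation, the relative strengths of the R, G, and B components at each frequency are preserved, which is what keeps the output colors inside the input gamut.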

**Figure 9. Image from Everglades National Park (top) and resulting fractal camouflage pattern**

Note: the blue tones in the pattern are from inclusion of a portion of the lake in the lower left corner of the input image.

**Figure 10. Image from Acadia National Park (top) and resulting fractal camouflage pattern**

**Figure 11. Winter scene from Acadia National Park (left) and resulting fractal camouflage pattern**

**Figure 12. Summer scene from Acadia National Park (top) and resulting fractal camouflage pattern**

**Figure 13. Rocky beach from Acadia National Park and resulting fractal camouflage pattern**

**Figure 14. Mountain rocks and trees at Acadia National Park (top) and fractal camouflage pattern**

**Figure 15. Autumn scene # 1 at Acadia National Park (top) and fractal camouflage pattern**

**Figure 16. Autumn scene # 2 at Acadia National Park (top) and fractal camouflage pattern**

**Figure 17. Autumn ferns at Acadia National Park (left) and fractal camouflage pattern**

In all of the examples presented thus far, the output image is the same size and resolution as the input. It may be desired to generate patterns that are larger than the original image or that have a different aspect ratio. This is possible by using the fractal parameters to populate arrays of any dimension. However, once either dimension of the original image is exceeded, one must resort to interpolation in the spatial frequency domain, which is equivalent to extrapolation in the space domain. It may be easier simply to use imagery covering a larger area, at the cost of reduced resolution.

If the pixel size in the original image equates to three inches, for example, the fractal pattern pixels will be three inches. Should this prove too coarse for close viewing, extrapolations can be performed in the spatial frequency domain, which equate to interpolations in the space domain. Either or both of these techniques could be used to customize the size and resolution of the output camouflage pattern.

**2.0 REFERENCES**

**Fractals and Pattern Generation**

1. Andrews, Patrick R., Jonathan M. Blackledge, and Martin J. Turner. *Fractal Geometry in Digital Imaging*. New York: Academic Press, 1998.
2. Billock, Vincent A., Douglas W. Cunningham, and Brian H. Tsou. “What Visual Discrimination of Fractal Textures Can Tell Us About Discrimination of Camouflaged Targets.” Gold Canyon: Human Factors Issues in Combat Identification Workshop, August 24, 2009.
3. Mandelbrot, Benoit B. *The Fractal Geometry of Nature*. New York: W. H. Freeman & Company, 1983.
4. Pleshanov, V. S., A. A. Napryushkin, and V. V. Kibitkin. “Use of the Theory of Fractals in Image Analysis Tasks.” *Optoelectronics, Instrumentation and Data Processing* 46, no. 1 (2010): 70–78.
5. Saupe, Dietmar. “Random Fractals in Image Synthesis.” In *Fractals and Chaos*, 89–118. New York: Springer-Verlag, 1991.