Saturday, January 23, 2016

Formula 1 strategy and Nash equilibrium

At first sight, Formula 1 race strategy seems to be an ideal domain for the application of game-theory. There is a collection of non-cooperative agents, each seeking to anticipate the decisions of its competitors, and to choose a strategy which maximizes its pay-off. The immediate pay-off at each Grand Prix is championship points.

However, there's a subtlety of game-theory which needs to be appreciated before its most famous concept, that of Nash equilibrium, can be applied.

Let's begin with the game-theory. John Nash demonstrated that a non-cooperative n-player game, in which each player has a finite set of possible strategies, must have at least one point of equilibrium.

This equilibrium is a state in which each player's choice of strategy cannot be improved, given every other player's choice of strategy. In game-theoretic language, each player's pay-off is maximized, given every other player's choice of strategy.

In formal terms, there must be an n-tuple of strategies σ = (σ1,...,σn) in which the pay-off for each player, vi, is maximized:

vi(σ) = max vi(σ1,...,σn),   for i = 1,...,n

where the maximum is taken over all the player-i strategies, σi.

The set of strategies adopted by the teams at each Grand Prix should possess at least one such state of Nash equilibrium (irrespective of whether the competitors are capable of finding that optimal state). However, it's possible to define a simple and realistic scenario which, at first sight, undermines Nash equilibrium.

Suppose that a Ferrari is ahead of a Mercedes in the early laps of a race, but the Mercedes has a pace advantage. Suppose, however, that the pace delta between the cars is less than the minimum threshold for a non-zero probability of the Mercedes overtaking the Ferrari.

Now, for the sake of argument, suppose that due to aerodynamic interference from the wake of the car ahead, the Mercedes cannot follow closer than 1.5 seconds behind the Ferrari, and suppose that tyre degradation is sufficiently low that new tyres provide a 1-lap undercut worth less than 1 second. Even if Mercedes pit first, Ferrari can respond the next lap, and (assuming an error-free stop) will emerge still in the lead.

Clearly, if Mercedes are to beat Ferrari they will need to use a different strategy. Let's make this interesting by postulating that whilst a 1-stop strategy is the fastest 'deterministic' race, a 2-stop strategy is only a second or so slower.

Now, if the Mercedes switches to a 2-stop strategy, it will be out of sync with the Ferrari, will be able to circulate at its true pace, and will be able to beat the Ferrari if the Scuderia remain on a 1-stop. (For the sake of argument, we assume that there are traffic-free gaps into which the Ferrari can pit, without being delayed by other competitors).

However, if Ferrari anticipates this, and plans a 2-stop strategy, it will still win the race. If both cars are on the 2-stop strategy, Mercedes cannot utilise its superior pace.

However, however, if Mercedes anticipates that, it can win the race by sticking to the original 1-stop strategy...which Ferrari, again, can head off by reverting to the 1-stop. And so on, ad infinitum.

Clearly, there is no Nash equilibrium here. Each possible combination of strategies is such that at least one competitor can improve their pay-off by changing strategy, if the other competitor's strategy remains fixed. This structure is depicted graphically above. Each of the four cells represents a possible combination of 1-stop and 2-stop strategies. The pair of numbers in each cell represents the pay-off, in championship points, for Ferrari and Mercedes, respectively.

The coloured arrows indicate how one competitor can always improve their pay-off. For example, the top-left cell represents the case in which Ferrari and Mercedes both pursue a 1-stop strategy. The blue arrow reaching across to the top-right cell indicates that Mercedes can improve their pay-off by switching to a 2-stop strategy, if Ferrari remain wedded to the 1-stop. However, the downward red arrow in the top-right cell indicates that Ferrari can improve their pay-off by switching to a 2-stop if Mercedes remain committed to a 2-stop.

The problem here is that the strategies considered so far are what game-theorists call 'pure' strategies. Nash's theorem pertains not to pure strategies, but to probabilistic combinations of pure strategies, called 'mixed' strategies. If there are two possible pure strategies, A and B, a mixed strategy is one in which, for example, you resolve to follow strategy-A 30% of the time, and strategy-B 70% of the time, using a random number generator to enforce the probabilistic split.

A mixed strategy, then, is a rather abstract thing, and not necessarily something which represents human strategic thinking. People often have contingency plans, alternative strategies that they will adopt if certain events occur, but they rarely frame their original strategy in terms of probabilistic mixtures.

In terms of the Formula 1 strategy scenario defined above, there is a state of Nash equilibrium: if Ferrari and Mercedes both adopt the mixed strategy of pursuing a 1-stop with 50% probability and a 2-stop with 50% probability, then neither competitor has a mixed strategy which offers an improvement in terms of their average pay-off.
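The best-response cycle and the mixed equilibrium can both be checked by brute force. The sketch below encodes the scenario as a bi-matrix game; the pay-off figures (25 points for the win, 18 for second place) are my assumption, based on the standard F1 scoring system, and the function names are purely illustrative:

```python
# payoff[(f, m)] = (Ferrari points, Mercedes points), where each of f, m
# is 1 (1-stop) or 2 (2-stop). Per the scenario above, Ferrari wins when
# the strategies match; Mercedes wins when they differ. The point values
# (25 for a win, 18 for second) are an assumed illustration.
payoff = {
    (1, 1): (25, 18),
    (1, 2): (18, 25),
    (2, 1): (18, 25),
    (2, 2): (25, 18),
}

def pure_equilibria():
    """Return the pure-strategy profiles from which neither player can
    profitably deviate unilaterally."""
    equilibria = []
    for f in (1, 2):
        for m in (1, 2):
            f_pay, m_pay = payoff[(f, m)]
            f_best = all(payoff[(f2, m)][0] <= f_pay for f2 in (1, 2))
            m_best = all(payoff[(f, m2)][1] <= m_pay for m2 in (1, 2))
            if f_best and m_best:
                equilibria.append((f, m))
    return equilibria

print(pure_equilibria())  # [] -- no pure-strategy equilibrium

def expected_f(f, m_prob):
    """Ferrari's expected pay-off for pure strategy f, against a
    Mercedes mixture m_prob."""
    return sum(m_prob[m] * payoff[(f, m)][0] for m in (1, 2))

# at the 50/50 mixture, Ferrari is indifferent between its pure
# strategies (and by symmetry, so is Mercedes)
m_prob = {1: 0.5, 2: 0.5}
print(expected_f(1, m_prob), expected_f(2, m_prob))  # 21.5 21.5
```

The empty list is the absence of any cell from which neither player wants to deviate; the equal expected pay-offs at the 50/50 mixture express the indifference condition which characterises the mixed equilibrium.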

However, Formula 1 teams are unlikely to adopt such a coin-tossing approach to strategy, so a Grand Prix potentially offers an interesting case study of a non-cooperative n-player game far from Nash equilibrium.

Sunday, January 17, 2016

Pantheism and religion

Sandwiched between articles on human flatulence and the hazard posed by pigeon-droppings to electricity pylons, the 2015 Christmas/New Year edition of New Scientist contained an article by theologian Mary-Jane Rubenstein. The main thrust of the article is to draw parallels between some ancient philosophies and modern multiverse proposals in cosmology.

Specifically, Mary-Jane argues that the atomists were proposing a type of spatial multiverse, whilst the stoics were advocating a temporal one. Although it's stretching the point somewhat, the majority of the article is quite interesting.

However, as we reach the final paragraphs, Mary-Jane can be found citing a type of pantheism advocated by Nicholas of Cusa:

"Traditionally, Christian doctrine has taught that humans are made in the image of God. Cusa disrupted this idea by saying that the universe, not man, bears the image of God. And if humans are not particularly godlike, then God is not particularly humanoid. God doesn't look like a patriarch in the sky: he looks like the universe."

Now, pantheism is a rather strange notion. It's as if one has responded to the question 'Do unicorns exist as well as horses?' by replying 'Yes, they do, but they don't have horns, and can be identified with, or considered to resemble horses.'

But that's not the main problem with the article. The main problem comes in the final paragraph, where Mary-Jane concludes that because pantheisms "change what it means to be God...we don't need to chose between God and the multiverse...Is it possible that modern cosmology is asking us, not to abandon religion, but to think differently about what it is that gives life, what it is that's sacred, where it is we come from - and where we'll go?"

Whoa! Hold on a cotton-picking minute there, Mary-Jane. Perhaps there were some readers whose blood-flow was devoted more towards the stomach than the brain over the Christmas period, and under such conditions it might be possible to miss the sleight-of-hand here. Under most other conditions it's not too difficult to spot the sudden jump from the abstract metaphysical concept of pantheism to the introduction of religion.

The term 'religion' doesn't just entail a bundle of metaphysical concepts: it means a human institution; it means scripture, liturgy, a priesthood, a dogmatic moral code, the indoctrination of children, and the amplification of tribal behaviour.

That's rather more than pantheism suggests, I fear, and certainly not the answer to any of the questions posed by multiverse cosmology.

Newspaper journalists and the Met Office

It's been a relatively mild winter in Britain this year, and this has deprived newspaper journalists of their normal opportunity for hysterical exaggeration and over-reaction to wintry weather. However, temperatures have fallen this weekend, and, taking a cue from the Met Office's ludicrously patronising weather-warning system, the hyperbole has been flowing:

"Snow and ice sweep across Britain," yells The Guardian, claiming that "A 100-mile wide corridor of snow stretched from north-west Scotland to south-east England overnight." 

"Treacherous driving conditions as snow and ice alert covers more of Britain," shouts The Telegraph headline, "Drivers warned of hazardous conditions after mercury falls to -10C amid 100-mile snow corridor."

The Telegraph article, however, begins to equivocate its message after no more than a couple of sentences, alluding to "many Britons waking up to frosty conditions on Sunday."

Frosty conditions, eh? There's quite a difference between waking up to snowy conditions and waking up to frosty conditions. For a start, whilst people sometimes have to dig their car out of a snow-drift, it's somewhat rarer to dig your car out of a frost-drift.

Scanning further down the page, we find that the Met Office had previously said 'snow had been "expected to fall along a relatively narrow corridor, perhaps only 100 miles wide" and forecaster Sophie Yeomans said "that band of sleet and snow is staying over the country, but it is dying out".'

This reveals that the newspaper journalists have misunderstood the dimensions of the purported 'snow corridor'. The Met Office are using 100 miles as a diminutive term, not as an expansive term. 100 miles is quite a short distance in meteorological terms. The purported 'corridor of snow' is "only 100 miles wide." What's more, it is the width of the corridor which spans 100 miles, not its length.

If we actually scrutinise the shape of the snow corridor in the graphic supplied by the Met Office (below), we can see that its length is much greater than its width. It is much longer than 100 miles. If the journalists were seeking an impressive-sounding length scale to exaggerate the severity of the wintry conditions, they should have quoted its length, not its width. But that would have required additional effort. The Met Office have quoted 100 miles, and it's a nice round number, so that's the length-scale the newspapers are going to quote.

But just look at the length of that snow corridor. That's a lot of snow isn't it? We can tell it's a region of snowfall because there's a snowflake icon, and a sliding-car icon adjoined to the top of the yellow band.

Oh, but hold on, there's also a legend down below which tells us what the yellow shading actually means. It turns out that yellow means 'Be Aware'. Which is a useful piece of advice. Thanks for that. But what exactly does 'Be Aware' mean in this context?

Following the links on the Met Office website to their Weather Warning page, we discover the following definition:

Yellow: Be aware. Severe weather is possible over the next few days and could affect you. Yellow means that you should plan ahead thinking about possible travel delays, or the disruption of your day to day activities. The Met Office is monitoring the developing weather situation and Yellow means keep an eye on the latest forecast and be aware that the weather may change or worsen, leading to disruption of your plans in the next few days. 

So that's not a corridor of predicted snowfall; that's merely a corridor in which the Met Office recommend people should "plan ahead" and "keep an eye on the latest forecast."

In effect, then, the Met Office is saying the following: 'Things are possible, and they may affect you! Don't treat us as an occasional source of information; be dependent upon us; raise your anxiety levels when we tell you to. We are monitoring the developing situation; you need us.'

Hence, whilst The Telegraph article reports that "Parts of the North West were hit by snow overnight, ranging from a light dusting in Manchester to heavier snowfalls on the Pennines and rural Cheshire and Cumbria," we're subsequently informed "that band of sleet and snow is staying over the country, but it is dying out...There is a lot of rain in that but in parts of London there will be sleet falling as well."

So in other words, the story should really be:

'A band of precipitation will cause localised snowfall in the North-West, with sleet turning to rain in other regions.' 

But remember, keep watching and reading those weather forecasts!

Monday, January 11, 2016

Lotus or Shadow?

The February 2016 edition of RaceTech magazine has an interesting article on wind-tunnels by their F1 insider, 'Expert Witness'. However, there may be an error in the caption to one of the photos which accompanies the article, (below).

The caption claims that the photo depicts a Lotus wind-tunnel test from 1972. Which would be surprising, because one would expect a 1972 car to sport a much taller airbox. In fact, not only is the airbox wrong, but the nose doesn't look like a Lotus nose at all.

If I were pressed to identify the model, I would suggest that it is actually a version of Tony Southgate's Shadow DN8 (below), probably from 1976-1977.

Certainly, if it does turn out to be a Lotus wind-tunnel test from 1972, it opens a whole new time-travelling window on F1 espionage in the 1970s...

Saturday, January 09, 2016

The tortuosity of modern F1 circuits

In recent decades, Formula 1 circuits have tended to lose their character; flowing tracks, sculpted by the contours of the land, have been supplanted by clinical autodromes designed to maximise sponsorship exposure and minimise running costs. 

Perhaps surprisingly, it is soil physics which offers a means of quantifying this loss of flow. Specifically, we need to adapt a quantity called the tortuosity factor, which is used to analyse the permeability of soil to the flow of water.

As Daniel Hillel explains, "The actual length of the path traversed by an average parcel of liquid is greater than the soil column length L, owing to the labyrinthine, or tortuous, nature of the pore passages...Tortuosity can be defined as the average ratio of the actual roundabout path to the apparent, or straight, flow path," (p177, Fundamentals of Soil Physics, Academic Press, 1980).

So, let's calculate and compare the tortuosity of a corner complex on a traditional F1 circuit to that on a modern Hermann Tilke designed circuit. In particular, let's compare the Becketts sequence at Silverstone with the Turn 1/2/3 complex at Shanghai.

Courtesy of Google Maps, the distance between the entry and exit of the Becketts sequence, as the crow flies, is about 517m. The distance along the path of the track is about 600m. Hence the tortuosity of Becketts is T = 600/517 ≈ 1.2.

The distance between the entry to Turn 1, and Turn 4 at Shanghai, as the crow flies, is about 145m. The distance along the path of the track is about 650m. Hence, the tortuosity of the opening corner sequence at Shanghai is T = 650/145 ≈ 4.5.
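The calculation is simple enough to express directly. A minimal sketch, using the approximate distances quoted above:

```python
# Tortuosity of a corner sequence: the length of the path along the
# track, divided by the straight-line ('as the crow flies') distance
# between entry and exit. The distances are the approximate values
# quoted in the text, read off Google Maps.

def tortuosity(path_length_m, crow_flies_m):
    """Ratio of the actual roundabout path to the apparent, straight,
    flow path (Hillel's definition, adapted to circuits)."""
    return path_length_m / crow_flies_m

becketts = tortuosity(600, 517)   # Silverstone, Becketts sequence
shanghai = tortuosity(650, 145)   # Shanghai, Turns 1-4

print(round(becketts, 2))             # 1.16
print(round(shanghai, 2))             # 4.48
print(round(shanghai / becketts, 2))  # 3.86
```

The factor of 3.75 quoted below follows from the rounded figures 4.5 and 1.2; the ratio of the unrounded tortuosities is nearer 3.9.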

So, the tortuosity factor of a modern F1 corner complex can be as much as 3.75 times greater than that of a more traditional sequence. Which is a way of placing a number on how much F1 has lost its soul.

Wednesday, December 23, 2015

Tornados and the Y250 wing-tip vortex

Streamwise vortices occur when fluid spirals around an axis which points in the same direction as the overall direction of fluid flow. In particular, streamwise vortices are generated by aircraft wing-tips, and by the front-wing of a Formula 1 car at the inboard transition between the neutral central section and the inner tip of the main-plane and flap(s). The latter is the so-called Y250 vortex. Surprisingly, the method by which such streamwise vorticity is generated also plays a crucial role in the generation of atmospheric tornados.

Let's begin with the meteorology. A tornado is a funnel of concentrated vertical vorticity in the atmosphere. Most tornados are generated within supercell thunderstorms when the updraft of the storm combines with the horizontal vorticity generated by vertical wind shear. The updraft tilts the horizontal vorticity into vertical vorticity, generating a rotating updraft.

However, there are two distinct types of vertical wind shear: Unidirectional and directional. The former generates crosswise vorticity, whilst the latter generates streamwise vorticity.

When the wind shear associated with a storm is unidirectional, the updraft acquires no net rotation. The updraft raises the crosswise vorticity into a hairpin shape, with one cyclonically rotating leg, on the right as one looks downstream, and an anticyclonic leg on the left. Updrafts only acquire net cyclonic rotation when the horizontal vorticity has a streamwise component. (Diagrams above and below from St Andrews University Climate and Weather Systems website).

Specifically, cyclonic tornado formation requires that the wind veers with vertical height, (meaning that its direction rotates in a clockwise sense).

In effect, the flow of air through the updraft becomes analogous to flow over a hill (personal communication with Robert Davies-Jones): the flow into the updraft has cyclonic vorticity, and the flow velocity there reinforces the vertical velocity of the updraft; the downward flow on the other side, where the anticyclonic vorticity exists, partially cancels the vertical velocity of the updraft. Hence, the cyclonic part of the updraft becomes dominant.

Before we turn to consider wing-tip vortices, we need to recall the mathematical definition of vorticity, and the vorticity transport equation.

Let's start with some notation. In what follows, we shall denote the streamwise direction as x, the lateral (aka 'spanwise' or 'crosswise') direction as y, and the vertical direction as z. The velocity vector field U has components in these directions, denoted respectively as Ux, Uy, and Uz. There is also a vorticity vector field, whose components will be denoted as ωx, ωy, and ωz.

The vorticity vector field ω is defined as the curl of the velocity vector field:

ω = (ωx , ωy, ωz)

= (∂Uz/∂y − ∂Uy/∂z , ∂Ux/∂z − ∂Uz/∂x , ∂Uy/∂x − ∂Ux/∂y)
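For concreteness, the curl definition can be checked numerically. The sketch below samples an arbitrarily chosen velocity field U = (z, x, y), whose curl is (1, 1, 1) everywhere, on a small grid; the grid size and the field itself are assumptions for illustration only:

```python
import numpy as np

n = 16
x, y, z = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n),
                      np.linspace(0, 1, n), indexing='ij')
Ux, Uy, Uz = z, x, y          # an illustrative field whose curl is (1, 1, 1)

d = 1.0 / (n - 1)             # grid spacing; axes 0, 1, 2 are x, y, z
# finite-difference curl, term by term from the definition
wx = np.gradient(Uz, d, axis=1) - np.gradient(Uy, d, axis=2)
wy = np.gradient(Ux, d, axis=2) - np.gradient(Uz, d, axis=0)
wz = np.gradient(Uy, d, axis=0) - np.gradient(Ux, d, axis=1)

print(np.allclose(wx, 1), np.allclose(wy, 1), np.allclose(wz, 1))
# True True True
```

Because the chosen field is linear in the coordinates, the finite differences recover the analytic curl essentially exactly.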

We're also interested here in the Vorticity Transport Equation (VTE) for ωx, the streamwise component of vorticity. In this context we can simplify the VTE by omitting turbulent, viscous and baroclinic terms to obtain:

Dωx/Dt = ωx(∂Ux/∂x) + ωy(∂Ux/∂y) + ωz(∂Ux/∂z)

The left-hand side here, Dωx/Dt, is the material derivative of the x-component of vorticity; it denotes the change of ωx in material fluid elements convected downstream by the flow.

Now, for a racecar, streamwise vorticity can be created by at least two distinct front-wing mechanisms:

1) A combination of initial lateral vorticity ωy, and a lateral gradient in streamwise velocity, ∂Ux/∂y ≠ 0.

2) A vertical gradient in the lateral component of velocity, ∂Uy/∂z ≠ 0, (corresponding to directional vertical wind shear in meteorology).

In the case of the first mechanism, one can vary the chord, camber, or angle of attack possessed by sections of the wing to create a lateral gradient in the streamwise velocity ∂Ux/∂y ≠ 0. Given that ωy ≠ 0 in the boundary layer of the wing, combining this with ∂Ux/∂y ≠ 0 entails that the second term on the right-hand side in the VTE is non-zero, which entails that Dωx/Dt ≠ 0. Thus, the creation of the spanwise-gradient in the streamwise velocity skews the initially spanwise vortex lines until they possess a significant component ωx in a streamwise direction.

However, it is perhaps the second mechanism which provides the best insight into the formation of wing-tip vortices. As the diagram above illustrates for the case of an aircraft wing (G.A.Tokaty, A History and Philosophy of Fluid Mechanics), the spanwise component of the flow varies above and below the wing. This corresponds to a non-zero value of ∂Uy/∂z, and such a non-zero value plugs straight into the definition of the curl of the velocity vector field, yielding a non-zero value for the streamwise vorticity ωx:

ωx = ∂Uz/∂y − ∂Uy/∂z

Putting this in meteorological terms, looking from the front of a Formula 1 car (with inverted wing-sections, remember), the left-hand-side of the front-wing has a veering flow-field at the junction between the flap/main-plane and the neutral section. The streamlines are, in meteorological terms, South-Easterlies under the wing, veering to South-Westerlies above. This produces streamwise vorticity of positive sign.

On the right-hand side, the flow-field is backing with increasing vertical height z. The streamlines are South-Westerlies under the wing, backing to South-Easterlies above. This produces streamwise vorticity with a negative sign.
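The sign argument can be illustrated numerically. The sketch below assumes a conventional frame (x east, y north, z up) and a horizontal wind of constant speed whose direction rotates with height; for such a wind (u(z), v(z), 0), the vorticity reduces to (−∂v/∂z, ∂u/∂z, 0), and the streamwise component is its projection onto the local wind direction:

```python
import numpy as np

def streamwise_vorticity(theta):
    """theta: wind-direction angle (radians, maths convention) as a
    function of height z; a veering (clockwise-turning) wind has a
    direction angle which decreases with height."""
    z = np.linspace(0.0, 1.0, 200)
    u, v = np.cos(theta(z)), np.sin(theta(z))        # unit-speed wind
    wx, wy = -np.gradient(v, z), np.gradient(u, z)   # (-dv/dz, du/dz)
    return wx * u + wy * v                           # projection onto wind

veering = lambda z: np.pi / 2 - z   # direction rotates clockwise with height
backing = lambda z: np.pi / 2 + z   # direction rotates anticlockwise

print(streamwise_vorticity(veering).min() > 0)  # True: positive vorticity
print(streamwise_vorticity(backing).max() < 0)  # True: negative vorticity
```

For the veering profile the projection works out to a uniformly positive streamwise vorticity, and for the backing profile a uniformly negative one, matching the signs described above.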

Thus, we have demonstrated that the generation of the Y250 vortex employs the same mechanism for streamwise vorticity formation as that required for tornadogenesis.

Monday, December 21, 2015

The open-tailed box effect

The modern understanding of racecar aerodynamics holds that copious amounts of downforce can be produced by accelerating the airflow under the car, in effect turning the region between the underbody and ground plane into a mobile nozzle.

The Lotus 78 of 1977 famously introduced venturi profiles beneath the car, and sliding skirts to seal the low pressure area thereby created. However, it is less well-known that underbody skirts had fitfully appeared on various cars earlier in the decade. Moreover, it is slightly disconcerting to hear the explanations proffered by several F1 designers from the middle 1970s for the function of these devices.

Gordon Murray introduced inch-deep skirts on the underside of the 1975 Brabham BT44 in conjunction with an overall 'upturned saucer' design, and explains his thinking as follows:

"With any moving form you have a stagnation point where air meets it and decides how much is going to flow over, below or around it...I decided, instead of presenting some sort of parabolic-shaped bluff body to the air, I wouldn't give the air a chance." He sketches a triangular shape. "That way the stagnation point was there," he says, pointing to the leading edge of the triangle's base, which is very low to the ground. "So all the air had to go over the top and you had the minimum coming under the car," (F1 Magazine, May 2001, p140-141).

Gordon Coppuck, however, had already experimented with skirts on the McLaren M23:

"In 1974 at Dijon-Prenois, vertical plastic skirts around the under-periphery of the car were tried, but they quickly wore away on contact with the track. The idea was to exclude air from underneath the car and so minimise lift," (p49, McLaren M23, Ian Wagstaff, Haynes 2013). The skirts were fitted again to the M23 at some races in early 1976, this time provoking complaints from competitors such as Colin Chapman (!) and Ken Tyrrell.

Talk of minimising lift by forcing air over the top of the car seems misguided because the upper surface of a racecar is generally convex, and the air will tend to be accelerated by a convex surface, producing low pressure on the upper surfaces, somewhat counter to the overall objective.

Nevertheless, it seems that there actually was a beneficial effect to be had from partially excluding air from the underbody, and this is clearly explained by Ian Bamsey in his fantastic book The Anatomy and Development of the Sports Prototype Racing Car (Haynes, 1991):

"The [Shadow] DN8 had conventional wings and a flat bottom and, following the fashion of 1976, it was fitted with skirts along the side of its monocoque, these joined in a vee under the nose. Under certain conditions the skirts rubbed on the track and their general effect was to sweep the air aside, in snowplough fashion. Thus, the overall effect was not one of spatial acceleration of the underbody air, it was one of exclusion. The flow blockage allowed the forward migration of the naturally low pressure air at the back of the car into the skirt's exclusion zone. This was the principle of the so-called open tailed box. A box with the road forming its bottom and only its tail open will experience a pressure reduction within as it progresses along the track," (p59).

So, although the effect may be quite weak, it is possible to generate downforce by excluding air from the underbody. 

Sunday, December 20, 2015

The L'Oreal Women in Science initiative

"Much remains to be done with regard to gender balance in science. Most tellingly, women account for only 30% of the world’s researchers. There are still great barriers that discourage women from entering the profession and obstacles continue to block progress for those already in the field."

So complains the L'Oreal-UNESCO 'For Women in Science' initiative. Since 1998 this programme has "strived to support and recognize accomplished women researchers, to encourage more young women to enter the profession and to assist them once their careers are in progress," by means of awards, fellowships, and advertising campaigns declaring that 'Science Needs Women'.

In comparison, the plight of men employed in the nursing profession has received little attention. To place this in the type of quantitative context which should appeal to 'Women in Science', the UK Office for National Statistics compiles an Annual Survey of Hours and Earnings (ASHE), based upon a sample taken from HM Revenue and Customs' Pay As You Earn (PAYE) records. Amongst other information, this reveals the number of men and women employed in different professions. The 2015 results estimate that the number of men and women employed in nursing are as follows:

Women in nursing: 673,000

Men in nursing: 109,000

Hence, only 14% of nurses in the UK are men, a figure somewhat lower than the 30% of 'Women in Science' worldwide. This shocking gender imbalance suggests that men are systematically discouraged from entering the nursing profession, are discriminated against within the profession, and have their progress blocked within the field. 

Now, some people might argue that this is only natural because men have a tendency to be more aggressive and competitive than women, a characteristic which makes women rather more suited to the caring professions.

This, however, is merely one of the phony arguments used by the nursing matriarchy to preserve the pre-eminent status of women within the profession. Men have evolved by sexual selection to be more aggressive and competitive in order to make themselves more attractive to women, and thereby enhance their prospects of being chosen for mating. It is therefore women and their mating criteria which are ultimately responsible for the aggressive and competitive nature of men.

Hence, it is about time that L'Oreal expanded its concerns over professional gender imbalance, and initiated a range of awards and fellowships to assist the cause of Men in Nursing (MIN). If possible, the assistance of the BBC should be sought to promulgate a range of positive Male Nursing stereotypes within its programming; for example, all hospital scenes should feature male nurses in prominent roles, leading and directing their female colleagues.

But hold on: what's this on the L'Oreal website?

"More women scientists should also be able to obtain positions of responsibility, just like their male counterparts, so that future generations will have role models to inspire them. The current situation, however, indicates that, well into the third millennium, a considerable discrepancy exists between what society professes to believe and what we actually do."

The third millennium? The third millennium since what exactly? The 'Women in Science' will be able to tell you that the genus Homo has been around for approximately 1.8 million years, so that's about one thousand eight hundred millennia. Not three.

Perhaps we should only consider the period of time which has elapsed since Homo Sapiens made the transition from the hunter-gatherer lifestyle to agriculture and settlement. But that would still be about 12,000 years, four times the number of millennia that L'Oreal are willing to acknowledge.

It's the type of error one would expect of a cosmetics-oriented organisation, rather than a scientifically-oriented one. Perhaps, then, we shall have to cast our net more widely to find a suitable sponsor for MIN...

Tuesday, September 15, 2015

Tyrrell 008 and Thunderbird 2

Patrick Depailler at the entry to Mirabeau Bas, Monaco 1978
Perhaps unexpectedly, the 1978 Tyrrell 008 shares an aerodynamic concept with Thunderbird 2: they both employ forward-swept wings.

The conventional wisdom on such wings is that they induce a span-wise component to the wing-flow, directed towards the roots of the wing. This has two consequences:

(i) The strength of the wing-tip vortices is reduced, decreasing vortex-drag.

(ii) Yaw instability is increased. As the vehicle begins to yaw, the effective forward-sweep is increased on the outer wing, and the effective sweep is reduced on the inner wing. This further reduces the drag on the outer wing, and increases the drag on the inner wing, and this differential drag creates a torque which further increases the yaw angle.

So perhaps the Tyrrell 008 front-wing was designed to improve turn-in response.

Detailed information on the aerodynamic performance of Thunderbird 2 appears to have been lost when Century 21 Productions closed its studio on the Slough Trading Estate in late 1970.

Sunday, September 13, 2015

Tyre friction and self-affine surfaces

The friction generated by an automobile tyre is crucially dependent upon the roughness of the road surface over which the tyre is moving. The theoretical representation of this phenomenon developed by the academic community over the past 20 years has been largely predicated on the assumption that the road can be represented as a statistically self-affine fractal surface. The purpose of this article is to explain what this means, but also to question whether this assumption is in need of some generalisation.

To begin, we need to understand two concepts: the correlation function and the power spectrum of a surface.

Surfaces in the real world are not perfectly smooth; they're rough. Such surfaces are mathematically represented as realisations of a random field. This means that the height of the surface at each point is effectively sampled from a statistical distribution. Each realisation of a random field is unique, but one can classify surface types by the properties of the random field from which their realisations are drawn. For example, each sheet of titanium manufactured by a certain process will share the same statistical properties, even though the precise surface morphology of each particular sheet is unique.

Let us denote the height of a surface at a point x as h(x). The height function will have a mean <h(x)> and a variance. (Here and below, we use angular brackets to denote the mean value of the variable within the brackets). The variance measures the amount of dispersion either side of the mean. Typically, the variance is calculated as:

Var = <h(x)2> − <h(x)>2

Mathematically, the height at any pair of points, x and x+r, could be statistically independent. In this event, the following equation would hold:

<h(x)h(x+r)> = <h(x)>2

The magnitude of the difference between <h(x)h(x+r)> and <h(x)>2 therefore indicates the level of correlation between the height at points x and x+r. This information is encapsulated in the height auto-correlation function:

ξ(r) = <h(x)h(x+r)> − <h(x)>2

Now the auto-correlation function has an alter-ego called the power spectrum. This is the Fourier transform of the auto-correlation function. It contains the same information as the auto-correlation function, but enables you to view the correlation function as a superposition of waves with different amplitudes and wavelengths. Each of the component waves is called a mode, and if the power spectrum has a peak at a particular mode, it shows that the height of the surface has a degree of correlation at certain regular intervals.

Related to the auto-correlation function is the height-difference correlation function:

C(r) = <(h(x+r)−h(x))²>

This is essentially the variance of the height difference as a function of the separation r. It is a useful function to plot because, at each distance r from an arbitrary point x, it represents the difference between the overall variance and the auto-correlation function:

C(r) = 2(Var−ξ(r))

Which brings us to self-affine fractal surfaces. For such a surface, a typical height-difference correlation function is plotted below, (Evaluation of self-affine surfaces and their implications for frictional dynamics as indicated by a Rouse material, G.Heinrich, M.Kluppel, T.A.Vilgis, Computational and Theoretical Polymer Science 10 (2000), pp53-61).

Points only a small distance away from an arbitrary starting point x can be expected to have a height closely correlated with the height at x, hence C(r) is small to begin with. However, as r increases, so C(r) also increases, until at a critical distance ξ||, C(r) equals the variance to be found across the entire surface. Above ξ||, C(r) tends to a constant and ξ(r) tends to zero. ξ|| can be dubbed the lateral correlation length. In road surfaces, it corresponds to the average diameter of the aggregate stones.
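In practice, ξ|| can be read off a sampled profile as the separation at which C(r) reaches its plateau. Below is a minimal sketch using a synthetic surface whose correlation length is known by construction; the kernel width, threshold convention ((1 − 1/e) of the plateau), and sample size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Smoothed white noise: heights are correlated over roughly the kernel
# width, standing in for a surface with "stones" of a known size.
sigma = 5.0
k = np.arange(-30, 31)
kernel = np.exp(-k**2 / (2 * sigma**2))
kernel /= np.sqrt(np.sum(kernel**2))  # unit-variance output
h = np.convolve(rng.standard_normal(2**15), kernel, mode="same")

rs = np.arange(1, 100)
Cr = np.array([np.mean((h[r:] - h[:-r]) ** 2) for r in rs])

# Estimate the lateral correlation length as the first separation at
# which C(r) reaches (1 - 1/e) of its large-r plateau, 2*Var.
plateau = 2 * h.var()
xi_lat = rs[np.argmax(Cr >= (1 - 1 / np.e) * plateau)]
```

For this Gaussian-smoothed surface the crossing occurs near r = 2σ, i.e. around 10 samples, and above it C(r) flattens out at the plateau value.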

To understand what a self-affine fractal surface is, first recall that a self-similar fractal surface is a surface which is invariant under magnification. In other words, the application of a scale factor x → a⋅x leaves the surface unchanged.

In contrast, a self-affine surface is invariant if a separate scale factor is applied to the horizontal and vertical directions. Specifically, the scale factor applied in the vertical direction must be suppressed by a power between 0 and 1. If x represents the horizontal components of a point in 3-dimensional space, and z represents the vertical component, then it is mapped by a self-affine transformation to x → a⋅x and z → a^H⋅z, where H is the Hurst exponent. In the height-difference correlation function plotted above, the initial slope is equal to 2H, twice the value of the Hurst exponent.
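To make the scaling concrete, one can synthesise a statistically self-affine profile with a chosen Hurst exponent, and then recover that exponent from the small-r slope of log C(r) against log r. The spectral-synthesis recipe below (power spectrum proportional to f^−(2H+1) for a 1-D profile, with random phases) is a standard construction, and the chosen H = 0.7 is an arbitrary illustrative value.

```python
import numpy as np

rng = np.random.default_rng(3)

# Spectral synthesis of a statistically self-affine 1-D profile: a
# profile with Hurst exponent H has power spectrum S(f) ~ f^-(2H+1).
H = 0.7  # chosen arbitrarily for the demonstration
n = 2**16
freqs = np.fft.rfftfreq(n)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-(2 * H + 1) / 2)  # amplitude = sqrt(power)
phases = np.exp(2j * np.pi * rng.random(len(freqs)))
h = np.fft.irfft(amp * phases, n)

# For small r, C(r) ~ r^(2H): the log-log slope of the height-difference
# correlation function is twice the Hurst exponent.
rs = np.arange(1, 50)
Cr = np.array([np.mean((h[r:] - h[:-r]) ** 2) for r in rs])
slope = np.polyfit(np.log(rs), np.log(Cr), 1)[0]
H_est = slope / 2
```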

Note, however, that road surfaces are considered to be statistically self-affine surfaces, which is not the same thing as being exactly self-affine. If you zoomed in on such a surface with the specified horizontal and vertical scale-factors, the magnified subset would not coincide exactly with the parent surface. It would, however, be drawn from a random field possessing the same properties as the parent surface, hence such a surface is said to be statistically self-affine.

A yet further adaptation is necessary to make the self-affine model applicable to road surfaces. Roads are known to be characterised by two distinct length-scales: the macroscopic one determined by the size of aggregate stones, and the microscopic one determined by the surface properties of those stones, (see diagram below).

One attempt to adapt the self-affine model to road surfaces introduces two distinct Hurst exponents, one for the micro-roughness and one (purportedly) for the macro-roughness, as shown below, (Investigation and modelling of rubber stationary friction on rough surfaces, A.Le Gal and M.Kluppel, Journal of Physics: Condensed Matter 20 (2008)):

This, however, doesn't seem quite right. The macro-roughness of a road surface is defined by the morphology of the largest asperities in the road, the stone aggregate. Yet, as Le Gal and Kluppel state, a road surface only displays self-affine behaviour "within a defined wave length interval. The upper cut-off length is identified with the largest surface corrugations: for road surfaces, this corresponds to the limit of macrotexture, e.g. the aggregate size."

It's not totally clear, then, whether the macro-roughness of a road surface falls within the limits of self-affine behaviour, or whether it actually defines the upper limit of this behaviour.

So whilst the notion that a road surface is statistically self-affine appears, at first sight, to have been empirically verified by the correlation functions and power spectra taken of road surfaces, perhaps there's still some elbow-room to suggest a generalisation of this concept.

For example, consider mounded surfaces. These are surfaces in which there are asperities at fairly regular intervals. In the case of road surfaces, this corresponds to the presence of aggregate stones at regular intervals. Such a surface resembles a self-affine surface in the sense that it has a lateral correlation length ξ||. However, there is an additional length-scale λ defining the typical spacing between the asperities, as represented in the diagram below, (Evolution of thin film morphology: Modelling and Simulations, M.Pelliccione and T-M.Lu, 2008, p50).

In terms of a road surface, whilst ξ|| characterizes the average size of the aggregate stones, λ characterizes the average distance between the stones.

In terms of the height-difference correlation function C(r), a mounded surface resembles a self-affine surface below the lateral correlation length, r < ξ||. However, above ξ||, where the self-affine surface has a constant profile for C(r), the profile for a mounded surface is oscillatory (see example plot below, ibid. p51). Correspondingly, the power spectrum for a mounded surface has a peak at wavelength λ, where no peak exists for a self-affine surface.

The difference between a mounded surface and a genuinely self-affine surface is something which will only manifest itself empirically by taking multiple samples from the surface. Individual samples from a self-affine surface will show oscillations in the height-difference correlation function above the lateral correlation length, but the oscillations will randomly vary from one sample to another. In contrast, the samples from a mounded surface will have oscillations of a similar wavelength, (see plots below, from Characterization of crystalline and amorphous rough surface, Y.Zhao, G.C.Wang, T.M.Lu, Academic Press, 2000, p101).
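The contrast can be demonstrated with a toy model: Gaussian bumps of similar width placed at jittered but regular intervals. For such a profile, C(r) dips back down near the mound spacing λ, where heights become correlated again, and the power spectrum peaks at the corresponding wavelength. Every parameter below (spacing 40 samples, bump width 5, jitter of ±3) is a hypothetical choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "mounded" profile: Gaussian bumps of width ~5 placed every
# lam = 40 samples with a little jitter, plus fine-scale noise.
n = 2**14
lam = 40
x = np.arange(n)
h = 0.1 * rng.standard_normal(n)
base = np.arange(lam // 2, n, lam)
centres = base + rng.integers(-3, 4, size=base.size)
for c in centres:
    h += np.exp(-(x - c) ** 2 / (2 * 5.0**2))

# Height-difference correlation: oscillatory above the bump width,
# dipping near r = lam where heights are correlated again.
rs = np.arange(1, 3 * lam)
Cr = np.array([np.mean((h[r:] - h[:-r]) ** 2) for r in rs])

# Power spectrum: peaks at a wavelength close to the mound spacing.
power = np.abs(np.fft.rfft(h - h.mean())) ** 2
peak_wavelength = 1 / np.fft.rfftfreq(n)[np.argmax(power)]
```

A genuinely self-affine surface would show neither the dip in C(r) at r = λ nor the spectral peak; averaged over many samples, its C(r) stays flat above ξ||.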

Conceptually, what's particularly interesting about mounded surfaces is that they're generalisations of the self-affine surfaces normally assumed in tyre friction studies. Below the lateral correlation length-scale ξ||, a mounded surface is self-affine (M.Pelliccione and T-M.Lu, p52). One can say that a mounded surface is locally self-affine, but not globally self-affine. Note that whilst every globally self-affine surface is locally self-affine, not every locally self-affine surface is globally self-affine.

A self-affine road surface will have aggregate stones of various sizes and separations, whilst a mounded road surface will have aggregate stones of similar size and regular separation.

In fact, one might hypothesise that many actual road surfaces in the world are indeed locally self-affine, but not globally self-affine. For this to be true, it is merely necessary for there to be some regularity in the separation of aggregate within the asphalt. If the distance between aggregate stones is random, then a road surface can indeed be represented as globally self-affine. However, if there is any regularity to the separation of aggregate, then the surface will merely be locally self-affine. If true, then existing academic studies of tyre friction have fixated on a special case which is a good first approximation, but which does not in general obtain.