Wednesday, June 23, 2010

Sources of turbulence in F1

Whilst this season's F1 World Championship is shaping up into a fascinating pentahedral battle between Jenson Button, Lewis Hamilton, Sebastian Vettel, Mark Webber, and Fernando Alonso, the sport's underlying aerodynamic problems remain. The turbulent wake created by a Formula 1 car, and the consequent loss of downforce suffered by a following vehicle, continue to militate against good racing. The recommendations of the FIA's Overtaking Working Group, implemented for the 2009 season, were intended both to reduce downforce, and to reduce the sensitivity of a following car's downforce-generating devices to turbulence. These recommendations were, of course, promptly undermined by the development of double and multi-deck diffusers. Hence, the regulations for the 2011 season seek to improve the opportunities for overtaking by banning multi-deck diffusers, and by introducing driver-adjustable rear wings.

Perhaps, however, the regulations could go still further, and to this end it's worth noting that the rear end of a Formula 1 car generates turbulence by several different mechanisms. Any wing profile will, of course, generate a turbulent wake, but more generally, any surface will possess a boundary layer, and when that boundary layer separates, it will inevitably create turbulence. In addition, it may be that modern F1 aerodynamics create some degree of Kelvin-Helmholtz instability. If two parallel adjacent layers of airflow have different velocities, then the velocity shear at the interface between them will induce such an instability. The different levels of a multi-deck diffuser may well create some degree of Kelvin-Helmholtz instability, unless they discharge their airflows at exactly the same velocity.
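
As a rough illustration of the shear mechanism, linear stability theory gives, for two incompressible layers of equal density meeting at a thin interface, a Kelvin-Helmholtz growth rate proportional to the wavenumber of the perturbation and to the velocity difference between the layers. The Python sketch below evaluates this textbook result; the diffuser exit velocities are purely illustrative assumptions, not measured values.

    import math

    def kh_growth_rate(k, u1, u2):
        # Linear growth rate (1/s) of a Kelvin-Helmholtz perturbation of
        # wavenumber k (rad/m) at a vortex-sheet interface between two
        # equal-density layers moving at u1 and u2 (m/s): sigma = k*|u1 - u2|/2.
        return 0.5 * k * abs(u1 - u2)

    # Hypothetical exit velocities for two diffuser decks (illustrative only).
    u_upper, u_lower = 40.0, 55.0
    for wavelength in (0.01, 0.05, 0.20):   # perturbation wavelengths in metres
        k = 2.0 * math.pi / wavelength
        print(f"lambda = {wavelength:.2f} m: growth rate = {kh_growth_rate(k, u_upper, u_lower):.0f} per second")

In this idealisation the shortest wavelengths grow fastest, which is why a thin shear layer rapidly rolls up into small vortices; in reality the finite thickness of the shear layer cuts off the growth at very small scales.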

Note that this is a different source of turbulence from that created by wing section profiles, which induce turbulence because they require a circulatory airflow component to operate. To understand this, consider the idealised situation where the airflow around a wing is a superposition of (i) a uniform 'freestream' flow from left to right, where the streamlines are parallel straight lines, and (ii) a pure circulatory flow, where the streamlines are anti-clockwise concentric circles.

When the velocity vector fields of the uniform and circulatory flows are summed at each point, the anti-clockwise circulation adds to the freestream velocity below the wing, and subtracts from it above. Hence, the airflow beneath the wing will be faster than that above, and in accordance with the Bernoulli principle, the pressure beneath the wing will be lower than that above. This pressure differential causes a net downward force. The presence of circulation is also consistent with the use of Newton's third law ('action equals reaction') to explain the creation of downforce, because the circulatory flow adds an upward component to the airflow in the wake of the wing, corresponding to the upward deflection of air, and a downward reaction force.
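
The link between circulation and force is quantified by the Kutta-Joukowski theorem: the force per unit span on the wing has magnitude ρVΓ, where ρ is the air density, V the freestream speed, and Γ the circulation. A minimal sketch, with purely illustrative numbers:

    def kutta_joukowski_force_per_span(rho, v_freestream, circulation):
        # Kutta-Joukowski theorem: force per unit span (N/m) = rho * V * Gamma.
        # For the anti-clockwise circulation described above, the force on the
        # inverted wing is directed downwards, i.e. it is downforce.
        return rho * v_freestream * circulation

    # Illustrative values only: sea-level air density (1.225 kg/m^3), a 50 m/s
    # freestream, and a hypothetical circulation of 25 m^2/s round the section.
    print(kutta_joukowski_force_per_span(1.225, 50.0, 25.0), "N per metre of span")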

If multi-deck diffusers do indeed induce Kelvin-Helmholtz instability, then whilst this particular source of turbulence will be eliminated next year, the teams will continue to use the airflow over the rear deck and beam wing to help pull the airflow out of a single diffuser, and these adjacent airflows presumably have different velocities. If so, the velocity shear between them will remain a source of Kelvin-Helmholtz instability, and will continue to militate against overtaking.

So why not ban beam wings?

Wednesday, June 09, 2010

Initial thoughts on turbulence

Whilst a general theory of turbulence continues to elude mathematical physics, the phenomenon can actually be characterised in rather simple terms:

Turbulence is an intermediate state of a fluid between laminar flow and thermodynamic equilibrium.

Thermodynamic equilibrium is the state of maximum entropy, the state in which no information is embedded in the fluid. In such a state, all the energy that may once have been carried by the streamlines of the fluid has been dissipated into thermal energy, the random heat energy of the molecules in the fluid. The direction of motion of the molecules in thermodynamic equilibrium is isotropically distributed, and the speed of motion of the molecules follows the Maxwell-Boltzmann distribution.
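
For reference, the Maxwell-Boltzmann speed distribution for molecules of mass m at temperature T is f(v) = 4π(m/2πkT)^(3/2) v² exp(−mv²/2kT). The sketch below evaluates it; the choice of nitrogen at room temperature is merely illustrative.

    import math

    K_B = 1.380649e-23   # Boltzmann constant, J/K

    def maxwell_boltzmann_pdf(v, mass, temperature):
        # Equilibrium probability density of molecular speed v (m/s):
        # f(v) = 4*pi*(m/(2*pi*k*T))^(3/2) * v^2 * exp(-m*v^2/(2*k*T)).
        a = mass / (2.0 * K_B * temperature)
        return 4.0 * math.pi * (a / math.pi) ** 1.5 * v * v * math.exp(-a * v * v)

    # Illustrative case: a nitrogen molecule (4.65e-26 kg) at 300 K.
    m_n2, temp = 4.65e-26, 300.0
    v_peak = math.sqrt(2.0 * K_B * temp / m_n2)   # most probable speed
    print(f"most probable speed ~ {v_peak:.0f} m/s")
    print(f"f(v) at the peak ~ {maxwell_boltzmann_pdf(v_peak, m_n2, temp):.2e} s/m")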

In contrast, laminar flow is a low-entropy state of a fluid, in which the fluid carries both directional information, and speed of motion information, defined by the streamlines and trajectories of the fluid flow velocity field.

Turbulent flow is intermediate between laminar flow and thermodynamic equilibrium because it constitutes a flow regime in which the directional information of laminar flow has been degraded, but the speed of motion information has been at least partially retained. In this respect, turbulent flow is defined by its possession of the following two characteristics:

(i) Chaotic motion.
(ii) Vorticity.

In chaotic motion, the distance between a pair of particle trajectories which are initially close together diverges exponentially as a function of time. This property destroys the 'parallel' nature of the particle trajectories in laminar flow.
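
To make this divergence concrete, the sketch below integrates two initially nearby trajectories of the Lorenz system, a standard toy model of chaos chosen here purely for illustration (it is not a model of a turbulent wake), and prints their growing separation:

    def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # One forward-Euler step of the Lorenz equations (classic parameters).
        x, y, z = state
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    a = (1.0, 1.0, 1.0)
    b = (1.0 + 1e-8, 1.0, 1.0)   # second trajectory, initially 1e-8 away
    dt = 0.001
    for step in range(1, 30001):
        a, b = lorenz_step(a, dt), lorenz_step(b, dt)
        if step % 10000 == 0:
            d = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
            print(f"t = {step * dt:4.0f}: separation = {d:.3e}")

The separation grows by many orders of magnitude before saturating at the size of the attractor.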

'Vorticity' is the fancy word for rotation in a fluid, and turbulent flow is typically characterised by a cascade of vortices of different sizes. Each vortex transfers its angular momentum to vortices of a smaller size, which then, in turn, transfer their angular momentum to yet smaller vortices. (In the case of turbulent aerodynamics, one can think of the cascade as a type of pneumatic clockwork mechanism). The smallest vortices are sufficiently small that the viscosity of the fluid is able to transform their rotational energy into heat energy.
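
In the standard Kolmogorov picture of this cascade, which the paragraph above paraphrases, viscosity ν terminates the cascade at the length scale η = (ν³/ε)^(1/4), where ε is the rate at which energy is dissipated per unit mass of fluid. A rough sketch; the dissipation rate assumed below is a guess for illustration, not a measurement.

    def kolmogorov_length(nu, epsilon):
        # Kolmogorov microscale eta = (nu^3 / epsilon)^(1/4): the size of the
        # smallest eddies, at which viscosity turns rotational energy into heat.
        return (nu ** 3 / epsilon) ** 0.25

    nu_air = 1.5e-5   # kinematic viscosity of air, m^2/s
    epsilon = 100.0   # assumed dissipation rate, m^2/s^3 (illustrative only)
    print(f"smallest eddies ~ {kolmogorov_length(nu_air, epsilon) * 1e6:.0f} micrometres")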

Chaotic motion and vorticity both clearly degrade the directional information carried by laminar fluid flow. However, the distribution of particle speeds carried by the laminar flow can be partially preserved in turbulent motion. Hence, it is in this sense that turbulent motion is an intermediate state between laminar flow and thermodynamic equilibrium.

Friday, June 04, 2010

Axioms for the many-worlds interpretation

Quantum theory is conventionally thought to be basis-independent. The state of a quantum system can be represented by a vector Ψ in a special type of vector space, called a Hilbert space H, and a basis is simply a collection of vectors {ψi} which enables any element of H to be decomposed as a linear combination of those basis vectors:

Ψ = c1ψ1 + ⋯ + cnψn.

The elements of the general linear group GL(H) map one basis into another, and because the theory is basis-independent, the same quantum state Ψ can be expressed as a different linear combination in a different basis {ψi'}:

Ψ = c1'ψ1' + ⋯ + cn'ψn'.

In this sense, quantum theory can be said to be a GL(H)-invariant theory.
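
A small numerical illustration of this basis-independence, using a finite-dimensional stand-in for H (the state vector and the invertible matrix below are arbitrary choices):

    import numpy as np

    # The same state vector, expressed first in the standard basis of C^2.
    psi = np.array([3.0 + 0j, 4.0 + 0j])

    # An arbitrary invertible matrix g in GL(2, C) maps the standard basis
    # {e1, e2} to a new basis {g e1, g e2}.
    g = np.array([[1.0, 1.0],
                  [0.0, 2.0]], dtype=complex)

    # The coefficients of the SAME vector psi relative to the new basis
    # solve the linear system g c' = psi.
    c_new = np.linalg.solve(g, psi)

    # Reconstructing psi from the new coefficients recovers the original state.
    print(np.allclose(g @ c_new, psi))   # True: one state, two descriptions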

The many-worlds interpretation of quantum theory, however, suggests that: (a) there is a process called decoherence which selects a preferred basis; and (b) the universe splits into the branches selected by decoherence. Thus, whilst quantum theory per se does not identify a branching structure for the universe, the many-worlds interpretation does. A measurement-like interaction, as a special case of decoherence, can be said to extrude a collection of branches from a GL(H)-invariant structure, much like a rose-bush emerging from a thicket of brambles.

To elaborate, let us attempt to define some axioms for the many-worlds interpretation of quantum theory:

(i) Quantum theory is fundamental and universal.

(ii) A pure quantum state provides a maximal specification of the state (or history, in the Heisenberg picture) of a physical system.

(iii) Each type of physical system is represented by a unitary representation of the local space-time symmetry group on a Hilbert space H.

(iv) The time evolution of a physical system in a local reference frame is represented by a continuous one-parameter group of unitary linear transformations U(t):H → H of the Hilbert space. This corresponds to the representation of the time-translation subgroup of the local space-time symmetry group.

(v) The interaction Hamiltonian between a macroscopic system and its environment is such that any macroscopic observable commutes with the interaction Hamiltonian.

(vi) Approximate GL(H) symmetry-breaking selects a preferred basis in the Hilbert space. Given a superposition of macroscopically distinguishable eigenstates of a macroscopic observable, the interaction between the macroscopic system and its environment is such that the reduced state of the macroscopic system evolves very rapidly towards a state which is empirically indistinguishable from a mixture of the states which were initially superposed. In other words, the eigenbasis of the macroscopic observable almost diagonalises the reduced density operator. This process is referred to as decoherence. In effect, a preferred basis is selected by the interaction Hamiltonian. Given that, once a reference basis is fixed, there is a one-one mapping between the bases of a Hilbert space and the elements of the general linear group GL(H), decoherence approximately breaks the GL(H) symmetry of quantum theory. (A toy numerical illustration of the suppression of the off-diagonal terms is sketched after this list of axioms.)

(vii) Each decohering macroscopic state can be treated as a Gaussian wave-packet, with mean position 〈x〉 and mean momentum 〈p〉. A free Gaussian wave-packet initially minimises the position-momentum uncertainty product, i.e., ΔxΔp = ℏ/2. By virtue of being a wave-packet, the mean 〈x〉 of the position probability distribution will move with a velocity equal to the mean 〈p〉/m of the velocity probability distribution. A free Gaussian wave-packet is such that Δp remains constant, but Δx increases with time, a process referred to as the spreading of the wave-packet. Macroscopic Gaussian wave-packets, however, are constrained from spreading by the continual interaction of the macroscopic system with its environment. (The spreading formula is evaluated in the second sketch after this list.)

(viii) For each measurement-like interaction, the universe branches, and it is the branching which transforms potentiality into actuality. The branches are those selected by decoherence, and each branch realises one and only one of the states in the mixed state produced by decoherence.

(ix) The squared moduli of the complex amplitudes in the initial superposition correspond to the relative frequencies with which the different outcomes occur in most branches of the universe. (See the third sketch below.)
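
First, a toy numerical illustration of axiom (vi): a two-state system stands in for the macroscopic system, and the decay of the off-diagonal terms of its reduced density matrix is put in by hand, with an assumed exponential law and decoherence time; in a realistic model this decay would be derived from the interaction Hamiltonian.

    import numpy as np

    def reduced_density_matrix(c1, c2, t, tau_dec):
        # Reduced state of a two-state system initially in the superposition
        # c1|1> + c2|2>. Environmental decoherence suppresses the off-diagonal
        # ('interference') terms; the factor exp(-t/tau_dec) is assumed here.
        damping = np.exp(-t / tau_dec)
        return np.array([[abs(c1) ** 2, c1 * np.conj(c2) * damping],
                         [c2 * np.conj(c1) * damping, abs(c2) ** 2]])

    c1, c2 = 1 / np.sqrt(2), 1j / np.sqrt(2)
    for t in (0.0, 1.0, 10.0):
        rho = reduced_density_matrix(c1, c2, t, tau_dec=1.0)
        print(f"t = {t:4.1f}: |off-diagonal term| = {abs(rho[0, 1]):.4f}")

As t grows, the reduced state becomes empirically indistinguishable from the mixture, which is the sense in which the eigenbasis is 'preferred'.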
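
Second, an illustration of axiom (vii): for a free Gaussian wave-packet the standard result is Δx(t) = Δx₀√(1 + (ℏt/2mΔx₀²)²). The numbers below are illustrative; the point is that spreading is already negligible for macroscopic masses, before environmental monitoring is even taken into account.

    import math

    HBAR = 1.054571817e-34   # reduced Planck constant, J*s

    def packet_width(t, mass, dx0):
        # Width of a free Gaussian wave-packet after time t (s):
        # dx(t) = dx0 * sqrt(1 + (hbar*t / (2*m*dx0^2))^2).
        return dx0 * math.sqrt(1.0 + (HBAR * t / (2.0 * mass * dx0 * dx0)) ** 2)

    # Illustrative cases: an electron, and a one-microgram dust grain, each
    # prepared with an initial width of 1e-10 m.
    for name, mass in (("electron", 9.109e-31), ("dust grain", 1.0e-9)):
        print(f"{name}: width after 1 s = {packet_width(1.0, mass, 1e-10):.3e} m")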
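
Third, a toy simulation of axiom (ix): outcomes of repeated measurement-like interactions are sampled with probabilities given by the squared moduli of the amplitudes, and the resulting relative frequencies are compared with |ci|². The amplitudes chosen are arbitrary.

    import random

    # Arbitrary illustrative amplitudes for a three-outcome superposition.
    amplitudes = [complex(0.6, 0.0), complex(0.0, 0.48), complex(0.64, 0.0)]
    weights = [abs(c) ** 2 for c in amplitudes]
    assert abs(sum(weights) - 1.0) < 1e-9   # normalisation check

    # Sample many outcomes with Born-rule probabilities.
    trials = 100_000
    counts = [0, 0, 0]
    for _ in range(trials):
        counts[random.choices(range(3), weights=weights)[0]] += 1

    for i, w in enumerate(weights):
        print(f"outcome {i}: |c|^2 = {w:.4f}, relative frequency = {counts[i] / trials:.4f}")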

Wednesday, June 02, 2010

A frozen universe?

Philosopher of physics Craig Callender discusses the arguments for considering time to be an illusion in the June edition of Scientific American. Particularly striking is the following analogy Callender draws between time and money:

We might describe the variation in the location of a satellite around Earth in terms of the ticks of a clock in my kitchen, or vice versa. What we are doing is describing the correlations between two physical objects, minus any global time as intermediary...Instead of saying a baseball accelerates at 20 meters per second per second, we can describe it in terms of the change of a glacier...Time becomes redundant. Change can be described without it.

This vast network of correlations is neatly organized, so that we can define something called "time" and relate everything to it, relieving ourselves of the burden of keeping track of all those direct relations...Money, too, makes life easier than negotiating a barter transaction every time you want to buy coffee. But it is an invented placeholder for the things we value, not something we value in and of itself. Similarly, time allows us to relate physical systems to one another without trying to figure out how a glacier relates to a baseball. But it, too, is a convenient fiction that no more exists fundamentally in the natural world than money does.

In terms of direct relations between physical objects, one could say that x generations of bacteria reproduce in one's intestine for every rotation of the Earth, and one could exchange y cups of coffee for an iPod. In terms of more abstract concepts, in the first case one could say how many seconds it takes for one rotation of the Earth, and how many seconds it takes for the bacteria in one's intestine to reproduce, and in the second case one could express the value of a cup of coffee in pounds, and the value of an iPod in pounds.

However, whilst it might well be possible to describe change without time, a static universe is a universe without time or change; since eliminating time does not thereby eliminate change, the eliminability of time does not entail that the universe is static. Consider again the economic analogy. Money is a common means of expressing the relative values of different goods and services. If we refer to goods and services as economic objects, then money can be said to be an abstraction from the network of direct relative values of all the various pairs of economic objects. Time, by analogy, is a common means of expressing the relative change of different pairs of physical objects.

If time is to physical objects as money is to economic objects, then it must be an abstraction from a network of direct relations between pairs of physical objects. And what is that direct relation, if it isn't the relative amount of change? Conversely, relative change is to time as relative value is to money. The notion of money makes no sense without the concept of value, and the notion of time makes no sense without the concept of change.

As Callender asserts, [relative] change can be described without time, just as one can imagine an economy which operates without money. However, an economy without money is clearly not an economy in which economic objects have no value, and a timeless universe is not necessarily a universe without change. To eliminate change, and to reduce it to mere correlations between variables, an independent argument is required.

Here, Callender turns to canonical quantum gravity, in which the wave-function of the universe is represented by an apparently time-independent solution to the Wheeler-DeWitt equation. Schematically, the Wheeler-DeWitt equation is a constraint of the form ĤΨ = 0: in contrast with the time-dependent Schrödinger equation, no time parameter appears in it at all. It is this fact which has been the primary spur behind the modern arguments for a static universe. To reconcile the time-independence of the wave-function of the universe with our perception of change, the concept of intrinsic time has been proposed.

The wave function Ψ in quantum theory is considered to be a function of various degrees of freedom: Ψ(x1,...,xj,...,xn). (In quantum cosmology, there are an infinite number of such degrees of freedom, but to keep things simple, let us suppose that there are only a finite number.) The idea of intrinsic time is to identify at least one degree of freedom xj which behaves like a clock, and can be used as a surrogate time variable. Thence, one can denote xj as t, and treat the wave-function as a time-dependent function Ψt(x1,...,xj-1,xj+1,...,xn) of the remaining degrees of freedom.
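
A minimal sketch of this relabelling, with a purely illustrative two-variable wave function (the Gaussian form, and the choice of x2 as the clock, are assumptions of the example):

    import math

    def psi(x1, x2):
        # A time-independent wave function of two degrees of freedom; the
        # Gaussian correlation between x1 and x2 is purely illustrative.
        return math.exp(-(x1 - x2) ** 2)

    def psi_at(t):
        # Intrinsic time: single out x2 as the clock variable, and regard the
        # result as a one-parameter family of wave functions of x1 alone.
        return lambda x1: psi(x1, t)

    # The 'state of x1' at clock readings 1.0 and 2.0 differs, even though
    # psi itself carries no external time parameter: change without time.
    print(psi_at(1.0)(1.0), psi_at(2.0)(1.0))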

On this view, time is an internal, approximate, emergent property of certain physical systems.