# The 1999 World Cup Semi-Final Ratings – More Asymmetry

*Between 27 May, 1999 and 16 March, 2011, Australia played 34 completed Cricket World Cup matches. They won 33 of them.*

*Here are the ratings for the one game they didn’t win.*

## More Asymmetry

Grade: D

We discussed a little earlier one of the fundamental asymmetries of limited overs cricket. That particular asymmetry was between the probabilistic nature of the first innings where you’re never quite sure what a winning total might be and the deterministic nature of the second where you’re very sure what it is.

But there’s an even more fundamental asymmetry that’s part of all forms of the game. One that’s so fundamental that we rarely even think about it. I’m referring to the difference between the nature of success in bowling versus batting.

For example, consider a theoretical over where a bowler bowls a poor ball, two okay balls, a good ball, another okay ball, then a great ball. The result might be an over that looks like 411.1W, for an end result of 1/7 from the over.

If the bowler bowled those same balls in the reverse order – great ball, okay ball, good ball, two okay balls and a poor ball – then the result would be W1.114, still 1/7.

And, of course, you could shuffle those balls in *any* order and you’d get the same figures at the end of the over.

Now, obviously, this is just theoretical. In a real environment, there would be different batsmen on strike and other factors that could mean bowling different quality balls in different orders might end up with different results. But the general principle holds. From a bowler’s perspective, it doesn’t make a skerrick of difference to his final figures where his wicket-taking balls come from. Three unplayable balls at *any* point in your spell will see you end with a three-fer.
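The general principle can be sketched in a few lines of code (a toy illustration; the over layout and helper function are invented for this post): however you shuffle the balls, the end-of-over figures come out identical.

```python
import random

# The over from the example: 4, 1, 1, dot ball, 1, wicket
over = [4, 1, 1, 0, 1, "W"]

def figures(balls):
    """End-of-over figures as (wickets, runs) - order never matters."""
    runs = sum(b for b in balls if b != "W")
    wickets = balls.count("W")
    return wickets, runs

random.shuffle(over)
print(figures(over))   # → (1, 7), whatever the order
```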

Batsmen face a very different situation. It matters very much in what order their poor shots come. If a batsman has a hundred great shots in them for the day and one shot that costs them their wicket, then if that wicket-losing shot comes first, they don’t even get the chance to play their hundred great shots.

This is perhaps *the* fundamental asymmetry of the game and while it’s pretty self-evident, it does result in a number of perhaps less obvious consequences.

For example, it means that bowlers should, on the whole, be more consistent than batsmen. From a pure probability-theory perspective, if a bowler such as Allan Donald has an ODI strike rate of 31.4, that means he takes a wicket every 31.4 balls. Or, to put it another way, his probability of taking a wicket on any *particular* ball is 1/31.4, or roughly 3.18%.

From there, it’s pretty trivial to calculate that, theoretically, if he bowls the full sixty balls he’s allowed in his spell, he’s got a 14% chance to take no wickets, a 28% chance to take one wicket, a 27% chance to take two, a 17% chance to take three, an 8% chance to take four, and a 4% chance to take five or more. Nerds out there will note that this is just an implementation of what probability theorists call the binomial distribution.
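For the nerds following along at home, those percentages fall straight out of the binomial formula. A minimal sketch (the variable names are my own):

```python
from math import comb

p = 1 / 31.4   # Donald's chance of a wicket on any particular ball
n = 60         # a full ten-over spell

def binom(k):
    """Probability of taking exactly k wickets in n balls."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

for k in range(5):
    print(f"{k} wickets: {binom(k):.0%}")
print(f"5+ wickets: {1 - sum(binom(k) for k in range(5)):.0%}")
```

Which prints 14%, 28%, 27%, 17%, 8% and 4% — the figures above.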

In fact, from the 77 instances in his career where Donald bowled his full ten overs, he took zero wickets 17% of the time, one wicket 29% of the time, two 26% of the time, three 22% of the time, and four 6% of the time, with no five-wicket hauls. This is pretty much spot on when compared to the theoretical prediction, with perhaps just a few more three-wicket hauls than expected and slightly fewer of four or more. Cheers, nerds!

On the other hand, if we look at, say, Darren Lehmann, his ODI average of 38.96 and his strike rate of 81.34 suggest that, on average, he’s got a 2.09% chance of being dismissed on any given ball. This time, our nerd comrades will point to basic algebra, rearranging the definitions of strike rate and average until we arrive at his chance of losing his wicket on any particular ball.

(To be more precise: Average = runs scored per wicket. Strike rate = runs scored per hundred balls. So strike rate divided by average = wickets per hundred balls. Which is the same as the percentage chance of losing your wicket on any particular ball. In Lehmann’s case, 81.34/38.96 equals a 2.09% chance.)
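The same algebra in a couple of lines of code (a trivial sketch; the names are mine):

```python
average = 38.96       # runs scored per dismissal
strike_rate = 81.34   # runs scored per hundred balls

# wickets per hundred balls = (runs per hundred balls) / (runs per wicket)
p_dismissal_per_ball = (strike_rate / average) / 100
print(f"{p_dismissal_per_ball:.2%}")   # → 2.09%
```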

But, as mentioned above, for batsmen, the fact that it only takes one bad roll of the wicket-falling dice to bring an end to their innings means that batsmen’s scores will always be skewed towards failure over success. To again look at Lehmann’s career, almost *seventy percent* of his completed ODI innings were for a score below his career average. Including, obviously, this semi-final innings.

This tendency towards underperforming one’s average isn’t a quirk of Lehmann’s career. The shape of his career scores follows what our now-showing-off nerd friends will assure us is a geometric distribution. This distribution is common to all batsmen.
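As a back-of-the-envelope check of that geometric claim (a sketch under simplifying assumptions: a constant per-ball dismissal chance and Lehmann’s career scoring rate), we can ask how often the model expects a completed innings to fall short of the average:

```python
average = 38.96       # runs per dismissal
strike_rate = 81.34   # runs per hundred balls

p = (strike_rate / average) / 100                  # dismissal chance per ball
balls_to_average = average / (strike_rate / 100)   # ~48 balls to reach 38.96

# chance of being dismissed somewhere in those balls
p_below_average = 1 - (1 - p) ** balls_to_average
print(f"{p_below_average:.0%}")   # → 64%
```

The model says roughly 64% of completed innings should land below the average; Lehmann’s actual figure of almost seventy percent is in the same ballpark, with real-world factors skewing things a touch further towards failure.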

So you can talk about getting your eye in, and cashing in when you’re in form, and whatever other relevant cliches apply for a batsman. And some of those cliches genuinely do apply to some degree. But the main reason batsmen are less consistent than bowlers is the fundamental asymmetry in how their skills are measured, and how that asymmetry plays out under the mathematics of probability theory.

Probability theory is a bitch.

Fortunately for Australia, they had somebody coming to the wicket who had built an entire career out of bloody-mindedly undermining the axiomatic foundations of probability theory whenever he damn well felt the need to do so.

Somebody who’d done so in the most recent game between the two sides just four days earlier.

Their captain, Steve Waugh.

*(Here’s the 1999 World Cup Semi-Final Ratings master page.)*