Reserves and Resources


1. Introduction

Two factors dominate the economics of oil and mining. The first is price, which is beyond any one producer’s ability to fully control. The second is how much resource a field or a mine contains, of what quality, and producible at what cost. While producers do not control the physical presence of the resource either, they expend enormous effort to detect, evaluate and classify its extent before they take a decision to develop a project, and they monitor it constantly throughout the project’s lifetime.

Throughout most of history, exploration was led by the naked eye. Gold rushes started when artisanal miners noticed glints of gold or diamond in a river bed, or, slightly more evolved, tints of colour in rock formations which denoted the presence of iron ore or copper. When the oil industry began in the mid-nineteenth century, discoveries were often triggered by natural seepages at ground level, as in Romania or Azerbaijan, or at the appropriately named “Oil Creek” in Pennsylvania, where Edwin Drake drilled the first commercial oil well after noticing seepages on the ground. Even in the twentieth century, production was dominated by geologists locating formations whose surface expression was visible to the naked eye – so-called “structural traps”, where the folding of rocks over millions of years created dome-like structures. The early Texas oil industry grew through wildcatters combing the landscape for tell-tale hills.

Exploration today bears little resemblance to these early techniques and has been in a state of constant revolution. These days prospectors take readings from overhead flights which measure infinitesimally small variations in the gravitational and magnetic fields of the formations they fly over. Ships searching for oil offshore tow dozens of streamers, sensor cables several kilometres long; air guns fire acoustic pulses into the water, and the hydrophones along each streamer record the strength and direction of the echoes reflected back from the rock layers beneath the sea bed. Engineers drill exploration wells using drill bits with embedded sensors which relay information in real time from thousands of metres under the earth. The flow of data follows its own Moore’s Law, increasing exponentially over time, and requires ever more sophisticated interpretation software.
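To give a sense of the data volumes involved, the short sketch below works through a rough, purely illustrative calculation of the raw recording rate of a towed-streamer survey. Every figure in it (the number of streamers, the sensor spacing, the sampling rate) is an assumption chosen for illustration rather than a value taken from this text or from any specific survey.

```python
# Back-of-envelope estimate of the raw data rate of a towed-streamer seismic
# survey. Every parameter is an illustrative assumption.

streamers = 12               # number of sensor cables towed behind the ship (assumed)
streamer_length_m = 8_000    # length of each streamer in metres (assumed)
group_interval_m = 12.5      # spacing between hydrophone groups (assumed)
sample_interval_s = 0.002    # 2-millisecond sampling (assumed)
bytes_per_sample = 4         # 32-bit samples (assumed)

channels = streamers * int(streamer_length_m / group_interval_m)
samples_per_second = channels / sample_interval_s
data_rate_mb_per_s = samples_per_second * bytes_per_sample / 1e6

print(f"recording channels:        {channels:,}")
print(f"raw data rate:             {data_rate_mb_per_s:.1f} MB/s")
print(f"per ten-hour shooting day: {data_rate_mb_per_s * 36_000 / 1_000:.0f} GB")
```

Even with these modest assumptions the survey records thousands of channels and hundreds of gigabytes a day, before any processing or interpretation begins.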

With all this, however, two major constraints remain. The first is that despite the dazzling advances of science and technology, exploration remains risky. Oil companies still drill “dry holes”, and plenty of mining companies relinquish prospecting licences after long and expensive exploration efforts, not having found what they had hoped. The history of ever-rising global demand has increased both the incentive and the necessity to tackle ever more challenging resources, as easier formations in more welcoming environments are mined out. Exploration is thus a story of constant escalation, both in the difficulty of the task and in the ability to meet it.

The second is that the increasing technical complexity of the industries has left the gap in resources and knowledge between producing companies and host governments as wide as ever. While most countries have state-run geological services on paper, many are incapable of acquiring their own data, or even of interpreting the data acquired by companies. As a result, the right of access to that information, guaranteed in most extractives contracts signed in recent years, is rarely even exercised systematically by governments.

The first step for governments seeking independent insight into their own natural resources is therefore to understand the systems of resource classification used by the industry, which form a key part of how companies themselves are valued. Such systems are not the actual data, as we shall shortly see, or even the geological interpretation of that data. They are, rather, the categorisation of the results of that analysis: not, so to speak, the individual questions and answers of an examination, but the overall percentage and the grade awarded.
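The distinction can be made concrete with a minimal sketch in Python. It is purely illustrative: the outcome list, the probability cut-offs and the category labels are all assumptions, and no real classification system is reproduced here. The “interpretation” stands for a probabilistic estimate of recoverable volume; the “classification” simply reduces it to a few headline figures.

```python
# Toy illustration of the distinction drawn above: a classification system is
# not the raw data or the geological interpretation, only the categorisation of
# the interpretation's results. The volumes, labels and probability cut-offs
# below are hypothetical and do not reproduce any particular industry system.

# "Interpretation": a probabilistic estimate of recoverable volume, stood in
# for here by ten equally likely outcomes in millions of barrels (assumed).
simulated_outcomes_mmbbl = [55, 80, 120, 150, 210, 260, 340, 400, 520, 700]

def volume_exceeded_with_probability(outcomes, probability):
    """Return the volume that the estimate exceeds with the given probability."""
    ranked = sorted(outcomes, reverse=True)
    index = int(probability * (len(ranked) - 1))
    return ranked[index]

# "Classification": collapse the whole estimate into a few headline categories.
classification = {
    "low estimate (high confidence)": volume_exceeded_with_probability(simulated_outcomes_mmbbl, 0.90),
    "best estimate":                  volume_exceeded_with_probability(simulated_outcomes_mmbbl, 0.50),
    "high estimate (low confidence)": volume_exceeded_with_probability(simulated_outcomes_mmbbl, 0.10),
}

for label, volume in classification.items():
    print(f"{label}: {volume} million barrels")
```

The point of the sketch is only that the final categories carry far less information than the underlying estimate, which in turn carries far less than the raw data: each step is a deliberate summary.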