The validity of construction safety statistics is being challenged by a US-based research project, involving CIOB member Fred Sherratt. CM reports.
Incident frequency rates are a familiar benchmark for safety, enabling comparisons of firms on a simple numerical basis that is readily understandable on site and in the boardroom.
As a result, they are used to measure safety performance across the world. Clients and contractors use them to appoint their supply chains for projects, with lower rates assumed to predict safer performance. They are used to measure the impact of safety initiatives, and can be set as a KPI for continuous organisational improvement, linked to safety leadership.
Although there are slight differences in the ways incident frequency rates are calculated in different countries, they all involve a count of injuries over worker hours to produce a rate. The mathematics operate in the same way. What varies are definitions of a qualifying injury and the scalar factor applied.
Despite their ubiquity, the use of incident frequency rates as a measure of safety has not gone unchallenged. The Construction Safety Research Alliance (CSRA) recently began studying the validity of Total Recordable Incident Rate (TRIR) statistics. Based at the University of Colorado, Boulder, the CSRA aims to eliminate serious incidents and fatalities in the construction industry with transformative research and defendable science.
How incident frequency rates vary in measurement
In the UK, the Accident Frequency Rate (AFR) is calculated as:
AFR = (No. of injuries per year × 1,000,000) ÷ No. of worker hours per year
In the US, the Total Recordable Incident Rate (TRIR) for a specific period is calculated as:
TRIR = (No. of recordable incidents × 200,000) ÷ No. of worker hours
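The two formulas differ only in the scalar applied (1,000,000 worker hours for the UK AFR, 200,000 for the US TRIR) and in what counts as a qualifying injury. A minimal sketch of the arithmetic, with illustrative figures that are not drawn from the article:

```python
AFR_SCALAR = 1_000_000   # UK Accident Frequency Rate basis
TRIR_SCALAR = 200_000    # US Total Recordable Incident Rate basis


def afr(injuries_per_year: int, worker_hours_per_year: float) -> float:
    """UK AFR: injuries per million worker hours."""
    return injuries_per_year * AFR_SCALAR / worker_hours_per_year


def trir(recordable_incidents: int, worker_hours: float) -> float:
    """US TRIR: recordable incidents per 200,000 worker hours."""
    return recordable_incidents * TRIR_SCALAR / worker_hours


# Hypothetical firm: 3 qualifying injuries over 1.2m hours in a year
print(afr(3, 1_200_000))   # 2.5 injuries per million worker hours

# Hypothetical project: 3 recordables over 500,000 hours
print(trir(3, 500_000))    # 1.2 recordables per 200,000 worker hours
```

The mathematics operate identically either way, which is why the CSRA's conclusions about TRIR carry over to other frequency-rate formulas.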
Unfairly biased
Dr Fred Sherratt MCIOB, associate director of research at the CSRA, explains the context: “There has been plenty of criticism that TRIR statistics are unfairly biased against smaller firms who work fewer hours, thus skewing the calculation. There are also criticisms of incident rates because they lead to reactive approaches to safety management, their incentivisation potentially encouraging underreporting of injuries and poor case management.
“Furthermore, frequency rates do not account for the severity of the incidents and resultant injuries involved – a bad cut to the finger can be counted as equal to a fatality. Nor do they count or include near misses, including those with the potential to be fatal. They are also by necessity a lagging indicator of safety, meaning their use in performance prediction can easily be challenged.”
The CSRA focused on answering a specific question: Given the way it’s used, to what extent is TRIR statistically valid?
Using 3.2 trillion worker-hours from 17 years of data, the CSRA focused on the fundamental mathematics that underpins frequency rates, carrying out parametric and non-parametric statistical analysis of TRIR.
The findings were as follows:
- It requires tens of millions of worker hours to return valid data – making considerations of internal performance or comparisons between firms or projects utterly meaningless. At best, frequency rates could be used for industry comparisons over long periods of time.
- Short-term fluctuations in incident rates are mostly random and do not necessarily mirror changes in safety – they should certainly not be used to measure the impact of safety interventions.
- Reporting incident rates to decimal places is highly disingenuous, as it suggests a mathematical certainty that is meaningless in the face of such random variation.
- There is no discernible association between TRIR and fatalities.
- In nearly every practical circumstance, it is statistically invalid to use TRIR to compare firms, projects or teams.
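The random-fluctuation finding is easy to illustrate with a toy simulation (this is not the CSRA's analysis, and all figures are invented). If incidents arrive at a constant underlying rate, quarterly TRIR still swings widely purely by chance, because so few incidents occur per reporting period:

```python
import math
import random

random.seed(42)


def poisson(lam: float) -> int:
    """Draw a Poisson-distributed count via Knuth's algorithm."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1


# Hypothetical firm: 100,000 worker hours per quarter, with a true
# underlying TRIR of 2.0 -> an expected 1 recordable per quarter.
HOURS_PER_QUARTER = 100_000
expected_incidents = 2.0 * HOURS_PER_QUARTER / 200_000  # = 1.0

quarterly_trir = []
for _ in range(20):  # five years of quarters, identical true risk
    incidents = poisson(expected_incidents)
    quarterly_trir.append(incidents * 200_000 / HOURS_PER_QUARTER)

# Safety performance never changed, yet the reported rate jumps around.
print(quarterly_trir)
```

Nothing about the firm's safety changes across the 20 quarters, yet the printed rates range from 0 upwards, which is why quarter-on-quarter movements in TRIR say little about the impact of a safety intervention.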
Although this work was undertaken using TRIR, the number of worker hours needed for statistical stability remains the same whatever the calculation formula, as do the conclusions about random variability and lack of predictive capacity.
“What this research most boldly concludes is that we need a new approach to safety measurement, one able to provide richer data and insights,” says Professor Matt Hallowell, executive director of the CSRA.
Potential alternatives
“Research and practice are already exploring a number of potential alternatives. The use of leading indicators able to track safety performance before an incident occurs is becoming more common; for example, firms often log the amount of safety training provided or the frequency of site safety observations.
“However, there has yet to be a safety metric that is as standardised and ubiquitous as incident frequency rate. Whatever approach emerges in its place, any solution will require that the industry adopts a consistent approach to allow for fair comparison among firms.”
Hallowell believes the CSRA research shows the need to challenge the orthodoxy of how construction measures safety.
“Simply because something has been used for a long time, because it is familiar, and because it is easy to understand, does not mean we should not test whether it is actually fit for purpose,” he reasons. “In the case of incident frequency rates, there is overwhelming evidence that they do not measure what we think they measure, and should actually be treated with extreme caution.”
For the full statistical analysis of TRIR, visit www.csra.colorado.edu.