Speed as a Risk Factor in Serious Run-off-Road Crashes: Bayesian Case-Control Analysis with Case Speed Uncertainty
In the United States, the imposition and subsequent repeal of the 55 mph speed limit have led to an energetic debate on the relationship between speed and the risk of being in a (fatal) crash. In addition, research done in the 1960s and 1970s suggested that crash risk is a U-shaped function of speed, with risk increasing as one travels both faster and slower than what is average on a road. This paper describes two case-control analyses of run-off-road crashes, one using data collected in Adelaide, Australia, and the other using data from Minnesota. In both analyses the speeds of the case vehicles were estimated using accident reconstruction techniques, while the speeds of the controls were measured for vehicles traversing the crash sites under similar conditions. Bayesian relative risk regression was then used to relate speed to crash risk, and uncertainty in the case speeds was accounted for by treating these as additional unknowns with informative priors. Neither dataset supported the existence of a U-shaped relationship, although risk of a serious or fatal run-off-road crash clearly tended to increase as speed increased.
KEYWORDS: Case-control, speed limits, logit model, Markov Chain Monte Carlo.
Determining appropriate speed limits is a problem that continues to exercise engineers, elected officials, and interested citizens, and at first glance this issue seems fairly simple. Compared to a slower vehicle, a vehicle traveling at high speed will go farther while the driver is reacting, take longer to stop, be more likely to sideslip for a given steering angle, and need to absorb more kinetic energy to protect its occupants. This suggests that, other things equal, slower speeds are safer, but complicating this issue is a series of observational studies which claim to find that the crash risk for slow moving vehicles is as high or higher than that of speeding vehicles (Solomon 1964; Cirillo 1968; West and Dunn 1971; Harkey et al. 1990). Each of these studies employs what is essentially a case-control design, where estimates of speeds from a sample of crash-involved vehicles (the cases) are compared with speeds from a sample of vehicles not involved in crashes (the controls). These studies have been subjected to a range of methodological criticisms, and there is no consensus on whether the observed associations between low speed and crash risk reflect actual causal processes, or are simply methodological artifacts.
In this paper, we begin by briefly reviewing these studies along with some more recent work. Based on this review we identify three related methodological issues that should be addressed in a case-control study of speed and crash risk. The first issue arises because the causal role of high (and low) speed probably differs for different crash processes, and to understand this causal role we should conduct separate studies of different types of crashes. This leads to the second issue because breaking a dataset down by type of crash often leads to small samples for which statistical methods based on large-sample asymptotics may not be applicable. The third issue stems from the fact that estimating the speeds of the case vehicles most often requires an after-the-fact reconstruction of the crash. The speed estimates produced by a crash reconstruction are to some degree uncertain, and this uncertainty should be allowed for in a case-control analysis. We then illustrate how a Bayesian analysis can address all of these issues by testing for the existence of a U-shaped relationship between speed and relative crash risk using two small case-control samples of serious and fatal run-off-road crashes.
Review of Case-Control Studies of Speed and Crash Risk
Summaries and detailed critiques of the studies by Solomon (1964), Cirillo (1968), and West and Dunn (1971) have been given by Shinar (1998) and Kloeden et al. (1997). In Solomon's (1964) study the pre-crash speeds of approximately 10,000 vehicles involved in crashes on some 600 miles (960 km) of two-lane and four-lane highways were obtained from crash records, from reports by the drivers involved, from witness statements, or from estimates provided by the police officers who were called to the scenes post hoc. Speed and traffic volume data were collected on these same highway stretches, and the ratios of the fraction of crash-involved vehicles with speeds in a given range to that of vehicle-miles of travel for that range were computed. These involvement rates were then plotted against the speed ranges' deviations from the average speed, producing a striking U-shaped relationship. For daytime crashes, the involvement rates were lowest for speeds about 10 mph (16 km/h) faster than the average, while speeds 30 mph (48 km/h) lower than average had involvement rates about 300 times greater than the lowest values. Involvement rates also increased for speeds greater than 10 mph (16 km/h) above the average but not as dramatically as for the lower speeds. In a later study Cirillo (1968) found a similar pattern for rear-end, angle, and sideswipe crashes on U.S. Interstate highways, and more recently Harkey et al. (1990) again found a U-shaped relationship between involvement rate and deviation from average speed for a sample of highway sites in North Carolina and Colorado.
These are provocative findings, but their importance depends on whether or not they correctly identify low speed as a cause of crashes. One potential limitation of these studies, which was pointed out in Shinar (1998) and Kloeden et al. (1997), concerns the data collection procedure. In Solomon's original study the speeds of vehicles making turns were included for the case vehicles but not for the control vehicles. In Cirillo's study attention was restricted to crashes involving multiple vehicles traveling in the same direction, and on freeways these sorts of crashes tend to occur in congested conditions, which are characterized by reduced speeds. It is not clear what traffic conditions were present when the control speed data were collected but if traffic was uncongested then a situation similar to that of Solomon's study could arise. Finally, Harkey et al. (1990) explicitly stated that their control speed data were collected so as to guarantee that drivers were traveling at essentially freeflow speeds. Clearly this sampling procedure will tend to produce a higher fraction of low speeds in the case sample, without necessarily illuminating the role of low speed as a causal factor. For instance, a driver who slows down in order to turn or because of traffic congestion and is then involved in a crash will obviously have a lower speed than will freely moving drivers, but this does not entail that the slow speed actually caused the crash.
In addition, in each of these studies the procedure used to estimate the speeds of the case vehicles differed from that used to estimate speeds for the controls. Because measurement operations are almost always subject to some degree of error, this use of different estimation procedures means that the measurement error affecting the cases may be different from that affecting the controls. It is no longer possible to assess the extent of measurement error in these studies, and so we cannot say with certainty whether or not measurement error effects have biased their results. However, a striking example of the potential effect of measurement error has been given by White and Wilson (1970), who showed that involvement rate curves similar to Solomon's can be produced by making plausible assumptions about speed measurement errors, even if actual crash risk is independent of speed.
We can hope, but probably should not expect, that a single study will provide the conclusive answer to a policy question. A more realistic expectation is that after a study's findings are presented, a process of critical discussion will identify potential weaknesses, and new studies addressing these weaknesses will then be conducted. Over time, then, our strongest findings should more closely approximate the truth. Some limitations of the Solomon and Cirillo studies were in fact known by 1970, and West and Dunn (1971) described an effort to improve on this earlier work. In West and Dunn's study, investigation teams visited the crash sites and estimated the case vehicle speeds using crash reconstruction methods. An attempt was also made to use data from nearby magnetic loop detectors to determine some case vehicle speeds, but this was apparently successful in only 9 of 36 attempts. What is interesting about West and Dunn's results is that although they still found a U-shaped relation between deviation from average speed and involvement rate, the estimated involvement rates were all of the same order of magnitude. That is, the estimated rates for the slowest vehicles were only about six times larger than those for vehicles traveling near the average speed, compared to the increase of several hundred times found in the Solomon, Cirillo, and Harkey et al. studies.
More recently, important advances in the application of case-control methods to study crash risk have been made by the former Road Accident Research Unit (RARU) at the University of Adelaide (Moore et al. 1995; Kloeden et al. 1997, 2001, 2002). The RARU studies explicitly used case-control designs, where the speeds of the case vehicles were estimated using crash reconstruction methods, while the speeds of the controls were obtained by sampling vehicles traversing the crash locations under conditions similar to those present when the crashes occurred. In Kloeden et al. (1997) all crash locations had posted speed limits of 60 km/h (37 mph), all crashes occurred during daylight and dry weather conditions, and vehicles slowing to make turning maneuvers were excluded from the sample. Nonparametric estimates of relative crash risk were then computed for speeds ranging from 35 km/h to 85 km/h (22 mph to 53 mph), and although relative risk tended to increase as speeds increased above 60 km/h (37 mph) there was no evidence of increased risk for lower speeds. In Kloeden et al. (2001) this approach was applied to crashes occurring on rural roads, and in Kloeden et al. (2002) the original data were reanalyzed using parametric logit models. In both later studies, relative crash risk tended to increase as speed increased but no heightened risk for lower speeds was found.
Carrying on the process of critical discussion and improvement, we can identify two ways in which these findings might be further strengthened. The first arises from the fact that in all the studies considered so far, crashes of different types were combined to compute estimates of relative risk. Because one can reasonably expect that the causal effect of speed might differ for different types of crashes, it is possible that estimates computed by aggregating crash types will be influenced by the relative frequency of the different crash types in the sample. Inspection of table 4.4 in Kloeden et al. (1997) suggests that the difference between case and control speeds does vary across crash types with, for example, run-off-road crashes showing a pronounced difference while pedestrian crashes show little difference. One should consider then whether different types of crashes show different relationships between relative risk and speed, but simply disaggregating the RARU's data by crash type leads to relatively small numbers of cases for each type. Standard statistical methods (e.g., Hosmer and Lemeshow 2000), which are based on the large-sample asymptotic properties of estimators, will not necessarily be applicable.
The second avenue for improvement arises from the use of crash reconstruction methods to estimate the case speeds. Although this is a substantial improvement over what was done in earlier studies, in practice the evidence available from a crash investigation is rarely sufficient to determine all quantities needed to compute an estimate of speed. For example, the formula

v = √( 2 g μ d ) (1)

where g = gravitational acceleration, v = vehicle speed, d = measured skid mark length, and μ = tire/pavement friction coefficient, can be used to estimate a vehicle's speed, but only if one also has an estimate for μ. Knowing the composition of the road and that it was dry can allow one to arrive at a plausible range for μ (Fricke 1990), but the value characterizing an actual crash will still be to some degree uncertain. This situation becomes even more complicated if we allow that the measured skid mark length is at best an uncertain estimate of the actual braking distance. The estimates produced by a crash reconstruction are thus subject to uncertainties, and the appropriate way to account for these uncertainties is still something of an open question. In Davis (1999, 2003) we have illustrated how this can be accomplished by treating crash reconstruction as an exercise in Bayesian inference. Here the reconstructionist's expert opinion regarding plausible values for crash variables is captured using prior probability distributions, and a model of the crash can then be combined with measurements to update these prior distributions via Bayes' theorem. The result is a posterior probability distribution over the values of the crash variables.
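The sensitivity of such an estimate to the friction coefficient is easy to see numerically. The sketch below evaluates the skid-to-stop relation v = √(2gμd) over a plausible dry-pavement range of μ; the skid length and μ values are purely illustrative, not data from the paper.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def speed_from_skid(d, mu, g=G):
    """Speed estimate (m/s) from skid-mark length d (m) via v = sqrt(2*g*mu*d)."""
    return math.sqrt(2.0 * g * mu * d)

# A hypothetical 30 m skid mark, with mu anywhere in a plausible dry-pavement range:
d = 30.0
for mu in (0.45, 0.7, 1.0):
    v = speed_from_skid(d, mu)
    print(f"mu = {mu:.2f}: v = {v:.1f} m/s ({v * 3.6:.0f} km/h)")
```

Even with a perfectly measured skid mark, the uncertainty in μ alone spreads the speed estimate over roughly a 30 km/h range, which is why μ is treated as an unknown with a prior rather than a fixed constant.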
Both of these issues, accounting for the differential uncertainty in the case speed estimates and analysis of small samples, can be addressed using Bayesian methods. We will illustrate this using two datasets, where the cases were vehicles involved in fatal or severe run-off-road crashes, and where Bayesian crash reconstruction methods were used to compute posterior probabilities for each case vehicle's initial speed. The case vehicle speeds were then combined with speed measurements for vehicles not involved in crashes (controls), leading to a case-control problem. The posterior speed distributions from the crash reconstructions were used as informative prior distributions for the case vehicle speeds, and logistic regression modeling was then used to test whether or not a U-shaped relationship existed between speed and risk in run-off-road crashes.
A "Failure Rate" Model for Run-off-Road Crashes
As noted above, we will use logistic regression to test for the possibility of a U-shaped relation between speed and crash risk, but before proceeding we would like to show how standard assumptions used in crash analysis lead to the logit model. To see this, assume first that run-off-road crashes arise when (a) a driver finds himself or herself in a crash-avoidance situation, and (b) the driver's evasive action is not successful. As a driver traverses a section of road, crash-avoidance situations are assumed to arise randomly, with rate λ per unit distance. The success of the evasive action is assumed to depend on the vehicle's speed, denoted by v, in a manner such that the probability of crash given v is approximately proportional to exp( g ( v ) ), for some function g (.). This leads to a proportional hazards model with hazard function
h (v) = λ exp ( g ( v ) ) (2)
Now if X denotes a random variable giving the distance traveled until being in a crash, the probability of being in a crash while traversing a section of road of length x is simply
P [ X ≤ x | v ] = 1 − exp ( − ( λ x ) exp ( g ( v ) ) ) (3)
If run-off-road crashes are rare (i.e., 0 < λx << 1), (3) can be approximated as
( λ x ) exp ( g ( v ) ) exp ( − ( λ x ) exp ( g ( v ) ) ) (4)
which is the probability that the value 1 is taken on by a Poisson random variable Y with expected value
E [ Y ] = ( λ x ) exp ( g ( v ) ) (5)
If we then condition on Y = 0 or Y = 1 (so that no one can crash more than once), we get

P [ Y = 1 | Y ≤ 1 ] = ( λ x ) exp ( g ( v ) ) / ( 1 + ( λ x ) exp ( g ( v ) ) ) (6)

which is a logit model with intercept ln( λ x ).
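The chain of approximations above, and the conditioning step that produces the logit form, can be checked numerically. In the sketch below the values of λx and g(v) are arbitrary illustrations chosen only to satisfy the rare-event condition λx << 1.

```python
import math

def p_crash_exact(lam_x, gv):
    """Eq (3): P[X <= x | v] = 1 - exp(-(lam*x) * exp(g(v)))."""
    return 1.0 - math.exp(-lam_x * math.exp(gv))

def p_poisson_one(lam_x, gv):
    """Eq (4): P[Y = 1] for a Poisson Y with mean (lam*x) * exp(g(v))."""
    m = lam_x * math.exp(gv)
    return m * math.exp(-m)

def p_conditioned(lam_x, gv):
    """P[Y = 1 | Y <= 1] = m / (1 + m): a logit with intercept ln(lam*x)."""
    m = lam_x * math.exp(gv)
    return m / (1.0 + m)

lam_x, gv = 1e-4, 0.5   # rare-event regime: lam*x << 1
print(p_crash_exact(lam_x, gv))   # all three are nearly equal when lam*x is small
print(p_poisson_one(lam_x, gv))
print(p_conditioned(lam_x, gv))
```

The conditioned probability is exactly the logistic function evaluated at ln(λx) + g(v), so in the rare-event regime the three expressions agree to within terms of order (λx)².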
Two sources provided the data on run-off-road crashes used in this study. The first was the case-control study conducted by the Road Accident Research Unit (RARU) at the University of Adelaide (Kloeden et al., 1997), which, as we noted earlier, reported data for 151 case vehicles involved in serious or fatal crashes on roads with 60 km/h speed limits. For each case vehicle, four control vehicles were selected by randomly sampling vehicles using the crash site at times when conditions were similar to those when the crash occurred. Control speeds were measured using laser speed guns while the case speeds were estimated using crash reconstruction techniques. Of the 151 cases, 14 were single vehicle run-off-road crashes, and of these 8 involved collisions with objects, where it was possible to measure the deformation (crush) suffered by the vehicles. For two others the case vehicles left measurable yaw marks near the points where the drivers lost control of their vehicles.
As noted above, crash reconstructions are subject to nontrivial uncertainties, and the probability calculus can be used as a logic for reasoning about these. Our general approach to estimating case vehicle speeds for the RARU data was to develop probabilistic versions of the deterministic methods used by the RARU researchers, and this was done by supplementing their measurements with training data, treating the case vehicle speeds as missing values to be estimated. For the fixed-object crashes, the training sample consisted of 19 staged collisions conducted by the National Highway Traffic Safety Administration (NHTSA), reported in Nystrom and Kost (1992). For the yaw-mark crashes, the training sample was 40 measured speeds and yaw radii tabulated in Semon (1995).
First, for the fixed-object crashes, the following variant of Nystrom and Kost's (1992) model was used to relate measured crush to impact speed
c = ( v − v0 ) / ( a0 + a1 w ) + ε (7)

where
c = measured crush,
v = impact speed,
v0 = highest impact speed producing no crush (taken to be 5 mph),
w = vehicle weight,
a0, a1 = coefficients to be estimated, and
ε = error.
The error term ε allows for differences between measured and predicted crush, and was assumed to be normally distributed with mean equal to 0 and unknown variance σ². Because six of the case vehicles left measurable skid-marks prior to collision, it was also necessary to account for speed lost while skidding. Treating the measured skid-mark as an error-prone observation, its expected value was computed using the RARU's formula

s = ( L vt² − v² ) / ( 2 g μ ) (8)

where
s = theoretical skid-mark length,
v = impact speed,
vt = the vehicle's initial speed,
g = gravitational acceleration,
L = fraction of kinetic energy retained between the initiation of braking and the point where the skid-mark begins (taken by the RARU to be 0.8), and
μ = coefficient of tire/pavement friction.
Garrott and Guenther (1982) conducted an extensive comparison of measured versus theoretical skid-marks, and the differences between these showed a coefficient of variation approximately equal to 0.11. Following the approach described in Davis (2003), the measured skid-mark was assumed to have a lognormal distribution, with mean equal to the natural log of the theoretical length given in equation (8) and log-scale variance equal to 0.01. This gives a coefficient of variation for the measurement error approximately equal to 0.1.
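The correspondence between a log-scale variance of 0.01 and a coefficient of variation near 0.1 follows from the lognormal CV formula, which a one-line check confirms:

```python
import math

# For a lognormal with log-scale variance s2, the coefficient of variation is
# sqrt(exp(s2) - 1), which is approximately sqrt(s2) when s2 is small.
s2 = 0.01
cv = math.sqrt(math.exp(s2) - 1.0)
print(round(cv, 4))  # close to 0.1, in line with Garrott and Guenther's 0.11
```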
In addition to the likelihood functions for the measured crush and skid lengths, Bayesian analysis requires a prior distribution for the unknown quantities. For estimating the speeds of the fixed-object crash vehicles, the following hierarchical prior distribution was used:
a0 ~ Normal(0, 10⁶),
a1 ~ Normal(0, 10⁶),
σ² ~ Inverse Gamma(0.001, 0.001),
v and vt ~ Normal(α, π),
α ~ Normal(40 mph, 10⁶),
π ~ Inverse Gamma(0.001, 0.001), and
μ ~ Uniform(0.45, 1.0).
With the exception of μ, all of these are commonly used "uninformative" priors. For μ, the lower bound characterizes a dry, travel-polished asphalt pavement while the upper bound characterizes a dry, new concrete pavement (Fricke 1990). As noted earlier, all crashes in the RARU sample occurred in dry weather.
Compared to the fixed-object crash model, the yaw-mark model was simpler, but still based on the principle of imputing unknown speeds. Treating the radius of the yaw mark as an error-prone measurement caused by the speed, the standard critical speed formula leads to

r = v² / ( g μ ) + ε (9)

where
r = measured yaw radius,
v = vehicle's speed,
g = gravitational acceleration,
μ = friction coefficient, and
ε = measurement error.
The error term ε was assumed to be normally distributed with mean equal to zero and unknown variance σ². As stated earlier, 40 experimental tests having information on observed speed and radius of curvature were used as a training dataset for estimating the value of μ. The two RARU cases were then treated as similar to the 40 tests but with missing speeds. The following priors were used:
v ~ Normal(α, π),
α ~ Normal(50, 10⁶),
π ~ Inverse Gamma(0.001, 0.001), and
μ ~ Uniform(0.45, 1.0).
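As a quick sanity check on the yaw-mark model, its deterministic core, the critical speed formula v = √(gμr), can be evaluated for a hypothetical yaw radius over the same μ range used in the prior; the radius below is illustrative, not a value from the paper.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def critical_speed(r, mu, g=G):
    """Critical speed (m/s) for a yaw mark of radius r (m): v = sqrt(g * mu * r)."""
    return math.sqrt(g * mu * r)

# A hypothetical 100 m yaw radius, over the prior's range of friction coefficients:
for mu in (0.45, 0.7, 1.0):
    v = critical_speed(100.0, mu)
    print(f"mu = {mu:.2f}: v = {v * 3.6:.0f} km/h")
```

As with the skid formula, the spread of estimates across plausible μ values is substantial, which motivates carrying μ as an uncertain quantity through the Bayesian analysis rather than fixing it.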
Posterior distributions for the case vehicle speeds were then computed using the Markov Chain Monte Carlo program WinBUGS (Spiegelhalter et al., 2000), and details of the WinBUGS models have been given in Davis and Davuluri (2002). Table 1 summarizes the case and control data for the 10 RARU crashes.
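Although the full hierarchical model was fit in WinBUGS, the basic idea of treating a case speed as an unknown with a posterior distribution can be sketched with a simple grid approximation. The sketch below assumes, purely for illustration, that the crush-model coefficients a0 and a1 and the error standard deviation are known; none of the numbers are values from the paper.

```python
import numpy as np

# Simplified sketch of Bayesian speed estimation by grid approximation for the
# crush model of eq (7), c = (v - v0)/(a0 + a1*w) + error. Unlike the full
# hierarchical model, the coefficients and error s.d. are taken as known here.
a0, a1, w = 0.5, 0.0002, 3000.0   # hypothetical coefficients and vehicle weight
v0, sigma = 5.0, 3.0              # 5 mph no-crush threshold; crush error s.d.
c_obs = 20.0                      # hypothetical measured crush

v_grid = np.linspace(5.0, 120.0, 1151)    # candidate impact speeds (mph)
prior = np.ones_like(v_grid)              # flat prior over the grid
c_pred = (v_grid - v0) / (a0 + a1 * w)    # eq (7) without the error term
like = np.exp(-0.5 * ((c_obs - c_pred) / sigma) ** 2)  # normal likelihood
post = prior * like
post /= post.sum()                        # normalize to a posterior over v

mean_v = float(np.sum(v_grid * post))
sd_v = float(np.sqrt(np.sum((v_grid - mean_v) ** 2 * post)))
print(f"posterior mean = {mean_v:.1f} mph, s.d. = {sd_v:.1f} mph")
```

Posterior means and standard deviations of this kind, computed from the full MCMC model, are what tables 1 and 2 summarize for the case speeds.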
The second dataset was taken from a set of 46 fatal crashes occurring on Minnesota state highways between January 1, 1997 and June 30, 2000. These were all fatal crashes reported during this time period which occurred near a location where the Minnesota Department of Transportation collected automatic vehicle speed data, and for which crash investigation data could be obtained from the Minnesota State Patrol. The automatic speed data were used to produce control speeds by randomly sampling from the speed measurements taken during an hour when conditions were judged to be similar to those present when the crash occurred.

Of the 46 crashes, 22 involved loss of control and running off the road, and of these 9 resulted in collisions with other vehicles, 10 resulted in rollover, and 3 resulted in collisions with fixed objects. For 10 of these it was possible to use crash reconstruction methods to estimate initial speeds. For two, initial speeds were estimated from measured yaw marks using the method described above, while for five a tripped rollover model described in Cooperrider et al. (1990) and Martinez and Schlueter (1996) was adapted to estimate initial speeds. This method divides the rollover into rolling, tripping, and pre-tripping phases, and then works backward from the vehicle's rest position to estimate the speed at the beginning of each phase. For the three remaining Minnesota crashes, straightforward application of either the yaw-mark method or the tripped rollover model was not possible, but special features of these crashes still permitted estimates of initial speeds. In one crash, where the case vehicle jumped a ditch, the fall equation (Fricke 1990) was used to estimate speed at the ditch's edge. In another, where the driver was thrown from the vehicle upon its striking a fence, Searle's (1993) throw equation was used to estimate the vehicle's speed at the fence.
In the third, where a driver lost control and rolled his vehicle after being rear-ended, we were able to estimate an initial speed for the rear-ending vehicle from skid and yaw marks. Table 2 summarizes the case and control data for the 10 Minnesota run-off-road crashes. Example code illustrating our reconstruction methods is available from the first author on request.
As indicated in the Introduction, one of the unresolved issues in the debate on speed versus crash risk concerns whether or not crash risk is a U-shaped function of speed, with vehicles traveling at atypically low and high speeds having increased crash risk. If we accept that the role of speed may vary for different types of crashes, depending on the operative processes and circumstances, then appropriate tests for the possibility of a U-shaped relationship should be carried out using data disaggregated by crash type. Otherwise, there is the possibility of obscuring the speed effect by combining processes where speed is and is not causal, or of producing an apparent U-shaped relationship by mixing situations where high speed is causal with other situations where low speed is causal. Disaggregating by type of crash reduces sample sizes. As argued earlier, however, a simple proportional hazards model relating speed, distance traveled, and crash risk leads to a prospective logit model
P [ crash | v ] = exp (b0 + g (v, b) ) / ( 1 + exp (b0 + g (v,b) ) ) (10)
The parameter b0 can be taken as summarizing the effects of those features shared by the cases and controls at a given location, while the function g ( v, b ) describes how crash risk varies with speed and a vector of parameters b. Assuming first that both the case and the control speeds are known without error, the fact that the cases and controls are matched by location means that a matched case-control approach can be used, which leads to a likelihood contribution from site k of the form
P [ yk,0 = 1, yk,j = 0, j = 1,…,m ] = exp ( g ( vk,0, b ) ) / Σj=0,…,m exp ( g ( vk,j, b ) ) (11)
(e.g., Hosmer and Lemeshow 2000). Here vk,0 denotes the case vehicle speed at site k, while vk,j ( j > 0 ) denotes the corresponding speeds for the control vehicles. The likelihood function obtained as the product of equation (11) over all case-control sets would then provide the basis for either a Bayesian or a classical approach to estimation. If the case speeds are only known up to some probability distribution, they then become additional quantities to be estimated, and by using those distributions as priors, Bayesian estimation is in principle straightforward. In all our analyses, the priors for the case vehicle speeds were taken to be normal distributions, with means and standard deviations as given in tables 1 and 2.
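For a single matched set, the likelihood contribution of equation (11) with a linear g(v) is straightforward to compute directly. In the sketch below the case speed, control speeds, and coefficient are all hypothetical values chosen only for illustration.

```python
import math

def matched_set_loglik(v_case, v_controls, b1, v_m):
    """Log of eq (11) for one matched set, with linear g(v) = b1 * (v - v_m).

    The denominator sums over the case (j = 0) and all controls (j = 1..m)."""
    def g(v):
        return b1 * (v - v_m)
    log_num = g(v_case)
    log_denom = math.log(sum(math.exp(g(v)) for v in [v_case] + list(v_controls)))
    return log_num - log_denom

# One hypothetical site: case at 80 km/h, four controls near the 60 km/h mean.
ll = matched_set_loglik(80.0, [58.0, 61.0, 63.0, 59.0], b1=0.1, v_m=60.0)
print(round(ll, 3))
```

Note that the intercept b0 cancels out of equation (11), which is why matched case-control data cannot identify it; the product of such terms over all sites gives the full conditional likelihood.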
Estimating the Risk Functions
In the simplest case, a test as to whether or not the risk function is U-shaped can be carried out by comparing a quadratic form for the function g (.)
g ( v, b ) = b1 ( v − vm ) + b2 ( v − vm )² (12)
to a linear form
g ( v, b ) = b1 ( v − vm ). (13)
Looking first at the Minnesota crashes, Bayes estimates for the linear model (13) were computed using the Markov Chain Monte Carlo routine WinBUGS with vm being fixed, for each case-control set, to the average speed for that set's control population. When we attempted to estimate the quadratic model, however, the MCMC routine was unstable, producing chains with poor mixing properties, and the simulated values for b1 and b2 tended to be highly correlated with each other. At least one reason for this can be seen in figure 1, which shows a contour plot of the marginal log-likelihood as a function of b1 and b2 . The narrow ridge-shape of this log-likelihood indicates that the data tend to be uninformative about b2 over a range of values, including zero. For the relative risk function to be U-shaped, however, b2 must be positive, so additional MCMC runs were conducted with the prior for b2 constrained to have support only on the non-negative real numbers. Table 3 displays posterior estimation summaries for the linear and constrained quadratic models as fit to the Minnesota run-off-road data.
The results shown in table 3 indicate that the linear and constrained quadratic models provided roughly equivalent fits to the Minnesota data. The posterior deviances have similar distributions, and the values of the deviance information criterion (DIC) (Spiegelhalter et al. 2002) were approximately equal. Because a convex parabola and a straight line have different implications for the relationship between speed and crash risk, the rough equivalence of these two models may seem contradictory. The contradiction is resolved by looking at the point of minimum risk, which for the quadratic model occurs at a speed equal to vm − b1 / (2 b2). Substituting the posterior means for b1 and b2 into this expression reveals that for the quadratic model minimum risk occurs at a speed about 10 to 11 mph below the average for the controls. Because most (97 out of 100) of the control speeds in table 2 were above this value, the quadratic model achieved parity with the linear model simply by being monotonically increasing over the range of the available data. In particular, the results in table 3 do not appear to support earlier claims that minimum risk tends to occur near the mean or median of the control speeds.
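The minimum-risk speed for the quadratic model follows from setting the derivative of (12) to zero: b1 + 2 b2 (v − vm) = 0, giving an offset of −b1/(2 b2) from vm. The coefficient values below are illustrative numbers chosen to reproduce an offset in the reported 10 to 11 mph range, not the paper's posterior means.

```python
def min_risk_speed_offset(b1, b2):
    """Offset from v_m at which the quadratic risk function (12) is minimized.

    Setting d/dv [b1*(v - v_m) + b2*(v - v_m)**2] = 0 gives v - v_m = -b1/(2*b2)."""
    return -b1 / (2.0 * b2)

# Hypothetical posterior means for b1 and b2 (per mph and per mph^2):
print(min_risk_speed_offset(0.084, 0.004))  # offset in mph, negative = below v_m
```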
For the RARU data, the Markov Chain Monte Carlo simulations showed poor mixing even when attempting to fit the linear model, and again studying a plot of the marginal log-likelihood function was informative. Figure 2 shows this, and the distinctive feature here is how the log-likelihood flattens out for higher values of b1, indicating that the matched case-control data contain no information about how large b1 is. The reason for this is found by inspecting table 1, where it can be seen that most control speeds are below the posterior mean speed for their corresponding cases, and none are greater than two standard deviations above this mean. To work around this problem, it was decided to supplement the control speeds with those obtained at all other sites in the RARU study. We considered this acceptable because, unlike the Minnesota data, the RARU data were collected under similar road and weather conditions. This produced an unmatched case-control study with 10 cases and 604 controls. As with the Minnesota data, WinBUGS was used to compute Bayesian estimates of the parameters for the linear and quadratic models, along with measures of goodness of fit, and these are displayed in table 4. Again, the linear and quadratic models appeared to fit the data about equally well, but here the interpretation is more clear-cut. For both models the estimates of the intercept and the coefficient for the linear term were essentially equal, while the coefficient for the quadratic term in the quadratic model was essentially centered at zero. The linear and quadratic models thus achieved comparable fits by relying only on the linear terms.
To summarize, for both the Minnesota crashes and the RARU's crashes it appeared that at least over a typical range of speeds, risk of being in a serious or fatal run-off-road crash increases as speed increases. If, in fact, there are situations where low speeds are dangerous, these likely involve processes or conditions different from those that characterize the crashes in these samples.
SUMMARY AND CONCLUSION
In the introduction we indicated that a salient issue regarding the role of speed in road crashes concerns the existence of a U-shaped relationship between speed and crash risk. Despite extensive research, a clear resolution of this issue has yet to be achieved. The view we have adopted is that at least some of the current confusion may result from (1) aggregating crashes that are caused by fundamentally different processes, and (2) failure to account for uncertainty in an analysis. In this paper, we have shown how pioneering work conducted at the RARU can be combined with recent advances in computation for Bayesian statistical models in order to apply case-control methods to studies with relatively small numbers of cases. Applying the method to two case-control samples, each with 10 serious or fatal run-off-road crashes, we found that these data did not support the existence of a U-shaped relationship between speed and crash risk, although risk did tend to increase as a function of speed.
One implication of this study appears to be that, as common sense tells us, high speed in and of itself is not sufficient to cause a crash. For the 10 Minnesota crashes, other drivers were observed traveling the same road under the same conditions as fast or faster than the crash-involved drivers without being involved in a fatal crash. A reasonable interpretation would be that some type of triggering event, which places the driver in a crash-avoiding situation, is also necessary. This is consistent with Hauer's pyramid (1997, p. 19), which distinguishes normal driving from conflict situations, and from those situations resulting in crashes. Study of the Minnesota crash reports revealed events such as the appearance of a deer in the driver's path, the merging of a slower moving vehicle into the driver's lane, driver distraction leading to a need to avoid a rear-ending collision, and loss of control following the driver's turning to interact with a child in the back seat. The logit model used in this paper assumed that such situations arise randomly with a rate that perhaps differs for different roadways. This parameter was absorbed into the logit model's constant term, and it is well-known that this term cannot be identified from case-control data alone. Because this parameter was not needed in testing for a U-shaped relation between speed and risk, this did not handicap our analysis. (In more traditional crash reconstruction, one in essence conditions on the occurrence of the crash avoidance event, so again it is not necessary to determine how this event arose.) A more complete understanding of how crashes occur however will eventually require determining how crash-avoidance situations arise. At present, though, our ability to model these does not appear to be as well-developed as our ability to model what happens once a crash sequence has started.
Finally, these results should caution us against using aggregated data to make overly general statements about crash causation. It may be that in other scenarios low speed is a causal factor in crashes. The challenge now is first to identify those scenarios, and then to demonstrate in actual instances how low speed caused those crashes.
ACKNOWLEDGMENTS
This research was supported in part by Bureau of Transportation Statistics Contract DTTS-00-G-B004-MN, and in part by the Intelligent Transportation Systems Institute at the University of Minnesota. The authors would like to thank Don Schmaltzbauer of the Minnesota State Patrol and Dan Brannon of the Minnesota Department of Transportation for their assistance in obtaining the Minnesota data.
REFERENCES
Cirillo, J. 1968. Interstate System Accident Research: Study II. Public Roads 35:71-75.
Cooperrider, N., T. Thomas, and S. Hammoud. 1990. Testing and Analysis of Vehicle Rollover Behavior. SAE Technical Paper 900366, SAE Inc., Warrendale, PA.
Davis, G. 1999. Using Graphical Markov Models and Gibbs Sampling to Reconstruct Vehicle/Pedestrian Accidents. Proceedings of the Conference on Traffic Safety on Two Continents. Linköping, Sweden: Swedish National Road and Transport Institute.
______. 2003. Bayesian Reconstruction of Traffic Accidents and Causal Effect of Speed in Intersection and Pedestrian Accidents. Law, Probability and Risk 2:69-89.
Davis, G. and S. Davuluri. 2002. Development and Application of Case-Control Methods to Traffic Safety Problems, prepared for the U.S. Department of Transportation, Bureau of Transportation Statistics, Washington, DC.
Fricke, L. 1990. Traffic Crash Reconstruction. Traffic Institute, Northwestern University, Evanston, IL.
Garrott, W. and D. Guenther. 1982. Determination of Precrash Parameters from Skid Mark Analysis. Transportation Research Record 893:3836.
Harkey, D., D. Robertson, and S. Davis. 1990. Assessment of Current Speed Zoning Criteria. Transportation Research Record 1281:40-51.
Hauer, E. 1997. Observational Before-After Studies in Road Safety. New York, NY: Elsevier.
Hosmer, D.W. and S. Lemeshow. 2000. Applied Logistic Regression, 2nd ed. New York, NY: Wiley.
Kloeden, C., A. McLean, V. Moore, and G. Ponte. 1997. Traveling Speed and the Risk of Crash Involvement. NHMRC Road Accident Research Unit, University of Adelaide, Adelaide, Australia.
Kloeden, C., G. Ponte, and J. McLean. 2001. Traveling Speed and the Risk of Crash Involvement on Rural Roads. Road Accident Research Unit, University of Adelaide, Adelaide, Australia.
Kloeden, C., J. McLean, and G. Glonek. 2002. Reanalysis of Traveling Speed and the Risk of Crash Involvement in Adelaide South Australia. Road Accident Research Unit, University of Adelaide, Adelaide, Australia.
Martinez, J., and R. Schlueter. 1996. A Primer on the Reconstruction and Presentation of Rollover Accidents, SAE Technical Paper 960647, SAE Inc., Warrendale, PA.
Moore, V., J. Dolinis, and A. Woodward. 1995. Vehicle Speed and Risk of a Severe Crash. Epidemiology 6:258-262.
Nystrom, G. and G. Kost. 1992. Application of NHTSA Crash Database to Pole Impact Prediction. SAE Technical Paper 920605, SAE Inc., Warrendale, PA.
Searle, J. 1993. The Physics of Throw Distance in Accident Reconstruction. Accident Reconstruction: Technology and Animation III. Warrendale, PA: SAE Inc.
Semon, M. 1995. Determination of Speed from Yaw Marks. Forensic Accident Investigation: Motor Vehicles. Edited by T. Bohan and A. Damask. Charlottesville, VA: Michie Butterworth, 149-198.
Shinar, D. 1998. Speed and Crashes: A Controversial Topic and an Elusive Relationship. Managing Speed, Special Report 254. Washington, DC: Transportation Research Board, 221-276.
Solomon, D. 1964. Accidents on Main Rural Highways Related to Speed, Driver and Vehicle. Washington, DC: Bureau of Public Roads.
Spiegelhalter, D., N. Best, B. Carlin, and A. van der Linde. 2002. Bayesian Measures of Model Complexity and Fit. Journal of the Royal Statistical Society B 64:583-616.
Spiegelhalter, D., A. Thomas, and N. Best. 2000. WinBUGS Version 1.3 User Manual. Cambridge, UK: MRC Biostatistics Unit, University of Cambridge.
West, L. and J. Dunn. 1971. Accidents, Speed Deviation and Speed Limits. Traffic Engineering, July, 52-55.
White, S. and A. Wilson. 1970. Some Effects of Measurement Errors in Estimating Involvement Rate as a Function of Deviation from Mean Traffic Speed. Journal of Safety Research 2:67-72.
ADDRESSES FOR CORRESPONDENCE
1 Corresponding author: G. Davis, Department of Civil Engineering, University of Minnesota, 122 CivE, 500 Pillsbury Drive SE, Minneapolis, MN 55455. E-mail: email@example.com
2S. Davuluri, Parsons Brinckerhoff, 999 Third Ave, Suite 2200, Seattle, WA 98104. E-mail: Davuluri@pbworld.com
3J. Pei, Department of Civil Engineering, University of Minnesota, 122 CivE, 500 Pillsbury Drive SE, Minneapolis, MN 55455. E-mail: firstname.lastname@example.org