## 1. Introduction

Like most agricultural industries, the success of the Australian sugar industry is heavily impacted by climate. Most of Australia’s sugarcane is grown along the narrow coastal strip between the latitudes of 15° and 30°S (Fig. 1). This region experiences wet summers and dry winters with considerable rainfall variability from one year to the next, much of this interannual variability being due to the El Niño–Southern Oscillation (ENSO) phenomenon. The ability to forecast future ENSO conditions and understand how this relates to industry planning is of vital practical and financial importance to the Australian sugar industry.

Many industry practitioners vividly recall the La Niña events that dominated the years from 1998 to 2000. The 1998/99 La Niña in particular was reported to cost the Australian sugar industry in excess of $175 million (Australian dollars; Everingham et al. 2007). For an industry that generates between $1 and $2 billion annually, this cost represents a significant sacrifice of revenue. These costs can be attributed to many factors, but ultimately most blame can be placed on a wet end to the harvest season and the inability to forecast and plan for it with a sufficient lead time.

The harvest season runs from approximately the middle (May–June) to near the end of the year (November–December). The objective is to complete the harvest before the onset of the summer rains, when conditions are too boggy for machinery to operate. Harvesting in wet conditions damages the soil by compaction and limits the regrowth of the crop in future seasons. This is particularly harmful for sugarcane crops because the plant is typically allowed to regrow for five successive seasons before being completely ploughed out and replanted. Failure to harvest all of the cane can mean lost profit. Harvesting in wet conditions creates additional problems at the mill level because more dirt must be removed from the cane as part of the milling process. At the marketing level, harvest disruption contributes to the logistical nightmare of delivering sugar that has been forward sold on the world sugar market. It is without question that wet harvest seasons are undesirable for the industry.

During the harvest season, most of the rainfall occurs during the Southern Hemisphere spring months of September–November (SON), with the winter months of June–August being much drier. However, owing to natural climate variability, some springs are wetter than others while some are drier. Advance knowledge about spring conditions could be used to help decide the start date for the harvest season. If there were a higher risk of spring rainfall, industry decision makers could consider starting the harvest season earlier. Conversely, if there were a lower risk of rainfall during spring, the industry could give consideration to starting the harvest season at a similar or later time than usual. To give the industry time to complete the mill maintenance needed for the commencement of harvest, the industry would need to know the chance of spring rainfall as early as January of the same year and no later than March.

Spring rainfall for Australian sugarcane-growing regions is known to be influenced by ENSO (e.g., Everingham et al. 2002, 2007), so the ability to forecast ENSO conditions would help in deciding when to harvest. Knowledge of ENSO conditions that were to emerge during the harvest would be needed early in the year so that the industry had sufficient lead time to implement preharvest preparations. This requires forecasting across the austral autumn, the most difficult time of the year to forecast ENSO. Clarke and Van Gorder (2003) developed a statistical model to predict the commonly used Niño-3.4 index, which is defined as the sea surface temperature anomaly averaged over the equatorial Pacific region 5°S–5°N, 170°–120°W. Apart from the simplicity of the model, a distinct advantage of the Clarke and Van Gorder model to the Australian sugar industry is the ability of the model to forecast post-autumn Niño-3.4 conditions before autumn (i.e., the ability of the model to forecast across the so-called Southern Hemisphere autumn predictability barrier).

While the ability to predict Niño-3.4 values is necessary for making a rainfall forecast, it is not sufficient unless the ENSO forecasts are linked with rainfall. Everingham et al. (2007, 2008) tested the relationship between austral spring (SON) rainfall and the Clarke and Van Gorder (2003) Niño-3.4 predictions made using data up to the end of January, February, and March. Across all regions considered in their studies there was a higher risk of obtaining SON rainfall above the median when the statistical model predicted La Niña during the SON season. For selected northern regions this risk was reduced when the model predicted El Niño during the SON season. While the study revealed that under certain conditions the probability of having wetter (or drier) springs differed from climatology, it did not attempt to quantify the precise risk of receiving above- (or below-) median rainfall amounts, nor did it consider the risk of receiving amounts of rainfall other than the median.

A more general approach is to forecast rainfall probability distributions. Knowing the relevant rainfall probability density functions (pdfs) enables risk to be quantified more accurately; the probability of rainfall greater than or less than any given amount can then be calculated. The goal of this paper is to report and test a method for predicting, early in the calendar year, the austral spring (SON) rainfall pdfs for sugar-growing regions along the eastern coast of Australia.

There are two major uncertainties contributing to the prediction of the SON rainfall pdf at a given location. Firstly, there is not a one-to-one relationship between a given ENSO state and SON rainfall at that location; the same ENSO state, by some measure, can result in two very different SON rainfall totals. Secondly, the future ENSO state relevant to the SON rainfall is not known at the beginning of the year; it must be predicted, and such predictions have errors. In this paper we shall examine these uncertainties and show how they can be combined to calculate the required pdf.

The structure of the paper is as follows: in the next section we discuss the (nonlinear) relationship between the El Niño index Niño-3.4 and northeastern coastal Australian rainfall and show how, for each cane-growing region, SON rainfall can be approximately described by a gamma distribution with parameters dependent on Niño-3.4. In section 3 we estimate, approximately, the Gaussian error distribution made by the Clarke and Van Gorder method when it predicts Niño-3.4. The results of sections 2 and 3 are then used, in section 4, to estimate SON rainfall probability distributions for the Tully, Plane Creek, and Harwood sugar mills, representative mills for the northern, central, and southern sugarcane regions. Section 5 presents and discusses the cross-verified results and the main text of the paper then concludes with a summary.

## 2. SON northeastern Australian rainfall and Niño-3.4

Daily rainfall data from 33 locations on Australia’s northeastern coast were obtained (see http://www.longpaddock.qld.gov.au/silo/). The original daily data from the Australian Bureau of Meteorology were in-filled using the techniques outlined in Jeffrey et al. (2001). We used the daily data to calculate SON rainfalls at the 33 locations.

Table 1 shows the results of lag correlating the SON rainfalls at the 33 locations with 3-month averages of Niño-3.4. Maximum (in magnitude) correlations occur when Niño-3.4 leads the SON rainfall by a few months. Slightly better correlations are obtained with the equatorial index Niño-4, which, being defined as the sea surface temperature anomaly averaged over the region 5°S–5°N, 160°E–150°W, is based on an equatorial region closer to the northeastern Australian coast than the Niño-3.4 region. However, the correlation differences are small, and since we will be using the Clarke and Van Gorder (2003) model that is set up to predict Niño-3.4, we will use Niño-3.4 here.

Correlation magnitudes in Table 1 are similar over large areas and tend to decrease southward. For simplicity in this paper we will focus on three stations—Tully, Plane Creek, and Harwood. These stations are representative of the northern, central, and southern coastal regions.

Figure 2 shows SON rainfall at these representative stations plotted against the relevant best-correlated Niño-3.4 lead index for the years 1950–2005 (Tully and Plane Creek) and 1950–2006 (Harwood). The plots indicate that the SON rainfall, particularly at Tully and Plane Creek, is approximately bilinearly dependent on Niño-3.4. Specifically, both small and large El Niños produce about the same dryness; in other words, past a given point, it does not matter how big the El Niño is—the rainfall typically does not decrease further. This is in contrast to SON rainfall during La Niña, which on average is more variable and higher as Niño-3.4 becomes more and more negative. This nonlinear rainfall response to ENSO is typical of stations in the northern and central regions. It also is seen in all-Australian rainfall as was originally pointed out by Power et al. (2006).

Figure 2 shows that, as noted earlier, while there is a link between SON rainfall and Niño-3.4, it is far from deterministic even if we take into account the nonlinearity discussed above. We therefore adopt a probabilistic approach, each Niño-3.4 value corresponding to a rainfall probability density function. If we had enough data, we could determine this pdf as a function of Niño-3.4 by constructing a histogram for each value of Niño-3.4. However, Fig. 2 shows that we do not have nearly enough data to do this. We will therefore proceed as follows:

Following common practice for rainfall (see Wilks 2006), we assume that for a given value of Niño-3.4 the SON rainfall *x* is described by the gamma probability density function

$$f(x) = \frac{(x/\beta)^{\alpha-1}\exp(-x/\beta)}{\beta\,\Gamma(\alpha)}, \qquad x > 0, \tag{2.1}$$

where Γ(*α*) is the gamma function. Note that *f*(*x*) is fully specified by only two parameters, *α* and *β*. As summarized by Wilks (2006), *α* and *β* can be found from the data [see (2.4) and (2.5) below] using an approximation to the maximum likelihood solution. We checked that this approximation is an excellent one for the parameters we used. The approximation first calculates

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i \tag{2.2}$$

and

$$D = \ln\bar{x} - \frac{1}{n}\sum_{i=1}^{n}\ln x_i, \tag{2.3}$$

where, in our case, *x_i* is the SON rainfall in mm for the *i*th year and *n* is the number of years for which data are available. The parameters *α* and *β* are then estimated as (see Wilks 2006)

$$\alpha = \frac{1 + \sqrt{1 + 4D/3}}{4D} \tag{2.4}$$

and

$$\beta = \bar{x}/\alpha. \tag{2.5}$$

Writing *x_i* = *x̄* + *x*′*_i*, where *x*′*_i* is the deviation from the sample mean, we have

$$\ln x_i = \ln\bar{x} + \ln\!\left(1 + x'_i/\bar{x}\right). \tag{2.6}$$

Since |*x*′*_i*/*x̄*| is typically small compared with unity,

$$\ln\!\left(1 + x'_i/\bar{x}\right) \approx \frac{x'_i}{\bar{x}} - \frac{1}{2}\left(\frac{x'_i}{\bar{x}}\right)^2, \tag{2.7}$$

and averaging (2.6) over the *n* years then gives

$$\frac{1}{n}\sum_{i=1}^{n}\ln x_i \approx \ln\bar{x} - \frac{s^2}{2\bar{x}^2}, \tag{2.8}$$

where *s*² is the sample variance. Thus from (2.3) and (2.8)

$$D \approx \frac{s^2}{2\bar{x}^2}. \tag{2.9}$$

Hence, once *s* and *x̄* are known, *D* is known and, by (2.4) and (2.5), so are *α*, *β*, and the gamma distribution (2.1).
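The gamma-parameter estimation above takes only a few lines of code. The following is a minimal Python sketch of the approximate maximum likelihood estimator summarized by Wilks (2006); the synthetic rainfall sample and its parameters are illustrative, not values from the paper.

```python
import numpy as np

def fit_gamma_thom(rain_mm):
    """Estimate the gamma shape (alpha) and scale (beta) parameters from
    a sample of SON rainfall totals using the approximation to the
    maximum likelihood solution summarized by Wilks (2006)."""
    x = np.asarray(rain_mm, dtype=float)
    xbar = x.mean()                        # sample mean
    D = np.log(xbar) - np.log(x).mean()    # sample statistic D
    alpha = (1.0 + np.sqrt(1.0 + 4.0 * D / 3.0)) / (4.0 * D)
    beta = xbar / alpha
    return alpha, beta

# Check on synthetic rainfall drawn from a known gamma distribution
# (shape 3 and scale 120 mm are arbitrary illustrative values):
rng = np.random.default_rng(0)
sample = rng.gamma(shape=3.0, scale=120.0, size=5000)
alpha_hat, beta_hat = fit_gamma_thom(sample)
```

With a few thousand samples the estimator recovers the generating parameters closely, which is the sense in which the approximation is "an excellent one."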

From the above we can determine the dependence of the gamma distribution pdf on Niño-3.4 if we can determine the dependence of *x̄* and *s* on Niño-3.4. The plots in Fig. 2 suggest that *x̄* is approximately bilinearly dependent on Niño-3.4: for Niño-3.4 > *N*_{*}, *x̄* takes the constant value *P*_{*}, and then, for Niño-3.4 ≤ *N*_{*}, *x̄* increases linearly with constant slope as Niño-3.4 decreases. We varied *N*_{*}, *P*_{*}, and the constant slope for Niño-3.4 ≤ *N*_{*} to get the best fit.

The Fig. 2 plots also suggest that the variance about the *x̄* fit is approximately constant for Niño-3.4 > *N*_{*} and then increases for Niño-3.4 ≤ *N*_{*}. For simplicity we therefore adopted a similar bilinear fit for *s* as for *x̄*. For Niño-3.4 > *N*_{*}, we estimated the constant *s* part of the fit using all data having Niño-3.4 > *N*_{*}. Let this constant value of *s* be *s*_{*}. We also estimated *s*_, the value of *s* calculated using all data having Niño-3.4 ≤ *N*_{*}. If *N*_ is the average value of Niño-3.4 found from the data for Niño-3.4 ≤ *N*_{*}, then our calculations have provided us with two points (*N*_, *s*_) and (*N*_{*}, *s*_{*}) in the (*N*, *s*) plane. These two points define a straight line for Niño-3.4 ≤ *N*_{*} and thus complete the specification of the bilinear function *s*(Niño-3.4). Bilinear parameter values for Tully, Plane Creek, and Harwood are given in Table 2.
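A least squares hinge fit of the kind described for *x̄* can be sketched as follows. The hinge model and grid search are our own minimal illustration (not the authors' code), and the synthetic data stand in for the Fig. 2 scatter.

```python
import numpy as np

def bilinear_fit(nino, rain, n_star_grid):
    """Least squares fit of the hinge ("bilinear") model: rainfall is a
    constant P* for Nino-3.4 > N*, and P* + slope*(Nino-3.4 - N*) for
    Nino-3.4 <= N*. For each trial N* the model is linear in (P*, slope),
    so those two parameters come from an ordinary least squares solve."""
    best = None
    for n_star in n_star_grid:
        ramp = np.minimum(nino - n_star, 0.0)          # 0 above the hinge
        A = np.column_stack([np.ones_like(nino), ramp])
        coef, *_ = np.linalg.lstsq(A, rain, rcond=None)
        err = float(((A @ coef - rain) ** 2).sum())
        if best is None or err < best[0]:
            best = (err, n_star, coef[0], coef[1])
    _, n_star, p_star, slope = best
    return n_star, p_star, slope

# Synthetic check: flat at 300 mm above a hinge at -0.5, with rainfall
# rising (slope dP/dN = -200 mm per unit Nino-3.4) below the hinge:
rng = np.random.default_rng(1)
n = rng.uniform(-2.5, 2.5, 400)
r = 300.0 - 200.0 * np.minimum(n + 0.5, 0.0) + rng.normal(0.0, 20.0, 400)
n_star, p_star, slope = bilinear_fit(n, r, np.linspace(-1.5, 0.5, 81))
```

The recovered hinge point, plateau, and slope correspond to the (*N*_{*}, *P*_{*}, *dP*/*dN*) parameters tabulated in Table 2.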

## 3. Prediction of Niño-3.4

The Clarke and Van Gorder (2003) statistical model predicts Niño-3.4 using the index

$$S(t) = a\,\text{Niño-3.4}(t) + b\,\tau(t) + c\,\bar{h}(t), \tag{3.1}$$

a least squares estimate of Niño-3.4(*t* + Δ*t*) for various lead times Δ*t*. In (3.1), *τ*(*t*) is an Indo-Pacific zonal wind anomaly index and *h̄*(*t*) is related to upper-ocean heat content since it is the anomalous depth of the 20°C isotherm averaged across the equatorial Pacific from 5°S to 5°N. The *τ*(*t*) and *h̄*(*t*) indexes are each useful predictors of ENSO across the Southern Hemisphere autumn; January, February, and March values of either index are correlated with September, October, and November values of Niño-3.4 later that year with a correlation of at least 0.6. In addition, since Niño-3.4 is strongly persistent from June through March of the following year, by appropriate choice of the coefficients *a*, *b*, and *c* in (3.1), *S*(*t*) is an excellent ENSO predictor throughout the year. The coefficients are determined by a least squares fit of *S*(*t*) to Niño-3.4(*t* + Δ*t*) for each calendar month and each lead time Δ*t*. Cross-validated calculations indicate that the model performs as well as or better than other statistical and dynamical ENSO prediction models (see Figs. 10.1 and 11.1 of Clarke 2008).
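The least squares determination of the coefficients *a*, *b*, and *c* can be sketched as follows. The predictor series and the linear rule generating them are invented for the check; the operational model fits separate coefficients for each calendar month and lead time.

```python
import numpy as np

def fit_predictor(nino, tau, hbar, lead):
    """Least squares estimate of the coefficients a, b, c in (3.1),
    S(t) = a*Nino3.4(t) + b*tau(t) + c*hbar(t), fitted so that S(t)
    approximates Nino3.4(t + lead). A sketch only."""
    X = np.column_stack([nino[:-lead], tau[:-lead], hbar[:-lead]])
    y = nino[lead:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # a, b, c

# Synthetic monthly series obeying a known 6-month-lead linear rule:
rng = np.random.default_rng(2)
tau = rng.normal(size=500)
hbar = rng.normal(size=500)
nino = np.zeros(500)
for t in range(494):
    nino[t + 6] = 0.5 * nino[t] + 0.3 * tau[t] + 0.2 * hbar[t] \
                  + 0.05 * rng.normal()
a, b, c = fit_predictor(nino, tau, hbar, lead=6)
```

Because the synthetic predictand really is a noisy linear combination of the three predictors, the fit recovers the generating coefficients (0.5, 0.3, 0.2) closely.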

Our focus is on predicting 3-month averages of Niño-3.4 for May–July (MJJ), June–August (JJA), and August–October (ASO) since, as was shown in section 2, these Niño-3.4 indexes are related to rainfall probabilities. Forecasts of these indexes are needed from January, February, and March.

Histograms of “predicted” JJA Niño-3.4 minus observed JJA Niño-3.4 are shown in Fig. 3 for model “predictions” given data up until the end of January. The predictions are cross-verified hindcasts for the period from January 1981 to December 2001 and operational predictions from 2002 to 2006. As is apparent from the plot, the data are few and we decided to fit the data with a Gaussian distribution. In the appendix, we describe a bootstrap approach for estimating the predicted minus observed histogram. This histogram is similar to the straight Gaussian estimate, so much so that we get rainfall probability estimates that differ negligibly. In what follows we will use the simple Gaussian fit for the predicted minus observed pdf.

The preceding discussion focused on the prediction of JJA Niño-3.4; similar results are obtained for MJJ and ASO Niño-3.4. In the next section, we will show how the error distributions of this section and the link between SON rainfall and Niño-3.4 established in section 2 can be used to predict a pdf of SON rainfall at a given location.

## 4. Prediction of SON rainfall probability

We illustrate our method using our prediction of the pdf for the 2007 SON Tully rainfall from January 2007. At the beginning of February 2007 we know the January 2007 values of Niño-3.4, *τ*, and *h̄*. These values, together with the known coefficients *a*, *b*, and *c* for Δ*t* = 6 months, are used in (3.1) to calculate *S*(*t*), our prediction of JJA 2007 Niño-3.4. In this prediction the *a*, *b*, and *c* coefficients are “frozen,” having been obtained from a least squares analysis of data from January 1981 to December 2001.

If we randomly sample the predicted minus observed Gaussian pdf for the prediction of JJA Niño-3.4 from January (see Fig. 3), then, since we know the predicted JJA Niño-3.4 for 2007, we can calculate a sample observed value of JJA Niño-3.4. From section 2 we can use this observed JJA Niño-3.4 value to calculate the gamma distribution parameters *α* and *β* for Tully, so the gamma distribution is known. From this gamma distribution we draw 1000 rainfall samples. Repeating this whole process 999 more times enables the calculation of one million possible SON rainfalls from which a histogram can be constructed (Fig. 4a). This histogram is an estimate of the 2007 SON rainfall pdf for Tully given climate data until the end of January 2007. To be consistent with our use of the gamma distribution for rainfall, this histogram is fitted to a gamma pdf (see Fig. 4a), and we use this as our final predicted 2007 rainfall pdf. Figure 4b shows the cumulative distribution functions corresponding to the histogram and the gamma pdf.
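The two-stage sampling just described can be sketched as follows. All numbers here (the predicted Niño-3.4, its error standard deviation, and the toy hinge fits mapping Niño-3.4 to a rainfall mean and standard deviation) are invented placeholders for the fitted values of sections 2 and 3.

```python
import numpy as np

rng = np.random.default_rng(3)

s_pred = -1.2   # hypothetical predicted JJA Nino-3.4 from (3.1)
err_sd = 0.45   # hypothetical std dev of the Gaussian predicted-minus-observed pdf

def gamma_params(nino):
    """Toy stand-in for the section 2 bilinear fits mapping an observed
    Nino-3.4 value to a rainfall mean and std dev (mm), converted to gamma
    parameters by moment matching (alpha = mean^2/var, beta = var/mean)."""
    mean = 300.0 - 200.0 * min(nino + 0.5, 0.0)   # hinge at N* = -0.5
    std = 80.0 - 60.0 * min(nino + 0.5, 0.0)
    alpha = (mean / std) ** 2
    beta = mean / alpha
    return alpha, beta

chunks = []
for _ in range(1000):
    # predicted minus observed ~ N(0, err_sd), so a sample observed
    # Nino-3.4 is the prediction minus a sampled error
    obs_nino = s_pred - rng.normal(0.0, err_sd)
    alpha, beta = gamma_params(obs_nino)
    chunks.append(rng.gamma(alpha, beta, size=1000))
rain = np.concatenate(chunks)   # one million possible SON rainfalls
```

The million values of `rain` are then binned into a histogram and, as in the text, refitted to a final gamma pdf.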

We only have complete datasets for all stations up to the end of 2005, so we only have four verified SON rainfall forecasts (2002–05) since the *a*, *b*, and *c* model coefficients were frozen. To test our model predictions for more years, we additionally carried out cross-verified “forecasts” for the years 1981–2001, when data for the predictors Niño-3.4, *τ*, and *h̄* are available. For example, to “forecast” 1998, the 1998 data were withheld and the coefficients *a*, *b*, and *c* found by least squares fitting of the remaining years 1981–97 and 1999–2001. The SON rainfall–Niño-3.4 bilinear relationship (see Fig. 2) was also calculated with the 1998 data removed. Then the prediction for the pdf of 1998 SON rainfall at a given location was found in a similar way to that for the 2007 case described above. For each station we thus have 25 predicted pdfs (21 cross verified, 1981–2001, and 4 operational, 2002–05) available for testing the method.

## 5. Testing the accuracy of the SON rainfall probability predictions

Figure 5 shows the predicted median SON rainfalls at Tully, Plane Creek, and Harwood (solid gray line) and the corresponding observed SON rainfalls. The median and observed rainfalls at these locations are correlated at 0.57, 0.30, and 0.51, respectively. Also shown in Fig. 5 in each plot are dashed lines showing the 16⅔ and 83⅓ percentiles so that, in theory, 83⅓% − 16⅔% = 66⅔% = ⅔ of the observed rainfalls should lie between the dashed lines. The percentage of observed SON rainfalls within the dashed lines at Tully is 76%, close to the ⅔ value, but there are bigger discrepancies at Plane Creek (44%) and Harwood (92%). The better agreement is expected at Tully since the correlation of the predicted median and observed rainfall is higher there. The average percentage for all three stations is 71%, close to the expected ⅔ value.
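The ⅔ coverage check just described can be carried out as in the sketch below. The per-year gamma parameters and observed rainfalls are invented for illustration; the point is only the mechanics of counting observations inside the predicted 16⅔–83⅓ percentile band.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical per-year predicted gamma parameters and observed rainfalls
# (here the observations are drawn from the predicted pdfs themselves, so
# coverage should be near 2/3 apart from sampling noise):
alphas = rng.uniform(2.0, 15.0, 25)
betas = rng.uniform(30.0, 150.0, 25)
observed = np.array([rng.gamma(a, b) for a, b in zip(alphas, betas)])

inside = 0
for a, b, obs in zip(alphas, betas, observed):
    # 16 2/3 and 83 1/3 percentiles estimated by sampling the gamma pdf
    draws = rng.gamma(a, b, size=20000)
    lo, hi = np.quantile(draws, [1.0 / 6.0, 5.0 / 6.0])
    inside += lo <= obs <= hi
coverage = inside / 25.0
```

With only 25 verification years, binomial sampling noise alone moves the observed coverage well away from ⅔, which is worth keeping in mind when interpreting the 76%, 44%, and 92% station values.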

To quantify the accuracy of the predicted pdfs we used the continuous ranked probability score (CRPS), defined as

$$\text{CRPS} = \int_0^\infty \left[R(x) - R_o(x)\right]^2\,dx, \tag{5.1}$$

where *x* is the rainfall in millimeters, *R*(*x*) is the predicted cumulative distribution function corresponding to the predicted pdf, and *R_o*(*x*) is the cumulative distribution function corresponding to the observed rainfall, being zero when *x* is less than the observed rainfall and unity when *x* is greater than or equal to the observed rainfall. The CRPS is a measure of the error in the predicted pdf and behaves in the way one expects an error to behave. For example, the deterministic forecast *x* = *x*_{predicted} corresponds to a Dirac *δ*-function pdf at *x* = *x*_{predicted} and a unit step function *R*(*x*) at *x* = *x*_{predicted}, giving

$$\text{CRPS} = |x_\text{predicted} - x_\text{observed}|. \tag{5.2}$$

In other words, the CRPS reduces to the absolute error when the forecast is deterministic. When the predicted pdf is not deterministic but, say, Gaussian with mean *x*_{predicted} and standard deviation *σ*, the CRPS increases with increasing *σ* when *x*_{predicted} ≈ *x*_{observed}, for then the predicted pdf and the observed *δ*-function pdf differ increasingly as *σ* increases. On the other hand, when *x*_{predicted} and *x*_{observed} are well separated, increasing *σ* shortens the distance between the predicted and observed pdfs and the CRPS decreases. Specifically, numerical calculations show that when *x*_{predicted} = *x*_{observed}, then

$$\text{CRPS} \approx 0.23\,\sigma, \tag{5.3}$$

and when |*x*_{predicted} − *x*_{observed}| > 2*σ*,

$$\text{CRPS} \approx |x_\text{predicted} - x_\text{observed}| - \frac{\sigma}{\sqrt{\pi}}. \tag{5.4}$$
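The limiting behaviors of the CRPS described above are easy to verify by integrating the squared difference between the predicted and observed cumulative distribution functions numerically; a sketch for a Gaussian predicted pdf (grid choices are arbitrary):

```python
import math
import numpy as np

def crps_gaussian(x_pred, sigma, x_obs):
    """CRPS by direct numerical integration: the integral over x of
    [R(x) - R_o(x)]^2, where R is the Gaussian predicted cdf and R_o is
    the unit step at the observed value."""
    grid = np.linspace(x_pred - 12.0 * sigma, x_pred + 12.0 * sigma, 24001)
    R = np.array([0.5 * (1.0 + math.erf((g - x_pred) / (sigma * math.sqrt(2.0))))
                  for g in grid])
    R_o = (grid >= x_obs).astype(float)
    return np.trapz((R - R_o) ** 2, grid)

# Perfect central forecast: the CRPS grows in proportion to sigma
c0 = crps_gaussian(0.0, 1.0, 0.0)
# Well-separated case: the CRPS is close to the absolute error
c3 = crps_gaussian(0.0, 1.0, 3.0)
```

Here `c0` is about 0.23σ while `c3` is slightly below the absolute error of 3σ, illustrating both limits: spreading the pdf penalizes a forecast that is already on target but cushions one that is badly off.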

Figure 6 shows the CRPS (solid black lines) for Tully, Plane Creek, and Harwood predicted from January for the years 1981–2006. The predictions are based on cross-verified results for 1981–2001 and predictions for 2002–06. The dashed curve in Fig. 6 corresponds to the CRPS error for the deterministic prediction [see (5.2)] equal to the long-term median rainfall and the gray curve to the deterministic prediction equal to the long-term mean. The curves show that the CRPS error is usually smaller for the black line (i.e., the pdf predictions are usually better using our model than trying to predict the rainfall using the long-term median or mean). The biggest prediction improvement is in the north at Tully where Niño-3.4 has its biggest influence on the rainfall and the model skill in predicting rainfall can be utilized.

Table 3 shows the average (1981–2005) CRPS errors for predictions given data up to the end of January, February, and March. It echoes the above results that the model gives improved SON rainfall predictions, this improvement decreasing southward. Notice from the table that while predictive skill changes spatially as one goes from one station to the next, for each station the skill is similar whether predictions are made from the end of January, February, or March.

## 6. Conclusions

We have developed a method for the long-lead forecasting of ENSO-influenced rainfall probability, specifically applying it to Australia’s sugarcane region along its northeastern coast. For planning purposes and risk assessment in the highly variable climate, forecasts of SON rainfall pdfs in specific locations are needed at the beginning of the calendar year. The predicted pdfs can be used to assess the likelihood of extreme SON rainfall, which can be very damaging to the industry. The likelihood of extremely heavy SON rainfall increases as the magnitude of a La Niña increases.

The prediction scheme involves forecasting Niño-3.4 several months in advance across the austral autumn, the most difficult time of the year to forecast ENSO, and then using the known relationship between Niño-3.4 and SON rainfall based on historical data. The latter relationship is nonlinear in the sense that while bigger La Niñas typically correspond to higher rainfall, bigger El Niños typically do not result in drier conditions. The method takes into account both the uncertainty associated with the Niño-3.4 forecast and the uncertain connection between rainfall and Niño-3.4 using two pdfs—a Gaussian pdf associated with the error in predicting Niño-3.4 using the Clarke and Van Gorder (2003) forecast method and a gamma pdf whose two parameters depend on observed Niño-3.4.

Specifically, suppose it is, say, January 2010 and we wish to predict the 2010 SON rainfall pdf at Tully. The method’s first step is to predict the 2010 value of JJA Niño-3.4, since JJA Niño-3.4 is the 3-month Niño-3.4 index most strongly related to the SON Tully rainfall. One thousand sample values of observed JJA Niño-3.4 are then obtained from this prediction and the Gaussian predicted minus observed error distribution. Since the gamma distribution of the SON Tully rainfall is a known function of observed JJA Niño-3.4 through its alpha and beta parameters, the 1000 sample observed Niño-3.4 values determine 1000 gamma Tully rainfall distributions. Each of these is then sampled 1000 times to obtain a million SON rainfalls from which a histogram is constructed. This histogram is then an estimate of the SON 2010 rainfall pdf at Tully. To be consistent with using the gamma distribution to represent rainfall, in practice we fit the histogram to a final gamma distribution and make it our estimate of the SON 2010 rainfall pdf at Tully.

Cross-verified tests of the forecast skill were carried out at the Tully, Plane Creek, and Harwood mills, representing, respectively, the northern, central, and southern regions of the approximately 1700-km-long cane-growing coastal strip. The tests showed that there is some skill in the pdf forecasts, particularly in the northern region where the SON rainfall–ENSO connection is strongest.

## Acknowledgments

The authors gratefully acknowledge financial support from the Australian Government through the Sugar Research and Development Corporation and from National Science Foundation Grants ATM-0623402 and OCE-0850749.

## REFERENCES

Clarke, A. J., 2008: *An Introduction to the Dynamics of El Niño & the Southern Oscillation*. Elsevier, 324 pp.

Clarke, A. J., and S. Van Gorder, 2003: Improving El Niño prediction using a space-time integration of Indo-Pacific winds and equatorial Pacific upper ocean heat content. *Geophys. Res. Lett.*, **30**, 1399, doi:10.1029/2002GL016673.

Everingham, Y. L., R. C. Muchow, R. C. Stone, N. G. Inman-Bamber, A. Singels, and C. N. Bezuidenhout, 2002: Enhanced risk management and decision-making capability across the sugar industry value chain based on seasonal climate forecasts. *Agric. Syst.*, **74**, 459–477.

Everingham, Y. L., A. J. Clarke, C. C. M. Chen, S. Van Gorder, and P. McGuire, 2007: Exploring the capabilities of a long lead forecasting system for the NSW sugar industry. *Proc. Aust. Soc. Sugar Cane Technol.*, **29**, 9–17.

Everingham, Y. L., A. J. Clarke, and S. Van Gorder, 2008: Long lead rainfall forecasts for the Australian sugar industry. *Int. J. Climatol.*, **28**, 111–117.

Jeffrey, S. J., J. O. Carter, K. M. Moodie, and A. R. Beswick, 2001: Using spatial interpolation to construct a comprehensive archive of Australian climate data. *Environ. Model. Softw.*, **16**, 309–330.

Power, S., M. Haylock, R. Colman, and X. Wang, 2006: The predictability of interdecadal changes in ENSO activity and ENSO teleconnections. *J. Climate*, **19**, 4755–4771.

Wilks, D. S., 2006: *Statistical Methods in the Atmospheric Sciences*. 2nd ed. Academic Press, 627 pp.

## APPENDIX

### Bootstrap Approach for Estimating the Niño-3.4 Prediction Error

Figure 3 shows a histogram of predicted JJA Niño-3.4 forecasts from January minus the observed JJA Niño-3.4 values using the Clarke and Van Gorder (2003) ENSO prediction model. Another way to estimate this histogram is to use a bootstrap method as follows:

For each of the 21 yr from 1981 to 2001 we have a set of predictors, Niño-3.4(*t*), *τ*(*t*), and *h̄*(*t*), and the corresponding observed predictands, Niño-3.4(*t* + Δ*t*). We omit one year (say 1998) and sample the 1981–97 and 1999–2001 data 21 times with replacement to obtain a set of 21 predictors and their corresponding predictands. The coefficients *a*, *b*, and *c* in (3.1) are then found by a least squares fit, and the prediction *S*(*t*) is made for 1998 using these coefficients and the predictors Niño-3.4(*t*), *τ*(*t*), and *h̄*(*t*) for 1998. The difference *S*(*t*) − Niño-3.4(*t* + Δ*t*) for 1998 is then calculated. This process is repeated 999 times so that for the year 1998 we have 1000 *S*(*t*) − Niño-3.4(*t* + Δ*t*) differences. Similar “omit one year” calculations are then repeated for the other 20 yr in the 1981–2001 interval. In addition, bootstrap calculations are also performed for the 5 yr 2002–06, but in those cases none of the years in the 1981–2001 set of predictors and predictands had to be omitted. The result of all these calculations is a set of 26 000 realizations of *S*(*t*) − Niño-3.4(*t* + Δ*t*), based, as much as possible, on the coefficients *a*, *b*, and *c* found during the 1981–2001 training interval. From these 26 000 realizations we construct the histogram and Gaussian pdf fit shown in Fig. A1.
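The appendix procedure for a single held-out year can be sketched as follows; the synthetic 21-yr training set and its linear rule are invented for illustration.

```python
import numpy as np

def bootstrap_errors(X, y, holdout, n_boot=1000, seed=0):
    """Bootstrap estimate of the prediction-error distribution for one
    held-out year: resample the remaining (predictor, predictand) pairs
    with replacement, refit a, b, c by least squares, predict the held-out
    year, and record the error S(t) - Nino3.4(t + lead)."""
    rng = np.random.default_rng(seed)
    keep = np.delete(np.arange(len(y)), holdout)
    errors = np.empty(n_boot)
    for k in range(n_boot):
        idx = rng.choice(keep, size=keep.size, replace=True)
        coef, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        errors[k] = X[holdout] @ coef - y[holdout]
    return errors

# Synthetic 21-yr training set obeying a known linear rule plus noise;
# columns stand in for Nino3.4(t), tau(t), hbar(t):
rng = np.random.default_rng(4)
X = rng.normal(size=(21, 3))
y = X @ np.array([0.6, 0.3, 0.2]) + rng.normal(0.0, 0.2, 21)
errs = bootstrap_errors(X, y, holdout=5)
```

Repeating this for every held-out year (and for the 2002–06 predictions) and pooling the errors yields the 26 000 realizations from which the histogram and Gaussian fit of Fig. A1 are constructed.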

Table 1. Pearson (ordinary) correlation (column 6) of a 3-month average of the El Niño index Niño-3.4 with SON rainfall at 33 stations (column 1) located (see columns 2 and 3) along the northeastern Australian coast for the years from 1950 to the (recent) end year in column 8. The 3-month average Niño-3.4 index (column 4) leads the SON rainfall by the number of months shown in column 5. For example, the correlation at Mossman is between the SON rainfall at Mossman and the MJJ Niño-3.4 index 4 months earlier, so for Mossman MJJ is in column 4 and 4 is in column 5. The lead was chosen to give a maximum (in magnitude) correlation. The stations have been arranged so that latitude south increases down the list. Harwood and all stations north of Isis have correlation coefficients significantly different from zero since |*r*_{crit}(95%)| = 0.22. Column 7 is the ratio of the explained variance of the least squares bilinear fit (see Fig. 2) to the explained variance of the standard linear regression fit; the larger the number is compared with unity, the better the bilinear fit is relative to the standard regression fit. Note that the column 7 calculations were done with the lead months of column 5 corresponding to standard linear correlation; in a few cases slightly different leads may have resulted if we had optimized the lead for the bilinear fit. Here, MAM indicates March–May and DJF indicates December–February. The other 3-month acronyms are defined in the text.

Table 2. Mean and median rainfall as well as bilinear fit (see Fig. 2) SON rainfall parameters for the Tully, Plane Creek, and Harwood mills for the years 1950–2005, 1950–2005, and 1950–2006, respectively. The quantities (*N*_{*}, *P*_{*}) correspond to the hinge point in Fig. 2 where the sloping line segment of slope *dP*/*dN* for *N* = Niño-3.4 ≤ *N*_{*} joins the constant SON precipitation line *P* = *P*_{*}; *N*_{*}, *P*_{*}, and *dP*/*dN* were varied to obtain the best fit to the data in the least squares sense with the tabulated values shown. Under this bilinear fit, *s*_{*}, the sample standard deviation for Niño-3.4 > *N*_{*}, and *s*_, the sample standard deviation for Niño-3.4 ≤ *N*_{*}, were calculated as in columns 7 and 8. The parameter *N*_, the average value of Niño-3.4 for Niño-3.4 ≤ *N*_{*}, was used to estimate the bilinear dependence of the standard deviation on Niño-3.4 (see the last paragraph of section 2).

Table 3. Average (1981–2005) CRPS errors in mm of SON rainfall for predicted pdfs at Tully, Plane Creek, and Harwood using our prediction method and predictions based on the long-term SON median rainfall. Predictions are made given data up to the end of January, February, and March. Positive values in the next-to-last column indicate a lower error for our prediction method. The skill in the last column is equal to (our method CRPS − long-term median CRPS) divided by (perfect prediction CRPS − long-term median CRPS). Since the perfect prediction CRPS is zero, this ratio reduces to 1 − (our method CRPS)/(long-term median CRPS). The skill is thus 100% when our method is perfect, between 0% and 100% when it beats the climatological long-term median, and negative when it loses to climatology.