Article

Modeling Socioeconomic Determinants of Building Fires through Backward Elimination by Robust Final Prediction Error Criterion

School of Engineering and Technology, Central Queensland University, Rockhampton, QLD 4701, Australia
* Author to whom correspondence should be addressed.
Axioms 2023, 12(6), 524; https://doi.org/10.3390/axioms12060524
Submission received: 15 March 2023 / Revised: 1 May 2023 / Accepted: 23 May 2023 / Published: 26 May 2023
(This article belongs to the Special Issue Statistical Methods and Applications)

Abstract

Fires in buildings are significant public safety hazards and can result in fatalities and substantial financial losses. Studies have shown that the socioeconomic makeup of a region can impact the occurrence of building fires. However, existing models based on the classical stepwise regression procedure have limitations. This paper proposes a more accurate predictive model of building fire rates using a set of socioeconomic variables. To improve the model’s forecasting ability, a backward elimination by robust final prediction error (RFPE) criterion is introduced. The proposed approach is applied to census and fire incident data from the South East Queensland region of Australia. A cross-validation procedure is used to assess the model’s accuracy, and comparative analyses are conducted using other elimination criteria such as p-value, Akaike’s information criterion (AIC), Bayesian information criterion (BIC), and predicted residual error sum of squares (PRESS). The results demonstrate that the RFPE criterion yields a more accurate predictive model based on several goodness-of-fit measures. Overall, the RFPE equation was found to be a suitable criterion for the backward elimination procedure in the socioeconomic modeling of building fires.

1. Introduction

Building fires remain a significant concern for households, businesses, and authorities across Australia, as evidenced by the annual expenditure of over $2.5 billion on fire protection products and services [1]. Despite this significant investment, building fires claimed the lives of 51 Australians in 2020 [2] and cost the country’s economy 1.3% of its gross domestic product (GDP) [3]. These costs are a combination of losses due to injuries, property damages, environmental damages, destruction of heritage, and various costs to affected businesses. In Queensland alone, 1554 fire incidents caused damage to building structures and contents in 2020 [4], with each incident representing a significant loss to a Queenslander who may have lost a family home, a loved one, or a source of livelihood that has sustained generations of Australians. As such, continued efforts to understand and mitigate the incidence of building fires are necessary.
Studies linking socioeconomic data to building fires have been conducted in various jurisdictions using quantitative and qualitative methodologies. Lizhong, et al. [5] established the relationship between GDP per capita, education level, and fire and death rates in Jiangsu, Guangdong, and Beijing, China, adopting partial correlation analysis to compute the correlation coefficient of every variable pairing. In Cook County, United States, geocoding and visual mapping connected poverty rates to higher ‘confined fire’ incident rates in one-family and two-family dwellings [6]. Logistic regression has also been used to identify relevant socioeconomic variables through four implementations within a four-stage conceptual framework [7]. That study utilizes the census data of New South Wales residents and the corresponding variables selected to calculate indexes within the Socioeconomic Indexes for Areas (SEIFA) project.
Other methodologies have also adopted algorithms to not only assign coefficients but also select the variables that build the most fitting model. For example, Chhetri, et al. [8] utilized the classical stepwise regression method and discriminant factor analysis (DFA) to select predictive determinants from variables identified in the technical papers of the Socioeconomic Indexes for Areas (SEIFA). As a result, it managed to capture variables with high t-statistics. However, its use of the classical stepwise regression method, as proposed by Efroymson [9], has been known to have several limitations. Critics have also discouraged its use of t-statistics or p-value elimination criteria and the forward selection procedure to build statistical models [10,11,12,13,14]. The limitations of the classical stepwise regression method can be summarized into five issues: overreliance on chance, overstated significance, lack of guarantee of global optimization, inconsistency caused by collinearity, and non-robustness to outliers [10,11,12,13,14]. In addition, the method has been shown to provide poorer accuracy than principal component analysis (PCA) [15]. Therefore, the methodology was improved in a study in the West Midlands, U.K., by adding PCA to discover the most predictive variables or components [16].
This paper attempts to improve the methodology in Chhetri, Corcoran, Stimson and Inbakaran [8] by using the backward elimination method and the robust final prediction error (RFPE) criterion to model the socioeconomic determinants of building fires. Such modifications to the model-building algorithm and elimination/selection criteria have the potential to produce a socioeconomic model with superior predictive accuracy. Additionally, the resulting model may make more cautious representations of individual parameters’ influence, preventing false confidence and reflecting the real world more accurately. The contribution of this paper includes the first application of the backward elimination by RFPE criterion and the comparative analysis of RFPE to other criteria applicable to the backward elimination procedure. Over and above that, the paper aims to improve the effectiveness of future fire safety regulations and programs that better protect households with the identified socioeconomic risk profile.
To evaluate the suitability of the proposed method, this paper presents a comprehensive analysis in six sections. Section 2 provides a review of the relevant literature to highlight the limitations of the conventional regression approach. Section 3 presents the proposed robust backward elimination method using the RFPE criterion. In Section 4, a case study based on data from the South East Queensland region is presented to demonstrate the effectiveness of the proposed method. The available alternative criteria to the backward elimination procedure and the comparative analysis of the proposed criterion are described in Section 5. Finally, Section 6 concludes the paper by discussing the study’s findings and outlining future research directions.

2. Related Work

Before going into the method’s ingrained limitations, the common purpose of adopting the classical regression method has to be understood. Researchers often adopt the method to disregard ‘insignificant’ variables and achieve parsimony, i.e., ‘simpler’ equations [10,11]. The parsimonious model is then used to infer the explanatory variables’ influences on the dependent variable [13,14]. Others use the resulting model for prediction and forecasting purposes [10,17].
Chhetri, Corcoran, Stimson and Inbakaran [8] conducted an ingenious study to model the socioeconomic determinants of building fires. It resourcefully identified the Index of Relative Socioeconomic Advantage and Disadvantage (IRSAD) by the Australian Bureau of Statistics (ABS) as a suitable pool of candidate explanatory variables. In addition, the study uses discriminant function analysis (DFA) to identify determinants of fires in different suburb types: the culturally diversified and economically disadvantaged suburbs, the predominantly traditional family suburbs, and the high-density inner suburbs with community housing. However, it uses classical stepwise regression to identify the overall socioeconomic determinants of building fires, which has been shown to have several limitations.
The limitations of classical stepwise regression can be summarized into five issues: overreliance on chance, overstated significance, lack of guarantee of global optimization, inconsistency caused by collinearity, and non-robustness to outliers. They are described one by one as follows:

2.1. Limitation 1: Over-Reliance on Chance

There is a high probability of the regression failing to identify actual causal variables. One of the main reasons is that a set of variables might, by chance, fit the particular training dataset. Without a validation process, the same variables might not show the same degree of influence on other sample datasets, such as datasets from other periods. The chance of nuisance variables being selected, synonymous with type I error, has been quantified by multiple studies, such as the one by Smith [11]. Apart from referencing experiments that show poor performance in small datasets [18,19], Smith [11] conducted a series of Monte Carlo simulations showing that stepwise regression can include nuisance variables 33.5% of the time while choosing from 50 candidate variables. The rate almost tripled when the method was used on 1000 candidate variables. The simulations also found that at least one valid variable was not selected 50.5% of the time when choosing from 100 candidate variables [11]. The main reason for this limitation is that the statistical tests used in stepwise regression were designed a priori, i.e., to quantify a model that has already been built or established, for example, through expert knowledge and causation studies [10]; they were never intended for model-building purposes. In turn, the method produces results that often overstate their significance.
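The inflation of type I error by automated selection can be illustrated with a small simulation along the following lines. It is a minimal sketch rather than a reproduction of the cited experiments: AIC-based backward elimination via R’s step() stands in for the p-value rules studied by Smith [11], the response is pure noise, and the sample sizes are arbitrary.

# Estimate how often at least one nuisance variable survives backward elimination
# when none of the candidates is related to the response.
set.seed(1)
n_sims <- 200; n <- 100; d <- 20               # arbitrary simulation sizes
any_nuisance_kept <- replicate(n_sims, {
  X <- matrix(rnorm(n * d), n, d)              # candidate variables, unrelated to y
  y <- rnorm(n)                                # response is pure noise
  dat <- data.frame(y, X)
  fit <- step(lm(y ~ ., data = dat), direction = "backward", trace = 0)
  length(coef(fit)) > 1                        # TRUE if any nuisance variable is retained
})
mean(any_nuisance_kept)                        # proportion of runs keeping at least one nuisance variable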

2.2. Limitation 2: Overstated Significance

McIntyre, Montgomery, Srinivasan and Weitz [13] determined that statistical significance tests are too liberal for any stepwise regression model, since the model has been ‘best-fitted’ to the dataset, biasing the results towards significance. Additionally, Smith [11] stressed that stepwise regression tends to underestimate the standard errors of the coefficient estimates, leading to narrow confidence intervals, overstated t-statistics, and understated p-values. This phenomenon also signifies overfitting of the model to the training dataset. In practical terms, the stepwise algorithm does not pick the set of variables that determine the response variable in the population; it picks the set of variables that ‘best’ fit the training sample dataset.

2.3. Limitation 3: Collinearity Causes Inconsistency

Stepwise regression assumes that explanatory variables are independent of each other. Therefore, there is no provision for collinearity in the stepwise regression procedure. As a result, collinearity in stepwise regression produces high variances and inaccurate coefficient estimates [20]. These effects are again attributed to the objective of finding a model that ‘best fits’ the training data. Models that contain different variables may have a similar fit in the presence of collinearity; therefore, the procedure will result in inconsistent results, i.e., the procedure becomes arbitrary [10,21]. Collinearity’s effect on the order of inclusion or elimination is one of the reasons for the varying outcomes [22,23]. With that said, these effects are more pertinent if the purpose of adopting stepwise regression is mainly inferential [24,25,26]. As a predictive model, the inconsistency is less relevant as variables compensate for each other as their coefficients are too high or too low [26]. Therefore, the resulting function may still satisfactorily predict the dependent variable but not be as reliable in estimating individual influence [26].

2.4. Limitation 4: No Guarantee of Global Optimization

Based on the limitations discussed, it is fair to question the optimality of stepwise regression’s outcome. Thompson [27], Freckleton [21], and Smith [11] discussed whether global optimization is achieved in stepwise regression, especially the forward selection algorithm. Since the algorithm selects variables one by one, the choice of the n-th variable depends on the (n − 1) variables already selected. Therefore, it is reasonable to conclude that the method cannot even guarantee that the n-variable model achieved is the best-fitting n-variable equation. In other words, the local optimization reached by conducting the stepwise regression is not guaranteed to be the global optimum. In addition, the issue may also be exacerbated by erratic variable selection in multicollinear datasets. Even a small degree of multicollinearity has been shown to bias stepwise regression towards achieving local optimization and away from global optimization [28].

2.5. Limitation 5: Bias Caused by Outliers

Outliers are a persistent issue in statistical analysis. They introduce bias into even the most basic statistical measures, e.g., the mean of sample data, affecting the accuracy of more advanced statistical techniques [29]. A single outlier can bias classical statistical techniques that would otherwise be optimal under normality or linearity assumptions. Firstly, population data inherently contain outliers, and as the sample grows larger, there is a greater likelihood of encountering outlying data points [30]. Secondly, large behavioral and social datasets are more susceptible to outliers [30,31]. Thirdly, it has been established that outliers in survey statistics of such scale are almost unavoidable, partly due to errors in survey responses or data entry [29,32]. Additionally, in contrast to the effect of collinearity, there is evidence that outliers affect both inferential accuracy and a model’s predictive accuracy [33].
After acknowledging the limitations of classical stepwise regression, a natural progression should lead to exploring an alternative to the method. Although the criterion modification will not wholly replace causation studies or eliminate the same weaknesses, it will produce significantly more reliable and cautious inferences and predictions.

3. Backward Elimination by Robust Final Prediction Error (RFPE) Criterion

A multivariate regression equation was sought to represent the rates of building fires based on an area’s socioeconomic composition. The resulting equation is expected to take the form of Equation (1).
y_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_d x_{id},
where y_i represents the rate of emergency services demand at area i, x_{ij} (j \in \{1, 2, \ldots, d\}) represents the socioeconomic variables for demand area i, \beta_j (j \in \{1, 2, \ldots, d\}) is the regression coefficient allocated to the j-th socioeconomic variable, and \beta_0 represents the intercept.
A backward elimination procedure was adopted to detect and eliminate insignificant socioeconomic variables based on the robust final prediction error (RFPE) criterion. The algorithm was set up to remove, at each step, the single variable whose removal improves the RFPE the most. The RFPE criterion, developed by Maronna, Martin, Yohai and Salibian-Barrera [31], has the benefit of minimizing the effect of outliers. The robust technique is an improvement on Akaike’s FPE criterion, which can be significantly biased by outliers in the dataset [34]. The procedure is then adapted to the data sourcing and processing methodology of the Chhetri, Corcoran, Stimson and Inbakaran [8] study on building fires in South East Queensland. The approach has been proposed and discussed by Untadi, et al. [35]. The proposed RFPE criterion is presented in Equation (2) as the expected value of the function \rho.
\mathrm{RFPE}(C) = E\left[\rho\left(\frac{y_0 - \mathbf{x}_{0C}'\hat{\boldsymbol{\beta}}_C}{\sigma}\right)\right]
where
\hat{\boldsymbol{\beta}} = \arg\min_{\boldsymbol{\beta}\in\mathbb{R}^q} \sum_{i=1}^{n} \rho\left(\frac{y_i - \mathbf{x}_{iC}'\boldsymbol{\beta}}{\hat{\sigma}}\right)
y_i = \sum_{j=1}^{p} x_{ij}\beta_j + u_i = \mathbf{x}_i'\boldsymbol{\beta} + u_i
\rho(r) = r^2
\mathbf{x}_{iC} = (x_{i1}, \ldots, x_{id})
C \subseteq \{1, 2, \ldots, d\}
i = 1, 2, \ldots, n
where (x_{ij}, y_i) is the dataset consisting of the relevant explanatory variables x_{iC} and the response variable y_i, and (x_0, y_0) represents an additional data point used to measure the sensitivity of the fit to outliers. C refers to the set of explanatory variables, a subset of the index set \{1, 2, \ldots, d\}. \hat{\beta} and \hat{\sigma} denote the MM-estimators of the regression parameters and scale, respectively. MM-estimators are a statistical estimation approach formulated by Yohai [36] that employs the iteratively reweighted least squares (IRWLS) method to optimize the estimation procedure. The initial estimators are chosen using a strategy proposed by Pena and Yohai [37], which uses data-driven criteria to guide the selection of the starting estimates rather than a random selection method [38]. The explanatory variables and error term are assumed to be i.i.d. standard normal. Adapting the estimator for Akaike’s FPE equation, the estimator for the RFPE equation was proposed as follows:
\widehat{\mathrm{RFPE}} = \frac{1}{n}\sum_{i=1}^{n}\rho\left(\frac{r_{iC}}{\hat{\sigma}}\right) + \frac{q}{n}\,\frac{\hat{A}}{\hat{B}}
where
\hat{A} = \frac{1}{n}\sum_{i=1}^{n}\psi\left(\frac{r_{iC}}{\hat{\sigma}}\right)^{2}, \qquad \hat{B} = \frac{1}{n}\sum_{i=1}^{n}\psi'\left(\frac{r_{iC}}{\hat{\sigma}}\right)
r_{iC} = y_i - \mathbf{x}_{iC}'\hat{\boldsymbol{\beta}}_C
q = |C|
\psi(r) = 2r
The RFPE estimator above is then embedded in the backward elimination procedure in Algorithm 1.
Algorithm 1. Algorithm of robust backward elimination by RFPE
  • Let Md be the full model that contains all d explanatory variables.
    M_d: y_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_d x_{id}
  • Calculate RFPE of Md.
  • For  k = d, d-1, …, 1:
    • Consider all k models that contain all but one of the variables in Mk, for a total of k − 1 explanatory variables each.
    • Among the k models, choose the model with the lowest RFPE and label it as Mk−1.
    • If RFPE of Mk−1 is higher or equal to the RFPE of Mk:
    • Terminate loop.
    • Else Continue with the remaining body of the loop
  • Return Mk
First, the RFPE of the model containing all d explanatory variables, set as Md, is calculated. Then, each variable is removed in turn to determine which elimination improves the RFPE of model Mk−1 the most, and the algorithm removes that single variable. The elimination iterates until the algorithm reaches an RFPE of Mk−1 that is higher than or equal to the RFPE of Mk. Termination means the algorithm assumes the subsequent iteration will not improve the model fit. An implementation in South East Queensland was conducted to validate the method’s proposed adoption.
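For illustration, the sketch below follows the structure of the RFPE estimator and Algorithm 1 in base R. It is a minimal sketch, not the RobStatTM implementation used in this study: the quadratic rho(r) = r^2 and psi(r) = 2r stated above are used as placeholders (a bounded robust rho, such as Tukey’s bisquare, would be used in practice), and fit_fun is an assumed wrapper returning residuals and a robust scale estimate from the fitting routine of choice.

# Placeholder loss functions from the text; a robust analysis would use a bounded rho.
rho  <- function(r) r^2
psi  <- function(r) 2 * r                      # derivative of rho
dpsi <- function(r) rep(2, length(r))          # derivative of psi, needed for B-hat

# RFPE estimator: mean rho of scaled residuals plus the (q/n) * A-hat / B-hat penalty.
rfpe <- function(residuals, sigma, q) {
  r <- residuals / sigma
  mean(rho(r)) + (q / length(r)) * (mean(psi(r)^2) / mean(dpsi(r)))
}

# Backward elimination (Algorithm 1): drop the variable whose removal lowers RFPE
# the most; stop when no removal improves it.
backward_rfpe <- function(y, X, fit_fun) {
  vars <- colnames(X)
  fit  <- fit_fun(y, X[, vars, drop = FALSE])
  best <- rfpe(fit$residuals, fit$sigma, length(vars))
  while (length(vars) > 1) {
    cand <- sapply(vars, function(v) {
      f <- fit_fun(y, X[, setdiff(vars, v), drop = FALSE])
      rfpe(f$residuals, f$sigma, length(vars) - 1)
    })
    if (min(cand) >= best) break               # no removal improves RFPE: terminate
    vars <- setdiff(vars, names(which.min(cand)))
    best <- min(cand)
  }
  vars                                         # retained explanatory variables
}

# Example stand-in for fit_fun (non-robust, for illustration only):
# fit_fun <- function(y, X) { f <- lm(y ~ X); list(residuals = residuals(f), sigma = summary(f)$sigma) }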

4. Case Study: South East Queensland, Australia

South East Queensland (SEQ) refers to a region that accounts for two-thirds of Queensland’s economy and where seventy percent of the state’s population resides [39]. The region is socioeconomically diverse, with no one social or economic status accounting for the majority of the population, providing sufficient complexity to ‘stress test’ the methodology [40]. In addition, the region is experiencing one of the highest population growth rates in Australia. The rate of interstate and international migration to the region has been the main driving force for the growth, potentially causing significant changes in the socioeconomic composition of suburbs in SEQ [41,42]. Hence, the region may benefit most from the method’s implementation.
The paper defines SEQ to include the Australian Bureau of Statistics (ABS)’s twelve statistical area 4 (SA4) regions—Eastern Brisbane, Northern Brisbane, Southern Brisbane, Western Brisbane, Brisbane Inner City, Gold Coast, Ipswich, Logan to Beaudesert, Northern Moreton Bay, Southern Moreton Bay, Sunshine Coast, and Toowoomba. The study’s datasets are analyzed at the statistical area 2 (SA2) level as the unit of analysis. In the 2016 Census, there were 332 SA2 areas in 12 SA4 regions in South East Queensland.

4.1. Datasets

Inspired by the methodology developed by Chhetri, Corcoran, Stimson and Inbakaran [8], the study revolves around the Australian Bureau of Statistics (ABS) technical paper for Socioeconomic Indexes for Areas (SEIFA). One of the indexes within SEIFA is the Index of Relative Socioeconomic Advantage and Disadvantage (IRSAD). In this study, the variables used to calculate IRSAD were the initial variables in the backward elimination algorithm. South East Queensland’s IRSAD is visualized in Figure 1.
The data are extracted from a 2016 Census database, “2016 Census—Counting Persons, Place of Enumeration”. It consists of tables containing aggregated values for the selected statistical areas, for example, the HIED dataset in Appendix A, Table A1. The data was accessed through the TableBuilder platform. Every variable represents a proportion of the population with a specific attribute, calculated using criteria defined for its numerator and denominator, summarized in Appendix A, Table A2.
However, such a set of explanatory variables is predisposed to multicollinearity, which violates the assumption of independence to which a regression model needs to conform in order to be meaningful [44]. Therefore, a stepwise elimination procedure is adopted to remove variables whose variance inflation factor (VIF), defined below, is higher than a threshold of 10 [45]. The procedure is executed using the vif() function in the ‘car’ R package. As a result, five variables deemed multicollinear (INC_LOW, NOYEAR12, INC_HIGH, UNEMPLOYED, and OVERCROWD) are eliminated.
\mathrm{VIF}_j = \frac{1}{1 - R_j^2}
where R_j^2 is the coefficient of determination obtained when the j-th explanatory variable is regressed on all the other explanatory variables.
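A minimal sketch of this VIF screening loop is given below, using the vif() function from the ‘car’ package mentioned above. The data frame census_dat and the response name fire_rate are placeholder names assumed for illustration.

library(car)   # provides vif()

# Iteratively drop the predictor with the largest VIF until all VIFs fall below the threshold.
drop_high_vif <- function(dat, response, threshold = 10) {
  preds <- setdiff(names(dat), response)
  while (length(preds) > 1) {
    v <- vif(lm(reformulate(preds, response), data = dat))
    if (max(v) < threshold) break
    preds <- setdiff(preds, names(which.max(v)))   # drop the worst offender and refit
  }
  preds                                            # predictors with VIF below the threshold
}
# retained <- drop_high_vif(census_dat, "fire_rate")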
On the other hand, the rate of building fires in South East Queensland was set as the response variable of the study. It was calculated from Queensland Fire and Emergency Services (QFES) incident data points labeled as incident types 111 (Fire: damaging structure and contents), 112 (Fire: damaging structure only), 113 (Fire: damaging contents only), and 119 (Fire: not classified above) from 2015 to 2017 [46]. The total number of incidents over the three years is summed, multiplied by 1000, and divided by the number of persons counted in each SA2 area in the 2016 Census, yielding the three-year rate of building fires per 1000 people. The data are accessible through the Queensland government’s open data portal.
However, inconsistencies exist between the geographical units used by QFES and the ABS to label data: QFES tags incident locations by their state suburb (SSC), while the ABS collects the relevant socioeconomic data at the SA2 level. The main issue brought about by this difference is that some suburbs span 2–4 SA2 areas; specifically, 221 suburbs out of 3263 are located in more than one SA2 area. Therefore, the study adopted a "winner takes all" approach, assigning an overlapping suburb to the SA2 in which most of the suburb’s residents are located (50 percent plus one). A matrix of suburbs and SA2 areas, represented as rows and columns, respectively, was generated through the ABS TableBuilder platform and named ‘SSCSA2’. The approach identifies the maximum value in every row and assigns the row an SA2, namely the name of the column at which the maximum value is located, through the following code segment.
# x: suburb-by-SA2 matrix of resident counts (rows = suburbs, columns = SA2 areas)
SSCSA2$SA2 <- colnames(x)[apply(x, 1, which.max)]
It must be noted that the QFES incident data points are labeled with suburb names that contain some misspellings. For example, some identified errors include ‘Cressbrookst’ and ‘Creastmead’. Additionally, the dataset does not distinguish names used for multiple different suburbs. Therefore, the study has identified these suburb names and added parentheses, distinguishing the suburbs by following the ABS State Suburbs (SSC) naming convention and cross-referencing the postcodes of the suburbs at issue. One example is Clontarf (Moreton Bay—Qld) and Clontarf (Toowoomba—Qld).

4.2. Parameters

The results were obtained using the R software, version 2021.09.0, on a device equipped with an AMD Ryzen 5 3450U with Radeon Vega Mobile Gfx at 2.10 GHz and 5.89 GB of usable RAM. The RobStatTM package was used to execute the robust stepwise regression analysis [47]. The tuning constant for the M-scale used to compute the initial S-estimator was set to 0.5; this constant determines the breakdown point of the resulting MM-estimator. The relative convergence tolerance for the iteratively reweighted least squares (IRWLS) iterations of the MM-estimator was set to 0.001, a level chosen to allow convergence to occur. The desired asymptotic efficiency of the final regression M-estimator was set to 0.95. Finally, the asymptotically bias-optimal family of loss functions was used for the rho function.
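As an illustration of the IRWLS iterations referred to above, the sketch below shows one M-step under the stated tolerance. It assumes the robust scale sigma, a psi function, and an initial coefficient vector beta0 are already available; the full MM procedure in RobStatTM additionally computes these from an initial S-estimator and is not reproduced here.

# One IRWLS loop for an M-estimator of regression (sketch only).
irwls <- function(X, y, sigma, psi, beta0, tol = 0.001, max_iter = 100) {
  beta <- beta0
  for (iter in 1:max_iter) {
    r <- as.vector(y - X %*% beta) / sigma           # scaled residuals
    w <- ifelse(abs(r) < 1e-10, 1, psi(r) / r)       # IRWLS weights psi(r)/r
    beta_new <- solve(t(X) %*% (w * X), t(X) %*% (w * y))   # weighted least squares step
    if (max(abs(beta_new - beta)) < tol) { beta <- beta_new; break }   # convergence check
    beta <- beta_new
  }
  drop(beta)
}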

4.3. Results

Nine variables have initially been eliminated, leaving ten variables in the final model. A detailed model specification, which includes a coefficient βd for every retained variable xd (see Equation (1)), is contained in Table 1.
The model does not satisfy the assumptions of normally distributed errors and equal error variances (homoscedasticity). A Shapiro-Wilk test on the errors provided convincing evidence to reject the null hypothesis that the errors are normally distributed, and a Breusch-Pagan test confidently rejected the null hypothesis that the errors have equal variance. In light of these findings, various transformations (logarithmic, square root, Box-Cox) were applied to the explanatory variables and/or the response variable to find a conforming model. The logarithmic transformation of the response variable, given below, was found to perform best in terms of compliance with the Gauss-Markov assumptions.
\log(y_i) = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_d x_{id}
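The assumption checks described above can be reproduced along the following lines, assuming a data frame census_dat with the response fire_rate (placeholder names) and substituting an ordinary least squares fit for the robust fit; shapiro.test() is from base R and bptest() from the ‘lmtest’ package.

library(lmtest)   # provides bptest()

fit <- lm(fire_rate ~ ., data = census_dat)            # non-robust stand-in for the fitted model
shapiro.test(residuals(fit))                            # Shapiro-Wilk test of error normality
bptest(fit)                                             # Breusch-Pagan test of homoscedasticity

fit_log <- lm(log(fire_rate) ~ ., data = census_dat)    # log-transformed response, as above
shapiro.test(residuals(fit_log))
bptest(fit_log)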
Subsequently, the suggested methodology is re-executed using the transformed response variable, thereby yielding outcomes that are available in Table 2.
This time, six variables were initially eliminated, leaving thirteen variables in the final model. In interpreting the individual parameters’ influence, caution has to be exercised, as a model-building algorithm is known to overstate significance [11]. Further assessment, for example, through Monte Carlo simulations, is recommended. The model’s R-squared was calculated to be 0.4259, meaning 42.59 percent of the variation in the response is explained by the retained variables. The adjusted R-squared of 0.4024 adjusts this figure for the number of predictors in the model. The R-squared satisfies the threshold recommended by Falk and Miller [48] for endogenous constructs such as the one obtained. A robust residual standard error (RSE) of 0.3779 means the observed building fire rates deviate from the fitted regression line by approximately 0.3779 units on average. Two socioeconomic variables, NOCAR and OCC_SERVICE_L, were significant at the 0.001 level. Based on their t-statistics and F-statistics, the corresponding p-values (5.61 × 10−6 and 2.11 × 10−6, respectively) preliminarily indicate that these variables were not included by chance. NOEDU has the highest positive coefficient and therefore increases the predicted building fire rate to the greatest degree, whereas OCC_SERVICE_L has the most negative coefficient and decreases the predicted rate the most.
The Breusch-Pagan test on the new model indicated a statistic of 0.2076. The test is unable to provide sufficient evidence to reject the null hypothesis that the error variance is equal at the 0.05 significance level. Figure 2 reinforces this indication, as the plot of residuals against the fitted values forms a horizontal band around the y = 0 line.
However, the model still fails the Shapiro-Wilk test, as the p-value of 0.0004502 provides sufficient evidence to reject the null hypothesis that the error is normally distributed at the 0.05 significance level. There is, however, a significant improvement in the statistic compared with the model prior to the transformation. The skewness in the distribution of errors is observable from its Q-Q plot, depicted in Figure 3, in which a pronounced right tail is apparent. Despite this, several studies have proposed relaxing the normality assumption for large datasets, owing to the Central Limit Theorem, suggesting sample size thresholds such as N > 25, N ≥ 15, N ≥ 50, and N − p > 10, where N is the sample size and p is the number of parameters [49,50,51]. The experiment satisfies all of these thresholds with a sample size of 332.
A five-fold cross-validation is then conducted to assess the performance of the method’s resulting model on unseen data. The number of folds was chosen so that each fold contains approximately 55 data points, a reasonable number of observations to minimize overfitting. The root mean square error (RMSE) and the mean absolute error (MAE), defined below, are used as the basis for comparison. They measure the difference between the values predicted by the model and the observed values in the test fold. The square root in RMSE means the measure penalizes large errors more heavily.
\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n}\left(y_i - y_p\right)^2}{n}}
\mathrm{MAE} = \frac{\sum_{i=1}^{n}\left|y_i - y_p\right|}{n}
where yi is the actual rate of building fires, yp is the projected rate of building fires, and n is the number of observations/suburbs. The cross-validation procedure is showcased in Algorithm 2.
Algorithm 2. Algorithm of the five-fold cross-validation
  • Randomly shuffle the dataset, D.
  • Divide D into 5 equally sized folds, D1, D2, D3, D4, and D5.
  • For every fold:
    • Set the current fold, Di, as the test dataset.
    • Set the remaining dataset as the training dataset.
    • Run the algorithm on the training dataset.
    • Measure RMSE and MAE of the resulting model based on the training dataset.
    • Measure RMSE and MAE of the resulting model based on the test dataset.
  • Return RMSE and MAE data
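A minimal base R sketch of Algorithm 2 is shown below. The placeholder lm() fit stands in for the full robust model-building step, and census_dat/fire_rate are assumed names for the study’s dataset and response.

set.seed(42)
k <- 5
folds <- sample(rep(1:k, length.out = nrow(census_dat)))    # random assignment to 5 folds

rmse <- function(obs, pred) sqrt(mean((obs - pred)^2))
mae  <- function(obs, pred) mean(abs(obs - pred))

cv_results <- t(sapply(1:k, function(i) {
  train <- census_dat[folds != i, ]
  test  <- census_dat[folds == i, ]
  fit   <- lm(log(fire_rate) ~ ., data = train)             # model-building step goes here
  c(rmse_train = rmse(log(train$fire_rate), fitted(fit)),
    mae_train  = mae(log(train$fire_rate), fitted(fit)),
    rmse_test  = rmse(log(test$fire_rate), predict(fit, test)),
    mae_test   = mae(log(test$fire_rate), predict(fit, test)))
}))
colMeans(cv_results)    # averaged goodness-of-fit across the five folds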
Five sets of measurements have been obtained for each round, which have different folds as the testing dataset. They are summarized in Table 3.
Table 3 shows negligible differences between the training and testing values of the root-mean-square error (RMSE) and mean absolute error (MAE). An exception was observed in the iteration with the third fold as the testing dataset, where a substantial difference was detected; however, the average difference across the five iterations remains negligibly small. This indicates that the model obtained through the proposed method performs comparably on data not involved in training it.

5. Comparative Study

5.1. Alternative Backward Elimination Criteria

There are several alternative criteria for assessing the goodness-of-fit of a model within a backward elimination procedure. This paper therefore adopts four of them as the comparative basis for the RFPE criterion.

5.1.1. Akaike Information Criterion (AIC)

Akaike [52] proposed an indicator of a model’s quality that measures goodness-of-fit by estimating the Kullback-Leibler divergence using the maximum likelihood principle. Akaike’s information criterion (AIC) is defined as follows [53].
\mathrm{AIC} = 2k - 2\ln L(\hat{\theta} \mid y)
where k is the number of parameters and L represents the maximized likelihood of the parameter estimate \hat{\theta} given the data y. The criterion can be derived from the likelihood L as a function of the residual sum of squares as follows [54]:
\mathrm{AIC} = n\log\left(\frac{\mathrm{RSS}}{n}\right) + 2k
where RSS is the residual sum of squares of the model. The stepwise AIC algorithm has been implemented in financial, medical, and epidemiological applications [55,56,57].
The AIC is the most commonly used information theoretic approach to measuring how much information is lost between a selected model and the true model. It has been widely used as an effective model selection method in many scientific fields, including ecology and phylogenetics [58,59]. Compared with the use of adjusted R-squared to evaluate the model solely on fit, AIC also considers model complexity [58].
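The two AIC expressions above coincide up to an additive constant for a linear model, as the sketch below illustrates in base R; census_dat and fire_rate are the same assumed placeholder names used earlier, and step() performs the AIC-based backward elimination.

fit <- lm(log(fire_rate) ~ ., data = census_dat)
n   <- nrow(census_dat)
rss <- sum(residuals(fit)^2)
p   <- length(coef(fit))                         # number of estimated regression coefficients
n * log(rss / n) + 2 * p                         # RSS form of AIC, as used by step()
extractAIC(fit)                                  # returns the edf and the same AIC value as above
AIC(fit)                                         # likelihood form; differs by an additive constant
aic_model <- step(fit, direction = "backward", trace = 0)   # AIC-based backward elimination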

5.1.2. Bayesian Information Criterion (BIC)

The Bayesian information criterion (BIC), also known as the Schwarz information criterion, was proposed by Gideon Schwarz [60]. It modifies Akaike’s information criterion by introducing Bayes estimators to estimate the maximum likelihood of the model’s parameters. The BIC is formulated as follows:
\mathrm{BIC} = k\ln(n) - 2\ln L(\hat{\theta} \mid y)
Similarly to the AIC, the BIC can be derived from the likelihood L as a function of the residual sum of squares as follows:
\mathrm{BIC} = n\log\left(\frac{\mathrm{RSS}}{n}\right) + k\ln(n)
A strength of BIC is its ability to identify the true model if it exists within the candidate set. However, this comes with a significant caveat, as the existence of a true model that reflects reality is debatable. Because BIC penalizes larger models more heavily, it prefers more parsimonious, lower-dimensional models. For predictive ability, however, AIC performs better because it minimizes the mean squared error of prediction/estimation [61].
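Under the same assumptions as the AIC sketch above, BIC-based backward elimination can be obtained from base R’s step() by setting the penalty multiplier to ln(n).

fit <- lm(log(fire_rate) ~ ., data = census_dat)            # same assumed data as above
BIC(fit)                                                     # likelihood form of BIC for the full model
bic_model <- step(fit, direction = "backward", k = log(nrow(census_dat)), trace = 0)   # penalty k*ln(n)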

5.1.3. Predicted Residual Error Sum of Squares (PRESS)

Allen [62] developed an indicator of a model’s fit through the predicted residual error sum of squares (PRESS) statistic. Its distinguishing feature at the time was its ability to measure fit based on observations that were not used to form the model [62,63]. The statistic is a leave-one-out cross-validation: the i-th observation is left out, the model is fitted to the remaining n − 1 observations, and the omitted response y_i is compared to its prediction \hat{y}_{(i)} [64]. Repeating this for every data point and summing the squared discrepancies gives the statistic [65,66]. PRESS is formulated as follows:
\mathrm{PRESS} = \sum_{i=1}^{n}\left(y_i - \hat{y}_{(i)}\right)^2
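For an ordinary least squares fit, PRESS can be computed without n refits using the identity e_(i) = e_i / (1 − h_ii), where h_ii are the hat values. A sketch under the same placeholder names as above:

fit   <- lm(log(fire_rate) ~ ., data = census_dat)          # same assumed data as above
press <- sum((residuals(fit) / (1 - hatvalues(fit)))^2)     # leave-one-out residuals via hat values
press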

5.2. Comparison of the Robust Final Prediction Error (RFPE) Criterion to Akaike’s Information Criterion (AIC)

Eight variables have been eliminated, leaving eleven variables in the final model. A detailed model specification, which includes a coefficient βd for every retained variable xd (see Equation (1)), is contained in Table 4.
Moreover, when identifying the influence of individual parameters, it is essential to exercise caution, as model-building algorithms are known to overstate the significance of parameters [11]. Further assessment, for example, through Monte Carlo simulations, is recommended. The R-squared of the model was determined to be 0.3916, indicating that 39.16% of the variation is explained by the retained variables. The adjusted R-squared of 0.3707 adjusts this figure for the number of predictors in the model. The R-squared met the threshold set by Falk and Miller [48] for endogenous constructs such as the one obtained. A robust residual standard error (RSE) of 0.5016 means the observed building fire rates deviate from the fitted regression line by approximately 0.5016 units on average.
Three socioeconomic variables, namely NOCAR, NOEDU, and OCC_SERVICE_L, exhibited a significant effect at the 0.001 level. Based on their t-statistics and F-statistics, the corresponding p-values (6.36 × 10−7, 0.0007, and 0.0005, respectively) preliminarily indicated that the inclusion of these variables in the model was not due to chance. The variable NOEDU had the highest positive coefficient, and its inclusion in the model resulted in the highest increase in building fire rates. Conversely, the variable OCC_SERVICE_L had the lowest negative coefficient and had the most significant effect on decreasing building fire rates.
The Breusch-Pagan test on the new model indicated a statistic of 0.1242. As with elimination by the RFPE criterion, the test is unable to provide sufficient evidence to reject the null hypothesis that the error variance is equal at the 0.05 significance level. Figure 4 reinforces this indication, as the plot of residuals against the fitted values forms a horizontal band around the y = 0 line. A visual comparison with Figure 2 also reveals no significant differences.
The model’s normality assumption was tested using the Shapiro-Wilk test, which yielded a p-value of 0.00417. This result provides sufficient evidence to reject the null hypothesis that the error is normally distributed at the 0.05 significance level. The degree of skewness observed in the error distribution is more pronounced than in the model produced through the RFPE criterion in Figure 3, where the p-value was lower at 0.01238. The skewness is further evident in the Q-Q plot, depicted in Figure 5, where a significant right tail is visible.
The two models are comparable on RMSE and MAE. The model produced through the RFPE criterion resulted in a lower MAE but a higher RMSE than the model produced through the AIC criterion (Table 5).
To investigate further, we applied the same cross-validation procedure outlined in Algorithm 2 to the model generated using the AIC criterion. The results are presented in Table 6, with the most desirable outcomes highlighted in black. Because RMSE penalizes large errors, which typically arise at outlying data points, the robustness of RFPE against outliers can be observed in two ways. Firstly, the RFPE model outperformed the AIC model in terms of MAE but not RMSE. Secondly, the RFPE model consistently exhibited better RMSE performance on the testing dataset than on the training dataset. Compared with the AIC, the results therefore reflect the RFPE criterion and the associated estimators’ downweighting of outliers when training the model, at the cost of predicting extreme data points less accurately.

5.3. Comparison of the Robust Final Prediction Error (RFPE) Criterion to Other Criteria

For comparative purposes, the study has also adopted p-value, BIC, and PRESS as criteria for the backward elimination procedure. The methods are carried out using the ‘SignifReg’ R package. The criteria are consistent with their respective equations in Section 5.1. Using the entire dataset for training, the resulting model from each criterion is assessed for its goodness-of-fit, as shown in Table 7.
The three measures suggest that the models are comparable, although the RFPE criterion slightly outperforms the others in terms of MAE and RMSE. To further assess the models’ performance, we applied the cross-validation procedure outlined in Algorithm 2 to the four different models. The corresponding goodness-of-fit measures are presented in Table 8.
After comparing the averaged goodness-of-fit measures across the different models, the RFPE criterion exhibits a slight advantage in the averaged MAE measured on both the test and training datasets. The lower MAE reflects the robustness of the RFPE criterion, since MAE is less sensitive to outliers than RMSE and indicates a model that adapts better to extreme cases [67]. The robust nature of the RFPE criterion is further evident in the first iteration, where the test dataset contains a higher incidence of outliers. There, the RFPE criterion produced a model with a noticeable advantage, as indicated by lower RMSE and MAE measures, suggesting a more resilient model that provides a better fit even for a significantly outlying dataset.

6. Conclusions

This study has identified shortcomings in current socioeconomic models of building fires and has proposed a more robust approach through backward elimination using the Robust Final Prediction Error (RFPE) criterion. The proposed method has been evaluated using datasets from the South East Queensland region of Australia, resulting in a model that retained 13 of the 24 variables used by the Australian Bureau of Statistics to calculate the Index of Relative Socioeconomic Advantage and Disadvantage (IRSAD). The model was deemed reasonable, with an adjusted R-squared of 0.3717, a root mean square error (RMSE) of 0.494268, and a mean absolute error (MAE) of 0.382724.
A comparative analysis revealed that the proposed RFPE-based approach outperforms other criteria such as the p-value, Akaike’s information criterion (AIC), the Bayesian information criterion (BIC), and the predicted residual error sum of squares (PRESS) in terms of goodness-of-fit measures following cross-validation. These findings provide convincing evidence to support the use of backward elimination with the RFPE criterion for modeling the socioeconomic determinants of building fires.
Future research may involve comparing the RFPE-based approach with alternative methods such as model averaging, least absolute shrinkage and selection operator (LASSO), least absolute residuals (LAR), and principal component analysis (PCA) [16,68,69]. Monte Carlo simulations may also be used to assess the model’s reliability in identifying individual parameters and compare its performance to other modeling approaches. In the event that simulations prove unreasonable for building fire data, bootstrap** could serve as an alternative. In conclusion, this study has provided sufficient justification to adopt backward elimination with the RFPE criterion for predictive modeling of the socioeconomic determinants of building fires.

Author Contributions

Conceptualization, A.U., L.D.L., M.L. and R.D.; methodology, A.U., L.D.L., M.L. and R.D.; software, A.U.; resources, A.U.; data curation, A.U.; writing—original draft preparation, A.U.; writing—review and editing, L.D.L., M.L. and R.D.; visualization, A.U.; supervision, L.D.L., M.L. and R.D.; project administration, A.U. and L.D.L. All authors have read and agreed to the published version of the manuscript.

Funding

The research is part of a degree that is funded by the CQUniversity Destination Australia Living Stipend Scholarship and the International Excellence Award. The Central Queensland University and the Destination Australia Program jointly funded the scholarships. Funding Number: RH6100.

Data Availability Statement

Restrictions apply to the availability of this data. Census data was obtained from the Australian Bureau of Statistics and is available at https://www.abs.gov.au/statistics/microdata-tablebuilder/tablebuilder (accessed on 1 September 2022) with the permission of the Australian Bureau of Statistics. Queensland Fire and Emergency Services (QFES) incident data was obtained from the Queensland Fire and Emergency Services and is available at https://www.data.qld.gov.au/dataset/qfes-incident-data (accessed on 1 September 2022) with the permission of the Queensland Fire and Emergency Services.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Equivalized total household income (weekly) dataset (code: HIED). Adapted from the Australian Bureau of Statistics.
SA2000115&&@@
Nil$1–$49$3000+Not StatedNA
Alexandra Hills808452181588
Tiarna595431138337
Note: Each category was assigned a code that consists of numerical values or symbols. Some databases may contain categories coded with the symbols &, @ or V, referring to ‘Not Stated’, ‘Not Applicable’ or ‘Overseas Visitor’ categories, respectively. The symbols may repeat because the number of digits of the category codes within a table has to be the same (e.g., 01, 02, …, 15, &&, @@, or VV).
Table A2. Input variable specifications. Adapted from the Australian Bureau of Statistics [43].
Variable (Proportion) | ABS Notation | Numerator | Denominator
People with stated annual household equivalized income between $1 and $25,999 | INC_LOW | HIED = 02–05 | HIED = 01–15
People with stated annual household equivalized income greater than or equal to $78,000 | INC_HIGH | HIED = 11–15 | HIED = 01–15
People aged 15 years and over attending a university or other tertiary institution | ATUNI | AGEP > 14 and TYPP = 50 | AGEP > 14 and TYPP ne &&, VV
People aged 15 years and over whose highest level of educational attainment is a Certificate Level III or IV qualification | CERTIFICATE | HEAP = 51 | HEAP ne 001, @@@, VVV, &&&
People aged 15 years and over whose highest level of educational attainment is an advanced diploma or diploma qualification | DIPLOMA | HEAP = 4 | HEAP ne 001, @@@, VVV, &&&
People aged 15 years and over who have no educational attainment | NOEDU | HEAP = 998 | HEAP ne 001, @@@, VVV, &&&
People aged 15 years and over whose highest level of educational attainment is Year 11 or lower (includes Certificate Levels I and II; excludes those still in secondary school) | NOYEAR12 | HEAP = 613, 621, 720, 721, 811, 812, 998, and TYPP NE 31, 32, 33 | HEAP ne 001, @@@, VVV, &&&
People in the labor force who are unemployed | UNEMPLOYED | LFSP = 4–5 | LFSP = 1–5
Employed people classified as machinery operators and drivers | OCC_DRIVERS | OCCP = 7 | OCCP = 1–8
Employed people classified as laborers | OCC_LABOUR | OCCP = 8 | OCCP = 1–8
Employed people classified as managers | OCC_MANAGER | OCCP = 1 | OCCP = 1–8
Employed people classified as professionals | OCC_PROF | OCCP = 2 | OCCP = 1–8
Employed people classified as low-skill sales workers | OCC_SALES_L | OCCP = 6211, 6212, 6214, 6216, 6219, 6391, 6393, 6394, 6399 | OCCP = 1–8
Employed people classified as low-skill community and personal service workers | OCC_SERVICE_L | OCCP = 4211, 4211, 4231, 4232, 4233, 4234, 4311, 4312, 4313, 4314, 4315, 4319, 4421, 4422, 4511, 4514, 4515, 4516, 4517, 4518, 4521, 4522 | OCCP = 1–8
Occupied private dwellings with four or more bedrooms | HIGHBED | BEDD = 04–30 and HHCD = 11–32 | BEDD ne &&, @@, and HHCD = 11–32
Occupied private dwellings paying more than $2800 per month in mortgage repayments | HIGHMORTGAGE | MRERD = 16–19 | TEND ne &, @ and MRERD ne &&&& and RNTRD ne &&&&
Occupied private dwellings paying less than $215 per week in rent (excluding $0 per week) | LOWRENT | RNTRD = 02–08 | TEND ne &, @, and MRERD ne &&&& and RNTRD ne &&&&
Occupied private dwellings requiring one or more extra bedrooms (based on the Canadian National Occupancy Standard) | OVERCROWD | HOSD = 01–04 | HOSD ne 10, &&, @@, and HHCD = 11–32
Occupied private dwellings with no cars | NOCAR | VEHD = 00 and HHCD = 11–32 | VEHD ne &&, @@, and HHCD = 11–32
Occupied private dwellings with no Internet connection | NONET | NEDD = 2 and HHCD = 11–32 | NEDD ne &, @, and HHCD = 11–32
Families with children under 15 years of age and jobless parents | CHILDJOBLESS | LFSF = 16, 17, 19, 25, 26 | LFSF ne 06, 11, 15, 18, 20, 21, 27, @@
People aged under 70 who need assistance with core activities due to a long-term health condition, disability, or old age | DISABILITYU70 | AGEP > 70 and ASSNP = 1 | AGEP < 70 and ASSNP = 1–2
Families that are one-parent families with dependent offspring only | ONEPARENT | FMCF = 3112, 3122, 3212 | FMCF ne @@@@
People aged 15 and over who are separated or divorced | SEPDIVORCED | MSTP = 3–4 | MSTP = 1–5
Each category was assigned a code that consists of numerical values or symbols. Some databases may contain categories coded with the symbols &, @ or V, referring to ‘Not Stated’, ‘Not Applicable’ or ‘Overseas Visitor’ categories, respectively. The symbols may repeat because the number of digits of the category codes within a table has to be the same (e.g., 001, 002, …, 100, &&&, @@@, or VVV). Interpretation Guide: [HIED = 02–05] refers to the summation of data satisfying category codes 02 to 05 in the HIED dataset. [AGEP > 14 and TYPP ne &&, VV] refers to the summation of data satisfying category codes greater than 14 in the AGEP dataset and category codes other than &&, VV in the TYPP dataset.

References

  1. Kelly, A. Fire Protection Services in Australia; IBISWorld: Manhattan, NY, USA, 2022. [Google Scholar]
  2. Australian Bureau of Statistics. Causes of Death, Australia. Available online: https://www.abs.gov.au/statistics/health/causes-death/causes-death-australia/2020 (accessed on 1 December 2022).
  3. Ashe, B.; McAneney, K.J.; Pitman, A.J. Total cost of fire in Australia. J. Risk Res. 2009, 12, 121–136. [Google Scholar] [CrossRef]
  4. Queensland Fire and Emergency Services. QFES Incident Data. 2020. Available online: https://www.data.qld.gov.au/dataset/qfes-incident-data (accessed on 1 September 2022).
  5. Lizhong, Y.; Heng, C.; Yong, Y.; Tingyong, F. The Effect of Socioeconomic Factors on Fire in China. J. Fire Sci. 2005, 23, 451–467. [Google Scholar] [CrossRef]
  6. Fahy, R.; Maheshwari, R. Poverty and the Risk of Fire; National Fire Protection Organisation: Quincy, MA, USA, 2021. [Google Scholar]
  7. Tannous, W.K.; Agho, K. Socio-demographic predictors of residential fire and unwillingness to call the fire service in New South Wales. Prev. Med. Rep. 2017, 7, 50–57. [Google Scholar] [CrossRef]
  8. Chhetri, P.; Corcoran, J.; Stimson, R.J.; Inbakaran, R. Modelling Potential Socio-economic Determinants of Building Fires in South East Queensland. Geogr. Res. 2010, 48, 75–85. [Google Scholar] [CrossRef]
  9. Efroymson, M.A. Multiple regression analysis. In Mathematical Methods for Digital Computers; John Wiley and Sons: Hoboken, NJ, USA, 1960; pp. 191–203. [Google Scholar]
  10. Harrell, F.E. Multivariable Modeling Strategies. In Regression Modeling Strategies: With Applications to Linear Models, Logistic and Ordinal Regression, and Survival Analysis; Harrell, J.F.E., Ed.; Springer International Publishing: Cham, Switzerland, 2015; pp. 63–102. [Google Scholar]
  11. Smith, G. Step away from stepwise. J. Big Data 2018, 5, 32. [Google Scholar] [CrossRef]
  12. Olusegun, A.M.; Dikko, H.G.; Gulumbe, S.U. Identifying the Limitation of Stepwise Selection for Variable Selection in Regression Analysis. Am. J. Theor. Appl. Stat. 2015, 4, 414–419. [Google Scholar] [CrossRef]
  13. McIntyre, S.H.; Montgomery, D.B.; Srinivasan, V.; Weitz, B.A. Evaluating the Statistical Significance of Models Developed by Stepwise Regression. J. Mark. Res. 1983, 20, 1–11. [Google Scholar] [CrossRef]
  14. Heinze, G.; Dunkler, D. Five myths about variable selection. Transpl. Int. 2017, 30, 6–10. [Google Scholar] [CrossRef]
  15. Ssegane, H.; Tollner, E.W.; Mohamoud, Y.M.; Rasmussen, T.C.; Dowd, J.F. Advances in variable selection methods I: Causal selection methods versus stepwise regression and principal component analysis on data of known and unknown functional relationships. J. Hydrol. 2012, 438–439, 16–25. [Google Scholar] [CrossRef]
  16. Hastie, C.; Searle, R. Socio-economic and demographic predictors of accidental dwelling fire rates. Fire Saf. J. 2016, 84, 50–56. [Google Scholar] [CrossRef] [Green Version]
  17. Ratner, B. Variable selection methods in regression: Ignorable problem, outing notable solution. J. Target. Meas. Anal. Mark. 2010, 18, 65–75. [Google Scholar] [CrossRef]
  18. Steyerberg, E.W.; Eijkemans, M.J.C.; Habbema, J.D.F. Stepwise Selection in Small Data Sets: A Simulation Study of Bias in Logistic Regression Analysis. J. Clin. Epidemiol. 1999, 52, 935–942. [Google Scholar] [CrossRef] [PubMed]
  19. Derksen, S.; Keselman, H.J. Backward, forward and stepwise automated subset selection algorithms: Frequency of obtaining authentic and noise variables. Br. J. Math. Stat. Psychol. 1992, 45, 265–282. [Google Scholar] [CrossRef]
  20. Cammarota, C.; Pinto, A. Variable selection and importance in presence of high collinearity: An application to the prediction of lean body mass from multi-frequency bioelectrical impedance. J. Appl. Stat. 2021, 48, 1644–1658. [Google Scholar] [CrossRef] [PubMed]
  21. Freckleton, R.P. Dealing with collinearity in behavioural and ecological data: Model averaging and the problems of measurement error. Behav. Ecol. Sociobiol. 2011, 65, 91–101. [Google Scholar] [CrossRef]
  22. Wang, K.; Chen, Z. Stepwise Regression and All Possible Subsets Regression in Education. Electron. Int. J. Educ. Arts Sci. 2016, 2, 60–81. [Google Scholar]
  23. Goodenough, A.E.; Hart, A.G.; Stafford, R. Regression with empirical variable selection: Description of a new method and application to ecological datasets. PLoS ONE 2012, 7, e34338. [Google Scholar] [CrossRef] [Green Version]
  24. Siegel, A.F. Multiple Regression: Predicting One Variable From Several Others. In Practical Business Statistics, 7th ed.; Siegel, A.F., Ed.; Academic Press: Cambridge, MA, USA, 2016; pp. 355–418. [Google Scholar]
  25. Seber, G.A.F.; Wild, C.J. Least Squares. In Methods in Experimental Physics; Stanford, J.L., Vardeman, S.B., Eds.; Academic Press: Cambridge, MA, USA, 1994; Volume 28, pp. 245–281. [Google Scholar]
  26. Wilson, J.H. Multiple Linear Regression. In Regression Analysis: Understanding and Building Business and Economic Models Using Excel; Business Expert Press: New York, NY, USA, 2012. [Google Scholar]
  27. Thompson, B. Stepwise Regression and Stepwise Discriminant Analysis Need Not Apply here: A Guidelines Editorial. Educ. Psychol. Meas. 1995, 55, 525–534. [Google Scholar] [CrossRef]
  28. Yuan, M.; Lin, Y. Model selection and estimation in regression with grouped variables. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2006, 68, 49–67. [Google Scholar] [CrossRef]
  29. Kwak, S.K.; Kim, J.H. Statistical data preparation: Management of missing values and outliers. Korean J. Anesth. 2017, 70, 407–411. [Google Scholar] [CrossRef]
  30. Osborne, J.W.; Overbay, A. The power of outliers (and why researchers should ALWAYS check for them). Pract. Assess. Res. Eval. 2004, 9, 6. [Google Scholar] [CrossRef]
  31. Maronna, R.A.; Martin, R.D.; Yohai, V.J.; Salibian-Barrera, M. Robust inference and variable selection for M-estimators. In Robust Statistics; Wiley Series in Probability and Statistics; John Wiley & Sons, Incorporated: Hoboken, NJ, USA, 2019; pp. 133–138. [Google Scholar]
  32. Wada, K. Outliers in official statistics. Jpn. J. Stat. Data Sci. 2020, 3, 669–691. [Google Scholar] [CrossRef]
  33. Zhang, W.; Yang, D.; Zhang, S. A new hybrid ensemble model with voting-based outlier detection and balanced sampling for credit scoring. Expert Syst. Appl. 2021, 174, 114744. [Google Scholar] [CrossRef]
  34. Akaike, H. Statistical predictor identification. Ann. Inst. Stat. Math. 1970, 22, 203–217. [Google Scholar] [CrossRef]
  35. Untadi, A.; Li, L.D.; Dodd, R.; Li, M. A Novel Framework Incorporating Socioeconomic Variables into the Optimisation of South East Queensland Fire Stations Coverages. In Proceedings of the Conference on Innovative Technologies in Intelligent Systems & Industrial Applications, Online, 16–18 November 2022. [Google Scholar]
  36. Yohai, V.J. High Breakdown-Point and High Efficiency Robust Estimates for Regression. Ann. Stat. 1987, 15, 642–656. [Google Scholar] [CrossRef]
  37. Pena, D.; Yohai, V. A Fast Procedure for Outlier Diagnostics in Large Regression Problems. J. Am. Stat. Assoc. 1999, 94, 434–445. [Google Scholar] [CrossRef] [Green Version]
  38. Maronna, R.A.; Martin, R.D.; Yohai, V.J.; Salibian-Barrera, M. M-estimators with smooth ψ-function. In Robust Statistics; Wiley Series in Probability and Statistics; John Wiley & Sons, Incorporated: Hoboken, NJ, USA, 2019; p. 104. [Google Scholar]
  39. Queensland Government. South East Queensland Economic Foundations Paper; Queensland Government: Brisbane, Australia, 2018.
  40. Queensland Health. Our People: A Diverse Population; Queensland Health: Cairns, Australia, 2020.
  41. Australian Bureau of Statistics. Population Movement in Australia. Available online: https://www.abs.gov.au/articles/population-movement-australia (accessed on 9 March 2023).
  42. Jivraj, S. The Effect of Internal Migration on the Socioeconomic Composition of Neighbourhoods in England. Ph.D. Thesis, University of Manchester, Manchester, UK, 2011. [Google Scholar]
  43. Australian Bureau of Statistics. Technical Paper: Socio-Economic Indexes for Areas (SEIFA); Australian Bureau of Statistics: Canberra, Australia, 2016.
  44. Poole, M.A.; O'Farrell, P.N. The Assumptions of the Linear Regression model. Trans. Inst. Br. Geogr. 1971, 52, 145–158. [Google Scholar] [CrossRef] [Green Version]
  45. James, G.; Witten, D.; Hastie, T.; Tibshirani, R. (Eds.) Linear Regression. In An Introduction to Statistical Learning: With Applications in R; Springer: New York, NY, USA, 2013; pp. 59–126. [Google Scholar]
  46. Australasian Fire and Emergency Service Authorities Council. Australian Incident Reporting System Reference Manual; Australasian Fire and Emergency Service Authorities Council: East Melbourne, Australia, 2013. [Google Scholar]
  47. Salibian-Barrera, M.; Yohai, V.; Maronna, R.; Martin, D.; Brownso, G.; Konis, K.; Croux, C.; Haesbroeck, G.; Maechler, M.; Koller, M.; et al. Package ‘RobStatTM’. Available online: https://cran.r-project.org/web/packages/RobStatTM/RobStatTM.pdf (accessed on 5 December 2022).
  48. Falk, R.; Miller, N. A Primer for Soft Modeling; The University of Akron Press: Akron, OH, USA, 1992. [Google Scholar]
  49. Pek, J.; Wong, O.; Wong, A.C.M. How to Address Non-normality: A Taxonomy of Approaches, Reviewed, and Illustrated. Front. Psychol. 2018, 9, 2104. [Google Scholar] [CrossRef] [Green Version]
  50. Schmidt, A.F.; Finan, C. Linear regression and the normality assumption. J. Clin. Epidemiol. 2018, 98, 146–151. [Google Scholar] [CrossRef] [Green Version]
  51. Howell, D.C. Statistical Methods for Psychology; Cengage Learning: Boston, MA, USA, 2012. [Google Scholar]
  52. Akaike, H. Information Theory and an Extension of the Maximum Likelihood Principle. In Selected Papers of Hirotugu Akaike; Parzen, E., Tanabe, K., Kitagawa, G., Eds.; Springer: New York, NY, USA, 1998; pp. 199–213. [Google Scholar]
  53. Burnham, K.P.; Anderson, D.R. Information and Likelihood Theory: A Basis for Model Selection and Inference. In Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach; Springer: New York, NY, USA, 2002; pp. 49–97. [Google Scholar]
  54. Venables, W.N.; Ripley, B.D. Linear Statistical Models. In Modern Applied Statistics with S; Springer: New York, NY, USA, 2002; pp. 139–181. [Google Scholar]
  55. Zhang, T.; Zhang, J.; Liu, Y.; Pan, S.; Sun, D.; Zhao, C. Design of Linear Regression Scheme in Real-Time Market Load Prediction for Power Market Participants. In Proceedings of the 2021 11th International Conference on Power and Energy Systems (ICPES), Shanghai, China, 18–20 December 2021; pp. 547–551. [Google Scholar]
  56. Luu, M.N.; Alhady, S.T.M.; Nguyen Tran, M.D.; Truong, L.V.; Qarawi, A.; Venkatesh, U.; Tiwari, R.; Rocha, I.C.N.; Minh, L.H.N.; Ravikulan, R.; et al. Evaluation of risk factors associated with SARS-CoV-2 transmission. Curr. Med. Res. Opin. 2022, 38, 2021–2028. [Google Scholar] [CrossRef]
  57. Hevesi, M.; Dandu, N.; Darwish, R.; Zavras, A.; Cole, B.; Yanke, A. Poster 212: The Cartilage Early Return for Transplant (CERT) Score: Predicting Early Patient Election to Proceed with Cartilage Transplant Following Chondroplasty of the Knee. Orthop. J. Sport. Med. 2022, 10, 2325967121S2325900773. [Google Scholar] [CrossRef]
  58. Johnson, J.B.; Omland, K.S. Model selection in ecology and evolution. Trends Ecol. Evol. 2004, 19, 101–108. [Google Scholar] [CrossRef]
  59. Sullivan, J.; Joyce, P. Model Selection in Phylogenetics. Annu. Rev. Ecol. Evol. Syst. 2005, 36, 445–466. [Google Scholar] [CrossRef]
  60. Schwarz, G. Estimating the Dimension of a Model. Ann. Stat. 1978, 6, 461–464. [Google Scholar] [CrossRef]
  61. Vrieze, S.I. Model selection and psychological theory: A discussion of the differences between the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Psychol. Methods 2012, 17, 228–243. [Google Scholar] [CrossRef] [Green Version]
  62. Allen, D.M. The Relationship Between Variable Selection and Data Agumentation and a Method for Prediction. Technometrics 1974, 16, 125–127. [Google Scholar] [CrossRef]
  63. Tarpey, T. A Note on the Prediction Sum of Squares Statistic for Restricted Least Squares. Am. Stat. 2000, 54, 116–118. [Google Scholar] [CrossRef]
  64. Qian, J.; Li, S. Model Adequacy Checking for Applying Harmonic Regression to Assessment Quality Control. ETS Res. Rep. Ser. 2021, 2021, 1–26. [Google Scholar] [CrossRef]
  65. Quan, N.T. The Prediction Sum of Squares as a General Measure for Regression Diagnostics. J. Bus. Econ. Stat. 1988, 6, 501–504. [Google Scholar] [CrossRef]
  66. Draper, N.R.; Smith, H. Applied Regression Analysis, 2nd ed.; John Wiley: New York, NY, USA, 1981. [Google Scholar]
  67. Hodson, T.O. Root-mean-square error (RMSE) or mean absolute error (MAE): When to use them or not. Geosci. Model Dev. 2022, 15, 5481–5487. [Google Scholar] [CrossRef]
  68. Haem, E.; Harling, K.; Ayatollahi, S.M.T.; Zare, N.; Karlsson, M.O. Adjusted adaptive Lasso for covariate model-building in nonlinear mixed-effect pharmacokinetic models. J. Pharmacokinet. Pharmacodyn. 2017, 44, 55–66. [Google Scholar] [CrossRef] [PubMed]
  69. Kim, H.R. Model Building with Forest Fire Data: Data Mining, Exploratory Analysis and Subset Selection. 2009. Available online: http://fisher.stats.uwo.ca/faculty/aim/2018/4850G/projects/FIREProjectFinalReport.pdfB (accessed on 15 November 2022).
Figure 1. Visualization of SEIFA scores on the map of South East Queensland [43].
Figure 2. Residuals vs. fitted values plot for the log(y) model by RFPE criterion.
Figure 3. Q-Q plot for the log(y) model by RFPE criterion.
Figure 4. Residuals vs. fitted values plot for the log(y) model by AIC criterion.
Figure 5. Q-Q plot for the log(y) model by AIC criterion.
Table 1. yi model for socioeconomic predictors of building fires by robust backward elimination method.

Variables | Coefficient Est. | Std. Error | VIF | t Value | F Value | (Pr > t)
(INTERCEPT) | −0.0559 | 0.1474 | - | −0.3790 | - | 0.7048
CERTIFICATE | 1.9462 | 0.5853 | 2.8851 | 3.3250 | 11.0567 | 0.0010
CHILDJOBLESS | −1.3046 | 0.4744 | 3.1884 | −2.7500 | 7.5627 | 0.0063
DISABILITYU70 | 11.9985 | 3.1034 | 4.5289 | 3.8660 | 14.9483 | 0.0001
HIGHBED | −0.7390 | 0.1674 | 2.2814 | −4.4150 | 19.4888 | 1.38 × 10^−5
NOCAR | 5.0159 | 0.7877 | 2.3775 | 6.3670 | 40.5440 | 6.65 × 10^−10
NOEDU | 13.3645 | 5.0603 | 2.1978 | 2.6410 | 6.9751 | 0.0087
OCC_DRIVERS | 1.0718 | 0.5444 | 1.7903 | 1.9690 | 3.8757 | 0.0498
OCC_LABOUR | 3.4219 | 1.0807 | 4.8463 | 3.1660 | 10.0263 | 0.0017
OCC_MANAGERS | 4.7262 | 0.7581 | 1.7200 | 6.2340 | 38.8648 | 1.43 × 10^−9
OCC_SERVICE_L | −5.9257 | 1.7513 | 2.6066 | −3.3840 | 11.4481 | 0.0008
Robust residual standard error = 0.357, R-squared = 0.4154, and adj. R-squared = 0.3972. Shapiro-Wilk test on residuals: W = 0.93478, p-value = 6.832 × 10^−11. Breusch-Pagan test on residuals: BP = 22.586, df = 10, p-value = 0.01238.
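A fit of this kind can be reproduced in R with the RobStatTM package listed in the references [47]. The sketch below is a minimal illustration only: the data frame seq_data and the response column fire_rate are hypothetical names, and the use of lmrobdetMM() with step.lmrobdetMM() for RFPE-based backward elimination is an assumed reading of the package's workflow, not the authors' published script.

```r
# Minimal sketch (assumed names): robust MM-regression with backward elimination
# by the robust final prediction error (RFPE) criterion, plus the residual checks
# reported under Table 1. `seq_data` and `fire_rate` are hypothetical placeholders.
library(RobStatTM)   # robust regression; see reference [47]
library(lmtest)      # bptest() for the Breusch-Pagan test

full_fit <- lmrobdetMM(fire_rate ~ ., data = seq_data)

# RFPE-based backward elimination (step.lmrobdetMM is assumed to provide this).
reduced_fit <- step.lmrobdetMM(full_fit, direction = "backward")
summary(reduced_fit)

shapiro.test(residuals(reduced_fit))              # Shapiro-Wilk normality check
bptest(formula(reduced_fit), data = seq_data)     # Breusch-Pagan heteroscedasticity check
```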
Table 2. log(yi) model for socioeconomic predictors of building fires by robust backward elimination method.

Variables | Coefficient Est. | Std. Error | VIF | t Value | F Value | (Pr > t)
(INTERCEPT) | −0.0961 | 0.2168 | - | −0.4430 | - | 0.6579
CERTIFICATE | 2.5459 | 0.7902 | 3.0107 | 3.2220 | 10.3807 | 0.0014
CHILDJOBLESS | −1.4660 | 0.6779 | 3.8942 | −2.1630 | 4.6764 | 0.0313
DISABILITYU70 | 9.8059 | 4.4173 | 5.6321 | 2.2200 | 4.9278 | 0.0271
HIGHBED | −0.8372 | 0.2756 | 3.6520 | −3.0380 | 9.2287 | 0.0026
LOWRENT | 1.2476 | 1.4035 | 3.8618 | 0.8890 | 0.7902 | 0.3747
NOCAR | 5.2416 | 1.1348 | 2.9453 | 4.6190 | 21.3329 | 5.61 × 10^−6
NOEDU | 15.0715 | 6.385 | 2.1487 | 2.3600 | 5.5716 | 0.0189
NONET | 0.4373 | 1.9582 | 7.5825 | 0.2230 | 0.0499 | 0.8234
OCC_DRIVERS | 1.1416 | 0.7837 | 2.1446 | 1.4570 | 2.1219 | 0.1462
OCC_MANAGERS | 2.8106 | 1.1612 | 2.3701 | 2.4200 | 5.8581 | 0.0161
OCC_PROF | −1.1233 | 0.5293 | 3.4929 | −2.1220 | 4.5043 | 0.0346
OCC_SERVICE_L | −11.2379 | 2.3258 | 2.8882 | −4.8320 | 23.3456 | 2.11 × 10^−6
ONEPARENT | 1.6318 | 1.3821 | 4.3781 | 1.1810 | 1.3940 | 0.2386
Robust residual standard error = 0.4832, R-squared = 0.3964, and adj. R-squared = 0.3717. Shapiro-Wilk test on residuals: W = 0.98247, p-value = 0.0004502. Breusch-Pagan test on residuals: BP = 16.822, df = 13, p-value = 0.2076.
Table 3. The RMSE and MAE of models produced by RFPE after 5-fold cross-validation.

Testing Dataset | Training Dataset | RMSE Train. | RMSE Test. | RMSE Diff. | MAE Train. | MAE Test. | MAE Diff.
1 | 2, 3, 4, 5 | 0.476818 | 0.571732 | −0.094914 | 0.371323 | 0.443796 | −0.072473
2 | 1, 3, 4, 5 | 0.500364 | 0.471037 | 0.029327 | 0.388915 | 0.354920 | 0.033995
3 | 1, 2, 4, 5 | 0.491625 | 0.503445 | −0.011820 | 0.378446 | 0.395941 | −0.017495
4 | 1, 2, 3, 5 | 0.501656 | 0.471982 | 0.029674 | 0.392949 | 0.351124 | 0.041824
5 | 1, 2, 3, 4 | 0.500390 | 0.473456 | 0.026934 | 0.381941 | 0.387893 | −0.005952
Mean Abs. Diff. | | 0.494171 | 0.498330 | 0.038534 | 0.382715 | 0.386735 | 0.034348
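The per-fold figures in Table 3 follow the usual k-fold pattern: fit on the training folds, predict on the held-out fold, and compare RMSE and MAE on both sides. The routine below is a minimal sketch under the same hypothetical seq_data/fire_rate naming as above, with an ordinary least-squares refit standing in for the selected RFPE model; fold assignment is random, so exact numbers will differ.

```r
# Minimal sketch (assumed names): 5-fold cross-validation reporting train/test RMSE and MAE.
set.seed(1)
k <- 5
folds <- sample(rep(seq_len(k), length.out = nrow(seq_data)))

rmse <- function(obs, pred) sqrt(mean((obs - pred)^2))
mae  <- function(obs, pred) mean(abs(obs - pred))

cv_results <- do.call(rbind, lapply(seq_len(k), function(i) {
  train <- seq_data[folds != i, ]
  test  <- seq_data[folds == i, ]
  fit   <- lm(fire_rate ~ ., data = train)   # substitute the selected RFPE model here
  data.frame(fold       = i,
             rmse_train = rmse(train$fire_rate, predict(fit, train)),
             rmse_test  = rmse(test$fire_rate,  predict(fit, test)),
             mae_train  = mae(train$fire_rate,  predict(fit, train)),
             mae_test   = mae(test$fire_rate,   predict(fit, test)))
}))
cv_results
colMeans(cv_results[, -1])   # fold-averaged values (the Diff. columns in Table 3 report mean absolute differences)
```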
Table 4. Final model for socioeconomic predictors of building fires by AIC criterion.

Variables | Coefficient Est. | Std. Error | VIF | t Value | F Value | (Pr > t)
(INTERCEPT) | 0.0122 | 0.2027 | 5.1767 | 0.0600 | 2.0169 | 0.9521
CERTIFICATE | 1.4485 | 1.0199 | 3.5536 | 1.4200 | 4.8553 | 0.1565
CHILDJOBLESS | −1.406 | 0.6381 | 3.3722 | −2.2030 | 4.2129 | 0.0283
DIPLOMA | −4.7625 | 2.3203 | 4.4612 | −2.0530 | 8.2561 | 0.0409
DISABILITYU70 | 11.1682 | 3.8868 | 3.4764 | 2.8730 | 4.5821 | 0.0043
HIGHBED | −0.5626 | 0.2628 | 2.9668 | −2.1410 | 25.8273 | 0.0331
NOCAR | 5.7104 | 1.1236 | 1.7804 | 5.0820 | 11.7489 | 6.36 × 10^−7
NOEDU | 19.7162 | 5.7521 | 3.0316 | 3.4280 | 6.1727 | 0.0007
OCC_MANAGERS | 3.2178 | 1.2952 | 2.7523 | 2.4840 | 9.0378 | 0.0135
OCC_PROF | −1.3914 | 0.4628 | 3.6245 | −3.0060 | 12.5369 | 0.0029
OCC_SERVICE_L | −9.0765 | 2.5634 | 5.6759 | −3.5410 | 4.5320 | 0.0005
SEPDIVORCED | 4.0726 | 1.9131 | 5.1767 | 2.1290 | 2.0169 | 0.0340
Residual standard error = 0.5016, R-squared = 0.3916, and adj. R-squared = 0.3707. Shapiro-Wilk test on residuals: W = 0.98687, p-value = 0.00417. Breusch-Pagan test: BP = 16.481, df = 11, p-value = 0.1242.
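For comparison, the AIC-based elimination behind Table 4 can be run with base R's step(), which penalizes by AIC by default; setting k = log(n) gives the BIC variant used in Tables 7 and 8. Again this is a sketch with the hypothetical seq_data/fire_rate names, not the authors' exact code.

```r
# Minimal sketch (assumed names): backward elimination by AIC, and the BIC variant.
full_ols <- lm(fire_rate ~ ., data = seq_data)

aic_fit <- step(full_ols, direction = "backward", trace = FALSE)  # AIC is the default penalty (k = 2)
summary(aic_fit)

bic_fit <- step(full_ols, direction = "backward", trace = FALSE,
                k = log(nrow(seq_data)))                          # BIC penalty: k = log(n)
```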
Table 5. Summary of comparative measures of models produced by the AIC and RFPE methods.

Measures | RFPE | AIC
RMSE | 0.494268 | 0.492444
MAE | 0.382724 | 0.385877
Table 6. Summary of comparative measures of models produced by AIC and RFPE after 5-fold cross-validation.

Elimination Criteria | Testing Dataset | Training Dataset | RMSE Train. | RMSE Test. | RMSE Diff. | MAE Train. | MAE Test. | MAE Diff.
RFPE | 1 | 2, 3, 4, 5 | 0.476818 | 0.571732 | −0.094914 | 0.371323 | 0.443796 | −0.072473
 | 2 | 1, 3, 4, 5 | 0.500364 | 0.471037 | 0.029327 | 0.388915 | 0.354920 | 0.033995
 | 3 | 1, 2, 4, 5 | 0.491625 | 0.503445 | −0.011820 | 0.378446 | 0.395941 | −0.017495
 | 4 | 1, 2, 3, 5 | 0.501656 | 0.471982 | 0.029674 | 0.392949 | 0.351124 | 0.041824
 | 5 | 1, 2, 3, 4 | 0.500390 | 0.473456 | 0.026934 | 0.381941 | 0.387893 | −0.005952
 | Mean Abs. Diff. | | 0.494171 | 0.498330 | 0.038534 | 0.382715 | 0.386735 | 0.034348
AIC | 1 | 2, 3, 4, 5 | 0.475679 | 0.553801 | −0.078122 | 0.373278 | 0.435709 | −0.062431
 | 2 | 1, 3, 4, 5 | 0.498819 | 0.465867 | 0.032953 | 0.392083 | 0.360863 | 0.031220
 | 3 | 1, 2, 4, 5 | 0.488506 | 0.508005 | −0.019499 | 0.382623 | 0.398990 | −0.016367
 | 4 | 1, 2, 3, 5 | 0.500340 | 0.459248 | 0.041092 | 0.396239 | 0.344114 | 0.052125
 | 5 | 1, 2, 3, 4 | 0.498395 | 0.468168 | 0.030227 | 0.385111 | 0.388907 | −0.003796
 | Mean Abs. Diff. | | 0.492348 | 0.491018 | 0.040378 | 0.385867 | 0.385717 | 0.033188
Table 7. Summary of comparative measures of models produced by RFPE, p-value, BIC, and PRESS criteria.

Measures | RFPE | p-Value | BIC | PRESS
RMSE | 0.494268 | 0.533903 | 0.496012 | 0.506705
MAE | 0.382724 | 0.415332 | 0.391171 | 0.398664
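The PRESS criterion compared above has a closed form for least-squares fits, so it needs no refitting: each leave-one-out residual equals the ordinary residual divided by (1 − h_ii), where h_ii is the leverage. A minimal sketch, again under the assumed naming used in the earlier snippets:

```r
# Minimal sketch (assumed names): PRESS for an OLS fit via the leverage shortcut.
press <- function(fit) {
  loo_resid <- residuals(fit) / (1 - hatvalues(fit))   # leave-one-out residuals
  sum(loo_resid^2)                                     # predicted residual error sum of squares
}
press(lm(fire_rate ~ ., data = seq_data))
```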
Table 8. Summary of comparative measures of models produced by RFPE, p-value, BIC, and PRESS criteria after 5-fold cross-validation.

Elimination Criteria | Testing Dataset | Training Dataset | RMSE Train. | RMSE Test. | RMSE Diff. | MAE Train. | MAE Test. | MAE Diff.
RFPE | 1 | 2, 3, 4, 5 | 0.476818 | 0.571732 | −0.094914 | 0.371323 | 0.443796 | −0.072473
 | 2 | 1, 3, 4, 5 | 0.500364 | 0.471037 | 0.029327 | 0.388915 | 0.354920 | 0.033995
 | 3 | 1, 2, 4, 5 | 0.491625 | 0.503445 | −0.011820 | 0.378446 | 0.395941 | −0.017495
 | 4 | 1, 2, 3, 5 | 0.501656 | 0.471982 | 0.029674 | 0.392949 | 0.351124 | 0.041824
 | 5 | 1, 2, 3, 4 | 0.500390 | 0.473456 | 0.026934 | 0.381941 | 0.387893 | −0.005952
 | Mean Abs. Diff. | | 0.494171 | 0.498330 | 0.038534 | 0.382715 | 0.386735 | 0.034348
p-value | 1 | 2, 3, 4, 5 | 0.519079 | 0.588888 | −0.069809 | 0.403196 | 0.463334 | −0.060138
 | 2 | 1, 3, 4, 5 | 0.541130 | 0.503723 | 0.037407 | 0.420914 | 0.392837 | 0.028077
 | 3 | 1, 2, 4, 5 | 0.532616 | 0.539056 | −0.006439 | 0.409819 | 0.437554 | −0.027735
 | 4 | 1, 2, 3, 5 | 0.539241 | 0.511821 | 0.027420 | 0.424460 | 0.378545 | 0.045915
 | 5 | 1, 2, 3, 4 | 0.537111 | 0.521020 | 0.016090 | 0.418238 | 0.403838 | 0.014400
 | Mean Abs. Diff. | | 0.533835 | 0.532902 | 0.031433 | 0.415325 | 0.415221 | 0.035253
BIC | 1 | 2, 3, 4, 5 | 0.478908 | 0.558554 | −0.079646 | 0.379655 | 0.436716 | −0.057061
 | 2 | 1, 3, 4, 5 | 0.501584 | 0.472891 | 0.028693 | 0.395994 | 0.371731 | 0.024263
 | 3 | 1, 2, 4, 5 | 0.493470 | 0.506130 | −0.012660 | 0.389651 | 0.397295 | −0.007644
 | 4 | 1, 2, 3, 5 | 0.503437 | 0.464889 | 0.038548 | 0.400090 | 0.355224 | 0.044866
 | 5 | 1, 2, 3, 4 | 0.502196 | 0.470759 | 0.031437 | 0.390417 | 0.394151 | −0.003734
 | Mean Abs. Diff. | | 0.495919 | 0.494644 | 0.038197 | 0.391161 | 0.391023 | 0.027514
PRESS | 1 | 2, 3, 4, 5 | 0.488093 | 0.574441 | −0.086348 | 0.386242 | 0.447798 | −0.061556
 | 2 | 1, 3, 4, 5 | 0.511723 | 0.485957 | 0.025767 | 0.401845 | 0.385843 | 0.016003
 | 3 | 1, 2, 4, 5 | 0.506614 | 0.507073 | −0.000459 | 0.398344 | 0.399954 | −0.001610
 | 4 | 1, 2, 3, 5 | 0.514386 | 0.474490 | 0.039896 | 0.409298 | 0.355806 | 0.053492
 | 5 | 1, 2, 3, 4 | 0.512207 | 0.484332 | 0.027875 | 0.397540 | 0.403109 | −0.005568
 | Mean Abs. Diff. | | 0.506605 | 0.505259 | 0.036069 | 0.398654 | 0.398502 | 0.027646