Article

Online Process Safety Performance Indicators Using Big Data: How a PSPI Looks Different from a Data Perspective

1 University of Huddersfield, Huddersfield HD1 3DH, UK
2 The Netherlands Organisation for Applied Scientific Research (TNO), 2333 BE Leiden, The Netherlands
3 Syngenta Huddersfield Manufacturing Centre, Huddersfield HD2 1GX, UK
* Author to whom correspondence should be addressed.
Safety 2023, 9(3), 62; https://doi.org/10.3390/safety9030062
Submission received: 19 July 2023 / Revised: 25 August 2023 / Accepted: 29 August 2023 / Published: 4 September 2023
(This article belongs to the Special Issue Safety and Risk Management in Digitalized Process Systems)

Abstract:
This work presents a data-centric method that uses IoT data generated on site to monitor the core functions of safety barriers on a batch reactor. The approach turns process safety performance indicators (PSPIs) into online, globally available safety indicators that eliminate variability in human interpretation. This work also showcases a class of PSPIs that are reliable and time-dependent but only work in a digital online environment: profile PSPIs. It is demonstrated that the profile PSPI opens many new opportunities for leading indicators, without the need for complex mathematics. Online PSPI analyses were performed at the Syngenta Huddersfield Manufacturing Centre, Leeds Road, West Yorkshire, United Kingdom, and shared with their international headquarters in Basel, Switzerland. Industry software was used to extract the time-series data and perform the calculations, which were based on decades of IoT data stored in the AVEVA Factory Historian. Non-trivial data cleansing and additional data tags were required to create the relevant signal conditions and composite conditions. This work demonstrates that reporting existing PSPIs in near real time with digital methods does not require gifted data analysts and is well within the capabilities of chemical (safety) engineers. Current PSPIs can also be evaluated in terms of their effectiveness in allowing management to make decisions that lead to corrective actions. This improves significantly on traditional PSPI processes that, when reviewed monthly, lead to untimely decisions and actions. The approach also makes it possible to review PSPIs as they develop and to receive notifications when PSPIs reach prescribed limits, with the potential to recommend alternative PSPIs that are more proactive in nature.
Keywords:
barrier; PSPI; real-time

1. Introduction

Safety key performance indicators (KPIs) have been of interest to industry for a long time. In the early days, KPIs were displayed at the entrance of a chemical plant [1]. In 1956, in a debate never free of controversy, Blake defended the use of objective safety KPIs against what he called the sub-objective application of human factors work by Heinrich [2]. Despite the attention paid to process safety, incidents and accidents continue to occur [3]. The major wake-up call came when an explosion at the BP Texas City Refinery killed 15 workers on 23 March 2005 [4]. Two significant issues raised by the 2007 Baker report [5] were process safety management, which continuously manages process safety risks (including monitoring), and the development of leading and lagging process safety performance indicators (PSPIs) [6,7,8]. In 2014, a special issue in Safety Science (edited by Hale) discussed options and approaches [9]; several were developed, but the problem was never satisfactorily solved, and lagging indicators remain important to this day. With the introduction of data science, the speed with which KPIs can be extracted from a functioning plant has increased, raising the additional question of whether organisations would invest in the technologies required for near or real-time reporting [10].
Following Cundius and Alt’s advice [10], this work presents a practical approach to move indicators closer to real time using big data, with the analysis of data collected from a data lake. In this case, the data lake in the AVEVA Historian [11] at Syngenta’s Huddersfield Manufacturing Centre (HMC) [12] is used. Defined calculations are performed on the extracted data to place those data in a form that can be presented to management as a PSPI. The goal of the PSPI is to allow management to make real-time informed decisions [13], which could lead to positive actions [14] that improve the overall performance of, and results from, the process.

1.1. Literature Review

There are many views on the rationale for selecting and using KPIs for safety, aptly named PSPIs. The different views on the purposes of PSPIs can be summarised as being:
  • Able to detect deviations of a process to help justify expenditure towards safety and report on safety in the chemical engineering sphere of influence;
  • An intervention tool to prevent issues or “knock-on” effects;
  • Able to support decisions to promote organisational vision;
  • Used as an effective monitoring tool to allow the organisation to “feel safe” and adhere to regulations and standards;
  • Proactive.
This study introduces a new, data-centric view that offers a different perspective on PSPIs. Owing to their near or real-time qualities, data-centric PSPIs would now be able to:
  • Readily support the first view towards process health and behaviour;
  • Be used as an intervention tool;
  • Support decisions in a timelier manner;
  • Provide assurance to the organisation of process health;
  • Be proactive.
Indicators can be any measurement that produces information on items of interest to an organization [15]. These measurements can be obtained from devices located in the plant or, as mentioned before, from production- and revenue-related figures. Regardless of the source of the data generated from the process, an “indicator must be reliable, quantifiable and easy to understand” (Rockwell, 1959, as cited in [16]). Big data can support the reporting of indicators and demonstrate the health and viability of a process safety management system, which follows through on the recommendations of the 2007 Baker report [6].
Performance is measured because organisations, from operator to management, need to ensure that:
  • Processes and people are performing safely, effectively and efficiently;
  • Organisational impact upon the environment is as minimal as possible;
  • Assets are managed and maintained safely and securely;
  • The company is viable and profitable.
This list is in line with the rationale provided by Selvik et al. [17].
There are different types of performance measurements: financial or operational [18,19]. The rationale for performance measurement is that measurement leads to performance management, which should lead to performance improvement (Bititci, Carrie and McDevitt, 1997, as cited in [20]). Unfortunately, instead of being a positive influence, performance measurement can be viewed negatively, for example as a controlling feature of the organisation [21]. This can, invariably, become a source of conflict between employees and management. Also, if measurement is seen as a controlling feature, employees could find ways to manipulate results so as to report non-negative results [22]. As a corollary, organisations identify indicators that could become performance indicators, which in turn lead to the identification of KPIs.
As organisations look for continuous improvement methodologies, KPIs are a useful means of finding, or helping to detect, areas that require improvement. Measurements that have a positive “knock-on” effect are particularly valuable: a single indicator that also has the potential to drive improvements in other areas benefits the organization as a whole. The difficulty, however, lies in measuring the actual “knock-on” effect.
A KPI is an indicator that measures performance and allows management to make informed decisions that lead to change. Ideally, organizations should have a small number of KPIs, as these indicators would identify the areas that are successful or require improvement [14,17]. These are the drivers for organisational success.
As described by Parmenter [21], many organizations work with incorrect KPIs. These KPIs may be incorrectly identified as KPIs but are instead process indicators or even indicators. Effective KPIs would align with the strategic vision of the organisation [22]. Organisations should strive to move away from a singular view of performance measurements towards a holistic viewpoint, which allows management to make more informed decisions that lead to successful action [23].
Just as with KPIs, organisations require indicators of the level of performance from a safety perspective; these are referred to as PSPIs.
As stated by Reiman and Pietikäinen, “safety performance indicators are needed in order to monitor the current level of safety in safety-critical organizations” [15]. They are “indispensable” [14] and should “strengthen the management control of process safety” [24]. As KPIs are useful in detecting areas of improvement, PSPIs should also be useful in detecting areas that require attention to improve the safety of the process.
Indicators can be used to evaluate safety by assessing the performance level and by determining performance trends, which are monitors of routine activity [7]. PSPIs can be leading or lagging in nature. As described by Hopkins, leading and lagging are relative terms, defined relative to the performance of a particular control [7]. The Health and Safety Executive (HSE) of the United Kingdom defines leading indicators as measures of inputs, whereas lagging indicators measure outcomes.
Leading indicators are “precursors of harm” or “early warning” types of indicators, as per Hopkins, 2009, as cited in [25]. These indicators are also referred to as activity or input indicators [17,23,25,26]. For example, leading indicators may reflect the status of an activity that ensures safe practice on site, or they can be described as a measure of organizational ability to control risks. Leading indicators provide a sign that something is about to change, potentially for the worse, i.e., the risk associated [23] with the process has changed, as per Kjellén, 2000, as cited in [25]. An example provided by Hopkins [7] is that leading indicators could measure systems that fail during testing. These leading indicators support the organizational desire to be more proactive than reactive and to drive improvement [15,25,27].
Lagging indicators are sometimes referred to as outcome indicators [17,25,26]. Effectively, steps in the process have already been completed and the result is displayed. These types of indicators inform the organisation of the performance achieved; however, if these indicators are safety related, they can be used as lessons learned to prevent future abnormal occurrences. Many indicators with respect to process safety incidents can be shared, e.g., loss prevention bulletins issued by the Institution of Chemical Engineers (IChemE) [28] or other related publications such as Lessons Learned from Recent Process Safety Incidents [29]. The role of the organisation is to ensure that lessons are learned from those incidents, which may not happen for a variety of reasons [30].
As per Erikson, 2009, as cited in [25], safety management system performance is commonly shown by lagging indicators. These indicators showcase the difference between quality assurance and quality control, but more for the production side and not necessarily for process safety. Hopkins, 2009, as cited in [25], states that organisations require both leading and lagging indicators to showcase safety performance. This statement is supported by the guidance published by the HSE [25,31]. “Safety indicators are only worth developing if they are used to drive improvement…” [7] or if the PSPI leads to an action, as per Skoog, 2007, as cited in [24].
Regardless of whether they are leading or lagging, PSPIs should be meaningful indicators for incidents that occur often enough [7,32] to inform the organisation of the state of the process, as per Mogford, 2005, as cited in [7,14]. With the use of big data, organisations can monitor progress, chart the measurements and witness trends over time. Organisations benefit from online data-driven monitoring of processes [3,33]. As described by Louvar, “you can only manage what you measure” [26]. One of the keys to process indicators is to follow the indicators [13] and note what actions have come about from monitoring the process, as described in the sections that follow.
Section 2 explains the materials and methods surrounding the case study scenario, in which actual data generated by the process are used to produce the PSPI at a particular plant at Syngenta’s HMC; it then gives an overview of the methodology used. Section 3, Section 4 and Section 5 provide the results, the discussion of the results and the conclusions and recommendations of the study, respectively.

2. Materials and Methods

2.1. The Process

The case study carries on from the previous work conducted by the authors [34] at Syngenta’s HMC [12], located in Huddersfield, West Yorkshire, United Kingdom, which produces various crop protection products. The site comprises more than fifty batch processes. The batch process for this study is an established process that is monitored and controlled with a distributed control system (DCS). The final product of this process is an intermediate that is further processed on site to form the final agrichemical product sold on the market; the intermediate is sent to another plant on site for this additional processing. PSPIs are important here because the process involves an extremely energetic exothermic reaction with the potential for a runaway. Failure to monitor the process may lead to over-temperature events, which, under extreme circumstances, could lead to a runaway reaction. Other potential consequences of this process are summarised in the BowTie shown in Singh et al. [34], such as the potential for vessel rupture if other layers of protection were to fail.
Reactants are transferred from bulk storage and sent to the main reactor, R-100. The exothermic reaction process temperature in R-100 is controlled via the addition of a catalyst. The resultant material from R-100, unreacted reactants and products produced from the reaction, is transferred for further separation and purification. Figure 1 below shows the process flow diagram (PFD) for reactor R-100, with the key sensors for the PSPI noted on the diagram.
After additional processing, the product from R-100 is sent to storage (S-100). The material from S-100 is combined with the product from the intermediate reactor (I-100). These are reactants for the next stage of the reaction in reactor R-200. The product from R-200 is then processed finally in reactor R-300, to produce the finished product for this process, which then is transferred via flexible intermediate bulk containers for additional processing at another process on site. A simplified block flow diagram of the process is shown in Figure 2 below.
The current PSPIs are used to measure the effectiveness of process safety from raw material to finished product. All data required to compute the resultant PSPIs were available in the data lake from the historian. However, the data were not always in a form that allows the PSPI to be reported directly. For example, for the PSPI reporting the temperature difference between two temperature sensors, the difference is not stored as a measurement; it needs to be computed for the analysis and review. The data are extracted from the data lake, and calculations are then performed to report the resultant PSPI to middle and senior management.

2.2. Current PSPIs

For the production process at Syngenta’s HMC facility, there are a total of six existing PSPIs. The current leading PSPIs, with measurements within defined limits, for the process are as follows:
  • R-100 operating temperature with validation of temperature reading;
  • R-300 operating pressure;
  • I-100 operating temperature.
The lagging PSPIs, with measurements within defined limits, for the process are as follows:
  • I-100 confirmation of vessel purge during reactant charge;
  • I-100 reactant charge;
  • R-100 safety temperature trip during testing.
As per the definitions proposed by Hopkins [7], the third lagging measure above may be defined as a leading measure, as the PSPI reflects the capability of the safety trip during a test and not during operation where the safety measure would be required. For historic reasons at the plant, however, this PSPI is labelled as a lagging indicator.
The focus of this study is the leading temperature control PSPI for R-100: PSPI-1. The temperature of R-100 is measured by the resistance temperature detector (RTD) T-100. A redundant RTD, T-200, is mounted on the same reactor; its purpose is to ensure that there is no discrepancy between the two measurement devices. The temperature of the process must not exceed 115 °C, as this would lead to a potentially hazardous situation; because of this, a true measurement of the temperature in the reactor is required. This PSPI provides management with an indication of the safe operating conditions of the process. The temperature difference component of the PSPI ensures that the organisation is confident in the temperature readings from the process, as well as ensuring that an investigation takes place if the temperature exceeds the allowances.
The specific time of interest for the temperature of the reactor is during the addition of the catalyst. The operating temperature limits for the reactor are:
87 °C ≤ T ≤ 99 °C
T-100 and T-200 are also deemed to agree if the measurements do not have a difference greater than 5 °C. The resultant calculation for the difference measurement is:
| T-100 − T-200 | ≤ 5 °C
The purpose of PSPI-1 is that if the temperature rises towards the ultimate upper limit of 115 °C, then additional safety features (layers of protection) come into play. The PSPI is reported monthly using red, amber and green (RAG) ratings.
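As an illustration of the two checks that make up PSPI-1, the following minimal Python sketch counts samples outside the 87–99 °C window and samples where T-100 and T-200 disagree by more than 5 °C. It assumes the readings have already been exported from the historian into a pandas DataFrame with hypothetical column names "T100" and "T200"; it is a sketch of the logic only, not the implementation used on site.

import pandas as pd

T_LOW, T_HIGH = 87.0, 99.0   # operating limits during catalyst addition (deg C)
MAX_DIFF = 5.0               # allowed discrepancy between T-100 and T-200 (deg C)

def pspi1_checks(df: pd.DataFrame) -> dict:
    """Count raw samples violating either component of PSPI-1."""
    out_of_range = ((df["T100"] < T_LOW) | (df["T100"] > T_HIGH)).sum()
    disagreement = ((df["T100"] - df["T200"]).abs() > MAX_DIFF).sum()
    return {"temperature_samples_outside_limits": int(out_of_range),
            "sensor_disagreement_samples": int(disagreement)}

In practice, consecutive out-of-limit samples belonging to the same excursion would be grouped into a single deviation before applying the RAG thresholds, as described in Section 2.3.2.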

2.3. Method

2.3.1. Generalized Data Extraction Process

This work uses Seeq (R50), commercial software from Seeq.com [35], to extract the time series from the data lake in the AVEVA Historian [11]. As mentioned by Singh et al. [34], one of the major challenges is the identification and extraction of the correct data at the right time for the analysis. Support is required from experienced operators and engineers who work on the process. The performance indicator calculation results also require confirmation by comparison with previous results to ensure that the current results are correct.
The thought flow and decision process for the proposed methodology are shown in Figure 3.
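To make the extraction step concrete, the sketch below loads a hypothetical CSV export of the historian tags into pandas and trims it to a single reporting window. The paper itself extracts the tags directly with Seeq from the AVEVA Historian; the CSV route is used here only so the sketch does not depend on any specific vendor API, and the file and column names are assumptions.

import pandas as pd

def load_tags(path: str, start: str, end: str) -> pd.DataFrame:
    """Load 2-second tag samples from a CSV export and keep one reporting window."""
    df = pd.read_csv(path, parse_dates=["timestamp"], index_col="timestamp")
    return df.sort_index().loc[start:end]

# example with hypothetical file name:
# feb = load_tags("r100_tags.csv", "2022-02-01", "2022-02-28")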

2.3.2. Overview

The identification and extraction of the correct data, along with the computation of the PSPI, allow the PSPI to be reported from the operator to senior management levels in the organisation. The PSPI under consideration reports that the reaction is performed within the appropriate temperature limits and that the main and redundant sensor measurements agree within the prescribed parameter.
The data required for reporting the PSPIs are generated by sensors located on the plant equipment, and their measurements are recorded in the data lake of the AVEVA Historian [11]. The method below explains how the time-series data are extracted from the data lake and subsequently used to produce the requisite PSPI.
The first PSPI, R-100 operating temperature, requires T-100 and T-200 measurement data, which are recorded every 2 s, regardless of batch operation for production. As explained by Singh et al. [34], additional tag information is also required to ensure that only relevant measurements are utilised to generate the PSPIs in question. For example, one additional set of data points is string in nature and not just numerical. Those string results are used in conjunction with the temperature measurements to ensure that the temperature readings used for the PSPI are only during the specific reaction phase. The use of Seeq [35] allows the user to visualise the time-series data, string and numerical, chronologically.
The data are then sorted and organised, and the calculation steps are created to produce the specific PSPIs. Some of the PSPIs use simple counts as measures of performance, for example, the number of times a specific condition is witnessed.
The phase name tag, R100_PhaseName, was used to identify the batch reaction cycle. This signal was used to create a condition to identify when the temperature signals were during the reaction phase. Since the PSPI states that the temperature readings must lie within the upper and lower limits of 99 °C and 87 °C, respectively, and the process temperature is controlled with the stepwise addition of a catalyst, a composite condition was created to ensure that the temperature readings were only during the catalyst addition. The following software code ensures that the temperature readings fall within the catalyst addition of the reaction cycle:
$T100.within($CatalystAdditioninReaction)
With respect to the code above, $T100 represents all of the temperature readings from process RTD T-100. The code $CatalystAdditioninReaction refers to a condition created with the use of the string data from the R100_PhaseName tag and the numerical mass data from the vessel containing the catalyst to identify when the catalyst addition occurred during the reaction.
Once the readings were identified as being during the catalyst addition of the reaction phase, a count was conducted to identify when temperature readings were outside of the specified limits. The following formula in the software allows areas to be identified and counted where the temperature was “not between” the specified limits.
$T100 < 87 or $T100 > 99
The second component of the PSPI is the agreement between T-100 and T-200, ensuring that readings agreed within 5 °C, i.e., the difference between T-100 and T-200 did not exceed 5 °C. The process for ensuring that the temperature readings for T-200 occurred only during the reaction phase could also have been duplicated. However, to aid the precision of the analysis, all T-100 and T-200 readings were used to check agreement within the organisationally prescribed allowable limit of 5 °C, with the difference between the measurements calculated by simple subtraction of the two readings at the same point in time:
| $T100 − $T200 | ≤ 5
The resultant temperature difference calculation was used to create another condition to identify when the difference exceeds the prescribed limit.
The PSPI is green if there is zero or one deviation, amber if there are two or three deviations and red if there are four or more deviations. If the PSPI is red, the situation is escalated so that technical support investigates the reason for the deviation to understand if the risk for continuing operation is acceptable. If the PSPI is amber, there is no escalation. However, if the PSPI is amber over two or three reporting periods, this triggers an investigation into the deviations by the technical support group as well. This PSPI is set “just above” the safe operating window but before the safety instrumented system trip point to provide an indication that the process may have changed. As stated earlier, the limits for the RAG rating are defined by process safety engineers and production management.
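The sketch below mirrors, in Python, the condition logic and RAG mapping described above: restrict T-100 to the catalyst addition, count excursion events outside 87–99 °C and translate the count into a rating. The stand-in for the composite condition (phase name equal to "Reaction" and a falling catalyst vessel mass) and the column names are assumptions made for illustration only.

import pandas as pd

def rag_rating(deviations: int) -> str:
    """Map a monthly deviation count to the red/amber/green rating used for the PSPI."""
    if deviations >= 4:
        return "red"
    if deviations >= 2:
        return "amber"
    return "green"

def count_temperature_excursions(df: pd.DataFrame) -> int:
    """Count events where T-100 leaves the 87-99 deg C window during catalyst addition."""
    # hypothetical composite condition: reaction phase active and catalyst vessel mass falling
    in_addition = df["PhaseName"].eq("Reaction") & df["CatalystMass"].diff().lt(0)
    t = df.loc[in_addition, "T100"]
    outside = (t < 87.0) | (t > 99.0)
    # count rising edges so that consecutive out-of-limit samples form one event
    return int((outside & ~outside.shift(fill_value=False)).sum())

# rating = rag_rating(count_temperature_excursions(feb))  # 'feb' from the earlier loading sketch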

3. Results

The results from the development and presentation of the PSPIs using actual plant data were depicted in a dashboard, akin to the results that would traditionally be presented with a spreadsheet tool. For this case study, the analysis was not limited to a set number of days of operation, as no data were exported from the software into additional tools to present the dashboard. Results from the analysis are for the year 2022. The annual results dashboard was created to allow monitoring of the PSPI on a near real-time basis. To show that the calculation was correct, data extraction and results are shown for one month of operation.
Figure 4 below shows T-100 temperature readings within the catalyst addition of the reaction for the month of February 2022.
The results from the analysis shown in Figure 5 below display six instances where T-100 was outside of the organisationally defined limits.
The resultant table reporting the six readings, hence a “red” rating, as produced from the software and reproduced in a spreadsheet format, is shown in Table 1 below.
For the PSPI, only two instances that exceeded the boundary limits were reported, which would indicate an “amber” warning. The reported results are shown in Figure 6 below.
This part of the analysis utilised all data available within the time-series range. The results of the February 2022 Temperature Control PSPI are shown in Table 2 below:
As shown in the table, there were no discrepancies between the readings of T-100 and T-200. As mentioned earlier, there were, however, six instances where T-100 exceeded the set limits of operation. Table 3 below shows the results of the analysis of the PSPI during the entire calendar year of 2022.
There was no reason to stop at a single PSPI: all PSPIs were analysed to showcase the safety performance of the process, as described in Figure 3 above. Table 4 below shows a snapshot of the results from all PSPIs, as well as additional measures introduced to help describe the PSPIs for this process.

4. Profile PSPIs

Whilst performing the analysis, it was noticed that each batch showcased a consistent profile. Zooming in on the temperature profile of a single batch process, one can see a shape that is repeated for each operation, as shown in Figure 7 below.
Early on in the study, it was clear that comparing consecutive profiles over a long period of time could yield novel PSPIs, but it proved difficult to design a reliable PSPI. The most repeatable (and reliable) indicator was the following: the rate of change in temperature during the heating operation remains within acceptable limits. With that, the challenge was to determine the rate of change as well as to provide a validation for the acceptable limits for that rate.
Below are the steps that were performed to determine the rate of change in temperature during the heating phase of the batch process:
  • R100_PhaseName tag used to isolate data only during the reaction phase;
  • For the heating phase, steam valve position data incorporated to determine when the steam valve was open during the heating phase;
  • Aligned steam valve position with the R100_PhaseName tag data;
  • Results cleansed to reflect open steam valve position during the heating phase;
  • Temperature data during this open valve position period during the reaction phase highlighted;
  • Batch temperature profiles superimposed to identify anomalies;
  • Derivative function used to calculate the rate change in temperature as shown in the formula below:
$T100.derivative()
The derivative function in the software was used to calculate the rate of change in the temperature signal, T-100. At each time point, the derivative value is calculated as the slope between the current sample and the previous sample in the input signal. Since it takes two measurement points to compute a singular slope value, the output has one less data point. The view of the analysis was changed to showcase one capsule, a single batch process heating period, superimposed onto another batch process heating period.
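A hedged Python equivalent of this derivative step is shown below: the sample-to-sample slope of T-100, in °C per second, computed from a time-indexed series assumed to be pre-filtered to the heating phase. The variable name t100_heating and the final check are illustrative only; the limit of 0.005 °C s⁻¹ is the one proposed later in this section.

import pandas as pd

def heating_rate(t100: pd.Series) -> pd.Series:
    """Slope between consecutive samples in deg C per second (one fewer point than the input)."""
    dt = t100.index.to_series().diff().dt.total_seconds()
    return (t100.diff() / dt).dropna()

# proposed acceptance check (hypothetical variable name):
# within_limit = (heating_rate(t100_heating) < 0.005).all()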
From these steps, Figure 8 was obtained from the software tool, which provided the basis for deciding the limits for acceptable temperature increase during the batch operation.
The temperature profile during the heating phase raised additional questions and queries; the proposed rate-of-change PSPI is that the rate remains below 0.005 °C s⁻¹.

5. Discussion

This paper validates a practical method of using big data for near real-time reporting of organisational PSPIs and introduces a leading indicator based on the dynamic behaviour of the process.
The R-100 operating temperature with validation of temperature reading PSPI is defined as a leading indicator for the organisation. With the use of big data and the software solution Seeq [35], the organisation now has the ability to capture and report this PSPI on a near real-time basis, addressing an issue described by Hopkins [7]. Looking at the dashboard for 2022, as shown in Table 4, events for this PSPI are “unusual”, but the organisation can now monitor the temperature readings to see if they exceed the prescribed limits, which could then allow the team to reach a decision for action. These limits are in place to notify operators and management that the process exceeded a temperature limit en route to the critical temperature that could lead to an extremely unsafe condition.
In this instance, the PSPI showed six occasions in February 2022 (Figure 5 and Table 1, Table 2, Table 3 and Table 4), whereas only two occasions were reported (Figure 6). The four occasions not reported were due to temperature measurement allowances in reporting. Once the PSPI was created using the software tool, the temperature control PSPI could be monitored, updated and reported as required without the need for data collection and additional resources, nor is it dependent on variations in human interpretation. Though the difference may seem insignificant, it is quite important. The human interpretation from Figure 6 would suggest that a single spurious batch occurred, which tends to be attributed to reactant impurities. Figure 5 shows a series of spurious batch operations, potentially indicating an underlying problem with the process equipment.
The T-100 temperature exceeding the limits (Figure 5 and Figure 6) shows how the readings could have been misread, as the additional temperature readings could still have been interpreted as being within the prescribed limits, or simply deemed “close enough” or not enough to warrant reporting. Also, when reporting instances of T-100 exceeding the limits, there may be in-built tolerances or accepted values which, technically, exceed the limit but are not deemed to exceed it. As the organisation flags red PSPIs, the month of February would have been missed, which could have led to an upcoming hazardous situation due to hidden equipment failure.
When scrutinizing the PSPI, the discussion leads towards acceptable limits when reporting. From a management perspective, are those values identified and reviewed? Would a “red” rating lead to additional issues or concerns for the team involved? Are instances leading to action with senior levels of management? Is this a big data conflict? Would employees not be receptive to big data initiatives if no actions occur due to the results? In any organisation, are values overlooked when reviewing the data to report the PSPI initially? If so, how often does misrepresentation of PSPIs occur?
From the analysis performed on this system, the traditional methodology of reporting PSPIs was considered. However, the investigation into the data channels showed that the PSPIs can be reported on a near real-time basis with the use of available technology and software. The data required for the PSPIs are readily available in the data lake, and reporting them increases the number of relevant data channels that must be extracted. At the same time, it might not be necessary to keep the existing PSPI per se, as the data available may allow revision of the actual PSPI being reported (is that PSPI required?). Would another PSPI do a better job of providing an indication of the safety of the process?
Whilst reviewing the data and being aware of the other calculation tools available, the authors propose a novel type of PSPI that is based on reaction profiles. In this case, the most reliable part of the profile was the increase in temperature at the start-up of the batch process, leading to the PSPI: the rate of change in temperature as the process is heated to start the batch operation remains within acceptable limits. From the single batch temperature profile shown in Figure 7 and the result shown in Figure 8, the temperature profile remains relatively consistent from one batch to the next, barring one anomalous result. Further exploration and investigation into that anomalous result showed that the batch process was paused, cooled and then subsequently restarted. This explains the predominantly flat profile and the subsequent increases in the temperature rate.
With the added possibility of using the data available in the data lake, along with the time-series nature of the data collected, it becomes possible to compute other PSPIs that are, technically speaking, concerned with the time component of a response. This can easily be managed if the data-capture system is well designed and the time at which the data are captured is recorded to a measurable degree of certainty [3]. This alternative way of designing PSPIs is probably more effective and efficient with respect to the time component, i.e., the time for a response, than simply adding PSPIs to help measure the safety performance of a process. So, when PSPIs are reported through efficient data-capture systems, the PSPIs can report the time required for safety systems to respond. That time factor then becomes the PSPI, not whether the response was achieved.
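As a hedged illustration of this time-to-respond idea, the sketch below takes two boolean, time-indexed signals (hypothetical names: a trip demand and the corresponding safety-system response) and returns the elapsed seconds between each demand and the next response; that elapsed time would itself become the reported PSPI.

import pandas as pd

def response_times(demand: pd.Series, response: pd.Series) -> list:
    """Seconds from each demand rising edge to the next response rising edge."""
    d_edges = demand.index[(demand & ~demand.shift(fill_value=False)).to_numpy()]
    r_edges = response.index[(response & ~response.shift(fill_value=False)).to_numpy()]
    times = []
    for d in d_edges:
        later = r_edges[r_edges >= d]
        if len(later) > 0:
            times.append((later[0] - d).total_seconds())
    return times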
Moving from one PSPI to the PSPIs for that part of the process, the study can be extended to all PSPIs for the entire process, and then to all processes from all units across the site. Future work could focus on the development of PSPIs for site-wide safety performance. These PSPIs can also be shared across multiple sectors and industries to aid in monitoring the safety performance of other organisations, with PSPIs that all could understand [16]. This work will also allow organisations to review their PSPIs to see if they are still fit for purpose or if a new, revised PSPI would be more applicable. A data-centric view of PSPIs allows all of the above to happen.

6. Conclusions

To produce metrics, employees need to collect and collate as well as disseminate information. On many occasions, this can be a very resource-intensive task. In a process manufacturing environment, operators and engineers would need to wade through and sort vital information from the vast amount of data generated from a process [25]. Time and resources are required to extract data generated by the process itself, which may be stored in a data lake through some historian software or other data acquisition system.
Once the PSPIs have been created using a data-centric approach, the dashboard can be updated with revised date ranges for upcoming months. An update of the date range is all that is required to present the updated PSPIs; no additional work is needed. This limits the resources required to present PSPIs to middle and senior management. Those resources can now be allocated towards the revision or creation of new, more effective PSPIs for monitoring process safety. More importantly, the method described in this work does not require gifted data scientists. It is well within the grasp of chemical (safety) engineers to work with Equations (1) to (6) and to interpret the outcomes as shown in Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8. In fact, the interpretation of the process diagrams (Figure 1 and Figure 2) and the methods described in Section 2.3.1 are perhaps better performed without the interference of a programmer, left instead to a chemical engineer working with the software themselves.
This means that there may be no additional resources required to collate the information and report the PSPI. The data were already available during operation; they were simply not registered for the PSPI. With real-time assurance that the PSPI reporting was available, management could be readily informed, and correct reporting of the PSPI values is also assured. With a data-centric approach, there is a decrease in the potential for reporting errors, even if there may still be a requirement to re-confirm any anomalous results. The ability to showcase PSPIs may be used to demonstrate compliance and reporting to external auditing agencies and, with the reduction in the resource requirement, lead to cost savings for the organisation [36,37,38].
The method can also be used to compute other PSPIs for the facility. All data are available in the data lake, and once the right combination of functions is determined using the flow diagram in Figure 3, one can simply report the PSPIs with the data available in the data lake. Two challenges remain. First, what should be done when the PSPI goes from an amber to a red RAG rating? The subsequent question then becomes: will the management decision lead to an action? If the answers to those questions are ‘yes’, the PSPI is required as an effective tool for monitoring and reporting safety, ensuring that the process is safe or is acted upon to bring it back to a safe condition.
The benefit of using this software tool, which allows analysis to be performed using actual time-series data generated at the source, is a more effective, clear and efficient manner of reporting true PSPI results. Veracious PSPIs engender greater confidence in the reported PSPI, leading to the elimination of errors in reporting. This greater confidence means the PSPI represents a true reflection of the safe nature of the process and, invariably, the safety of personnel.
Instead of looking at the difference between the temperature measurements or the absolute value of the temperature measurement, a new PSPI is suggested in which the rate of change in the temperature measurement is used as a leading indicator. Is there a general trend considering the multitude of measurements available? This rate of change in temperature during the heating-up period could also be used as a tool for operators when reviewing the status of the batch. During a handover, operators could use the rate as an indicator of the status of the plant and operation instead of the absolute temperature value, as the rate provides information on the general trend of the process.
As previously discussed by Singh et al. [34], alongside successful navigation of the software system, intimate knowledge of the process is required to understand the PSPIs in use. Users can also recommend new and revised PSPIs, either in addition to or to supplant existing PSPIs. Further study is required to confirm the acceptable limits for the rate of change in temperature, which should include analysis of batch operations over a longer period. An additional study that could follow is whether the data-generated PSPIs allow management to make appropriate decisions, with subsequent actions that help improve the performance of the process or reduce potentially unsafe conditions. Finally, more studies are also required to see whether the newly recommended PSPIs had a positive impact upon the safe operation of the process, or even how to measure that impact.

Author Contributions

Conceptualization, P.S. and C.v.G.; methodology, P.S.; software, P.S.; validation, C.v.G. and N.S.; formal analysis, P.S.; investigation, P.S.; resources, P.S. and N.S.; data curation, P.S.; writing—original draft preparation, P.S.; writing—review and editing, C.v.G. and N.S.; visualization, P.S.; supervision, C.v.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are unavailable due to privacy and confidentiality restrictions.

Acknowledgments

The authors acknowledge the support provided by colleagues from the University of Huddersfield and the Syngenta Huddersfield Manufacturing Centre.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. De Blois, L.A. Progress in Accident Prevention. Mon. Labor Rev. 1926, 22, 1–3. [Google Scholar]
  2. Heinrich, H.; Blake, R. The Accident Cause Ratio 88:10:2. National Safety News, May 1956; 18–22. [Google Scholar]
  3. Lee, J.; Cameron, I.; Hassall, M. Improving process safety: What roles for Digitalization and Industry 4.0? Process Saf. Environ. Prot. 2019, 132, 325–339. [Google Scholar] [CrossRef]
  4. Holmstrom, D.; Altamirano, F.; Banks, J.; Joseph, G.; Kaszniak, M.; Mackenzie, C.; Shroff, R.; Cohen, H.; Wallace, S. CSB investigation of the explosions and fire at the BP Texas City Refinery on 23 March 2005. Process Saf. Prog. 2006, 25, 345–349. [Google Scholar] [CrossRef]
  5. Baker, J.A.; Leveson, N.; Bowman, F.L.; Priest, S.; Erwin, G.; Rosenthal, I.; Gorton, S.; Tebo, P.; Hendershot, D.; Wiegmann, D.; et al. The Report of The BP U.S. Refineries Independent Safety Review Panel; The BP U.S. Refineries Independent Safety Review Panel: Texas City, TX, USA, 2007; p. 374. [Google Scholar]
  6. Allars, K. BP Texas City Incident Baker Review; Health and Safety Executive: Bootle, UK, 2007; p. 2. [Google Scholar]
  7. Hopkins, A. Thinking About Process Safety Indicators. Saf. Sci. 2009, 47, 460–465. [Google Scholar] [CrossRef]
  8. Klein, T.; Viard, R. Process Safety Performance Indicators in Chemical Industry—What Makes It a Success Story and What Did We Learn So Far? Chem. Eng. Trans. 2013, 31, 391–396. [Google Scholar] [CrossRef]
  9. Le Coze, J.-C.; Pettersen, K.; Reiman, T. The foundations of safety science. Saf. Sci. 2014, 67, 1–69. [Google Scholar] [CrossRef]
  10. Cundius, C.; Alt, R. Real-Time or Near Real-Time? Towards a Real-Time Assessment Model. In Proceedings of the 34th International Conference on Information Systems, Milano, Italy, 15–18 December 2013. [Google Scholar]
  11. Aveva. AVEVA™ Historian. 2021. Available online: https://www.aveva.com/en/products/historian/ (accessed on 13 July 2021).
  12. Syngenta. Huddersfield Public Information Zone. 2021. Available online: https://www.syngenta.co.uk/publicinformationzone (accessed on 13 July 2021).
  13. Ali, M.; Cai, X.; Khan, F.I.; Pistikopoulos, E.N.; Tian, Y. Dynamic risk-based process design and operational optimization via multi-parametric programming. Digit. Chem. Eng. 2023, 7, 100096. [Google Scholar] [CrossRef]
  14. Pasman, H.; Rogers, W. How can we use the information provided by process safety performance indicators? Possibilities and limitations. J. Loss Prev. Process Ind. 2014, 30, 197–206. [Google Scholar] [CrossRef]
  15. Reiman, T.; Pietikäinen, E. Leading indicators of system safety—Monitoring and driving the organizational safety potential. Saf. Sci. 2012, 50, 1993–2000. [Google Scholar] [CrossRef]
  16. Swuste, P.; Nunen, K.v.; Schmitz, P.; Reniers, G. Process safety indicators, how solid is the concept? Chem. Eng. Trans. 2019, 77, 85–90. [Google Scholar] [CrossRef]
  17. Selvik, J.T.; Bansal, S.; Abrahamsen, E.B. On the use of criteria based on the SMART acronym to assess quality of performance indicators for safety management in process industries. J. Loss Prev. Process Ind. 2021, 70, 104392. [Google Scholar] [CrossRef]
  18. Jacobs, F.R.; Chase, R. Operations and Supply Chain Management, 5th ed.; McGraw-Hill Education: New York, NY, USA, 2019; p. 544. [Google Scholar]
  19. Klose, A.; Wagner-Stürz, D.; Neuendorf, L.; Oeing, J.; Khaydarov, V.; Schleehahn, M.; Kockmann, N.; Urbas, L. Automated Evaluation of Biochemical Plant KPIs based on DEXPI Information. Chem. Ing. Tech. 2023, 95, 1165–1171. [Google Scholar] [CrossRef]
  20. Kasie, F.M.; Belay, A.M. The impact of multi-criteria performance measurement on business performance improvement. J. Ind. Eng. Manag. 2013, 6, 595–625. [Google Scholar] [CrossRef]
  21. Parmenter, D. Key Performance Indicators: Developing, Implementing, and Using Winning KPIs, 3rd ed.; Wiley: Hoboken, NJ, USA, 2015; p. 444. [Google Scholar]
  22. Hutchins, D. Hoshin Kanri: The Strategic Approach to Continuous Improvement; Taylor and Francis: Abingdon, UK, 2016. [Google Scholar] [CrossRef]
  23. Leveson, N. A systems approach to risk management through leading safety indicators. Reliab. Eng. Syst. Saf. 2015, 136, 17–34. [Google Scholar] [CrossRef]
  24. Zwetsloot, G.I.J.M. Prospects and limitations of process safety performance indicators. Saf. Sci. 2009, 47, 495–497. [Google Scholar] [CrossRef]
  25. Sultana, S.; Andersen, B.S.; Haugen, S. Identifying safety indicators for safety performance measurement using a system engineering approach. Process Saf. Environ. Prot. 2019, 128, 107–120. [Google Scholar] [CrossRef]
  26. Louvar, J. Guidance for safety performance indicators. Process Saf. Prog. 2010, 29, 387–388. [Google Scholar] [CrossRef]
  27. Diaz, E.; Watts, M. Metrics-driven decision-making improves performance at a complex process facility. Process Saf. Prog. 2020, 39, e12092. [Google Scholar] [CrossRef]
  28. IChemE. Loss Prevention Bulletin. 2022. Available online: https://www.icheme.org/knowledge/loss-prevention-bulletin/ (accessed on 1 September 2022).
  29. Ness, A. Lessons Learned from Recent Process Safety Incidents; American Institute of Chemical Engineers: New York, NY, USA, 2015; Volume 111, p. 23. [Google Scholar]
  30. Zhao, J.; Suikkanen, J.; Wood, M. Lessons learned for process safety management in China. J. Loss Prev. Process Ind. 2014, 29, 170–176. [Google Scholar] [CrossRef]
  31. HSE. Health and Safety Executive. Information and Services. 2022. Available online: https://www.hse.gov.uk/ (accessed on 13 October 2022).
  32. Mendeloff, J.; Han, B.; Fleishman-Mayer, L.A.; Vesely, J.V. Evaluation of process safety indicators collected in conformance with ANSI/API Recommended Practice 754. J. Loss Prev. Process Ind. 2013, 26, 1008–1014. [Google Scholar] [CrossRef]
  33. Harhara, A.; Arora, A.; Faruque Hasan, M.M. Process safety consequence modeling using artificial neural networks for approximating heat exchanger overpressure severity. Comput. Chem. Eng. 2023, 170, 108098. [Google Scholar] [CrossRef]
  34. Singh, P.; Sunderland, N.; van Gulijk, C. Determination of the health of a barrier with time-series data how a safety barrier looks different from a data perspective. J. Loss Prev. Process Ind. 2022, 80, 104889. [Google Scholar] [CrossRef]
  35. Seeq. Seeq about Us. 2023. Available online: https://www.seeq.com/about (accessed on 25 February 2023).
  36. Di Bona, G.; Silvestri, A.; De Felice, F.; Forcina, A.; Petrillo, A. An Analytical Model to Measure the Effectiveness of Safety Management Systems: Global Safety Improve Risk Assessment (G-SIRA) Method. J. Fail. Anal. Prev. 2016, 16, 1024–1037. [Google Scholar] [CrossRef]
  37. Falcone, D.; De Felice, F.; Di Bona, G.; Duraccio, V.; Silvestri, A. Risk assessment in a cogeneration system: Validation of a new safety allocation technique. In Proceedings of the 16th IASTED International Conference on Applied Simulation and Modelling, Mallorca, Spain, 29–31 August 2007. [Google Scholar]
  38. Yadav, O.P.; Zhuang, X. A practical reliability allocation method considering modified criticality factors. Reliab. Eng. Syst. Saf. 2014, 129, 57–65. [Google Scholar] [CrossRef]
Figure 1. Simplified R-100 PFD.
Figure 2. Process Block Flow Diagram.
Figure 3. Data Extraction Process Flowchart.
Figure 4. February T-100 Readings During Catalyst Addition of Reaction.
Figure 5. February T-100 Readings Highlighting Exceeding Limits.
Figure 6. February T-100 Temperature Readings Exceeding Limits Reported.
Figure 7. February T-100 Temperature Readings for a Typical Batch Operation.
Figure 8. T-100 Temperature Profile During Heating Phase for Batch Operations in February 2022.
Table 1. February 2022 Temperature Control Outside Prescribed Limits.
Name | 1–28 February
R-100 Temperature Exceeds Limits | 6
Table 2. February 2022 Temperature Control PSPI.
Name | 1–28 February
R-100 Temperature Exceeds Limits | 6
T-100/T-200 Not Within 5 °C | 0
Table 3. Reactor Temperature Control PSPI for 2022.
Date Range | R-100 Temperature Exceeds Limits | T-100/T-200 Not Within 5 °C
January | 1 | 0
February | 6 | 0
March | 3 | 0
April | 0 | 0
May | 0 | 0
June | 3 | 0
July | 3 | 0
August | 1 | 0
September | 1 | 0
October | 3 | 0
November | 1 | 0
December | 0 | 0
Table 4. PSPIs for February 2022.
Category | Name | 1–28 February
Leading Measures | R-100 Temperature Exceeds Limits | 6
Leading Measures | T-100/T-200 Not Within 5 °C | 0
Leading Measures | R-300 P-100 Exceeds Limits | 0
Leading Measures | R-300 P-100 Neutralisation Step Exceeds Limits | 0
Leading Measures | I-100 Temperature Within Operating Limits | 0
Leading Measures | I-100 T-300/T-400 Not Within 5 °C | 0
Leading Measures | I-100 T-300/T-400 Not Within 5 °C in Reaction | 0
Lagging Measures | I-100 Purge Check | 0
Lagging Measures | I-100 Low Inert Gas Flow | 0
Health Check of Alarms | I-100 Hi-Lo Temperature Trip Activation | 0
Health Check of Alarms | I-100 Low Temperature Trip Activation | 0
Health Check of Alarms | I-100 High Temperature Trip Activation | 0
Health Check of Alarms | R-300 High Pressure Trip Activation | 0
Health Check of Alarms | R-100 High Pressure Trip Activation | 0
Health Check of Alarms | R-100 Hi-Hi Pressure Trip Activation | 0
Health Check of Alarms | R-100 High Temperature Trip Activation | 0
Health Check of Alarms | R-100 High Temperature Alarm Count | 0