A novel object-based algorithm capable of identifying and tracking convective outflow boundaries in convection-allowing numerical models is presented in this study. The most distinct feature of the proposed algorithm is its ability to seamlessly analyze numerically simulated density currents and bores, both of which play an important role in the dynamics of nocturnal convective systems. The unified identification and classification of these morphologically different phenomena is achieved through a multivariate approach combined with appropriate image processing techniques. The tracking component of the algorithm utilizes two dynamical constraints, which improve the object association results in comparison to methods based on statistical assumptions alone. Special attention is placed on some of the outstanding challenges regarding the formulation of the algorithm and possible ways to address those in future research. Apart from describing the technical details behind the algorithm, this study also introduces specific algorithm applications relevant to the analysis and prediction of bores. These applications are illustrated for a retrospective case study simulated with a convection-allowing ensemble prediction system. The paper highlights how the newly developed algorithm tools naturally form a foundation for understanding the initiation, structure, and evolution of bores and convective systems in the nocturnal environment.
Convectively generated outflow boundaries, such as density currents and bores, make an important contribution to the dynamics of mesoscale convective systems (MCSs). The theoretical importance of density currents is well established due to their critical role in MCS evolution (e.g., Rotunno et al. 1988; Weisman and Rotunno 2004). Atmospheric bores, on the other hand, remain less familiar to the meteorological community, but these disturbances have received considerable attention recently, including being a focus of the Plains Elevated Convection at Night field campaign (PECAN; Geerts et al. 2017). The increasing interest in bores is largely driven by their ability to initiate and maintain nocturnal MCSs (Carbone et al. 1990; Crook et al. 1990; Locatelli et al. 2002; Parker 2008; Blake et al. 2017; Parsons et al. 2018). Recent work has also shown that bores occur commonly in association with warm-season nocturnal convection over the Great Plains (Haghi et al. 2017).
The dynamical significance of convective outflow boundaries has prompted the scientific community to create automated algorithms for identifying and tracking these features. The earliest algorithms developed for this purpose were entirely based on observational data and closely connected to the procurement plans for the Next Generation Weather Radar (NEXRAD) system (e.g., Crum and Alberty 1993). In particular, Uyeda and Zrnić (1986) as well as Smith et al. (1989) were the first to describe radar-based algorithms for gust front (density current) detection that relied on the velocity convergence along radials. Later enhancements to these algorithms included the addition of radar reflectivity in the Advanced Gust Front Algorithm (AGFA; Eilts et al. 1991) and the use of knowledge-based signal processing in the Machine Intelligent Gust Front Algorithm (MIGFA; Delanoy and Troxel 1993; Troxel et al. 1996; Smalley et al. 2005). Likewise, advances in computational resources have made it possible to identify and track convective outflow boundaries in high-resolution model outputs. Previous model-based algorithms have focused on detecting density currents by incorporating various physical parameters, such as temperature (Gentine et al. 2016), density potential temperature (Torri et al. 2015; Drager and van den Heever 2017), buoyancy (Tompkins 2001; Seigel 2014), wind (Langhans and Romps 2015), or a combination of several relevant parameters (Li et al. 2014).
Although the aforementioned methodologies have greatly enhanced our understanding of density current and gust front dynamics, they are somewhat restricted in their application, compared to the algorithm presented in this paper. On one hand, the approaches reviewed earlier are not suitable for detecting multiple types of convective outflow boundaries. This limitation can be problematic with regard to typical nighttime environments in which density currents can trigger atmospheric bores upon their interaction with the stable boundary layer (White and Helfrich 2012). The frequent generation of bores during the nighttime hours (Haghi et al. 2017) and their important role in the maintenance of nocturnal MCSs (Parker 2008; Blake et al. 2017) necessitate the development of methods to detect and track bores as well as their parent density currents. The other limitation of earlier convective outflow algorithms is that they have been mostly applied in idealized modeling frameworks, which may make them inappropriate for real-time forecasting applications.
To understand the interplay between nocturnal outflow boundaries and convective systems in real-time high-resolution numerical weather prediction (NWP) models, this study presents a novel object-based algorithm that is capable of seamlessly identifying density currents and bores. The latter is achieved by employing a multivariate approach similar to the dryline identification algorithm of Clark et al. (2015). Rather than attempting to detect all convective outflow boundaries present in high-resolution model simulations, the objective of this algorithm is to isolate only those that provide sufficient lifting for the initiation and maintenance of nocturnal MCSs. The latter goal falls in line with the recent findings of Parker (2008), French and Parker (2010), and Parsons et al. (2018), who suggest that the primary lifting mechanism for nocturnal MCSs can change from a density current to an internal bore with the onset of nocturnal cooling.
Aside from the specific choice of identification parameters, the proposed algorithm differs from the previously discussed methods in terms of how it tracks the identified objects in time. Traditionally, object-tracking techniques rely on statistical methods to match objects from two different model time steps (Lakshmanan 2012). By contrast, the object tracker proposed in this study accounts for the dynamics of convective outflow boundaries in an explicit manner. As will be shown later in the paper, imposing dynamical constraints in the algorithm yields more robust tracking results, compared to an algorithm based on statistical considerations alone. It is also worth remarking that developing algorithms with due regard to the dynamical aspects of the tracked objects was a key recommendation of Davis et al. (2009b)—one of the first studies focusing on object-based identification and verification using NWP data.
In addition to the technical details behind the algorithm framework, this paper also highlights a spectrum of additional algorithm applications relevant for bore research and operational forecasting of nocturnal storms. Generally speaking, these algorithm applications can be utilized in two different ways. The first pertains to the verification of numerically simulated convective outflow boundaries. With the advance of convection-allowing NWP models, object-based verification techniques like the Method for Object-Based Diagnostic Evaluation (MODE; Davis et al. 2006a) have become a popular choice for validating the accuracy of localized and spatially inhomogeneous fields such as precipitation (e.g., Davis et al. 2009a; Johnson et al. 2011a,b; Johnson and Wang 2012, 2013; Johnson et al. 2013; Clark et al. 2014). Unfortunately, the majority of these object-based methods cannot be readily extended to verify bore forecasts. MODE, for instance, relies on the presence of continuous observational datasets in order to verify model forecasts, which is not feasible in the case of convective outflow boundaries.
Second, the algorithm applications proposed in this paper can be also used to obtain a better understanding of the underlying bore dynamics, as well as the role of bores in initiating and maintaining nocturnal MCSs. To examine the characteristics of numerically simulated bores, previous studies (e.g., Martin and Johnson 2008; Koch et al. 2008) had to first identify their location by subjectively examining the appropriate model output fields. While this is a reasonable approach in terms of single case studies, analyzing the dynamical properties of bores in large datasets spanning a large number of numerical simulations is considerably more time consuming and, additionally, prone to human errors.
The algorithm framework and the attendant algorithm applications in this study are illustrated through a forecast experiment based on the 6 July 2015 PECAN case study. Using the Weather Research and Forecasting (WRF) Model (Powers et al. 2017), 40 ensemble members with a horizontal grid spacing of 1 km are run between 0300 and 0900 UTC. These high-resolution forecast members are initialized with a Gridpoint Statistical Interpolation (GSI)-based convection-allowing ensemble data assimilation system (Johnson et al. 2015; Johnson and Wang 2017; Wang and Wang 2017) after assimilating both radar and conventional data over a time window of 3 and 12 h, respectively.
The rest of the paper is organized as follows. Sections 2 and 3 introduce the identification and tracking components of the algorithm. Section 4 describes several algorithm tools and their application to the 6 July 2015 case study. Finally, section 5 summarizes the main aspects of the algorithm, outlines some of its limitations, and suggests possible ways to overcome those in future work.
2. Identification of convective outflow boundaries
As elucidated in section 1, the novelty of this algorithm comes from the unified description of density currents and bores, which is made possible by taking into account the dynamical similarities between these two convective outflow boundaries. Specifically, it is well known that (i) density currents and bores are characterized by high values of vertical velocity along their leading edge, and (ii) their passage leads to a sudden jump in pressure near the surface. The pressure rise in density currents is mostly hydrostatic in nature and arises due to the horizontal advection of cold air behind them, but it also contains a nonhydrostatic component associated with the deceleration of the density current relative inflow. This is to be contrasted with the pressure increase in a bore, which is caused by the net upward displacement and subsequent adiabatic cooling of near-surface stable air. The identification component of the algorithm parameterizes these effects through the 1-km above ground level (AGL) vertical velocity w and the temporal change in mean sea level pressure Δp, where Δp is evaluated over the model time step Δt, set to 15 min in this study. Although the choice of a particular height level for the vertical velocity is somewhat arbitrary, the 1-km AGL vertical velocity was selected because it (i) marks the location where the prefrontal updraft of a convective outflow boundary is likely to be found (e.g., Rotunno et al. 1988) and (ii) has been used in some real-time NWP forecast products to assist the bore missions during the PECAN field campaign (Johnson et al. 2017). After carefully choosing representative threshold values for these two model variables (see section 2b for details), their corresponding binary fields are combined through a binary AND operation (1 and 0 for meeting or not meeting the criteria, respectively; see bottom-left side of Fig. 1).
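As an illustrative sketch of this identification step, the smoothing, thresholding, and binary AND operations can be written in Python as follows. The function name and the threshold values are placeholders chosen for illustration only; the tuned thresholds used by the algorithm are those listed in Table 1.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def identify_outflow_mask(w_1km, dp, dx_km=1.0,
                          w_thresh=1.0, dp_thresh=0.3,
                          sigma_w_km=3.0, sigma_dp_km=5.0):
    """Binary identification of convective outflow boundaries.

    w_1km : 2D array of 1-km AGL vertical velocity (m s-1)
    dp    : 2D array of 15-min mean sea level pressure change (hPa)
    Thresholds here are illustrative placeholders, not the Table 1 values.
    """
    # Smooth both identification variables (sigma converted to grid points)
    w_s = gaussian_filter(w_1km, sigma=sigma_w_km / dx_km)
    dp_s = gaussian_filter(dp, sigma=sigma_dp_km / dx_km)

    # Threshold each field into a binary mask and combine with a binary AND
    return (w_s >= w_thresh) & (dp_s >= dp_thresh)
```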
Later on, the algorithm utilizes the temporal change in the 2-m temperature, ΔT, to determine the morphology of the identified objects. By definition, density currents cause a drop in the near-surface temperature [i.e., ΔT < 0°C (15 min)−1]. Nevertheless, the algorithm classifies the identified objects as density currents only if ΔT ≤ ΔT^dc = −1°C (15 min)−1, where the superscript dc denotes that the threshold refers specifically to density currents (see Table 1). The lower value is chosen deliberately to ensure that the algorithm (i) captures only sufficiently strong density currents that are likely to have dynamic significance and (ii) accounts for the overall cooling of the nocturnal boundary layer. Since the passage of bores either does not change the 2-m temperature or leads to a slight near-surface warming (Koch and Clark 1999), a convective outflow object is classified as a bore only if ΔT ≥ ΔT^b = 0°C (15 min)−1, where the superscript b is shorthand for bores. It is precisely these distinct surface temperature responses that permit the object-based algorithm to determine the morphology of the identified convective outflow boundaries.
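The morphological classification can be sketched in a similar fashion. In this illustrative version, each connected outflow object is labeled and classified by the mean ΔT within it; aggregating ΔT by an object mean is an assumption of this sketch rather than a detail stated in the text.

```python
import numpy as np
from scipy.ndimage import label

def classify_outflow_objects(outflow_mask, dT_2m, dT_dc=-1.0, dT_bore=0.0):
    """Classify connected outflow objects as density currents or bores.

    dT_2m : 2D array of 2-m temperature change [°C (15 min)-1]
    Objects with mean dT <= -1 are density currents; mean dT >= 0 are bores
    (thresholds follow the text); objects in between are left unclassified.
    """
    labels, n = label(outflow_mask)
    classes = {}
    for i in range(1, n + 1):
        mean_dT = dT_2m[labels == i].mean()
        if mean_dT <= dT_dc:
            classes[i] = "density current"
        elif mean_dT >= dT_bore:
            classes[i] = "bore"
        else:
            classes[i] = "unclassified"
    return labels, classes
```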
The workflow associated with the identification component of the algorithm is further illustrated through the 1-km horizontal grid spacing simulation of the 6 July 2015 case study (Fig. 2), the details of which are summarized in section 1. In particular, the information contained within the w and Δp fields from Figs. 2a and 2b is used to create the convective outflow objects shown in Fig. 2c. The ΔT field from Fig. 2d emphasizes the heterogeneous morphology of the identified objects. For example, the southern portion of the tracked object (refer to the black contours in Fig. 2c) resembles an atmospheric bore (ΔT is neutral or positive), while the northern portion behaves like a density current (ΔT is mostly negative). As expected, the model-simulated reflectivity field in Fig. 2b suggests that the density current is located next to a convectively active region, in contrast to the bore, which has already propagated away from the parent MCS.
Additional evidence for the discriminating capabilities of the algorithm is provided in Fig. 3. The first column of this figure shows the algorithm output for two consecutive times, during which the tracked object splits into two additional objects, and the southern one changes its morphology from a density current to a bore. The objectively determined outflow classification results (Figs. 3a,c) verify successfully against the vertical cross sections for the corresponding forecast lead times (Figs. 3b,d). More specifically, the vertical cross section from Fig. 3b shows classical density current signatures, such as an enhanced prefrontal updraft and a sudden drop in the virtual potential temperature (θv) following the passage of the boundary. By contrast, there is an amplitude-ordered wave train in the wake of the first convective outflow boundary in Fig. 3d (for x between 0 and 10 km on the cross section), which is typical for the passage of an undular bore. It is worth remarking that the bore in Fig. 3d is immediately followed by its parent density current. The latter is located around x = 0 km on the cross section and appears to be considerably shallower in comparison to the previously shown forecast lead time (Fig. 3b). Note that the significantly weaker vertical velocities at the leading edge of the shallow density current prevent the object-based algorithm from identifying it. This example portrays the algorithm's objective to only detect convective outflow boundaries that provide sufficient lifting for the initiation and maintenance of nocturnal convection. The scenario depicted in Fig. 3 corresponds quite well to the idealized simulations of Parker (2008), wherein the primary lifting mechanism required for the sustenance of a nocturnal squall line changes from a density current to a bore following the stabilization of the underlying boundary layer.
All variables used in the identification component of the algorithm are preprocessed in order to yield meaningful identification results. As far as the w and ΔT fields are concerned, Gaussian filters with standard deviations of 3 and 5 km, respectively, are applied to remove any small-scale noise (Table 1). The treatment of Δp is slightly more involved due to the presence of numerical noise at short forecast lead times. Specifically, the use of data assimilation (DA) in the numerical simulations from this study introduces spurious inertia–gravity waves due to the imbalance between mass and wind fields (Lynch and Huang 1992; Wang et al. 2013). Although it is possible to remove this spurious numerical noise by exploiting initialization techniques such as nonlinear normal mode initialization (Baer and Tribbia 1977) and digital filter initialization (Lynch and Huang 1992), this work proposes an alternative technique that removes the numerical noise within the formulation of the algorithm itself. In particular, the discrete cosine transform (DCT; Ahmed et al. 1974) is applied to the pressure tendency field in an attempt to filter out the spurious inertia–gravity waves, whose wavelength is considerably larger than the wavelength of the signal defining the convective outflow boundaries. The choice of DCT was guided by the study of Denis et al. (2002), which emphasized the suitability of the method for limited-area models with aperiodic model fields. Although the choice of a scale separation wavelength is somewhat arbitrary, tuning experiments over three diverse case studies showed that 150 km is optimal. To filter out the large-scale features, the spectral coefficients corresponding to wavelengths greater than 150 km are set to 0 before inverting the field back to physical space. Finally, since the filtered Δp field still contains small-scale features that can degrade the identification capabilities of the algorithm, a Gaussian filter with a standard deviation of 1.5 km is applied to it. The relatively small value of this standard deviation allows the object-based algorithm to detect dissipating bore objects, in which the Δp signature is weak.
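A minimal sketch of the DCT-based scale filtering is given below, using the SciPy implementation of the transform. The mapping between DCT modes and physical wavelengths follows a common spectral convention and should be viewed as an approximation of the Denis et al. (2002) formulation rather than a faithful reproduction of it.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_highpass(field, dx_km=1.0, lambda_cut_km=150.0):
    """Remove scales longer than lambda_cut_km from a 2D field via the DCT.

    Spectral coefficients whose associated wavelength exceeds the cutoff
    (including the domain mean) are set to zero before inverting back to
    physical space.  Illustrative sketch only.
    """
    ny, nx = field.shape
    spec = dctn(field, norm="ortho")
    # Wavelength of DCT mode (j, i): lambda = 2 / |k|, with the normalized
    # wavenumber vector k = (i / (nx*dx), j / (ny*dx))
    i = np.arange(nx) / (nx * dx_km)
    j = np.arange(ny) / (ny * dx_km)
    kmag = np.sqrt(i[None, :] ** 2 + j[:, None] ** 2)
    with np.errstate(divide="ignore"):
        wavelength = np.where(kmag > 0, 2.0 / kmag, np.inf)
    spec[wavelength > lambda_cut_km] = 0.0  # discard large-scale modes
    return idctn(spec, norm="ortho")
```

In the algorithm described above, the output of such a filter would subsequently be smoothed with the small (1.5 km) Gaussian filter before thresholding.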
It is worth mentioning that the DCT filter may not be as effective during the first hour or two of model integration, when the spurious gravity wave activity is prolific and spans a larger spectrum of wavelengths. To address this issue, the algorithm has the option to substitute the Δp field with the magnitude of the horizontal mean sea level pressure gradient, |∇p|, close to the model initialization time (upper-left portion of Fig. 1). Similar to Δp, the |∇p| field is smoothed via a Gaussian filter (Table 1). The time at which the pressure identification variable changes from |∇p| back to Δp is referred to as a separation time and is defined as the forecast lead time beyond which the absolute value of the domain-averaged Δp does not show significant temporal trends (2 h 30 min for the numerical simulation used in this study). Algorithm tests comparing performance with the two pressure identification variables confirmed that |∇p| is a reasonable substitute for Δp at very short forecast lead times, but degrades the object identification results in the absence of significant numerical noise. Further note that the identification threshold used in combination with |∇p| is 3 times larger than its counterpart used with Δp (refer to Table 1); this adjustment is intended to help the algorithm identify decaying convective outflow objects with weak vertical motions. Alternative approaches for optimally defining the threshold values of the identification parameters are examined in section 5.
Last, we discuss the application of morphological image processing techniques within the identification component of the object-based algorithm. These techniques aim to address the discontinuous nature of the convective outflow objects produced as a result of merging the w and Δp (or |∇p|) fields through the binary AND operation. More specifically, the algorithm incorporates the so-called open-close filter (Dougherty 1992), defined as the sequential application of 1 binary opening and 10 binary closing iterations (see Table 1). This filter was chosen because of its restoration property, which removes both union and subtractive noise (Dougherty and Lotufo 2003). In particular, binary opening removes small-scale objects identified incorrectly by the algorithm, while binary closing fills the gaps between adjacent outflow objects, making their structure more coherent. Note that binary opening is applied first in order to prevent the spurious growth of small-scale objects. The choice of the number of opening and closing iterations (Table 1) was based on several tuning experiments, in which the structure of the identified outflow objects was assessed qualitatively. To ensure that only sufficiently large objects are considered by the algorithm, an additional size thresholding is applied (see the bottom-left corner of Fig. 1 and Table 1 for the threshold value) prior to the classification of the identified convective outflow boundaries as density currents or bores.
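The open-close filtering and size-thresholding steps can be sketched with standard morphological operations, as below. The min_size value is an illustrative placeholder for the Table 1 threshold.

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing, label

def clean_outflow_mask(mask, n_open=1, n_close=10, min_size=50):
    """Open-close filter followed by a minimum-size threshold.

    Opening is applied first to remove spurious small-scale objects;
    closing then fills gaps between adjacent outflow segments.  Objects
    smaller than min_size grid points are discarded afterwards.
    """
    cleaned = binary_opening(mask, iterations=n_open)
    cleaned = binary_closing(cleaned, iterations=n_close)
    labels, n = label(cleaned)
    for i in range(1, n + 1):
        if (labels == i).sum() < min_size:
            cleaned[labels == i] = False
    return cleaned
```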
c. Grid spacing considerations
Apart from the need to choose appropriate model parameters, the identification of convective outflow boundaries in the algorithm is ultimately dependent on the ability of NWP models to correctly represent their dynamical characteristics (e.g., surface warming, pressure rise). Past studies (Koch et al. 2008; Martin and Johnson 2008; Johnson et al. 2017) along with additional analyses based on the 6 July 2015 simulations (see Fig. S1 in the online supplemental material) have concluded that the adequate representation of atmospheric bores in NWP models requires a horizontal grid spacing of less than 4 km. This, in turn, implies that the object-based algorithm should be used only in conjunction with data from higher-resolution convection-permitting NWP models. With a view to making our algorithm applicable to a broader range of model configurations, the remainder of this section discusses how changes in the horizontal resolution of convection-allowing models impact its identification capabilities. In particular, we comment on the key modifications required to successfully adapt the algorithm to a coarser 3-km model output, which is more typical of currently operational convection-allowing NWP systems such as the High-Resolution Rapid Refresh model (HRRR; Smith et al. 2008).
It is well known that coarser-resolution model simulations tend to have a smoothing effect on the underlying model fields, which can, in turn, have downstream impacts on the number and/or extent of objects identified by the algorithm. Nonetheless, the application of a Gaussian filter implicitly circumvents this problem, as it smooths the identification variables from different model resolutions to the same spatial scale. This statement is supported by both Table 2 and Figs. S2a and S2b, which show that the median and interquartile range (IQR) values associated with the smoothed identification variables are nearly identical on the 1- and 3-km model domains (i.e., their ratio is approximately 1). The only exception is the magnitude of the mean sea level pressure gradient, |∇p|, for which the median and IQR values on the 3-km domain are lower than their respective values on the 1-km domain (refer to Figs. S2c,d). The aforementioned discrepancy arises because the Gaussian filter is applied to model variables with different statistical characteristics. To use the pressure gradient magnitude as an identification variable on the coarser 3-km model output, the corresponding threshold value from Table 1 needs to be scaled accordingly. In this study, the latter was achieved via the quantile mapping (QM) technique (e.g., Reiter et al. 2018). The QM results summarized in Table 2 indicate that the scaling factor for the |∇p| threshold is approximately 0.6; that is, the value needs to be reduced by 40% on the 3-km grid in order for the algorithm to provide equivalent object identification results. As expected, the scaling factors for the other identification variables are approximately 1, suggesting that their corresponding thresholds do not need to be changed if the algorithm is to be run on numerical simulations with a horizontal grid spacing of 3 km.
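A sketch of the quantile mapping step for rescaling a threshold between two model grids is shown below; the function and variable names are hypothetical. The idea is to find the non-exceedance probability of the tuned fine-grid threshold in the fine-grid distribution of the smoothed variable, and then take the value with the same probability in the coarse-grid distribution.

```python
import numpy as np

def remap_threshold(values_fine, values_coarse, threshold_fine):
    """Quantile-map an identification threshold between model resolutions.

    values_fine / values_coarse : 1D samples of the (smoothed) identification
    variable on the two grids; threshold_fine : the tuned fine-grid threshold.
    Returns the equivalent coarse-grid threshold.
    """
    # Non-exceedance probability of the threshold on the fine grid
    q = np.mean(values_fine <= threshold_fine)
    # Value with the same non-exceedance probability on the coarse grid
    return np.quantile(values_coarse, q)
```

If the coarse-grid distribution is simply a damped version of the fine-grid one, the remapped threshold is reduced by the same damping factor, consistent with the ~0.6 scaling found for the |∇p| threshold.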
Another important consequence of using coarser convection-allowing model simulations with the object-based algorithm is the requirement to adjust the number of iterations in the open-close filter (Table 1). Assuming there is a one-to-one correspondence between the binary fields on two different model domains with horizontal grid spacings of Δx_old and Δx_new, it can be shown that

n_new = n_old (Δx_old / Δx_new);  (1)

that is, the number of open-close iterations on the new domain is inversely proportional to the change in model resolution. To account for the fact that the number of iterations must be an integer (i.e., n_new ∈ ℕ), the relationship from Eq. (1) is further approximated as

n_new = round[n_old (Δx_old / Δx_new)],  (2)

where round[·] denotes rounding to the nearest integer.
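This rescaling rule is simple enough to state in a few lines; clamping the result to a minimum of one iteration is an assumption added here so that the filter never degenerates on very coarse grids.

```python
def rescale_iterations(n_old, dx_old_km, dx_new_km):
    """Rescale an open/close iteration count to a new grid spacing,
    rounding to the nearest integer (clamped to at least one iteration,
    an assumption of this sketch)."""
    return max(1, round(n_old * dx_old_km / dx_new_km))
```

For example, 10 closing iterations tuned on a 1-km grid map to 3 iterations on a 3-km grid.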
3. Tracking convective outflow boundaries
The concept behind the object tracker is presented in Fig. 4. This schematic shows how a convective outflow boundary might evolve in a typical nighttime environment and also highlights the key processes that the object tracker is expected to handle. Suppose that the leftmost object in Fig. 4 is one of the many objects identified by the algorithm at some initial time. Throughout its evolution, the aforementioned object could undergo various changes, including (i) translation, (ii) splitting, (iii) merging, and (iv) morphology transformation (blue to red color shading in Fig. 4). The purpose of the object tracker is to recognize all of these changes by associating objects between two consecutive image frames (separated by the model time step Δt).
Association of objects from two neighboring image frames is a challenging problem, particularly in cases of object splitting, merging, or rapid evolution (all of which are common for convective outflow boundaries). Past work on multiobject tracking has examined several techniques with a varying degree of complexity (Lakshmanan 2012). Those methods range from a simple minimization of object distances (greedy approach; Dijkstra 1959) to Kalman filter applications (Kalman 1960). Nevertheless, the aforementioned tracking techniques are prone to errors, especially when the identified objects undergo rapid structural changes. To address these problems, Lakshmanan et al. (2003) developed a hybrid tracking approach that exhibits superior performance over the aforementioned methods. In this work, we use a variation of the hybrid tracking approach to formulate our object tracker.
The components of the object tracker are summarized by the block diagram in Fig. 5 and further illustrated in Fig. 6 through a representative case scenario, in which a target object (black shading) splits into two smaller objects (1 and 2; gray shading) at the current model time step. In addition to these two objects, the binary field in Fig. 6 contains an additional third object (3), which is not physically related to the target object. Ideally, the tracking component of the algorithm should only associate objects 1 and 2 with the target object. Within the framework of the object tracker, this is achieved through a sequential application of three different constraints, the technical details of which are explained in the remainder of this section.
The first constraint is purely statistical in nature and utilizes the template matching technique (Brunelli 2009). Template matching calculates the 2D cross correlation C between an image I and a kernel K; that is,

C(i, j) = Σ_{m=1}^{M} Σ_{n=1}^{N} I(i + m, j + n) K(m, n),  (3)

where (i, j) is a coordinate pair from the model domain, while M and N denote the width and height of the kernel, respectively. The purpose of template matching is to measure the degree of statistical similarity between a candidate object from the current image frame and a target object identified in the previous image frame.
Template matching is performed on each target–candidate object pair for a given algorithm analysis time. We let the smaller of those objects be the kernel K, while the larger one is the image I (refer to the splitting and merging scenarios in Fig. 5). To account for different kernel shapes and sizes, the result from Eq. (3) is normalized by the sum of the kernel values to yield the 2D cross-correlation coefficient ρ; that is,

ρ(i, j) = C(i, j) / [Σ_{m=1}^{M} Σ_{n=1}^{N} K(m, n)].  (4)

The object tracker assumes that two objects are statistically related to each other for sufficiently large values of the maximum correlation coefficient ρ_max. The value of ρ_max = 0.72 in Fig. 6b implies there is a high likelihood that candidate object 1 is associated with the target object. Furthermore, the location of ρ_max suggests that object 1 originates from the southern portion of the target object. Both of these inferences are consistent with the object splitting scenario depicted in Fig. 6a.
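The normalized template matching step can be sketched with a standard 2D correlation routine, as below; this is an illustrative implementation, and the binary object fields and function name are placeholders.

```python
import numpy as np
from scipy.signal import correlate2d

def template_match(image, kernel):
    """Cross-correlate a binary image with a binary kernel and normalize
    by the sum of kernel values.  Returns the correlation-coefficient
    field, its maximum, and the location of the maximum."""
    corr = correlate2d(image.astype(float), kernel.astype(float), mode="same")
    rho = corr / kernel.sum()
    idx = np.unravel_index(np.argmax(rho), rho.shape)
    return rho, rho[idx], idx
```

A kernel that fits entirely inside the image object yields a maximum coefficient of 1, which connects to the perfect-match situations discussed later in section 3.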
Apart from measuring the statistical similarity between candidate and target objects, the template matching procedure provides an estimate of the object motion vector (denoted as v_tm; see Fig. 5). In particular, the magnitude of v_tm is calculated by dividing the Euclidean distance between the location of the central kernel point, x_K, and the location of ρ_max, x_ρ (i.e., the distance between the two object centroids), by the model time step Δt. Mathematically,

|v_tm| = ‖x_ρ − x_K‖ / Δt.  (5)

Past algorithm tests have shown that the v_tm estimates can be impacted negatively if objects undergo rapid structural changes. To address this issue, the tracker incorporates a second motion estimate (v_ga; see Fig. 5), which is based on the "greedy approach" of Dijkstra (1959) and measures the shortest distance between the center of the kernel and the collection of image points x_I corresponding to the coordinates of the larger object; that is,

|v_ga| = min over x ∈ x_I of ‖x − x_K‖ / Δt.  (6)

As indicated in Fig. 5, the tracker adopts the greedy estimate v_ga only if its magnitude is smaller than that of v_tm.
Unlike other approaches, the tracker presented in this paper prioritizes the dynamics of convective outflow boundaries by adding two dynamical constraints as part of the object association procedure. The first one makes use of the pressure gradient force [PGF = −∇(p/ρ0), where p is the mean sea level pressure and ρ0 a reference density value] in an attempt to restrict the direction toward which objects are allowed to propagate. Given that density currents and bores are associated with jumps in the surface pressure, one should expect them to move approximately in the direction of the PGF (red arrows in Fig. 6c). Having said that, a candidate object can be associated with a target object only if the angle θ between the object's motion vector and the PGF is sufficiently small. The second dynamical constraint imposes an upper limit on the object's propagation speed and is denoted as v_max. Ideally, v_max should be derived from theoretical considerations regarding either the density current speed [e.g., Eq. (2) in Koch et al. (1991)] or the bore speed [e.g., Eqs. (2.4) and (3.1) in Rottman and Simpson (1989)]. However, to assess the overall feasibility of the second dynamical constraint in the initial algorithm tests, v_max was assumed to remain constant throughout the algorithm runs. A similar approach was undertaken by Davis et al. (2006b) in their definition of "rain systems."
Association between a target and a candidate object occurs only if the following three conditions are met simultaneously: ρ_max ≥ ρ_th, θ ≤ θ_max, and |v| ≤ v_max (refer to the second half of Table 1 for the corresponding threshold values). According to the object splitting scenario in Fig. 6a, all candidate objects are able to pass the template matching threshold of ρ_th = 0.3. However, only candidate objects 1 and 2 successfully meet the additional dynamical requirements regarding the object's propagation speed and direction. Therefore, this example shows how the tracker is able to handle the complex dynamical behavior of convective outflow boundaries by exploiting their statistical and dynamical properties simultaneously.
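The three association conditions can be combined into a single check, as sketched below. The θ_max and v_max values are illustrative placeholders rather than the Table 1 values, while the correlation threshold of 0.3 follows the text.

```python
import numpy as np

def associate(rho_max, motion_vec, pgf_vec,
              rho_th=0.3, theta_max_deg=45.0, v_max_ms=25.0):
    """Evaluate the three simultaneous association conditions:
    (i) statistical similarity (rho_max >= rho_th),
    (ii) motion roughly along the PGF (theta <= theta_max),
    (iii) propagation speed below an upper bound (|v| <= v_max).
    motion_vec is the object motion (m s-1); pgf_vec the local PGF direction.
    """
    speed = float(np.linalg.norm(motion_vec))
    # Angle between the motion vector and the PGF direction
    cos_theta = np.dot(motion_vec, pgf_vec) / (
        np.linalg.norm(motion_vec) * np.linalg.norm(pgf_vec))
    theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return bool(rho_max >= rho_th and theta <= theta_max_deg
                and speed <= v_max_ms)
```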
The addition of dynamical constraints in the object tracker is beneficial for several reasons. First, the simultaneous fulfillment of three different conditions relaxes the prescribed threshold values used for object association (Table 1). Tracking objects only with the aid of template matching [e.g., the hybrid tracking approach of Lakshmanan et al. (2003)] would have required a significantly higher value of ρ_th in order to avoid spurious object associations. While a higher ρ_th would decrease the number of false alarms, it can also lower the probability of detection, especially when convective outflow objects undergo significant structural changes.
Another benefit of incorporating dynamical constraints in the algorithm's tracker is to make the search area (i.e., the area within which target and candidate objects are associated) flow dependent (dynamic). This idea is illustrated in the schematic from Fig. 7, where all three candidate objects are perfectly correlated with the target object (ρ_max = 1), but only the first one is physically associated. In this example, both the statically and dynamically defined search areas correctly discard object 3 from the association process. Nevertheless, due to its isotropic search radius, the static method will spuriously associate object 2 with the target object, despite the aforementioned candidate object being located upstream of the target's propagation direction. This result is contrasted by our flow-dependent tracking method, which shrinks the search area in accordance with the PGF direction and correctly disassociates candidate object 2 from the target object. It is also worth noting that a tracking method using a static search radius will underperform in cases featuring extreme environmental conditions, for which convective outflow boundaries propagate faster than usual. For instance, candidate object 1 in Fig. 7 fails to pass the object association test for the static approach since it moves beyond the static search radius in a single model time step. Provided that v_max is defined according to theory, the model-derived environmental information should increase the value of v_max and, as a result, extend the search radius of the tracker. As revealed by the schematic example in Fig. 7, object 1 falls within this extended search area and will be correctly associated with the target object.
The use of dynamical constraints is also important when candidate objects are perfectly correlated with the target object (i.e., a correlation coefficient of 1; e.g., all candidate objects in Fig. 7). While a perfect match between a candidate and a target object is certainly possible, these situations typically occur when one of the objects is so small that it can fit entirely into the other one. In such cases, the additional dynamical information provided by the PGF direction and the maximum translational velocity is essential in determining whether the two objects are physically related to one another.
4. Applications of the object-based algorithm
The intention of this section is to discuss the development of specific algorithm tools relevant for both research and operational forecasting applications. Special emphasis is placed on using the algorithm in conjunction with convection-allowing ensemble prediction systems.
a. Theoretical prediction of bores based on environmental profiles from NWP models
One application of the object-based algorithm is to use hydraulic and linear wave theories in order to determine whether a density current can trigger a bore and whether this bore will be maintained in the nighttime environment. Predicting the development and longevity of bores is needed due to their potential role in modifying convective instability and initiating deep convection (Carbone et al. 1990; Karyampudi et al. 1995; Koch and Clark 1999; Locatelli et al. 2002; Wilson and Roberts 2006). During the PECAN field campaign, such theoretical predictions were made by manually picking two environmental profiles on both sides of a numerically simulated density current (Haghi et al. 2015; Geerts et al. 2017). Despite the encouraging results from such a forecasting approach, its application is highly subjective and time consuming. In the context of ensemble prediction systems where information from multiple ensemble members is integrated to provide probabilistic bore forecasts, an automated objective method is much more desirable. As a result, the algorithm introduced in sections 2 and 3 is extended to objectively determine whether the nighttime environment can support convectively generated bores using environmental profiles from NWP models.
The method for extracting environmental profiles of meteorological variables on both sides of a density current will be referred to as a four-dimensional (4D) distance minimization and is schematically portrayed in Fig. 8a. The key variable in this method is the user-defined reference point (blue dot in Fig. 8a), which indicates where the theoretical bore analysis is to be performed. For the sake of explaining the minimization procedure, Fig. 8a considers a collection of three target objects identified by the algorithm for the first three algorithm time steps. Each target object is represented by the set of coordinate points comprising it at the corresponding time. For every target object, we then form the set of Euclidean distances between the reference point and each of its coordinate points. The objective of the 4D minimization procedure is to find the smallest of these distances across the three target objects considered in this example. This minimum distance is uniquely associated with a positional index within the nearest target object, and that positional index defines the so-called pivot point (light blue dot in Fig. 8a). In the schematic diagram from Fig. 8a, the pivot point is located in the middle parts of the nearest target object. The main purpose of the pivot point is to mark the center of the pivot line (PL) segment (red line in Fig. 8a), which intersects the density current needed for the theoretical bore analysis. Note that the orientation of the PL is parallel to the direction of the PGF (i.e., approximately in the direction of density current propagation).
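A minimal sketch of the 4D distance minimization, assuming the target objects are stored as per-time-step lists of coordinate points (the container layout and function name are our assumptions):

```python
import math

def four_d_minimization(ref_point, targets):
    """Return (min distance, time step, pivot point) over all targets.

    ref_point : (x, y) user-defined reference point
    targets   : dict mapping a time step to that step's object points
    """
    best = (math.inf, None, None)
    for t, points in targets.items():
        for p in points:
            d = math.dist(ref_point, p)  # Euclidean distance to ref point
            if d < best[0]:
                best = (d, t, p)
    return best
```

The returned pivot point then anchors the PL segment, which is oriented along the local PGF direction as described above.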
The 4D minimization procedure is typically applied throughout the entire time period for which the algorithm is run, as in the example from Fig. 8b. The time series of minimum distances for different time steps suggests that the objects from most of the ensemble members pass over the user-selected reference point ~4 h after the numerical model is initialized. It is worth pointing out that the time of passage is not necessarily identical between different ensemble members. The latter results from the inherent ensemble diversity with regard to the simulated density currents and highlights the ability of the 4D minimization framework to analyze those density currents in a dynamically consistent way (i.e., at the time of their passage over the reference point).
An example of a probabilistic bore prediction using theoretical considerations is shown in Fig. 9 and refers to a reference point with a latitude of 43.4° and a longitude of −97.7°. The two environmental profiles needed for the theoretical bore forecasts are extracted from the end points of the PL segment. The total length of the PL is relatively large (80 km) to ensure that the aforementioned profiles are taken sufficiently far away from the nonhydrostatic density current head. The first part of this analysis uses the two-layer hydraulic theory of Rottman and Simpson (1989) in order to determine the likelihood of bore development. According to this theory, the development of a bore depends upon the value of the Froude number and the nondimensional density current depth (see more details in Rottman and Simpson 1989). The analysis of the vertical profiles from the two end points of the PL allows us to calculate the flow regime for each of the ensemble members and plot their distribution in this two-parameter space. For instance, the ensemble forecast results shown in Fig. 9a suggest a partially blocked flow regime, in which an atmospheric bore is expected to form ahead of the analyzed density current. Moreover, the mode of the ensemble distribution reveals that the most likely bore strength corresponds to a weakly turbulent atmospheric bore.
The second part of the probabilistic bore prediction utilizes linear wave theory and estimates whether the environmental conditions are favorable for maintaining the convectively generated atmospheric bore. In particular, the algorithm calculates the ensemble distribution of the Scorer parameter (Scorer 1949) ahead of the density current (i.e., at the downstream end point of the PL). The example distribution from Fig. 9b shows a sharp decrease in the Scorer parameter from the surface to 1 km AGL, where it approaches 0 m−2. The latter condition indicates that waves originating in the subcritical region of the bore are likely to be trapped³ (i.e., to maintain the bore for a longer time). This example also highlights the enhanced variability in the ensemble forecasts of the Scorer parameter near the surface, which possibly results from the inherent difficulty of numerical models in describing the nocturnal boundary layer structure. The increased range of Scorer parameter values near the surface is likely to yield different wave trapping characteristics (Haghi et al. 2017).
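The Scorer parameter profile itself can be computed from the model sounding at the downstream PL end point. A hedged sketch using the standard form l² = N²/U² − U″/U with simple centered differences (a production version would need vertical smoothing and care near critical levels where U → 0):

```python
import numpy as np

def scorer_parameter(z, u, theta_v, g=9.81):
    """Scorer parameter profile, l^2 = N^2/U^2 - U''/U.

    z       : 1D array of heights (m)
    u       : 1D array of bore-relative wind speed (m/s)
    theta_v : 1D array of virtual potential temperature (K)
    Returns l^2 (m^-2) at each height.
    """
    # Brunt-Vaisala frequency squared from the theta-v lapse rate.
    n2 = (g / theta_v) * np.gradient(theta_v, z)
    # Curvature of the wind profile.
    u_zz = np.gradient(np.gradient(u, z), z)
    return n2 / u**2 - u_zz / u
```

For a constant wind and a linearly increasing θυ, the curvature term vanishes and l² reduces to N²/U², a convenient sanity check.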
b. Analysis of object attributes based on explicitly resolved convective outflow boundaries
The ability of convection-allowing NWP models to explicitly simulate convective outflow boundaries provides a unique opportunity for the object-based algorithm to extract dynamically relevant density current and bore attributes. One of them is the propagation speed, which has been routinely analyzed in previous work (e.g., Goff 1976; Koch et al. 1991). Although the object-based algorithm already utilizes motion vector estimates for the purposes of object tracking (see the discussion of the maximum translational velocity in section 3b), we propose an alternative method to estimate the propagation speed in the neighborhood of the user-defined reference point. Such a consideration takes into account the scarcity of bore observations and allows the algorithm to perform model verification only in regions where such data are readily available.
To calculate the propagation speed, we consider the target object containing the pivot point at time t (black color shading in Fig. 10a) and the associated target objects from the two neighboring time steps (t − 1 and t + 1; gray color shading in Fig. 10a). Prior to estimating the speed, the three target objects are skeletonized using the medial axis transform (MAT; Lakshmanan 2012). This transform is intended to increase the accuracy of the speed estimate owing to its property of simplifying the shape of the objects. Since the pivot point may no longer belong to the skeletonized object as a result of the MAT, we define a new pivot point that minimizes the distance to the original one. Then, using the direction of the PGF at the new pivot point and at its nine closest neighboring points, we define a total of 10 PL segments (colored lines in Fig. 10a), each having a length of 80 km. Each PL intersects the three skeletonized objects at three points. The propagation speed associated with the ith pivot line is then defined as the Euclidean distance between the intersection points at times t − 1 and t + 1 divided by twice the model time step.
Figure 10b shows the distribution of the propagation speeds from all quality-controlled ensemble members and neighboring points (32 × 10 = 320 points in total) in proximity to the reference point, which has a latitude of 41.7° and a longitude of −97.4° in this example. For this particular choice of reference point, the median and interquartile range values turn out to be 17.6 m s−1 and 2.1 m s−1, respectively. Similar to the theoretical bore analysis, the ensemble distribution of propagation speeds does not refer to a single model time, but rather corresponds to the time when a convective outflow object associated with a particular ensemble member passes over the reference point.
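Per pivot line, the speed estimate reduces to a centered difference in object position. A minimal sketch (function names and the pairing of intersection points are our assumptions), with a neighborhood summary over the 10 PLs:

```python
import math
from statistics import median

def pl_speed(p_before, p_after, dt):
    """Centered speed estimate for one pivot line: Euclidean distance
    between the PL's intersections with the t-1 and t+1 skeletons,
    divided by twice the model time step dt (s)."""
    return math.dist(p_before, p_after) / (2.0 * dt)

def neighborhood_speed(intersection_pairs, dt):
    """Median speed over all pivot lines in the neighborhood."""
    return median(pl_speed(a, b, dt) for a, b in intersection_pairs)
```

Collecting the ten per-PL estimates from every quality-controlled member yields the 320-point distribution summarized in Fig. 10b.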
There are at least two advantages of calculating the propagation speed via the aforementioned neighborhood method. On one hand, this approach takes into account the inherent uncertainty of the speed estimate (indicated by the coloring of the PLs in Fig. 10a), which can arise both from the differential propagation of the tracked objects and from the algorithm’s inability to calculate the exact location of the three intersection points used in the speed calculation. The box-and-whisker diagram inset in Fig. 10a reveals that the velocity in the neighborhood of the pivot point can vary by more than 2 m s−1, making single-valued speed estimates unrepresentative of the object’s propagation speed. On the other hand, the neighborhood approach also leads to a significant increase in the sample size of the ensemble (Fig. 10b) and, as a result, improves the overall representation of ensemble uncertainty with respect to the object’s propagation speed. In particular, sensitivity experiments indicated that the ensemble distribution of propagation speeds becomes more Gaussian after including the additional speed estimates from the neighboring points (not shown).
The second explicit analysis tool concerns the evaluation of bore amplitude and lifting in the vicinity of the reference point; that is,

A = h1/h0, (8)

Δh = h1 − h0, (9)

where h0 and h1 denote the height of the stable boundary layer (SBL) prior to and after the passage of the bore. Calculating h0 and h1 rests on the assumption that the virtual potential temperature is conserved when air parcels undergo bore lifting. Note that the accuracy of this approach will decrease in the presence of convectively induced latent heating, whereby virtual potential temperature surfaces can no longer be treated as material surfaces.
The procedure for estimating the pre- and postbore SBL heights starts with constructing a cross section over the central PL segment from Fig. 10a. The length of the resulting cross section (Fig. 11a) is the same as in the theoretical bore analysis from section 4a (80 km) and restricts the algorithm to sample only the undisturbed environment away from the bore front. Here, we utilize one of the theoretical bore analysis routines in order to obtain a first estimate of the inversion height upstream of the approaching bore (gray diamond shape in Fig. 11a). More specifically, this prebore height is defined as the depth over which the environmental lapse rate exceeds the moist adiabatic lapse rate [see Haghi et al. (2017) for more details]. The height of the stable layer downstream of the bore (gray diamond shape) is then defined as the maximum height of the virtual potential temperature surface originating at the prebore inversion height (gray dotted line).
It is worth remarking that the definition of the postbore SBL height in this object-based algorithm differs from previous studies, wherein it is typically calculated as the average stable layer height in the wake of the bore. The rationale behind our definition is based on qualitatively analyzing the diverse bore structures in our ensemble simulations. Specifically, examination of certain ensemble members revealed that bores may either (i) not show a semipermanent increase in the SBL height or (ii) be characterized by a rather shallow slope owing to more pronounced dispersion effects. The atypical bore characteristics in such cases meant that it is not appropriate to artificially divide a bore into subcritical and supercritical regions and then estimate the average postbore height in the subcritical region. By letting the postbore SBL height be the maximum height of the tracked virtual potential temperature surface, the algorithm is able to better capture the intrinsic bore diversity and provide more consistent ensemble results. With these considerations in mind, our first estimates of the effective bore amplitude and lifting near the reference point follow directly from these pre- and postbore SBL heights.
Analogous to the estimation of the propagation speed, we also give a first-order account of the accuracy of the retrieved SBL height, which is a traditionally challenging variable to estimate (Vickers and Mahrt 2004). In particular, model results have indicated that neighboring virtual potential temperature surfaces can differ significantly with respect to their postbore displacement heights. To alleviate this sensitivity, the deterministic value of the prebore virtual potential temperature is perturbed by a prescribed amount (in K) to create a broad range of stable boundary layer heights (dashed gray lines in Fig. 11a) that accounts for errors in the retrieval technique as well as the underlying ambiguity in defining the SBL height. Within this envelope of possible SBL heights, the algorithm searches for the surface with the highest vertical displacement (green dotted line in Fig. 11a), which is then used to obtain optimal estimates of the pre- and postbore SBL heights (green diamond shapes). In particular, these optimal heights are defined as the minimum and maximum of the aforementioned surface and are then used to redefine the bore amplitude and lifting from Eqs. (8) and (9).
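The steps above can be condensed into a short sketch: track each perturbed θυ surface across the cross section, pick the surface with the largest vertical displacement, and derive amplitude and lifting from its minimum and maximum heights. The function name and the container layout are our assumptions:

```python
def bore_amplitude_and_lifting(candidate_surfaces):
    """candidate_surfaces: list of height sequences (m), one per perturbed
    theta-v surface within the envelope of possible SBL heights.

    Selects the surface with the largest vertical displacement, then
    returns (amplitude = h1/h0, lifting = h1 - h0), with h0 and h1 taken
    as the minimum and maximum heights of that surface.
    """
    best = max(candidate_surfaces, key=lambda s: max(s) - min(s))
    h0, h1 = min(best), max(best)
    return h1 / h0, h1 - h0
```

Applying this routine to every quality-controlled member yields the kind of amplitude distribution shown in Fig. 11b.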
Repeating the outlined procedure for the entire ensemble of 36 quality-controlled members provides the ensemble distribution of bore amplitudes in Fig. 11b. For this specific choice of reference point, the bore amplitude varies between 1.5 and 2.7 and has a median value of 2.1. The interquartile ensemble range implies that ~1/4 of the ensemble members predict the development of a bore with laminar characteristics, while the remaining ~3/4 of them anticipate a weakly turbulent bore. Further analysis also revealed a positive correlation between the bore amplitude and the propagation speed for the ensemble forecasts used in this study (not shown). The latter is consistent with the two-layer hydraulic theory presented by Rottman and Simpson (1989) [refer to their Eqs. (2.4) and (3.1)] and suggests that the explicit bore routines provide physically sound results.
c. Object-based probabilities of explicitly resolved convective outflow boundaries
Apart from its ability to objectively analyze the characteristics of explicitly resolved convective outflow boundaries, the algorithm presented herein can also provide probabilistic information regarding their representation in convection-allowing ensemble prediction systems. Given that each member in the ensemble forecast is associated with two distinct binary fields corresponding to the location of the simulated density currents and bores, object-based probabilities can be generated by simply calculating the relative frequency of the aforementioned binary fields over different model grid points. An example application of the outlined procedure is shown in Fig. 12 and illustrates how the object-based probabilities that are linked to the largest convective outflow object identified close to the model initialization time (0315 UTC; refer to Figs. 12a,c) evolve after more than 3 h of model integration (Figs. 12b,d). Note that the object-based probabilities evaluated with respect to the other target objects are not shown in Fig. 12 for clarity.
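Computing these probabilities is a straightforward relative frequency over the ensemble. A minimal sketch, assuming each member supplies a binary object mask on the common model grid (the function name and array layout are ours):

```python
import numpy as np

def object_probabilities(member_masks):
    """member_masks: (n_members, ny, nx) array-like of 0/1 fields marking
    the tracked density current or bore objects in each ensemble member.

    Returns the gridpoint relative frequency, i.e., the object-based
    probability field."""
    return np.asarray(member_masks, dtype=float).mean(axis=0)
```

Running this separately on the density current masks and on the bore masks produces the two probability fields contrasted in Figs. 12a–d.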
The sequence of images in Fig. 12 suggests that the largest convective outflow boundary initialized in our numerical simulations is expected to move east-southeast and that the main lifting mechanism associated with this particular boundary is very likely to change from a density current to a bore in the later forecast hours. Although a similar evolution is also evident in the deterministic forecast from Fig. 3, the algorithm-derived probabilities in this example contain additional information that could be potentially useful to operational forecasters. For instance, Figs. 12b and 12d reveal that the bore (density current) probabilities in the southern portion of the tracked outflow boundary are higher (lower), compared to its northern parts. Given a hypothetical scenario in which the ensemble members predict the development of convection in the vicinity of the highest bore probabilities, operational forecasters would have an additional physical insight about the forcing mechanism behind the model-simulated convection. If the radar and surface observations collected later on do (or do not) confirm the presence of a bore, these operational forecasters will have strong evidence that the NWP products used as part of their analysis are accurate (inaccurate) and would be able to adjust their forecasts accordingly. This idealized example suggests that the ability of the object-based algorithm to discriminate between density currents and bores could serve as useful guidance for the operational forecasting of nocturnal convection.
It is important to add that the object-based probabilities calculated with respect to the initial target objects are not the only means of utilizing the ensemble information contained in convection-allowing NWP models. In particular, density current and bore probabilities can be produced by independently considering all objects identified at a given model time step. However, since the latter approach does not make use of the algorithm’s tracker, operational forecasters would be unable to follow the evolution of convective outflow boundaries that are of particular interest to them. By contrast, the object-based probabilities in the example from Fig. 12 not only add a temporal component to the algorithm’s output, but are also quite relevant in the context of short-range and rapidly updating NWP systems (e.g., the HRRR; Smith et al. 2008), wherein convective outflow boundaries are commonly present in the model’s initial conditions.
5. Conclusions and discussion
This study describes the development of an object-based algorithm for the automatic identification and tracking of convectively generated outflow boundaries. While object-based techniques to analyze density currents have been developed previously (Li et al. 2014; Drager and van den Heever 2017), a unique aspect of this algorithm is its ability to simultaneously account for both density currents and atmospheric bores. The detection of these morphologically different convective outflow boundaries is possible only after combining several model fields. Although the use of multiple identification variables is not a new concept (e.g., Clark et al. 2015), the specific parameter choice made in this algorithm is fundamentally different from past studies and is designed to target only those convective outflow boundaries that provide sufficient lifting for the initiation and maintenance of nocturnal MCSs. It is important to note that some of the identification variables are challenging to use without additional preprocessing steps. In particular, the spatial inhomogeneity of the vertical velocity field and the contamination of the mean sea level pressure tendencies with spurious numerical noise require the application of carefully tuned image processing/filtering techniques in order to obtain physically meaningful results.
Another novel feature of the proposed algorithm is its object tracker. Following the suggestion of Davis et al. (2009b), the object tracker is formulated to explicitly account for the dynamics of convective outflow boundaries. The inclusion of the pressure gradient force direction and the maximum translational velocity as dynamical constraints for object association is especially important if objects experience rapid structural and/or morphological changes. In these cases, traditional statistical methods such as template matching are likely to provide erroneous tracking results. Likewise, the incorporation of dynamical information provides a better link between the algorithm and the underlying model data. For instance, it should be possible to relate differences in the speed or direction of object propagation back to the pressure gradient force used in the formulation of the algorithm.
To address the growing interest in atmospheric bores among researchers and operational forecasters (e.g., during the PECAN field campaign; Geerts et al. 2017), the second objective of the paper was to describe several algorithm applications relevant to the analysis and prediction of bores in NWP models. These applications were shown in the context of a convection-allowing ensemble forecast experiment from 6 July 2015. To the authors’ best knowledge, this is the first time a dedicated methodology for describing the ensemble distribution of specific bore parameters is presented. Such ensemble estimates are expected to provide valuable information for assessing the predictability of the studied phenomenon.
The four-dimensional minimization procedure, which lies at the heart of the suggested algorithm tools, allows algorithm users to automatically sample convective outflow boundaries and perform various diagnostics pertinent to their own research objectives. The specification of user-defined verification (reference) points through the aforementioned minimization procedure obviates the need for a spatially continuous verification dataset, which is required in other object-based verification methods (e.g., MODE; Davis et al. 2006a). The postprocessing tools developed as part of this object-based algorithm represent a natural extension of previous PECAN efforts to forecast the development and maintenance of bores based on manual input from model-derived environmental profiles (Haghi et al. 2015; Geerts et al. 2017). Apart from completely automating the routines needed for the theoretical bore analysis, the ability of the algorithm to determine the likelihood of bore occurrence could prove especially beneficial to operational forecasters by guiding them where the initiation of nocturnal convection is more likely.
Despite the encouraging performance of the newly developed object-based algorithm, it is also important to point out some of its limitations. The algorithm weaknesses are largely associated with its identification component. Most evidently, the static threshold values used to define convective outflow boundaries in the model domain are global in nature and only valid over relatively short periods of time. Within this study, the aforementioned problem is addressed by introducing two different sets of threshold values dependent upon the forecast lead time and the magnitude of the spurious numerical noise. One way to improve the identification capabilities of the algorithm is to use adaptive thresholding methods (e.g., through utilizing image histograms; Tobias and Seara 2002). Alternatively, introducing more sophisticated techniques such as marker-controlled watershed segmentation (Soille 2003) could completely remove the need for prescribing threshold values. Past experience related to the development of methods for detecting and tracking convective outflow boundaries in radar data can also be helpful in bolstering the identification results of our object-based algorithm. For instance, Delanoy and Troxel (1993) show how tracking gust fronts and anticipating their location in future radar scans improves the overall identification results. Another potentially useful concept incorporated in the aforementioned study is the application of functional template correlation (FTC) filters (Delanoy et al. 1992). In particular, this type of matched filter can be designed to describe the topological characteristics of model-simulated convective outflow boundaries and, hence, reduce the number of spuriously identified objects.
Finally, it is hoped that this paper will serve as a foundation for future studies aiming at advancing the algorithm’s capabilities as well as developing other relevant algorithm applications. A particularly interesting line of research would be to determine whether the classification of convective outflow boundaries can be extended to more complex features such as solitons and deep tropospheric gravity waves. Future work should also take advantage of the algorithm’s capability to objectively analyze some of the dynamical aspects of convective outflow boundaries. In line with the observational study of Toms et al. (2017), the algorithm could offer a convenient framework to examine the temporal evolution of the wave trapping characteristics following a model-simulated bore. Such an analysis could lend important insights into the two-way interactions between atmospheric bores and the ambient environment in which they develop and are maintained. Moreover, a data assimilation study utilizing the proposed algorithm to objectively evaluate the forecast impact of assimilating novel PECAN observations is already being carried out by the authors of this work and will be reported in an upcoming paper.
The work is primarily supported by NSF Award AGS-1359703. Computing resources were provided by the Yellowstone (ark:/85065/d7wd3xhc) and Cheyenne (https://doi.org/10.5065/D6RX99HX) machines at NCAR’s Computational and Information Systems Laboratory, sponsored by the NSF. The authors of this paper would like to thank Kevin Haghi for providing code routines for the theoretical bore calculations. Special acknowledgements go to Aaron Johnson and Samuel Degelia from the University of Oklahoma, who regularly participated in discussions regarding the object-based algorithm and provided valuable feedback.
Supplemental information related to this paper is available at the Journals Online website: https://doi.org/10.1175/MWR-D-18-0116.s1.
This article is included in the Plains Elevated Convection At Night (PECAN) Special Collection.
A candidate object is an object to be associated with the target object.
Note that the center of the kernel is taken to be the closest point to where .
If k is a horizontal wavenumber and l² denotes the Scorer parameter, then the square of the vertical wavenumber, m² = l² − k², becomes negative for l² approaching 0 m−2 and finite values of k. According to linear wave theory, if l² transitions from positive to negative values, wave trapping will occur in the layer characterized by positive l² values (as in the example from Fig. 9b).