The 2020 Sand Point Earthquake: Initial Conditions


Sand Point Tsunami

The factors that led to the Sand Point tsunami are not well known. What is known for sure is that, of the three-event sequence of Simeonof, Sand Point, and Chignik, Sand Point produced the largest tsunami, in spite of its strike-slip focal mechanism. You would have expected either Simeonof or Chignik to produce the larger tsunami, but they didn’t. They produced tsunamis that were fairly weak, as far as tsunamis go for the region. So what gives? To paraphrase Richard Feynman, we have to account for all the energy in the system, and somehow the Sand Point earthquake had enough energy to produce a tsunami of a whopping 0.71 m (2.5 ft). How do we account for all the energy?

Normally, a seismologist would start by trying to invert seismic and geodetic data. This is akin to taking everything we know from seismometers and GNSS stations and hoping that something sticks. But there is a critical issue here: how do you know where to invert? What assumptions do you make about the data to ensure a nicely fitting inversion? The seismic data points ever so plainly to a right-lateral strike-slip pattern, so it would seem safe to assume that our inversion must use a strike-slip geometry. Anyone who starts there will find themselves beating their head against the wall because of the complex geometry of the earthquake. So some other geometry must be present, either instead of or in addition to strike-slip. But how would you know what it is, or what its extent even is?

Short answer: seismic and GNSS data alone would not tell you. You would need something else. We are in the business of accounting for energy, so what would give us the better audit?

Water-level Data

Water-level data is the thing most likely to give you the most faithful accounting of the tsunami’s energy. It does not ask for many assumptions. In fact, all you really need is more than three stations (tide gauges or DART buoys) and the first arrivals of the tsunami at those locations. From these, you can trace back to a potential source area and run an inversion on the water-level data alone. What that gives you are the initial conditions needed to generate the tsunami, which here would be the initial sea-surface displacement. Before I get too far ahead of myself, there are caveats to using water-level data. If the sampling rate of the data is too slow, you might miss important information in the first arrival times, or worse, you would start ingesting non-linear arrival information that corrupts your source area. You must be precise. Some tide gauge records become contaminated quickly, while others do not. As a result, at some locations you may only be able to ingest 1/2 to 3/4 of a wavelength, while at others you may be able to use up to two full wavelengths. Picking these arrival times is a fine art. Too little of an arrival can lead to confusing and unconstrained initial conditions. Too much, and you have overfit an area, giving a false impression that it matters more than it does. It is easy to trick oneself at this step.
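To make that windowing step concrete, here is a minimal Python sketch of trimming a de-tided gauge record to a chosen fraction of a wavelength past the first arrival. The detection threshold, wavelength fraction, and synthetic record are illustrative assumptions, not values from the actual analysis.

```python
# A minimal sketch of windowing a de-tided tide-gauge record to its first
# tsunami arrival. Threshold and wavelength fraction are illustrative.
import numpy as np

def window_first_arrival(t, eta, threshold=0.05, n_wavelengths=0.75):
    """Return the segment of a de-tided record (t in s, eta in m) from the
    first arrival through a chosen fraction of the dominant wavelength."""
    # First sample whose amplitude exceeds the detection threshold.
    idx = np.argmax(np.abs(eta) > threshold)
    if not np.abs(eta[idx]) > threshold:
        raise ValueError("no arrival exceeds the threshold")
    # Estimate the dominant period from the strongest spectral peak.
    dt = t[1] - t[0]
    spec = np.abs(np.fft.rfft(eta))
    freqs = np.fft.rfftfreq(len(eta), d=dt)
    dominant_period = 1.0 / freqs[np.argmax(spec[1:]) + 1]  # skip DC term
    # Keep only the requested number of wavelengths past the first arrival.
    n_keep = int(n_wavelengths * dominant_period / dt)
    return t[idx:idx + n_keep], eta[idx:idx + n_keep]

# Synthetic example: a 20-minute-period wave arriving 30 minutes in.
t = np.arange(0, 4 * 3600, 60.0)  # one sample per minute
eta = np.where(t > 1800, 0.3 * np.sin(2 * np.pi * (t - 1800) / 1200), 0.0)
t_win, eta_win = window_first_arrival(t, eta, n_wavelengths=0.75)
```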

As in meteorology and the atmospheric sciences, poorly constrained initial conditions can lead to immensely different results for the tsunami. In one case you might have 10 m of initial sea-surface displacement in a very concentrated area; in another, 0.5 m over an expansive region. These results would tell you wildly different stories, which is the chief concern when doing anything modeling-related. A model can do anything you want it to do if you don’t impose many restrictions. And with water-level data, there are not many restrictions, which makes the endeavor incredibly ripe for misinterpretation. How does one overcome this?
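To see why those two pictures are hard to tell apart, consider the standard expression for the potential energy of an initial sea-surface displacement, E = (1/2) ρ g ∫ η² dA. A quick back-of-the-envelope check (the two source geometries below are hypothetical, chosen only for illustration) shows that both scenarios can carry energy of the same order of magnitude:

```python
# Back-of-the-envelope potential energy of an initial displacement,
# E = (1/2) * rho * g * integral(eta^2 dA), for two hypothetical sources.
rho, g = 1025.0, 9.81  # seawater density (kg/m^3), gravity (m/s^2)

def potential_energy(eta, area_m2):
    """Energy (J) of a uniform displacement eta (m) over area_m2 (m^2)."""
    return 0.5 * rho * g * eta**2 * area_m2

E_concentrated = potential_energy(10.0, 10e3 * 10e3)   # 10 m over 10 x 10 km
E_expansive    = potential_energy(0.5, 100e3 * 200e3)  # 0.5 m over 100 x 200 km
print(f"{E_concentrated:.2e} J vs {E_expansive:.2e} J")  # same order of magnitude
```

Two very different source geometries, nearly the same energy budget; the waveforms themselves have to do the disambiguating.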

Choosing a Source Region

In this case, it was rather simple. Start with the area immediately around the strike-slip geometry the USGS pointed towards with their finite fault model. If the initial sea-surface displacement looks like that of a strike-slip earthquake, then next-step inversions can use that area as a base to start from. If it had been that simple, I would have had a paper out about the earthquake in late 2020/early 2021. But it wasn’t. For better or for worse, the initial sea-surface displacements were not indicative of strike-slip deformation. They were indicative of megathrust deformation, and suddenly I had to start over on how to pick a source region. After some time to think, the safest choice was to draw a rectangular box from the trench that incorporated the GNSS site AC12, where initial sea-surface displacements could be checked against real data, and to extend the box to the southwest and northeast, always following the trench. In the end, this source region turned out to be too small in the southwest direction, as most of the probably-real displacements were bunched up in the leftmost quadrant of the box, which led to it being extended further southwest. The end result is shown in the following figure.

Figure 1. Rough outline of the final source area used in Santellanes et al. (2021, in review).
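For the geometry bookkeeping, here is a minimal sketch of the kind of trench-following box test this involves. The strike angle, box dimensions, and the local coordinates used for AC12 are all placeholders, not values from the study.

```python
# A minimal sketch of a trench-following rectangular source box and a test
# of whether a GNSS site falls inside it. All numbers are placeholders.
import numpy as np

def in_box(point, origin, strike_deg, length_m, width_m):
    """True if `point` (x, y in m) lies in a rectangle whose long axis
    points along `strike_deg` (degrees clockwise from north) from `origin`."""
    s = np.deg2rad(strike_deg)
    along = np.array([np.sin(s), np.cos(s)])    # unit vector along strike
    across = np.array([np.cos(s), -np.sin(s)])  # unit vector perpendicular to strike
    d = np.asarray(point) - np.asarray(origin)
    u, v = d @ along, d @ across
    return 0.0 <= u <= length_m and 0.0 <= v <= width_m

# Hypothetical local coordinates (m) for AC12, relative to the box origin.
ac12 = (50e3, 30e3)
print(in_box(ac12, origin=(0.0, 0.0), strike_deg=45.0,
             length_m=250e3, width_m=120e3))  # -> True
```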

Gaussian Source Inversions

All of these inversions are based on the ingenious method described by Tsushima et al. (2009). In that and subsequent papers, Gaussian sources are used to calculate the Green’s function responses at the tide gauges and DART buoys used in the inversion. The method is powerful because it ties displacements of the sea-surface to the water displaced overhead, allowing tide gauges and DARTs to estimate the initial conditions and source region of the tsunami, and allowing those estimates to be compared against results expected of earthquake sources. This effectively allows a tsunami to be re-analyzed, since the method put forward by Tsushima et al. (2009) was designed for tsunami forecasting. The method also allows for constraints by way of GNSS stations: if AC12 showed -10 cm of deformation, we can put that in the inversion to be on the lookout for it and produce results that are hopefully consistent with it. At the end of the day, the method they put forward is the cornerstone for obtaining the sea-surface deformations.

Figure 2. Illustration and notes on methods from Tsushima et al. (2009).
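Here is a schematic sketch of the least-squares core of such an inversion, under heavy assumptions: the Green’s function matrix is random stand-in data (in reality each column is a simulated waveform at the gauges for one unit Gaussian source), and the damping value is arbitrary. It is meant only to show the shape of the problem, not the actual implementation in Tsushima et al. (2009).

```python
# Schematic Tsushima-style inversion: unit Gaussian sea-surface sources,
# precomputed Green's functions at each gauge, and a damped least-squares
# solve for source amplitudes. G here is random stand-in data.
import numpy as np

def gaussian_source(x, y, x0, y0, sigma):
    """Unit-amplitude Gaussian sea-surface displacement centered at (x0, y0).
    Each inverted amplitude scales one of these spatial basis functions."""
    return np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2))

rng = np.random.default_rng(0)
n_sources, n_obs = 50, 400               # Gaussian bases, waveform samples
G = rng.normal(size=(n_obs, n_sources))  # stand-in Green's functions
m_true = np.zeros(n_sources)
m_true[20:25] = 1.0                      # a compact patch of uplift
d = G @ m_true + 0.05 * rng.normal(size=n_obs)  # "observed" waveforms

# Damped least squares: minimize ||G m - d||^2 + alpha^2 ||m||^2.
alpha = 0.1
A = np.vstack([G, alpha * np.eye(n_sources)])
b = np.concatenate([d, np.zeros(n_sources)])
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)

# A GNSS constraint (e.g. subsidence at AC12) can be appended as extra
# rows of G and d, weighted to reflect its uncertainty.
```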

Ending Thoughts for Now

This is all fine and dandy, but it hardly makes for good science on its own. I have not yet explicitly explained why this earthquake and tsunami are so important to study. A seismologist or tsunami scientist would be able to see why this earthquake is so weird, but not someone who doesn’t do this for a living. Which raises the question: why should you care about this? A tsunami happened. It was fairly big, but it caused no major damage. At the end of the day, no one died. Case closed. Let’s move on to the next big thing. As an overview of the paper, this blog post enlightens people, but missing from the explanation is the why-you-should-care. That’s usually something reserved for a talk or seminar, which can often lead to papers not properly elucidating the hazard behind what they are studying. But I feel it deserves its own standalone blog post. That seems like a tease, which it undoubtedly is, but time is finite. Also, part of why YOU should care lies in the way warnings and watches are put out for tsunamis, which requires separate teaching.

References

  1. Melgar, D., & Bock, Y. (2013). Near-field tsunami models with rapid earthquake source inversions from land- and ocean-based observations: The potential for forecast and warning. Journal of Geophysical Research: Solid Earth, 118(11), 5939–5955. https://doi.org/10.1002/2013JB010506

  2. Santellanes, S. R., Melgar, D., Crowell, B. W., & Lin, J. T. (2021). Potential megathrust co-seismic slip during the 2020 Sand Point, Alaska strike-slip earthquake. Pre-print on ESSOAr

  3. Tsushima, H., Hino, R., Fujimoto, H., Tanioka, Y., & Imamura, F. (2009). Near-field tsunami forecasting from cabled ocean bottom pressure data. Journal of Geophysical Research: Solid Earth, 114(6), 1–20. https://doi.org/10.1029/2008JB005988