Speaker I: Andrew Allyn
Title: Can we predict species distributions? Understanding the relationship between species distribution model prediction skill and novel environmental conditions
Abstract: Correlative species distribution models (SDMs) are commonly used to predict species occurrence patterns under future environmental conditions, offering essential information for meeting the challenges arising from climate-driven species distribution shifts. Although widely applied, we still have much to learn about the predictive skill of these models, especially how well they can predict under the novel conditions expected with continued global climate change. We investigated this question using a simulation study and assessed whether the relationship between SDM prediction skill and environmental novelty was consistent across two different large marine ecosystems, the California Current and the Northeast U.S. Shelf, and two contrasting species archetypes, a resident-mobile and a seasonally-migrating species. Both marine ecosystems experienced novel conditions during the prediction testing period; however, the degree of novelty was considerably greater for the Northeast U.S. Shelf. Although we anticipated consistent declines in prediction skill as environmental novelty increased, this relationship was complicated by the species' underlying movement characteristics, as prediction skill remained stable, and even increased, under novel conditions in certain situations. This work adds to our theoretical understanding of SDM prediction skill and provides guidance for distribution forecast and projection efforts, highlighting how underlying system dynamics and species characteristics may interact to create unexpected patterns in model prediction skill.
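A minimal, hypothetical sketch of the general idea described above (not the study's actual simulation framework): fit a simple correlative SDM on a training environmental envelope, then evaluate how prediction skill (AUC) changes as test-period conditions become increasingly novel. The niche response, data, parameter values, and the simple novelty metric are all assumptions for illustration.

```python
# Hypothetical illustration of SDM prediction skill vs. environmental novelty.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def simulate_occurrence(temp):
    """Assumed 'true' thermal-niche response plus sampling noise."""
    p = 1.0 / (1.0 + np.exp((temp - 12.0) ** 2 / 8.0 - 1.5))
    return (rng.random(temp.size) < p).astype(int)

# Training period: conditions the model has "seen"
train_temp = rng.normal(10.0, 2.0, 2000)
sdm = LogisticRegression(max_iter=1000).fit(
    np.column_stack([train_temp, train_temp ** 2]),
    simulate_occurrence(train_temp))

# Test periods with growing novelty (mean warming pushes conditions outside
# the training envelope); novelty here is just distance from the training mean
for shift in (0.0, 1.0, 2.0, 4.0):
    test_temp = rng.normal(10.0 + shift, 2.0, 2000)
    y_true = simulate_occurrence(test_temp)
    y_hat = sdm.predict_proba(np.column_stack([test_temp, test_temp ** 2]))[:, 1]
    novelty = abs(test_temp.mean() - train_temp.mean()) / train_temp.std()
    print(f"novelty={novelty:.2f}  AUC={roc_auc_score(y_true, y_hat):.3f}")
```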
Speaker II: Mark Borrelli
Title: Seafloor Mapping in Extreme Shallow Waters: Platforms, Instruments, Data Needs and Utility
Abstract: Coastal environments are among the most dynamic ecosystems on earth. Collecting data in shallow waters is hazardous, costly, and time-consuming. Yet these data are critical to our understanding of the physical processes at work and their implications for ecosystem evolution at multiple temporal and spatial scales. Here we present an overview of the seafloor mapping program within the Coastal Processes and Ecosystem Lab at UMass Boston. Recent and ongoing projects and data regarding coastal sediment transport, submerged aquatic vegetation, benthic ecology, marine debris, and other anthropogenic alterations will be discussed.
Speaker III: David Coe
Title: Investigating Climate Connections: Using Regional Daily Weather Types and AI to Understand Variability in Timing of the Seasons in the Northeast U.S.
Abstract: Weather Type (WT) analysis identifies a region’s characteristic weather patterns, which are useful for investigating underlying trends and variability in regional weather. Characteristic Weather Types for the spring season, Mar – May, and fall season, Sep – Nov, in the Northeast U.S. are identified using k-means clustering of ERA5 500-hPa height, MSLP, and 850-hPa u and v winds. The resulting WTs are analyzed for their seasonal, monthly, and daily frequency, their seasonal evolution, and their relation to extreme temperature and precipitation events. The WTs are also used in conjunction with deep learning techniques, such as Convolutional Neural Networks, to train an AI model to identify the observed daily WTs based on their 500-hPa height fields. Once trained, the AI model can classify new data as one of the observed WTs. Using this model, we match winter dates to the fall and spring WTs to better understand changes in the seasonal transition to and from the cold season. For the fall season, a preliminary trend analysis indicates an increase in early-season WTs later in the season and a decrease in late-season WTs earlier in the season; that is, a shift toward a longer period of warm season patterns and a shorter, delayed period of cold season patterns. Using daily frequencies of the early- and late-season WTs, the middle of spring is identified, and changes in its timing are assessed by splitting the time series into two equal halves. The middle of the spring season is found to come 12-15 days earlier, significant at the 95% level, and to last 2-4 days longer. Combined with more late-season WTs occurring in April, this shows a shift toward warmer, more summer-like circulation patterns occurring earlier in the season. We intend to further investigate the changing length of the winter season by applying an AI model to classify winter season data as one of our observed clusters. This model will also be used to classify observed WTs in CMIP6 climate model data, with statistical analysis performed to understand the models’ ability to capture daily weather patterns.
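A minimal sketch of the two-step approach outlined above, using synthetic stand-in fields rather than ERA5 data: derive WTs by k-means clustering of standardized daily fields, then train a small CNN to assign new days (e.g., winter dates) to one of the observed WTs. The grid size, number of clusters, and network architecture are assumptions for illustration, not the study's configuration.

```python
# Sketch: k-means Weather Types + CNN classifier on synthetic daily fields.
import numpy as np
from sklearn.cluster import KMeans
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
n_days, ny, nx = 1000, 32, 48                            # synthetic grid, not ERA5
z500 = rng.normal(size=(n_days, ny, nx)).astype(np.float32)  # 500-hPa height anomalies

# 1) Weather Types: k-means on flattened, standardized daily fields
X = z500.reshape(n_days, -1)
X = (X - X.mean(axis=0)) / X.std(axis=0)
k = 8                                                    # number of WTs is an assumption
wt_labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

# 2) CNN: learn to map a daily 500-hPa field to its WT label
class WTNet(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * (ny // 4) * (nx // 4), n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = WTNet(k)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
inputs = torch.from_numpy(z500).unsqueeze(1)             # (days, 1, ny, nx)
targets = torch.from_numpy(wt_labels).long()

for epoch in range(5):                                   # short demo training loop
    opt.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    opt.step()

# 3) Classify new (e.g., winter) days as one of the observed WTs
new_days = torch.from_numpy(
    rng.normal(size=(10, ny, nx)).astype(np.float32)).unsqueeze(1)
predicted_wt = model(new_days).argmax(dim=1)
print(predicted_wt.tolist())
```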
Speaker IV: Debra Duarte
Title: Review of Methodologies for Detecting an Observer Effect in Commercial Fisheries Data
Abstract: Observers are deployed on commercial fishing trips to collect representative samples of discard behavior. However, some fishermen change their fishing habits when an observer is onboard. If the extent of this “observer effect” is substantial, the observed data will not be representative of unobserved trips, potentially biasing the estimation of discards. This can impact catch monitoring, stock assessments, and fishery management. Further, the increased variance in discard estimation can lead to higher observer coverage requirements to achieve precision targets. The purpose of this study was to examine the power and error rate of several published methods for detecting an observer effect using trip metrics such as landings and trip duration.
The simplest methods (t-test and F-test for differences in means and variances) were unable to reliably detect bias of less than 30% and could not distinguish between an observer effect and a deployment effect (non-random allocation of observer coverage within a stratum). A generalized linear mixed effect model (GLMM) was also not reliable at detecting low levels of bias, but it was not confounded by deployment effects and was relatively robust to changing coverage rates, except at the lowest coverage levels (e.g., 5%). The most complicated tests involved comparing differences between subsequent trips for observed-unobserved and unobserved-unobserved pairs. These were able to detect smaller observer effects (15-20%) and were not confounded by deployment effects, but they were least reliable at the highest coverage rates (>60%), producing both high false positive and false negative rates. Sensitivity tests also showed differing detection accuracy as the distribution of the metric of interest changed. Thus, the optimal test for detecting an observer effect will depend on the metric of interest, the coverage rate, and whether a deployment effect exists. Results should be considered carefully when declaring that an observer effect is or is not occurring.
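A minimal sketch, on simulated data rather than the study's fisheries data, of how the power of one of the simplest tests (a two-sample t-test) to detect an observer effect of a given size can be estimated; the trip metric, distribution, sample sizes, and bias values are assumptions for illustration.

```python
# Power simulation for detecting a simulated "observer effect" with a t-test.
import numpy as np
from scipy import stats

def detection_power(bias=0.20, coverage=0.30, n_trips=500,
                    n_sims=2000, alpha=0.05, seed=0):
    """Fraction of simulations in which a two-sample t-test flags the bias."""
    rng = np.random.default_rng(seed)
    detections = 0
    for _ in range(n_sims):
        observed = rng.random(n_trips) < coverage            # trips carrying an observer
        landings = rng.lognormal(mean=3.0, sigma=0.5, size=n_trips)
        landings[observed] *= (1.0 + bias)                   # observer effect on observed trips
        _, p = stats.ttest_ind(landings[observed], landings[~observed],
                               equal_var=False)
        detections += p < alpha
    return detections / n_sims

# Example: power falls off as the observer effect shrinks toward small biases
for b in (0.30, 0.20, 0.10):
    print(f"bias={b:.0%}: power ~ {detection_power(bias=b):.2f}")
```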