The initial grid for FVCOM-GoM/GB covers the entire GoM/GB region and is enclosed by an open boundary running from the New Jersey shelf to the Scotian Shelf. At present, three generations of FVCOM-GoM/GB are available to meet different applications.
The first generation of FVCOM-GoM/GB is designed primarily for process-oriented studies that require fast computation. In this version, the geometry of the inner bays and estuaries is smoothed to fit the selected horizontal resolution. The horizontal resolution varies from 1 km along the coast and on GB to 10-15 km in the GoM interior and the open-ocean areas close to the open boundary. The model retains realistic bathymetry in the GoM, but in the open boundary area off the shelf break, bathymetry greater than 300 m is set to a constant depth of 300 m.
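The constant-depth treatment off the shelf break amounts to a simple clip of the bathymetry field. A minimal sketch in Python (function and variable names are illustrative, not part of the FVCOM code):

```python
import numpy as np

def clip_open_boundary_depth(depth, max_depth=300.0):
    """Cap node depths (m, positive down) at max_depth.

    Mirrors the first-generation treatment: any depth greater than
    300 m near the open boundary is reset to a constant 300 m.
    """
    return np.minimum(depth, max_depth)

# Depths of 150, 300, and 1200 m become 150, 300, and 300 m.
clipped = clip_open_boundary_depth(np.array([150.0, 300.0, 1200.0]))
print(clipped)
```

In practice such a clip would be applied only to nodes near the open boundary, not to the whole grid.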
The second generation of FVCOM-GoM/GB is modified from the first-generation domain by increasing the horizontal resolution to 300-500 m in Nantucket Sound and over Stellwagen Bank. The computational domain was also extended upstream to include parts of the inner bays and estuaries.
The third generation of FVCOM-GoM/GB is the current version being tested for the GoM/GB forecast system. It consists of two nested computational domains: 1) a regional domain with a horizontal resolution of about 0.5-1.0 km in the coastal region, on GB, and at the shelf break, and 2) local coastal domains with horizontal resolutions ranging from 20 m to 1 km. The regional and local domains are connected through common node points, so no interpolation from one domain to the other is needed. Realistic high-resolution bathymetry is used to configure this model without the special treatments applied in the first and second generations of FVCOM-GoM/GB. With a new mass-conserving open boundary condition, this model can be driven by both tidal and subtidal forcing and is also capable of adding a subtidal flux at the open boundary. It is the first model to include all details of the estuaries and inner bays in the GoM. It has been validated against the first and second generations of FVCOM-GoM/GB and will be placed into forecast operation after the hindcast validation experiments for 1995-2006 are completed.
We have introduced a so-called generalized topographic coordinate system into FVCOM. The third generation of FVCOM-GoM/GB is built on this generalized coordinate, so it can be run in sigma-, s-, or z-coordinates. To ensure accurate simulation of the surface mixed layer and the bottom boundary layer on the slope, we have selected the generalized coordinate with uniform-thickness layers near the surface (upper 40 m) and near the bottom (lowest 20 m above the bottom).
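The layer layout described above can be sketched as follows. This is only an illustration of the idea, not the FVCOM implementation; the layer counts and thicknesses are hypothetical, and only the 40 m surface zone and 20 m bottom zone come from the text:

```python
import numpy as np

def hybrid_layer_interfaces(H, n_surf=8, dz_surf=5.0,
                            n_bot=4, dz_bot=5.0, n_mid=10):
    """Return layer-interface depths (m) for a generalized coordinate.

    Uniform layers fill the upper zone (n_surf * dz_surf = 40 m) and
    the bottom zone (n_bot * dz_bot = 20 m); n_mid sigma-like layers
    stretch over the remainder of a water column of total depth H.
    """
    upper = n_surf * dz_surf          # 40 m of uniform surface layers
    lower = n_bot * dz_bot            # 20 m of uniform bottom layers
    if H <= upper + lower:
        # In shallow water the coordinate reduces to pure sigma layers.
        return np.linspace(0.0, H, n_surf + n_mid + n_bot + 1)
    surf = np.arange(n_surf + 1) * dz_surf
    mid = np.linspace(upper, H - lower, n_mid + 1)
    bot = (H - lower) + np.arange(n_bot + 1) * dz_bot
    # Drop the duplicated interfaces where the zones join.
    return np.concatenate([surf, mid[1:], bot[1:]])

z = hybrid_layer_interfaces(200.0)   # interfaces for a 200 m column
```

For a 200 m column this gives 22 layers: eight 5 m layers at the surface, four 5 m layers at the bottom, and ten stretched layers between.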
In the first and second generations of FVCOM-GoM/GB, the vertical domain was divided into 30 layers. The regional domain of the third generation was also initially tested using 30 non-uniform layers and then upgraded to 41 and 71 layers.
The first generation of FVCOM-GoM/GB can be run efficiently on a two-processor Linux computer. The time step in this version is 24.8 s for the external mode and 248.4 s for the internal mode, corresponding to 180 internal-mode time steps over an M2 tidal cycle. The code also runs efficiently on a PC: a one-month simulation takes about 1.5 days on a 2.4 GHz single-CPU Linux PC.
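The quoted step count can be checked with simple arithmetic: one M2 tidal cycle (12.42 h) divided by the 248.4 s internal-mode step gives 180 steps, and the stated 24.8 s external step is the internal step divided by the usual mode-split ratio of 10 (248.4/10 = 24.84 s, rounded in the text):

```python
M2_PERIOD_S = 12.42 * 3600.0   # M2 tidal period in seconds (44712 s)
dt_internal = 248.4            # internal-mode time step (s)
dt_external = 24.8             # external-mode time step (s), ~dt_internal/10

steps_per_m2 = M2_PERIOD_S / dt_internal
print(round(steps_per_m2))     # 180 internal-mode steps per M2 cycle
```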
Because of its finer horizontal resolution (300-500 m), the second generation of FVCOM-GoM/GB runs with a time step of 4 s for the external mode and 40 s for the internal mode. The regional domain of the third generation of FVCOM-GoM/GB uses similar time steps.
The third generation of FVCOM-GoM/GB is operated on our high-performance 256-processor Linux cluster. At present, this model runs efficiently on only 16 nodes (32 CPUs), taking several hours of computation to simulate one month. In forecast mode, the model including the local domains will be run on 120 nodes (240 CPUs), covering both MM5/WRF weather forecasting and FVCOM operations.