A model run consists of:
- Initialisation: The process of entering observation data into a model to generate the "initial conditions" (at the initialisation time) from which the model begins to calculate future conditions.
- Assimilation: The collection, quality control and normalisation of all observation data that describe the initial state of the atmosphere. The result is an assembly of data in a format that allows computation to start.
- Computation: The calculation of future atmospheric changes in small time increments, called time steps. A time step is typically four seconds.
- Data extraction: During computation, data are extracted at regular forecast intervals. meteoblue uses 1-hour intervals for NMM and NEMS models.
- Data storage: After computation, the model data are written into accessible formats and stored.
- Data postprocessing: For certain applications (e.g. calculation of pictograms, station forecasts, energy forecasts), the data are further treated with special postprocessing routines.
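The stages above can be sketched as a simple pipeline. This is an illustrative sketch only: the function names, the 4-second time step and the hourly extraction interval are taken from the description above, but nothing here is meteoblue's actual code.

```python
from datetime import datetime, timedelta

# Illustrative sketch of one model run: computation proceeds in small
# time steps, and data are extracted at regular forecast intervals.
# All names and the state representation are hypothetical.

def run_model(assimilation_time: datetime,
              forecast_hours: int = 168,
              time_step_s: int = 4,
              extraction_interval_h: int = 1) -> dict:
    """Advance the model in time steps and snapshot the state
    at each forecast interval (here: every hour)."""
    state = {"time": assimilation_time}       # stands in for the initial conditions
    extracted = {}                            # forecast hour -> state snapshot
    steps_per_extraction = extraction_interval_h * 3600 // time_step_s

    for hour in range(forecast_hours):
        for _ in range(steps_per_extraction):
            state["time"] += timedelta(seconds=time_step_s)  # one time step
        extracted[hour + 1] = dict(state)     # data extraction
    return extracted

data = run_model(datetime(2024, 1, 1, 0, 0), forecast_hours=3)
print(sorted(data))        # [1, 2, 3]
print(data[3]["time"])     # 2024-01-01 03:00:00
```

In a real model the state would hold three-dimensional atmospheric fields, and postprocessing and storage would follow the extraction step.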
Frequency and update times of model runs
For the NMM and NEMS models, meteoblue generally produces 2 model runs per day, based on the assimilations for 00:00 UTC and 12:00 UTC. Initialisation generally takes place 2 hours after assimilation; actualisations (updates) are made between 6 and 8 hours after assimilation.
The time of initialisation of the model run can be verified from the image or data set. Generally, the displayed forecast range is:
- 00:00 UTC initialisation: from 00:00 UTC onwards (first day has 24 hours of data).
- 12:00 UTC initialisation: from 12:00 UTC onwards (first day has 12 hours of data).
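The schedule described above can be sketched as a small helper that, for a given time, returns the most recent assimilation and the window in which its actualisation is expected. This is an illustrative assumption-based sketch, not a meteoblue API:

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch of the twice-daily run schedule: assimilations at
# 00:00 and 12:00 UTC, initialisation ~2 h later, actualisation 6-8 h later.

def latest_run(now_utc: datetime) -> dict:
    """Return the most recent assimilation time and the derived
    initialisation and actualisation window (all hypothetical helpers)."""
    assim_hour = 12 if now_utc.hour >= 12 else 0
    assimilation = now_utc.replace(hour=assim_hour, minute=0,
                                   second=0, microsecond=0)
    return {
        "assimilation": assimilation,
        "initialisation": assimilation + timedelta(hours=2),
        "actualisation_earliest": assimilation + timedelta(hours=6),
        "actualisation_latest": assimilation + timedelta(hours=8),
    }

run = latest_run(datetime(2024, 1, 1, 15, 30, tzinfo=timezone.utc))
print(run["assimilation"].hour)          # 12
print(run["actualisation_latest"].hour)  # 20
```

Note that shortly after an assimilation time the corresponding run is not yet published; its data only become available within the actualisation window.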
The actualisation times are displayed in the following way:
- Diagrams (e.g. meteogram): in the title or subtitle (in UTC).
- Data FTP: the actualisation times are the time stamps of the transfer file.
- Data for API: the data actualisation times are not displayed individually, to preserve file formats. They can be checked using a corresponding diagram.
Our model domains update at different times during the day. Depending on the geographical extent and the resolution, a computation takes more or less time. The following table gives the approximate update times. Rarely (in about 3% of cases), a model update may be later than indicated. Such delays are caused by the delayed transmission of input data required to start our simulations.
* The current update time, as well as the domain name for any location can be checked live on the main meteoblue forecast page by clicking on additional variables.
** NEMSGLOBAL provides forecasts for every point on the globe. However, a local weather forecast will not primarily use this global forecast if another regional domain is available.
Quality of model runs
Other weather forecast sites update their forecasts up to four times per day: why does meteoblue "only" calculate two model runs per day?
meteoblue computes 2 long-range (7 days) forecasts every day. Additionally, for the current day, we update our 12 hour forecasts much more frequently (nowcast), depending on the availability of observations. In some regions, the forecast for the current day can change slightly every 10 minutes.
With the meteoblue NMM technology, the results of 2 daily runs are already of very high quality. More frequent updates would not improve the weather forecast, while considerably increasing computational and storage efforts as well as costs. Here are some reasons for that:
- Assimilation quality
- Good global assimilation data, which are crucial for computing a good weather forecast, can be obtained twice a day, at 00:00 and 12:00 UTC. ECMWF only assimilates at 00:00 and 12:00 UTC; GFS offers the most frequent global assimilation useful for long-range forecasting, with updates at 00:00, 06:00, 12:00 and 18:00 UTC. The interim updates (06:00 and 18:00 UTC) are less complete due to fewer observational data and are only available from GFS. Therefore, the forecast quality of model runs based on the 06:00 and 18:00 UTC assimilations is lower than that of runs based on the 00:00 and 12:00 UTC assimilations.
- Local modelling
- Regional meteoblue models have high resolution and produce more localised data than conventional global models, which requires fewer corrections through weather station measurements.
- Limited availability of observational data
- Short-term updates, also called nowcasts (using weather station or satellite measurements), can only improve the forecast for the next 1-12 hours. Statistically, a 6-hour forecast produced from a weather station measurement (e.g. a forecast produced at 10:00 UTC for 16:00 of the same day, based on weather station data from 09:00 UTC) is inferior to a model forecast for 16:00 that is based on the 00:00 assimilation and issued at 06:00 UTC.
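The lead-time comparison in the last point can be made concrete with a little arithmetic. The times below are the ones from the example above; the helper itself is an illustrative assumption:

```python
from datetime import datetime

# Lead time = hours from the underlying input data to the forecast target.
# This only illustrates the arithmetic of the comparison, not a forecast.

def lead_hours(data_time: datetime, target_time: datetime) -> float:
    return (target_time - data_time).total_seconds() / 3600

target = datetime(2024, 1, 1, 16, 0)  # forecast target: 16:00 UTC

# Nowcast: issued 10:00 UTC, based on 09:00 UTC station measurements
nowcast_lead = lead_hours(datetime(2024, 1, 1, 9, 0), target)

# Model run: based on the 00:00 UTC assimilation, issued at 06:00 UTC
model_lead = lead_hours(datetime(2024, 1, 1, 0, 0), target)

print(nowcast_lead, model_lead)  # 7.0 16.0
```

Despite the much shorter lead time of the nowcast (7 h vs. 16 h of data age), the model forecast is statistically superior at this range, which is the point being made above.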
What matters is not how often, but how well we update!
Maximising model data quality
To offer consistently the best forecast quality, meteoblue uses a combination of strategies:
- Cover large areas with high spatial resolution
- We use the computing power of our supercomputer to calculate our own model forecasts for many domains, thereby improving data quality over much larger areas.
- Statistical postprocessing
- Using more than 10'000 measurement stations and advanced statistical methods developed by meteoblue, we remove systematic errors from our model forecasts. Examples are MOS or the computation of a consensus from different models.
- Nowcasting for special variables
- User reporting
- We offer a weather blog, which enables users from all over the world to report actual conditions, permitting continuous comparison with reality.
- Special purpose
- For applications which are highly dependent on very precise local weather information (like wind power forecasting and building management), meteoblue makes specialised adaptations of the forecast (MOS) using local weather measurements. If you have archived weather observations for your location, we can significantly improve your forecast.
- Comparison to previous years
- With the meteogram Climate, we show how the predicted weather compares to previous years, giving users another measure of the frequency of an event or an expected value.
- Quality control and careful evaluation
- By continued comparison to measurements, we control our forecast quality and improve it continuously. If we change our forecast models, we test and document the improvement by recalculating the weather for several years.
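Among the strategies above, the removal of systematic errors can be illustrated with a minimal sketch. The simple mean-bias approach and the sample data below are illustrative assumptions; real MOS uses far richer statistics than this:

```python
# Minimal sketch of systematic-error (bias) removal, in the spirit of
# MOS-style statistical postprocessing. Data values are made up.

def bias_correct(forecasts, observations, new_forecast):
    """Estimate the mean model bias from past forecast/observation
    pairs and subtract it from a new forecast value."""
    bias = sum(f - o for f, o in zip(forecasts, observations)) / len(forecasts)
    return new_forecast - bias

past_forecasts    = [21.0, 19.5, 23.0, 18.0]   # model temperatures, deg C
past_observations = [20.0, 18.5, 22.0, 17.0]   # station measurements, deg C

# The model runs 1 deg C too warm on average, so 25.0 is corrected to 24.0
print(bias_correct(past_forecasts, past_observations, 25.0))  # 24.0
```

A consensus of different models, also mentioned above, would similarly combine several model values per station instead of correcting a single one.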
In this way, we offer our users different options to receive the best forecast for their specific requirements. "What users say" presents some of the results.
Model data generation (for programmers)
A scheme of the meteoblue model forecast update process is shown in the graph on the right. The timing of data availability depends on the model domain and on the process steps involved in data generation (e.g. postprocessing, MOS) and data transmission (e.g. API, FTP, mail or others).