Automated System Work Flow
Roughly speaking:
- On a regular schedule, the computer running LoggerNet calls up a station and downloads the data to that computer.
- The schedule is station dependent, but is typically hourly or more often.
- There are two ways of calling: one through the standard LoggerNet Setup window, the other using cora commands.
- For sites sharing a base radio (e.g. the Nome or Barrow networks), the stations are called sequentially using cora commands.
- Sites on their own radio or IP connection are most likely called through the LoggerNet scheduler.
- After the data is downloaded, it is pushed from the LoggerNet computer (there are several) to ngeedata.
- The data arrives on ngeedata toward the top of the hour; a couple of minutes into the hour the processing starts.
- ngeedata has two processors, so each has a cron script that runs through its own list of stations. The lists are balanced so that both finish at roughly the same time.
- If you want to know the precise order, check the GitHub repository:
- The file main_cron in https://github.alaska.edu/rcbusey/processing_bash_scripts shows when each step runs.
- Also in that repo are the bash scripts outlining the sequence of operations.
- As soon as all of the data is processed, a few additional diagnostic utilities run to track bad data and similar issues. Their outputs are available in the www section of ngeedata as soon as the processing completes.