PeerMesh is responsible for transferring complete or partial tables of data between Gds/2 servers. An HO Peer server (one that mirrors a live server) uses PeerMesh to maintain its replicated data.
Source Code: RLSync.cpp and others
Gds/2 Peer Server
A peer server is a mirror copy of a Gds server. This might be used for redundancy or reporting purposes.
- The Peer Server defines its master Peer in gds.ctl
- MeshRouter.exe (source code PeerManager.cpp) handles the process
- Creates the directory import_datc for holding input data files
- Issues a DMEC command to verify the database and create it if needed. This command will create SQL Server databases automatically if required (security permitting)
- Creates the table gds_replicationstatus if needed
- Issues the command /gnap/b/buck?3=retailmax.elink.sync.request&110=peerindex&113=[RANK] to retrieve the index from the server
- For each table listed, pulls the table data if required. Validations are used to skip tables that have not changed.
- Actual downloading is performed by 3 background threads. This is a multi-step process consisting of:
- Create or update the table using the DMEG info
- Physically download the transfer file
- Load the data into the database. MeshRouter initiates this step by placing Cmd=4001 into the control ring, which is detected and handled by RetailLogic; MeshRouter does not wait for the data to load. When RetailLogic detects the 4001 command, it creates a MeshFileMaint task and queues it for background processing.
- Pauses for the next download cycle
RetailLogic MeshFileMaint
Source Code: MeshFileMaint.cpp. This code is responsible for creating and loading transfer files. It is used by both PeerMesh (create and load) and SyncHidden (create only).
A number of threads are created, currently 3 by default, to handle all the internal tasks. These threads are all created at below_normal priority. The threads are created with different stack sizes, and each thread selects large or small tasks based on its available stack space. This essentially means that two large requests will execute sequentially on the same thread, while smaller tasks can run in parallel on other threads. This was done because loading large tables can be memory and IO intensive.
Load Task
Used to load a transfer file from disk to database. Source code MeshFileMaint / ApplyUpdate
- File presence is verified.
- The file is checked to see if it has already been loaded. This is a memory-based check; if the file has already been loaded, debug line 8712 is recorded with the file name and loading stops.
- An attempt is made to locate a primary key, if possible, using the local DMEG.f1103
- If the file size is larger than 350 MB, the load is skipped. Transfer files should not reach this size by design and should be split into separate chunks.
- Data is loaded. See source for exact logic/steps.
Dynamic Updating
The scheme outlined above is primarily concerned with full table replication. PeerMesh is also capable of handling individual edits.
- A mesh queue is created on /rm/RMSYSTEM/tubt/Rank/KEYOFTHEDAY
- When a TUBT is ready for routing, MeshRouter creates a TURI and inserts it into this queue. The from field is set to the dbid of the generator.