# CommsNet Tracking

## Purpose

Middleware/aggregation layer for real-time location and telemetry data. It collects data from live mission sources, stores it, and exposes it to UA Maps.

## What it tracks

- Vehicles
- Flights
- Radio devices
- Any mission asset with a real-time location

## Architecture

**Write path (ingest):** the frontend nodes initiate the connection, pull from the live sources, and write to MongoDB:

```
[Live data sources] →(pulled)→ [Frontend node 1 / Frontend node 2] → [MongoDB Replica Set]
```

**Read path (consumption):**

```
[UA Maps] → [Netscaler (Load Balancer)] → [Frontend node 1 / Frontend node 2] → [MongoDB]
```

1. Frontend nodes initiate connections and pull from the live data sources (no load balancer on ingest)
2. Frontend nodes write to the MongoDB replica set
3. UA Maps connects to Netscaler (load balancer)
4. Netscaler distributes read requests across the frontend nodes
5. Frontend nodes serve location data back to UA Maps
6. UA Maps displays location + metadata on mission maps

## Tech Stack

- **Load balancer:** Netscaler (read path only)
- **Frontend:** 2 nodes (handle both ingest and query serving)
- **Database:** MongoDB replica set: 4 data nodes + 1 arbiter
  - 2 nodes in Valencia, 2 in Brindisi, 1 arbiter in Azure
- **Hosting:** UN private cloud

## Infrastructure

| Component   | Location         | Role                           |
|-------------|------------------|--------------------------------|
| Netscaler   | UN private cloud | Load balancer (read path only) |
| frontend-01 | UN private cloud | Ingest + query node            |
| frontend-02 | UN private cloud | Ingest + query node            |
| mongo-node1 | Valencia         | MongoDB replica member         |
| mongo-node2 | Valencia         | MongoDB replica member         |
| mongo-node3 | Brindisi         | MongoDB replica member         |
| mongo-node4 | Brindisi         | MongoDB replica member         |
| arbiter     | Azure            | Arbiter (no data)              |

## Integrations

- **Upstream:** live mission data sources (vehicles, flights, radio devices)
  - Frontend nodes poll/pull data from the sources (the frontend initiates the connection)
- **Downstream:** UA Maps
  - Connects via Netscaler → frontend nodes → MongoDB

## Diagram

## Knowledge Gaps

- [ ] Failover process if CommsNet goes down
- [ ] Dev team / owner
- [x] DRX process documented (VLC↔BDS via ISCP)
- [ ] Actual hostnames of the frontend nodes and Netscaler

## DRX (Disaster Recovery Exercise)

- **What it is:** a controlled exercise in which an application is moved manually from one datacenter to the other (VLC → BDS or vice versa)
- **When triggered:**
  - Real emergency: unplanned failover
  - DRX: planned, controlled exercise
- **How it works:** follow the ISCP Excel file step by step
- **ISCP file:** a very detailed Excel workbook, which must be kept up to date, covering:
  - Application info
  - Infrastructure
  - Backups
  - FW rules
  - Failover steps
- **MongoDB replica set:** 4 data nodes + 1 arbiter (arbiter in Azure, data nodes split VLC/BDS)

## Replication

- **DB replication:** automatic sync across the MongoDB nodes
- **File sync:** automatic sync between the API/frontend nodes

## CommsNet Support Runbook (Quick Ops)

### Scope

Use this for day-to-day triage and context, not deep RCA.

### Baseline (must stay true)

- MongoDB replica set: 4 data nodes + 1 Azure arbiter
- Data nodes split across Valencia (2) and Brindisi (2)
- UA Maps traffic path: UA Maps → Netscaler → frontend nodes → MongoDB
- Ingest path: frontend nodes pull from the live sources

### First checks when CommsNet is slow or failing

1. Confirm whether the impact is on **ingest**, **read/query**, or both
2. Check frontend node health (both API/frontend nodes)
3. Check Netscaler status (affects the read/query path only)
4. Check MongoDB replica set health and primary availability
5. Check data freshness in UA Maps (stale vs missing updates)

### MongoDB quick validation

- Verify replica status and the current primary with `rs.status()`
- Verify member priorities/config with `rs.conf()`
- Remember: writes use majority write concern in the current setup, so cross-DC connectivity (VLC/BDS plus the Azure arbiter) affects write acknowledgement

### DR/DRX operational notes

- DRX = controlled failover exercise between VLC and BDS
- Real emergency failovers follow the same ISCP backbone, but with incident-driven execution
- The ISCP must be updated before a DRX and after any significant infra change

### Escalation (known)

- Dev contact: **Saurav Datta** (ESB developer context; coordinate if CommsNet/API dependencies are involved)
- Security/FW rules are team-routed (not person-locked)

### What is still missing (to complete this runbook)

- Exact frontend/Netscaler hostnames
- A concrete failover decision tree (who decides, and in what order)
- A standard smoke-test checklist after failover
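The majority-write-concern note can be sanity-checked with a small quorum sketch. This is illustrative only: the member names are placeholders from the infrastructure table, not real hostnames, and the math is just MongoDB's standard "majority of voting members" rule applied to this 5-member topology.

```python
# Quorum math for the CommsNet replica set:
# 4 data nodes (2 Valencia, 2 Brindisi) + 1 arbiter (Azure) = 5 voting members.
# Member names are placeholders, not the actual hostnames (still a knowledge gap).

VOTING_MEMBERS = {
    "mongo-node1": "Valencia",
    "mongo-node2": "Valencia",
    "mongo-node3": "Brindisi",
    "mongo-node4": "Brindisi",
    "arbiter": "Azure",
}

def majority(n_voting: int) -> int:
    """Votes needed to elect a primary / satisfy majority write concern."""
    return n_voting // 2 + 1

def has_quorum(down: set) -> bool:
    """True if the surviving voting members can still form a majority."""
    alive = len(VOTING_MEMBERS) - len(down)
    return alive >= majority(len(VOTING_MEMBERS))

# Losing one whole datacenter (2 data nodes) leaves 3/5 votes: quorum holds.
assert has_quorum({"mongo-node1", "mongo-node2"})
# Losing a datacenter *and* the Azure arbiter leaves 2/5: no majority, writes stall.
assert not has_quorum({"mongo-node3", "mongo-node4", "arbiter"})
```

This is why the arbiter sits in a third location (Azure): if it shared a datacenter with two data nodes, losing that one site would take out the majority.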
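The "first checks" triage flow can be folded into a tiny helper: given which path is impacted, it lists the components in scope, encoding the rule that Netscaler matters only on the read/query path. A sketch under the runbook's own assumptions; the component labels mirror the text, not real hostnames.

```python
# Triage sketch for the "First checks when CommsNet is slow or failing" list.
# Maps the impacted path(s) to the components worth checking first.

def components_to_check(ingest_impacted: bool, read_impacted: bool) -> list:
    checks = []
    if ingest_impacted or read_impacted:
        # Frontend nodes sit on both paths (ingest + query serving),
        # and MongoDB backs both writes and reads.
        checks.append("frontend nodes")
        checks.append("MongoDB replica set (primary availability)")
    if read_impacted:
        # Netscaler load-balances the read path only.
        checks.append("Netscaler")
    if ingest_impacted:
        # Broken ingest shows up as stale positions in UA Maps.
        checks.append("UA Maps data freshness (stale updates)")
    return checks
```

For example, a read-only incident pulls Netscaler into scope, while a pure ingest incident does not.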
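The `rs.status()` validation step can be rehearsed offline. The sample document below is invented; only the field names (`members`, `name`, `stateStr`, `health`) follow the shape of real `rs.status()` output from mongosh, and the hostnames are placeholders.

```python
# Offline sketch: pick out the primary and any unhealthy members from an
# rs.status()-style document. Sample data is invented for illustration.

def summarize_rs_status(status: dict) -> dict:
    primary = None
    unhealthy = []
    for m in status.get("members", []):
        if m.get("stateStr") == "PRIMARY":
            primary = m["name"]
        if m.get("health", 0) != 1:
            unhealthy.append(m["name"])
    return {"primary": primary, "unhealthy": unhealthy}

sample = {
    "members": [
        {"name": "mongo-node1:27017", "stateStr": "PRIMARY", "health": 1},
        {"name": "mongo-node2:27017", "stateStr": "SECONDARY", "health": 1},
        {"name": "mongo-node3:27017", "stateStr": "SECONDARY", "health": 1},
        {"name": "mongo-node4:27017", "stateStr": "(not reachable/healthy)", "health": 0},
        {"name": "arbiter:27017", "stateStr": "ARBITER", "health": 1},
    ]
}

summary = summarize_rs_status(sample)
# A healthy set has exactly one PRIMARY and an empty "unhealthy" list.
```

During triage, no primary in the summary (or more than one data node unhealthy) escalates straight to the replica-set quorum question above.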
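Step 5 of the first checks distinguishes *stale* updates (an asset still appears but its last position is old) from *missing* ones (an asset not reporting at all). A minimal sketch of that distinction, assuming a 5-minute staleness threshold; the threshold and asset IDs are assumptions, not a documented SLA.

```python
# Freshness triage sketch: stale vs missing updates in UA Maps.
# STALE_AFTER is an assumed threshold, not a documented value.

from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(minutes=5)

def classify_assets(last_seen: dict, expected: set, now=None) -> dict:
    """last_seen: asset_id -> datetime of last update.
    expected: asset ids that should be reporting."""
    now = now or datetime.now(timezone.utc)
    out = {"fresh": [], "stale": [], "missing": []}
    for asset in sorted(expected):
        ts = last_seen.get(asset)
        if ts is None:
            out["missing"].append(asset)       # no update at all: likely ingest-source issue
        elif now - ts > STALE_AFTER:
            out["stale"].append(asset)         # old update: ingest pull or DB write lagging
        else:
            out["fresh"].append(asset)
    return out
```

Mostly-stale output points at the ingest path (frontend pulls, MongoDB writes); a few missing assets with everything else fresh points at individual upstream sources.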