Ribbon Warning Number - Warning-26-000009624
Announcement Summary
Ribbon has identified an issue affecting all SBC platforms (SBC 7000, SBC 5400, SBC SWe) in which SIP calls routed to targets defined by fully qualified domain names (FQDNs) will fail when the system uptime reaches 497 days or more.
Announcement Details
Overview
Ribbon has identified an issue affecting all SBC platforms (SBC 7000, SBC 5400, SBC SWe) in which SIP calls routed to targets defined by fully qualified domain names (FQDNs) will fail when the system reaches 497 days of uptime. The SBC will stop updating its DNS cache with data received in DNS responses from DNS servers. As a result, DNS lookups fail, preventing the SBC from delivering calls. In the DBG log, the issue appears as the following MAJOR events:
177 02192026 072524.832013:1.02.00.11222.MAJOR .DNSC: *DnsClientLookupCompleteCmd - Entry not found in cache for domainName:sip.domain.com type:1 dnsZoneId:5
177 02192026 072529.482298:1.02.00.11223.MAJOR .DNSC: *DnsClientLookupCompleteCmd - Entry not found in cache for domainName:_sip._udp.voip.otherdomain.com type:33 dnsZoneId:5
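The advisory does not state the root cause, but the 497-day figure is consistent with a 32-bit counter ticking at 100 Hz wrapping around, a common pattern in long-uptime defects. The arithmetic, under that assumed mechanism only:

```python
# Illustration only (assumed mechanism, not confirmed by this notice):
# a 32-bit tick counter incremented 100 times per second overflows
# after 2**32 ticks, which is roughly 497 days.
TICKS_PER_SECOND = 100      # assumed tick rate (HZ)
SECONDS_PER_DAY = 86400

wrap_days = 2**32 / TICKS_PER_SECOND / SECONDS_PER_DAY
print(f"32-bit counter at 100 Hz wraps after {wrap_days:.1f} days")  # ~497.1
```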
The scope of impact has been narrowed down to the following SBC releases:
- V10.01.05R004 and later;
- V10.01.06R000 and later;
- V11.01.02R000;
- V12.01.02R000 and later.
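As a triage aid, the affected-release list can be encoded as a simple lookup. The helper below is a sketch, not Ribbon tooling; the interpretation of "and later" as later R-builds within the same release train is an assumption:

```python
import re

# Minimum affected R-build per release train, per the list above.
# open_ended=True encodes "and later"; V11.01.02R000 is listed alone,
# so only that exact build matches (assumed reading of the list).
AFFECTED = {
    "V10.01.05": (4, True),
    "V10.01.06": (0, True),
    "V11.01.02": (0, False),
    "V12.01.02": (0, True),
}

def is_affected(version: str) -> bool:
    """Return True if a version string like 'V12.01.02R000' is affected."""
    m = re.fullmatch(r"(V\d{2}\.\d{2}\.\d{2})R(\d{3})", version)
    if not m:
        raise ValueError(f"unrecognized version string: {version}")
    train, build = m.group(1), int(m.group(2))
    if train not in AFFECTED:
        return False
    min_build, open_ended = AFFECTED[train]
    return build >= min_build if open_ended else build == min_build

print(is_affected("V12.01.02R000"))  # True
print(is_affected("V11.01.02R001"))  # False (only R000 is listed)
```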
Immediate Recommendation
Ribbon strongly recommends checking the uptime of all SBCs running any of the listed affected versions that rely on DNS lookups for routing calls to FQDN-based peers. Reboot nodes before they reach 497 days of uptime to prevent service interruption. Note that SBC nodes and SWe instances are rebooted during software upgrades; systems that were recently upgraded already have their uptime reset and are therefore not at immediate risk.
If you observe failing SIP calls accompanied by the above events and the active SBC node uptime is equal to or greater than 497 days, reboot the SBC nodes to restore service.
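When reviewing collected DBG logs, a simple pattern match on the events shown above is enough to confirm the symptom. This is a generic sketch, not a Ribbon utility:

```python
import re

# Matches the MAJOR DNSC cache-miss events shown in the Overview above.
DNS_MISS = re.compile(
    r"MAJOR\s+\.DNSC: \*DnsClientLookupCompleteCmd - "
    r"Entry not found in cache for domainName:(\S+)"
)

def failed_lookups(log_lines):
    """Return the FQDNs named in 'Entry not found in cache' events."""
    return [m.group(1) for line in log_lines if (m := DNS_MISS.search(line))]

# Sample line taken from the DBG log excerpt in this notice.
sample = [
    "177 02192026 072524.832013:1.02.00.11222.MAJOR .DNSC: "
    "*DnsClientLookupCompleteCmd - Entry not found in cache for "
    "domainName:sip.domain.com type:1 dnsZoneId:5",
]
print(failed_lookups(sample))  # ['sip.domain.com']
```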
Example of Uptime on an Affected System
Below is an example from an impacted SBC pair with both active and standby nodes at 499 days of uptime:
admin@sbc2b> show table system serverStatus
                                                                      MGMT                                                                           DAUGHTER
                                        PLATFORM       APPLICATION    REDUNDANCY                     APPLICATION UP    LAST RESTART                  BOARD
NAME   HW TYPE   SERIAL NUM  PART NUM   VERSION        VERSION        ROLE        UP TIME            TIME              REASON         SYNC STATUS    PRESENT   CURRENT TIME
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
sbc2a  SBC 5400  123456711   821-00504  V12.01.02R000  V12.01.02R000  standby     499 Days 03:29:01  12 Days 21:28:14  systemRestart  syncCompleted  true      2026/02/19 15:12:40
sbc2b  SBC 5400  123456712   821-00504  V12.01.02R000  V12.01.02R000  active      499 Days 03:11:58  13 Days 08:52:24  systemRestart  syncCompleted  true      2026/02/19 15:12:40
[ok][2026-02-19 15:12:40]
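Fleet-wide uptime checks can be automated by parsing the UP TIME column of the output above. A minimal sketch; the field format is taken from the example, the threshold from this notice, and the 30-day margin is an arbitrary illustration:

```python
import re

THRESHOLD_DAYS = 497  # uptime at which the DNS cache issue can occur

def uptime_days(up_time: str) -> float:
    """Convert an UP TIME field such as '499 Days 03:29:01' to days."""
    m = re.fullmatch(r"(\d+) Days (\d+):(\d+):(\d+)", up_time.strip())
    if not m:
        raise ValueError(f"unrecognized UP TIME field: {up_time!r}")
    days, hours, minutes, seconds = map(int, m.groups())
    return days + (hours * 3600 + minutes * 60 + seconds) / 86400

def at_risk(up_time: str, margin_days: int = 30) -> bool:
    """Flag nodes to reboot within the next maintenance window."""
    return uptime_days(up_time) >= THRESHOLD_DAYS - margin_days

print(at_risk("499 Days 03:29:01"))  # True
print(at_risk("12 Days 21:28:14"))   # False
```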
Procedure to Reset Uptime / Restore Service
To restore service on an affected SBC, or to reset uptime before it reaches the risk threshold, reboot both nodes one at a time; this preserves service continuity. Although rebooting a standby node does not disrupt service, Ribbon recommends performing the procedure during a scheduled maintenance window. Stopping the SBC application and powering off a standalone SBC or SBC SWe instance causes downtime; service is restored once the instance is powered back on.
For hardware SBCs (SBC 5400/SBC 7000):
1) Stop the SBC application on the standby node.
2) Power off the standby SBC from BMC, then wait two minutes, and power it on.
3) After the standby node syncs with the active node, perform a manual switchover.
4) Repeat steps 1 and 2 on the new standby node and verify that the new standby comes up and syncs with the active node.
For software SBCs (SBC SWe):
1) Stop the SBC application on the standby node.
2) Power off the standby SBC from the VM host, then wait two minutes, and power it on.
3) After the standby node syncs with the active node, perform a manual switchover.
4) Repeat steps 1 and 2 on the new standby node and verify that the new standby comes up and syncs with the active node.
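Both variants of the procedure above follow the same ordering, which can be summarized as a plan for operators or change tickets. Every step label below is a description of a manual action from this notice, not a Ribbon command:

```python
def rolling_reboot_plan(standby: str, active: str) -> list[str]:
    """Order of operations for a service-preserving reboot of an HA pair.

    Mirrors steps 1-4 above; 'power off' means via BMC for hardware
    SBCs or via the VM host for SBC SWe instances.
    """
    return [
        f"stop the SBC application on {standby}",
        f"power off {standby}, wait two minutes, power it on",
        f"wait until {standby} syncs with {active}",
        f"perform a manual switchover ({standby} becomes active)",
        f"stop the SBC application on {active}",
        f"power off {active}, wait two minutes, power it on",
        f"verify {active} comes up and syncs with {standby}",
    ]

for step in rolling_reboot_plan("sbc2a", "sbc2b"):
    print("-", step)
```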
Date Notification Emailed: 3/25/2026 12:00 PM
Files