24 Shared Registration System (SRS) Performance

gTLD: .AXIS
Full Legal Name: Saudi Telecom Company
E-mail suffix: centralnic.com
Except where specified, this answer refers to the operations of the Applicant's outsourced Registry Service Provider, CentralNic.

24.1. Registry Type

CentralNic operates a "thick" registry in which the registry maintains copies of all information associated with registered domains. Registrars maintain their own copies of registration information, so registry-registrar synchronization is required to ensure that both registry and registrar have consistent views of the technical and contact information associated with registered domains. The Extensible Provisioning Protocol (EPP) implementation adopted supports the thick registry model. See §25 for further details.

24.2. Architecture

Figure 24.1 provides a diagram of the overall configuration of the SRS. This diagram should be viewed in the context of the overall architecture of the registry system described in §32.

The SRS is hosted at CentralNic's primary operations centre in London. It is connected to the public Internet via two upstream connections, one of which is provided by Qube. Figure 32.1 provides a diagram of the outbound network connectivity. Interconnection with upstream transit providers is via two BGP routers that connect to the firewalls, which implement access controls over registry services.

Within the firewall boundary, connectivity is provided to servers by means of resilient gigabit ethernet switches implementing Spanning Tree Protocol.
The registry system implements two interfaces to the SRS: the standard EPP system (described in §25) and the Registrar Console (described in §31). These systems interact with the primary registry database (described in §33). The database is the central repository of all registry data. Other registry services also interact with this database.

An internal "Staff Console" is used by CentralNic personnel to manage the registry system.

24.3. EPP System Architecture

A description of the characteristics of the EPP system is provided in §25. This response describes the infrastructure which supports the EPP system.
A network diagram for the EPP system is provided in Figure 24.2. The EPP system is hosted at the primary operations centre in London. During failover conditions, the EPP system operates from the Isle of Man Disaster Recovery site (see §34).

CentralNic’s EPP system has a three-layer logical and physical architecture, consisting of load balancers, a cluster of front-end protocol servers, and a pool of application servers. Each layer can be scaled horizontally in order to meet demand.

Registrars establish TLS-secured TCP connections to the load balancers on TCP port 700. Load is distributed across the load balancers using DNS round-robin.
The load balancers pass sessions to the EPP protocol servers. Load is distributed using a weighted-least-connections algorithm. The protocol servers run the Apache web server with the mod_epp and mod_proxy_balancer modules. These servers process session commands (“hello”, “login” and “logout”) and function as reverse proxies for query and transform commands, converting them into plain HTTP requests which are then distributed to the application servers. EPP commands are distributed using a weighted-least-connections algorithm.
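
The transport described above can be illustrated with a short client sketch: it opens a TLS session on TCP port 700 and exchanges EPP frames using the four-byte length prefix defined in RFC 5734. This is an illustrative sketch only; the hostname is a placeholder and a production client would go on to authenticate with a login command using its registrar credentials.

    # Minimal EPP-over-TLS client sketch; hostname is a placeholder.
    import socket
    import ssl
    import struct

    HELLO = (b'<?xml version="1.0" encoding="UTF-8"?>'
             b'<epp xmlns="urn:ietf:params:xml:ns:epp-1.0"><hello/></epp>')

    def recv_exact(sock, n):
        data = b""
        while len(data) < n:
            chunk = sock.recv(n - len(data))
            if not chunk:
                raise ConnectionError("connection closed by server")
            data += chunk
        return data

    def read_frame(sock):
        # RFC 5734 framing: a 4-byte total length (network byte order) precedes
        # the XML, and the length includes the 4 header bytes themselves.
        total_length = struct.unpack(">I", recv_exact(sock, 4))[0]
        return recv_exact(sock, total_length - 4)

    def write_frame(sock, payload):
        sock.sendall(struct.pack(">I", len(payload) + 4) + payload)

    context = ssl.create_default_context()
    with socket.create_connection(("epp.example.net", 700)) as raw:   # placeholder host
        with context.wrap_socket(raw, server_hostname="epp.example.net") as tls:
            print(read_frame(tls).decode("utf-8"))   # greeting sent on connect
            write_frame(tls, HELLO)                  # <hello/> elicits a fresh greeting
            print(read_frame(tls).decode("utf-8"))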

Application servers receive EPP commands as plain HTTP requests, which are handled using application business logic. Application servers process commands and prepare responses which are sent back to the protocol servers, which return the responses to clients over their EPP sessions.
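
The hand-off from protocol servers to application servers can be pictured with the hypothetical sketch below, which assumes the EPP command XML arrives as the body of a plain HTTP POST; the internal interface is not published, so the endpoint, port and dispatch logic shown are illustrative only.

    # Hypothetical application-server endpoint: EPP command XML in, EPP response XML out.
    from wsgiref.simple_server import make_server
    import xml.etree.ElementTree as ET

    EPP_NS = "urn:ietf:params:xml:ns:epp-1.0"

    def handle_command(name, command_element):
        # Business logic would act on the registry database here; this stub simply
        # acknowledges the command with EPP result code 1000.
        return ('<?xml version="1.0" encoding="UTF-8"?>'
                f'<epp xmlns="{EPP_NS}"><response>'
                '<result code="1000"><msg>Command completed successfully</msg></result>'
                '</response></epp>')

    def application(environ, start_response):
        length = int(environ.get("CONTENT_LENGTH") or 0)
        root = ET.fromstring(environ["wsgi.input"].read(length))
        command = root.find(f"{{{EPP_NS}}}command")
        if command is None or len(command) == 0:
            name = "hello"
        else:
            # The first child of <command> names the operation (check, create, ...).
            name = next(iter(command)).tag.rsplit("}", 1)[-1]
        body = handle_command(name, command).encode("utf-8")
        start_response("200 OK", [("Content-Type", "application/xml"),
                                  ("Content-Length", str(len(body)))])
        return [body]

    if __name__ == "__main__":
        make_server("127.0.0.1", 8080, application).serve_forever()   # illustrative port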

Each component of the system is resilient: multiple inbound connections, redundant power, high availability firewalls, load balancers and application server clusters enable seamless operation in the event of component failure. This architecture also allows for arbitrary horizontal scaling: commodity hardware is used throughout the system and can be rapidly added to the system, without disruption, to meet an unexpected growth in demand.

The EPP system will comprise the following systems:
• 4x load balancers (1U rack mount servers with quad-core Intel processors, 16GB RAM, 40GB solid-state disk drives, running the CentOS operating system using the Linux Virtual Server [see http://www.linuxvirtualserver.org/])
• 8x EPP protocol servers (1U rack mount servers with dual-core Intel processors, 16GB RAM, running the CentOS operating system using Apache and mod_epp)
• 20x application servers (1U rack mount servers with dual-core Intel processors, 4GB of RAM, running the CentOS operating system using Apache and PHP)

24.3.1. mod_epp

mod_epp is an Apache module which adds support for the EPP transport protocol to the Apache web server. This permits implementation of an EPP server using the various features of Apache, including CGI scripts and other dynamic request handlers, reverse proxies, and even static files. mod_epp was originally developed by Nic.at, the Austrian ccTLD registry. Since its release, a large number of ccTLD and other registries have deployed it and continue to support its development and maintenance. Further information can be found at http://sourceforge.net/projects/aepps. CentralNic uses mod_epp to manage EPP sessions with registrar clients, and to convert EPP commands into HTTP requests which can then be handled by backend application servers.

24.3.2. mod_proxy_balancer

mod_proxy_balancer is a core Apache module. Combined with the mod_proxy module, it implements a load-balancing reverse proxy, and includes a number of load balancing algorithms and automated failover between members of a cluster. CentralNic uses mod_proxy_balancer to distribute EPP commands to backend application servers.
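
As a conceptual illustration of the weighted-least-connections policy mentioned above (not Apache's or CentralNic's actual implementation), the sketch below always assigns the next request to the backend with the lowest ratio of active connections to configured weight.

    # Conceptual weighted-least-connections selection (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class Backend:
        name: str
        weight: int          # relative capacity assigned to this server
        active: int = 0      # connections/requests currently assigned

    def pick_backend(backends):
        """Return the backend with the lowest active/weight ratio."""
        return min(backends, key=lambda b: b.active / b.weight)

    pool = [Backend("app1", weight=2), Backend("app2", weight=1), Backend("app3", weight=1)]
    for request in range(6):
        chosen = pick_backend(pool)
        chosen.active += 1      # a real balancer decrements this when the request completes
        print(request, "->", chosen.name)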

24.4. Performance

CentralNic performs continuous remote monitoring of its EPP system, and this monitoring includes measuring the performance of various parts of the system. As of writing, the average round-trip times (RTTs) for various functions of the EPP system were as follows:
• connect time: 87ms
• login time: 75ms
• hello time: 21ms
• check time: 123ms
• logout time: 20ms

These figures include approximately 2.4ms of latency due to the distance between the monitoring site and the EPP system. They were recorded during normal weekday operations at the busiest time of the day (around 1300hrs UTC) and compare very favourably with the requirement of 4,000ms for session commands and 2,000ms for query commands defined in the new gTLD Service Level Agreement. RTTs for overseas registrars will be higher due to the greater distances involved, but will remain well within requirements.
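
As a rough illustration of the headroom involved, the measured figures above can be compared with the stricter of the two cited limits (2,000ms for query commands); the command names and values are taken directly from the list above.

    # Measured average RTTs (ms) from the list above, versus the stricter cited limit.
    measured = {"connect": 87, "login": 75, "hello": 21, "check": 123, "logout": 20}
    strictest_limit_ms = 2000
    for command, rtt in measured.items():
        print(f"{command}: {rtt}ms (~{strictest_limit_ms / rtt:.0f}x inside the 2,000ms limit)")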

24.5. Scaling

Horizontal scaling is preferred over vertical scaling. Horizontal scaling refers to the introduction of additional nodes into a cluster, while vertical scaling involves using more powerful equipment (more CPU cores, RAM, etc.) in a single system. Horizontal scaling also encourages effective high-availability mechanisms and the elimination of single points of failure in the system.

Vertical scaling leverages Moore's Law: when units are depreciated and replaced, the new equipment is likely to be significantly more powerful. If the average lifespan of a server in the system is three years, and performance roughly doubles every 18 months, then its replacement is likely to be around four times (2^2) as powerful as the old server.
For further information about Capacity Management and Scaling, please see §32.

24.6. Registrar Console

The Registrar Console is a web-based registrar account management tool. It provides a secure and easy-to-use graphical interface to the SRS. It is hosted on a virtual platform at the primary operations centre in London. As with the rest of the registry system, during a failover condition it is operated from the Isle of Man. The virtual platform is described in Figure 24.3.

The features of the Registrar Console are described in §31.

The virtual platform is a utility platform supporting systems and services that do not operate at significant levels of load and therefore do not require multiple servers or the additional performance that running on "bare metal" would provide. The platform functions as a private cloud, with redundant storage and failover between hosts.

The Registrar Console currently sustains an average of 6 page requests per minute during normal operations, with peak volumes of around 8 requests per minute. Volumes during weekends are significantly lower (fewer than one request per minute). Additional load resulting from this and other new gTLDs is expected to result in a trivial increase in Registrar Console request volumes, and CentralNic does not expect additional hardware resources to be required to support it.

24.7. Quality Assurance

CentralNic employs the following quality assurance (QA) methods:
1. 24x7x365 monitoring provides reports of incidents to the NOC
2. Quarterly review of capacity, performance and reliability
3. Monthly reviews of uptime, latency and bandwidth consumption
4. Hardware depreciation schedules
5. Unit testing framework
6. Frequent reviews by QA working group
7. Schema validation and similar technologies to monitor compliance on a real-time, ongoing basis
8. Revision control software with online annotation and change logs
9. Bug Tracking system to which all employees have access
10. Code Review Policy in place to enforce peer review of all changes to core code prior to deployment
11. Software incorporates built-in error reporting mechanisms to detect flaws and report to Operations team
12. Four stage deployment strategy: development environment, staging for internal testing, OT&E deployment for registrar testing, then finally production deployment
13. Evidence-based project scheduling
14. Specification development and revision
15. Weekly milestones for developers
16. Gantt charts and critical path analysis for project planning

Registry system updates are performed on an ongoing basis, with any user-facing updates (ie changes to the behaviour of the EPP interface) being scheduled at specific times. Disruptive maintenance is scheduled for periods during which activity is lowest.

24.8. Billing

CentralNic operates a complex billing system for domain name registry services to ensure registry billing and collection services are feature rich, accurate, secure, and accessible to all registrars. The goal of the system is to maintain the integrity of data and create reports which are accurate, accessible, secured, and scalable. The foundation of the process is debit accounts established for each registrar. CentralNic will withdraw all domain fees from the registrar’s account on a per-transaction basis. CentralNic will provide fee-incurring services (e.g., domain registrations, registrar transfers, domain renewals) to a registrar for as long as that registrar’s account shows a positive balance.
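
The debit-account model described above can be sketched as follows; the class, fee amounts and domain name are purely illustrative and do not reflect CentralNic's actual billing code.

    # Hypothetical sketch of the per-registrar debit-account model described above.
    class RegistrarAccount:
        def __init__(self, registrar_id, balance):
            self.registrar_id = registrar_id
            self.balance = balance          # prepaid funds, in the billing currency

        def can_transact(self):
            """Fee-incurring services are only offered while the balance is positive."""
            return self.balance > 0

        def debit(self, amount, description):
            """Withdraw a fee on a per-transaction basis (registration, renewal, transfer)."""
            if not self.can_transact():
                raise RuntimeError("account is not in credit; transaction refused")
            self.balance -= amount
            return {"registrar": self.registrar_id, "fee": amount,
                    "description": description, "new_balance": self.balance}

    account = RegistrarAccount("registrar-1001", balance=500.00)     # illustrative values
    print(account.debit(8.00, "domain registration: example.axis"))
    print(account.debit(8.00, "domain renewal: example.axis"))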

Once ICANN notifies the Applicant that a registrar has been issued accreditation, CentralNic will begin the registrar on-boarding process, including setting up the registrar's financial account within the SRS.

24.9. Registrar Support

CentralNic provides a multi-tier support system on a 24x7 basis with the following support levels:
• 1st Level: initial support level responsible for basic customer issues. The first job of 1st Level personnel is to gather the customer’s information and to determine the customer’s issue by analyzing the symptoms and figuring out the underlying problem.
• 2nd Level: a more in-depth technical support level than 1st Level, staffed by more experienced and knowledgeable personnel for a particular product or service. Technicians at this level are responsible for assisting 1st Level personnel in solving basic technical problems and for investigating elevated issues by confirming the validity of the problem and seeking known solutions to these more complex issues.
• 3rd Level: the highest level of support in a three-tiered technical support model, responsible for handling the most difficult or advanced problems. Level 3 personnel are experts in their fields and are responsible not only for assisting both 1st and 2nd Level personnel, but also for researching and developing solutions to new or unknown issues.

CentralNic provides a support ticketing system for tracking routine support issues. This is a web based system (available via the Registrar Console) allowing registrars to report new issues, follow up on previously raised tickets, and read responses from CentralNic support personnel.

When a new trouble ticket is submitted, it is assigned a unique ID and priority. The following priority levels are used: 
1. Normal: general enquiry, usage question, or feature enhancement request. Handled by 1st level support.
2. Elevated: issue with a non-critical feature for which a work-around may or may not exist. Handled by 1st level support.
3. Severe: serious issue with a primary feature necessary for daily operations for which no work-around has been discovered and which completely prevents the feature from being used. Handled by 2nd level support.
4. Critical: A major production system is down or severely impacted. These issues are catastrophic outages that affect the overall Registry System operations. Handled by 3rd level support.

Depending on priority, different personnel will be alerted to the existence of the ticket. For example, a Priority 1 ticket will cause a notification to be emailed to the registrar customer support team, but a Priority 4 ticket will result in a broadcast message sent to the pagers of senior operations staff including the CTO. The system permits escalation of issues that are not resolved within target resolution times.
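
A hypothetical sketch of this priority-based routing is shown below; the Priority 1 and Priority 4 behaviours follow the examples given above, while the channels shown for the middle priorities are assumptions for illustration only.

    # Hypothetical routing of ticket notifications by priority; names are illustrative.
    NOTIFICATION_RULES = {
        1: ("email", ["1st-level-support"]),                    # Normal
        2: ("email", ["1st-level-support"]),                    # Elevated (assumed channel)
        3: ("email", ["2nd-level-support"]),                    # Severe (assumed channel)
        4: ("pager-broadcast", ["senior-operations", "cto"]),   # Critical
    }

    def notify(ticket_id, priority):
        channel, recipients = NOTIFICATION_RULES[priority]
        for recipient in recipients:
            print(f"ticket {ticket_id}: notify {recipient} via {channel}")

    notify("TCK-1042", 1)   # emailed to the registrar customer support team
    notify("TCK-1043", 4)   # broadcast to the pagers of senior operations staff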

24.10. Enforcement of Eligibility Requirements

The SRS supports enforcement of eligibility requirements, as required by specific TLD policies.

Figure 24.4 describes the process by which registration requests are validated. Prior to registration, the registrant's eligibility is validated by a Validation Agent. The registrant then instructs their registrar to register the domain. The SRS returns an "Object Pending" result code (1001) to the registrar.

The request is sent to the Validation Agent by the registry. The Validation Agent either approves or rejects the request, having reconciled the registration information with that recorded during the eligibility validation. If the request has been approved, the domain is fully registered. If it is rejected, the domain is immediately removed from the database. A message is sent to the registrar via the EPP message queue in either case. The registrar then notifies the registrant of the result.
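
This workflow can be sketched as a simple set of state transitions; the 1001 result code and the approve/reject outcomes come from the description above, while the data structures and function names are illustrative only.

    # Illustrative state transitions for the eligibility-validation flow described above.
    registrations = {}   # domain -> record

    def create(domain, registrar):
        registrations[domain] = {"registrar": registrar, "state": "pending"}
        return 1001      # "Object Pending" returned to the registrar

    def queue_message(registrar, domain, approved):
        outcome = "approved" if approved else "rejected"
        print(f"poll message for {registrar}: validation of {domain} {outcome}")

    def validation_result(domain, approved):
        record = registrations[domain]
        if approved:
            record["state"] = "registered"    # the domain becomes fully registered
        else:
            del registrations[domain]         # rejected: removed from the database
        # In either case the registrar is notified via its EPP message queue.
        queue_message(record["registrar"], domain, approved)

    print(create("example.axis", "registrar-1001"))   # -> 1001
    validation_result("example.axis", approved=True)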

24.11. Interconnectivity With Other Registry Systems

The registry system is based on multiple resilient stateless modules. The SRS, Whois, DNS and other systems do not directly interact with each other. Interactions are mediated by the database, which is the single authoritative source of data for the registry as a whole. Individual modules perform "CRUD" (create, read, update, delete) actions upon the database. These actions then affect the behaviour of other registry systems: for example, when a registrar adds the "clientHold" status to a domain object, this is recorded in the database. When a query is received for this domain via the Whois service, the presence of this status code in the database results in "Status: CLIENT HOLD" appearing in the Whois record. It will also be noted by the zone generation system, resulting in the temporary removal of the delegation of the domain name from the DNS.
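
A simplified sketch of this database-mediated behaviour is shown below; the record layout is illustrative, not the production schema.

    # Illustrative sketch: a status stored centrally drives both Whois output and zone generation.
    domain = {
        "name": "example.axis",
        "statuses": {"clientHold"},
        "nameservers": ["ns1.example.net", "ns2.example.net"],
    }

    def whois_lines(record):
        lines = [f"Domain Name: {record['name'].upper()}"]
        if "clientHold" in record["statuses"]:
            lines.append("Status: CLIENT HOLD")
        return lines

    def zone_lines(record):
        # A held domain is temporarily withheld from the zone: no NS records are emitted.
        if "clientHold" in record["statuses"]:
            return []
        return [f"{record['name']}. IN NS {ns}." for ns in record["nameservers"]]

    print("\n".join(whois_lines(domain)))
    print(zone_lines(domain))   # -> [] while the hold is in place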

24.12. Resilience

The SRS has a stateless architecture designed to be fully resilient in order to provide an uninterrupted service in the face of failure of one or more parts of the system. This is achieved by the use of redundant hardware and network connections, and by continuous "heartbeat" monitoring allowing dynamic and high-speed failover from active to standby components, or between nodes in an active-active cluster. These technologies also permit rapid scaling of the system to meet short-term increases in demand during "surge" periods, such as during the initial launch of a new TLD.

24.12.1. Synchronisation Between Servers and Sites

CentralNic's system is implemented as multiple stateless systems which interact via a central registry database. As a result, there are only a few situations where synchronisation of data between servers is necessary:
1. replication of data between active and standby servers (see §33). CentralNic implements redundancy in its database system by means of an active/standby database cluster. The database system used by CentralNic supports native real-time replication of data allowing operation of a reliable hot standby server. Automated heartbeat monitoring and failover is implemented to ensure continued access to the database following a failure of the primary database system.
2. replication is used to synchronise the primary operations centre with the Disaster Recovery site hosted in the Isle of Man (see §34). Database updates are replicated to the DR site in real-time via a secured VPN, providing a "hot" backup site which can be used to provide registry services in the event of a failure at the primary site.

24.13. Operational Testing and Evaluation (OT&E)

An Operational Testing and Evaluation (OT&E) environment is provided for registrars to develop and test their systems. The OT&E system replicates the SRS in a clean-room environment. Access to the OT&E system is unrestricted and unlimited: registrars can freely create multiple OT&E accounts via the Registrar Console.

24.14. Resourcing

As can be seen in the Resourcing Matrix found in Appendix 23.2, CentralNic will maintain a team of full-time developers and engineers who will contribute to the development and maintenance of this aspect of the registry system. These developers and engineers will not work on specific subsystems full-time; instead, a certain percentage of their time will be dedicated to each area. The total HR resource dedicated to this area is equivalent to more than one full-time post.

CentralNic operates a shared registry environment in which multiple registry zones (such as CentralNic's domains, the .LA ccTLD, this TLD and other gTLDs) share a common infrastructure and resources. Since the TLD will be operated in an identical manner to these other registries, and on the same infrastructure, it will benefit from economies of scale with regard to access to CentralNic's resources.

CentralNic's resourcing model assumes that the "dedicated" resourcing required for the TLD (i.e. that required to deal with issues related specifically to the TLD and not to general issues with the system as a whole) will be equal to the proportion of the overall registry system that the TLD will use. After three years of operation, the optimistic projection for the TLD is 1,000 domains in the zone. CentralNic has calculated that, if all its TLD clients are successful in their applications, and all meet their optimistic projections after three years, its registry system will be required to support up to 4.5 million domain names. The TLD would therefore account for approximately 0.02% of that total (1,000 of 4.5 million domains), requiring less than 0.1% of the total resources available for this area of the registry system.

In the event that registration volumes exceed this figure, CentralNic will proactively increase the size of the Technical Operations, Technical Development and support teams to ensure that the needs of the TLD are fully met. Revenues from the additional registration volumes will fund the salaries of these new hires. Nevertheless, CentralNic is confident that the staffing outlined above is sufficient to meet the needs of the TLD for at least the first 18 months of operation.

gTLD: .now
Full Legal Name: XYZ.COM LLC
E-mail suffix: xyz.com
Except where specified, this answer refers to the operations of the Applicant's outsourced Registry Service Provider, CentralNic.

24.1. Registry Type
CentralNic operates a "thick" registry in which the registry maintains copies of all information associated with registered domains. Registrars maintain their own copies of registration information, so registry-registrar synchronization is required to ensure that both registry and registrar have consistent views of the technical and contact information associated with registered domains. The Extensible Provisioning Protocol (EPP) implementation adopted supports the thick registry model. See §25 for further details.

24.2. Architecture

Figure 24.1 provides a diagram of the overall configuration of the SRS. This diagram should be viewed in the context of the overall architecture of the registry system described in §32.
The SRS is hosted at CentralNic's primary operations centre in London. It is connected to the public Internet via two upstream connections, one of which is provided by Qube. Figure 32.1 provides a diagram of the outbound network connectivity. Interconnection with upstream transit providers is via two BGP routers that connect to the firewalls, which implement access controls over registry services.

Within the firewall boundary, connectivity is provided to servers by means of resilient gigabit ethernet switches implementing Spanning Tree Protocol.
The registry system implements two interfaces to the SRS: the standard EPP system (described in §25) and the Registrar Console (described in §31). These systems interact with the primary registry database (described in §33). The database is the central repository of all registry data. Other registry services also interact with this database.
An internal "Staff Console" is used by CentralNic personnel to manage the registry system.

24.3. EPP System Architecture

A description of the characteristics of the EPP system is provided in §25. This response describes the infrastructure which supports the EPP system.
A network diagram for the EPP system is provided in Figure 24.2. The EPP system is hosted at the primary operations centre in London. During failover conditions, the EPP system operates from the Isle of Man Disaster Recovery site (see §34).

CentralNic’s EPP system has a three-layer logical and physical architecture, consisting of load balancers, a cluster of front-end protocol servers, and a pool of application servers. Each layer can be scaled horizontally in order to meet demand.

Registrars establish TLS-secured TCP connections to the load balancers on TCP port 700. Load is distributed across the load balancers using DNS round-robin.
The load balancers pass sessions to the EPP protocol servers. Load is distributed using a weighted-least-connections algorithm. The protocol servers run the Apache web server with the mod_epp and mod_proxy_balancer modules. These servers process session commands (“hello”, “login” and “logout”) and function as reverse proxies for query and transform commands, converting them into plain HTTP requests which are then distributed to the application servers. EPP commands are distributed using a weighted-least-connections algorithm.
Application servers receive EPP commands as plain HTTP requests, which are handled using application business logic. Application servers process commands and prepare responses which are sent back to the protocol servers, which return the responses to clients over their EPP sessions.

Each component of the system is resilient: multiple inbound connections, redundant power, high availability firewalls, load balancers and application server clusters enable seamless operation in the event of component failure. This architecture also allows for arbitrary horizontal scaling: commodity hardware is used throughout the system and can be rapidly added to the system, without disruption, to meet an unexpected growth in demand.

The EPP system will comprise the following systems:
• 4x load balancers (1U rack mount servers with quad-core Intel processors, 16GB RAM, 40GB solid-state disk drives, running the CentOS operating system using the Linux Virtual Server [see http://www.linuxvirtualserver.org/])
• 8x EPP protocol servers (1U rack mount servers with dual-core Intel processors, 16GB RAM, running the CentOS operating system using Apache and mod_epp)
• 20x application servers (1U rack mount servers with dual-core Intel processors, 4GB of RAM, running the CentOS operating system using Apache and PHP)

24.3.1. mod_epp

mod_epp is an Apache module which adds support for the EPP transport protocol to the Apache web server. This permits implementation of an EPP server using the various features of Apache, including CGI scripts and other dynamic request handlers, reverse proxies, and even static files. mod_epp was originally developed by Nic.at, the Austrian ccTLD registry. Since its release, a large number of ccTLD and other registries have deployed it and continue to support its development and maintenance. Further information can be found at http://sourceforge.net/projects/aepps. CentralNic uses mod_epp to manage EPP sessions with registrar clients, and to convert EPP commands into HTTP requests which can then be handled by backend application servers.

24.3.2. mod_proxy_balancer

mod_proxy_balancer is a core Apache module. Combined with the mod_proxy module, it implements a load-balancing reverse proxy, and includes a number of load balancing algorithms and automated failover between members of a cluster. CentralNic uses mod_proxy_balancer to distribute EPP commands to backend application servers.

24.4. Performance

CentralNic performs continuous remote monitoring of its EPP system, and this monitoring includes measuring the performance of various parts of the system. As of writing, the average round-trip times (RTTs) for various functions of the EPP system were as follows:
• connect time: 87ms
• login time: 75ms
• hello time: 21ms
• check time: 123ms
• logout time: 20ms

These figures include approximately 2.4ms of latency due to the distance between the monitoring site and the EPP system. They were recorded during normal weekday operations at the busiest time of the day (around 1300hrs UTC) and compare very favourably with the requirement of 4,000ms for session commands and 2,000ms for query commands defined in the new gTLD Service Level Agreement. RTTs for overseas registrars will be higher due to the greater distances involved, but will remain well within requirements.

24.5. Scaling

Horizontal scaling is preferred over vertical scaling. Horizontal scaling refers to the introduction of additional nodes into a cluster, while vertical scaling involves using more powerful equipment (more CPU cores, RAM, etc.) in a single system. Horizontal scaling also encourages effective high-availability mechanisms and the elimination of single points of failure in the system.
Vertical scaling leverages Moore's Law: when units are depreciated and replaced, the new equipment is likely to be significantly more powerful. If the average lifespan of a server in the system is three years, and performance roughly doubles every 18 months, then its replacement is likely to be around four times (2^2) as powerful as the old server.
For further information about Capacity Management and Scaling, please see §32.

24.6. Registrar Console

The Registrar Console is a web-based registrar account management tool. It provides a secure and easy-to-use graphical interface to the SRS. It is hosted on a virtual platform at the primary operations centre in London. As with the rest of the registry system, during a failover condition it is operated from the Isle of Man. The virtual platform is described in Figure 24.3.
The features of the Registrar Console are described in §31.
The virtual platform is a utility platform supporting systems and services that do not operate at significant levels of load and therefore do not require multiple servers or the additional performance that running on "bare metal" would provide. The platform functions as a private cloud, with redundant storage and failover between hosts.
The Registrar Console currently sustains an average of 6 page requests per minute during normal operations, with peak volumes of around 8 requests per minute. Volumes during weekends are significantly lower (fewer than one request per minute). Additional load resulting from this and other new gTLDs is expected to result in a trivial increase in Registrar Console request volumes, and CentralNic does not expect additional hardware resources to be required to support it.

24.7. Quality Assurance

CentralNic employs the following quality assurance (QA) methods:

1. 24x7x365 monitoring provides reports of incidents to the NOC
2. Quarterly review of capacity, performance and reliability
3. Monthly reviews of uptime, latency and bandwidth consumption
4. Hardware depreciation schedules
5. Unit testing framework
6. Frequent reviews by QA working group
7. Schema validation and similar technologies to monitor compliance on a real-time, ongoing basis
8. Revision control software with online annotation and change logs
9. Bug Tracking system to which all employees have access
10. Code Review Policy in place to enforce peer review of all changes to core code prior to deployment
11. Software incorporates built-in error reporting mechanisms to detect flaws and report to Operations team
12. Four stage deployment strategy: development environment, staging for internal testing, OT&E deployment for registrar testing, then finally production deployment
13. Evidence-based project scheduling
14. Specification development and revision
15. Weekly milestones for developers
16. Gantt charts and critical path analysis for project planning

Registry system updates are performed on an ongoing basis, with any user-facing updates (ie changes to the behaviour of the EPP interface) being scheduled at specific times. Disruptive maintenance is scheduled for periods during which activity is lowest.

24.8. Billing

CentralNic operates a complex billing system for domain name registry services to ensure registry billing and collection services are feature rich, accurate, secure, and accessible to all registrars. The goal of the system is to maintain the integrity of data and create reports which are accurate, accessible, secured, and scalable. The foundation of the process is debit accounts established for each registrar. CentralNic will withdraw all domain fees from the registrar’s account on a per-transaction basis. CentralNic will provide fee-incurring services (e.g., domain registrations, registrar transfers, domain renewals) to a registrar for as long as that registrar’s account shows a positive balance.
Once ICANN notifies the Applicant that a registrar has been issued accreditation, CentralNic will begin the registrar on-boarding process, including setting up the registrar's financial account within the SRS.

24.9. Registrar Support

CentralNic provides a multi-tier support system on a 24x7 basis with the following support levels:

• 1st Level: initial support level responsible for basic customer issues. The first job of 1st Level personnel is to gather the customer’s information and to determine the customer’s issue by analyzing the symptoms and figuring out the underlying problem.
• 2nd Level: a more in-depth technical support level than 1st Level, staffed by more experienced and knowledgeable personnel for a particular product or service. Technicians at this level are responsible for assisting 1st Level personnel in solving basic technical problems and for investigating elevated issues by confirming the validity of the problem and seeking known solutions to these more complex issues.
• 3rd Level: the highest level of support in a three-tiered technical support model, responsible for handling the most difficult or advanced problems. Level 3 personnel are experts in their fields and are responsible not only for assisting both 1st and 2nd Level personnel, but also for researching and developing solutions to new or unknown issues.
CentralNic provides a support ticketing system for tracking routine support issues. This is a web based system (available via the Registrar Console) allowing registrars to report new issues, follow up on previously raised tickets, and read responses from CentralNic support personnel.

When a new trouble ticket is submitted, it is assigned a unique ID and priority. The following priority levels are used: 
1. Normal: general enquiry, usage question, or feature enhancement request. Handled by 1st level support.
2. Elevated: issue with a non-critical feature for which a work-around may or may not exist. Handled by 1st level support.
3. Severe: serious issue with a primary feature necessary for daily operations for which no work-around has been discovered and which completely prevents the feature from being used. Handled by 2nd level support.
4. Critical: A major production system is down or severely impacted. These issues are catastrophic outages that affect the overall Registry System operations. Handled by 3rd level support.

Depending on priority, different personnel will be alerted to the existence of the ticket. For example, a Priority 1 ticket will cause a notification to be emailed to the registrar customer support team, but a Priority 4 ticket will result in a broadcast message sent to the pagers of senior operations staff including the CTO. The system permits escalation of issues that are not resolved within target resolution times.

24.10. Enforcement of Eligibility Requirements

The SRS supports enforcement of eligibility requirements, as required by specific TLD policies.

Figure 24.4 describes the process by which registration requests are validated. Prior to registration, the registrant's eligibility is validated by a Validation Agent. The registrant then instructs their registrar to register the domain. The SRS returns an "Object Pending" result code (1001) to the registrar.

The request is sent to the Validation Agent by the registry. The Validation Agent either approves or rejects the request, having reconciled the registration information with that recorded during the eligibility validation. If the request has been approved, the domain is fully registered. If it is rejected, the domain is immediately removed from the database. A message is sent to the registrar via the EPP message queue in either case. The registrar then notifies the registrant of the result.

24.11. Interconnectivity With Other Registry Systems

The registry system is based on multiple resilient stateless modules. The SRS, Whois, DNS and other systems do not directly interact with each other. Interactions are mediated by the database, which is the single authoritative source of data for the registry as a whole. Individual modules perform "CRUD" (create, read, update, delete) actions upon the database. These actions then affect the behaviour of other registry systems: for example, when a registrar adds the "clientHold" status to a domain object, this is recorded in the database. When a query is received for this domain via the Whois service, the presence of this status code in the database results in "Status: CLIENT HOLD" appearing in the Whois record. It will also be noted by the zone generation system, resulting in the temporary removal of the delegation of the domain name from the DNS.

24.12. Resilience

The SRS has a stateless architecture designed to be fully resilient in order to provide an uninterrupted service in the face of failure of one or more parts of the system. This is achieved by the use of redundant hardware and network connections, and by continuous "heartbeat" monitoring allowing dynamic and high-speed failover from active to standby components, or between nodes in an active-active cluster. These technologies also permit rapid scaling of the system to meet short-term increases in demand during "surge" periods, such as during the initial launch of a new TLD.

24.12.1. Synchronisation Between Servers and Sites

CentralNic's system is implemented as multiple stateless systems which interact via a central registry database. As a result, there are only a few situations where synchronisation of data between servers is necessary:

1. replication of data between active and standby servers (see §33). CentralNic implements redundancy in its database system by means of an active/standby database cluster. The database system used by CentralNic supports native real-time replication of data allowing operation of a reliable hot standby server. Automated heartbeat monitoring and failover is implemented to ensure continued access to the database following a failure of the primary database system.
2. replication is used to synchronise the primary operations centre with the Disaster Recovery site hosted in the Isle of Man (see §34). Database updates are replicated to the DR site in real-time via a secured VPN, providing a "hot" backup site which can be used to provide registry services in the event of a failure at the primary site.

24.13. Operational Testing and Evaluation (OT&E)

An Operational Testing and Evaluation (OT&E) environment is provided for registrars to develop and test their systems. The OT&E system replicates the SRS in a clean-room environment. Access to the OT&E system is unrestricted and unlimited: registrars can freely create multiple OT&E accounts via the Registrar Console.

24.14. Resourcing

As can be seen in the Resourcing Matrix found in Appendix 23.2, CentralNic will maintain a team of full-time developers and engineers who will contribute to the development and maintenance of this aspect of the registry system. These developers and engineers will not work on specific subsystems full-time; instead, a certain percentage of their time will be dedicated to each area. The total HR resource dedicated to this area is equivalent to more than one full-time post.

CentralNic operates a shared registry environment in which multiple registry zones (such as CentralNic's domains, the .LA ccTLD, this TLD and other gTLDs) share a common infrastructure and resources. Since the TLD will be operated in an identical manner to these other registries, and on the same infrastructure, it will benefit from economies of scale with regard to access to CentralNic's resources.

CentralNic's resourcing model assumes that the "dedicated" resourcing required for the TLD (i.e. that required to deal with issues related specifically to the TLD and not to general issues with the system as a whole) will be equal to the proportion of the overall registry system that the TLD will use. After three years of operation, the optimistic projection for the TLD is [10,000] domains in the zone. CentralNic has calculated that, if all its TLD clients are successful in their applications, and all meet their optimistic projections after three years, its registry system will be required to support up to 4.5 million domain names. Therefore the TLD will require [0.22]% of the total resources available for this area of the registry system ([10,000] of 4.5 million domains).

In the event that registration volumes exceed this figure, CentralNic will proactively increase the size of the Technical Operations, Technical Development and support teams to ensure that the needs of the TLD are fully met. Revenues from the additional registration volumes will fund the salaries of these new hires. Nevertheless, CentralNic is confident that the staffing outlined above is sufficient to meet the needs of the TLD for at least the first 18 months of operation.