
Question 24: Shared Registration System (SRS) Performance

gTLD: .TIROL
Full Legal Name: punkt Tirol GmbH
E-mail suffix: tirol.com
High-level SRS systems description:

The Shared Registry System is based on the Extensible Provisioning Protocol (EPP) and employs a multi-tiered architecture with public-facing interfaces completely segregated from backend functions (such as database and management interfaces). An overview of the functionality provided by the SRS is as follows:

Registrars connect to and authenticate against the EPP frontend systems (a sample login frame is shown below).
These frontends receive and parse all EPP commands, perform business-logic checks (including any policy requirements), and subsequently perform (or reject) the requested action against the backend data storage.
The backend data storage is handled by a Relational Database Management System (described in detail in response to Question 33).
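For illustration, a registrar session begins with an EPP login command as defined in RFC 5730; the following is a minimal sketch of such a frame (registrar ID, password and client transaction ID are placeholders):

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
      <command>
        <login>
          <clID>registrar-x</clID>
          <pw>example-password</pw>
          <options>
            <version>1.0</version>
            <lang>en</lang>
          </options>
          <svcs>
            <objURI>urn:ietf:params:xml:ns:domain-1.0</objURI>
            <objURI>urn:ietf:params:xml:ns:host-1.0</objURI>
            <objURI>urn:ietf:params:xml:ns:contact-1.0</objURI>
          </svcs>
        </login>
        <clTRID>EX-0001</clTRID>
      </command>
    </epp>

After a successful login, the TCP/TLS connection remains open and all subsequent commands from that registrar are processed within the authenticated session (see RFC 5734 for the transport details).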

The server elements used for the SRS employ a number of technologies to ensure service availability and reliability. These include the use of multiple virtualised Linux servers and several layers of high-availability functionality, such as active-active load balancing, standby components and replication of full virtual machine images. A significant amount of design and implementation effort has focused on removing any potential single point of failure in the SRS architecture. This architecture has also been fully tested and verified on a functionally identical prototype system, and is operational for the “.bh” migration. Some of that work included training and verification by independent third-party system architecture experts, in particular of the critical system availability functions such as cluster failover and real-time block-device-level replication.

The SRS software itself is readily available at the time of this submission. It is implemented and operated in accordance with the requirements of Specification 6 (“Registry Interoperability and Continuity Specifications”) and the respective SLAs in Specification 10. The SRS uses EPP as its core provisioning protocol and supports, amongst other RFCs, the following provisioning RFCs as required in Section 1.2 of Specification 6:

RFC 5730 (EPP Base Specification)
RFC 5731 (EPP Domain Name Mapping)
RFC 5732 (EPP Host Mapping)
RFC 5733 (EPP Contact Mapping)
RFC 5734 (EPP TCP Transport)
RFC 3735 (EPP Extension Guidelines)
RFC 5910 (DNSSEC Mapping)
RFC 3915 (Grace Period Mapping)

For maximum interoperability, only EPP functionality that is documented in the above RFCs is implemented – i.e. there are no proprietary EPP extensions used in the SRS. Further details about the implementation of the EPP Registry Services are contained in response to Question 25 (“EPP”).
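As an example of such a purely RFC-based transaction, the following sketch combines a domain create command per RFC 5731 with delegation-signer data carried in the standard secDNS extension per RFC 5910 (domain name, nameservers, contact handle, authInfo and DS values are illustrative placeholders):

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
      <command>
        <create>
          <domain:create xmlns:domain="urn:ietf:params:xml:ns:domain-1.0">
            <domain:name>example.tirol</domain:name>
            <domain:period unit="y">1</domain:period>
            <domain:ns>
              <domain:hostObj>ns1.example.net</domain:hostObj>
              <domain:hostObj>ns2.example.net</domain:hostObj>
            </domain:ns>
            <domain:registrant>sh8013</domain:registrant>
            <domain:authInfo>
              <domain:pw>2fooBAR</domain:pw>
            </domain:authInfo>
          </domain:create>
        </create>
        <extension>
          <secDNS:create xmlns:secDNS="urn:ietf:params:xml:ns:secDNS-1.1">
            <secDNS:dsData>
              <secDNS:keyTag>12345</secDNS:keyTag>
              <secDNS:alg>3</secDNS:alg>
              <secDNS:digestType>1</secDNS:digestType>
              <secDNS:digest>49FD46E6C4B45C55D4AC69CBD3CD34AC1AFE51DE</secDNS:digest>
            </secDNS:dsData>
          </secDNS:create>
        </extension>
        <clTRID>EX-0002</clTRID>
      </command>
    </epp>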

SRS availability and 100% data integrity are understood to be key requirements for a successful TLD operation. The SRS implementation was therefore developed with a strong focus on these two factors.

The core software platform employed by the SRS, with its powerful modular policy functionality, has been in production use for the “.at” ccTLD (nic.at) since 2003; additionally, the “.no” ccTLD (Norid) successfully migrated to the registry software over the course of 2010. Another installation of the software was recently rolled out to support the migration of the “.bh” ccTLD (Kingdom of Bahrain) from the incumbent operator to the Regulatory Authority of Bahrain. Furthermore, this core software is currently being used to provide SRS implementations for ENUM (Electronic Numbering) Registries in Austria (+43) and Ireland (+353). Finally, test instances of this customised software for ENUM are deployed in The Netherlands and Australia.

The modular and highly extensible structure of the SRS software allows for customized per-TLD policies that are implemented on top of an identical core registry system. This allows for code reuse between different TLD implementations, regardless of the policy framework required.

The implementation of this software, specifically for this new gTLD, has been customized to the needs of the registry operator and to meet or exceed ICANN’s policy and SLA requirements set out in the Applicant Guidebook for new gTLDs. A detailed description of the architecture supporting the SRS software is contained in answer to Question 32 (Architecture).

Details for the DNS elements of the TLD service, including zone file creation, signing, dissemination and testing procedures are contained in answers to Question 35 (DNS) and answers to Question 43 (DNSSEC).

Policy and additional documentation about Internationalized Domain Name (IDN) usage in the TLD is contained in answer to Question 44 (IDN).

The SRS is fully IPv6 compliant: it accepts IPv6 addresses as glue records for host objects and is reachable via native IPv6 transport. Additional details about IPv6 support are contained in answer to Question 36 (IPv6).
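As an illustration of this IPv6 support, IPv6 glue is supplied through the standard RFC 5732 host mapping; a sketch of a host create command carrying both address families (hostname and addresses are documentation placeholders):

    <command>
      <create>
        <host:create xmlns:host="urn:ietf:params:xml:ns:host-1.0">
          <host:name>ns1.example.tirol</host:name>
          <host:addr ip="v4">192.0.2.2</host:addr>
          <host:addr ip="v6">2001:db8::2</host:addr>
        </host:create>
      </create>
      <clTRID>EX-0003</clTRID>
    </command>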

For reference, the Performance Specifications relevant to the SRS as required by Specification 10 (Registry Performance Specifications) are included in Table Q24-01. As indicated, the SRS performance meets or exceeds all SLA requirements, and significant effort has been invested in verifying these SLA requirements on a physical installation of the SRS architecture/software. Hence, the performance metrics included in Table Q24-01 are real measurements rather than theoretical assumptions or estimates.

Note: The performance SLAs have been verified by setting up a prototype system that is functionally and architecturally identical to the registry system, but has limited hardware resources compared to the proposed production architecture. Hence, the performance of the actual production system is expected to exceed the measured performance values on the prototype system indicated in Table Q24-01. Details on the measurements are contained in the responses to Question 33.

Table Q24-01: see attachment

The measurements used to verify the individual Service Levels are discussed in the following sections:

Performance – Shared Registry Service (EPP)

EPP service availability

The EPP interface of the SRS is provided by two front-end server processes on two physically separate machines. Both frontends are accessible via a single IP address, and the load is dynamically shared between them. If a single frontend becomes unresponsive, it is automatically removed from the load-balanced group; when it returns to service, it is automatically added back into the load-balanced group configuration. In addition, alerts to the NOC are triggered for all such events so that the operations team is notified of error conditions immediately.

For security reasons, access to all EPP interfaces is restricted and is only permitted from network ranges of authorized registrars.

Using this architecture, the SRS for the proposed TLD will exceed ICANN’s “EPP service availability” requirement of 98%. A production implementation of the Registry System (for the “.at” TLD) with similar software and architecture has surpassed 99.6% monthly availability for each month during 2009, 2010 and 2011 (with most months above 99.9% availability).
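To put these availability figures in perspective, the following short Python sketch converts a monthly availability percentage into the maximum tolerable downtime, assuming a 30-day month:

    def max_downtime_minutes(availability_pct, month_minutes=30 * 24 * 60):
        """Maximum downtime (in minutes) permitted by a monthly availability SLA."""
        return month_minutes * (1 - availability_pct / 100)

    for pct in (98.0, 99.6, 99.9):
        print(f"{pct}%: {max_downtime_minutes(pct):.0f} minutes/month")
    # 98.0%: 864 minutes/month
    # 99.6%: 173 minutes/month
    # 99.9%: 43 minutes/month

In other words, the 98% SLA would permit roughly 14.4 hours of downtime per month, whereas the months above 99.9% observed in production correspond to well under one hour.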

EPP command performance notes

The performance of the SRS for EPP session, query and transform commands was extensively evaluated. Please refer to the response to Question 33 for the measurements and figures indicating the performance of the proposed registry system under a realistic base load. These figures show that, for at least 90% of commands, query-command RTTs clearly meet the 2000 ms threshold, and session- and transform-command RTTs clearly meet the 4000 ms threshold.
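One straightforward way to evaluate such measurements is to compute the 90th-percentile RTT per command class and compare it against the corresponding Specification 10 threshold. A minimal Python sketch using the nearest-rank method (the RTT samples below are invented purely for illustration):

    import math

    def percentile(samples, p):
        """Nearest-rank p-th percentile of a list of RTT samples (milliseconds)."""
        ordered = sorted(samples)
        rank = math.ceil(p / 100 * len(ordered))
        return ordered[rank - 1]

    # hypothetical RTT samples (ms) checked against the Spec 10 thresholds (ms)
    checks = {
        "query (limit 2000 ms)":     (2000, [120, 95, 180, 210, 160]),
        "transform (limit 4000 ms)": (4000, [340, 290, 510, 450, 380]),
    }
    for label, (limit, rtts) in checks.items():
        p90 = percentile(rtts, 90)
        print(f"{label}: p90 = {p90} ms, compliant = {p90 <= limit}")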

Additional Performance figures

The response to Question 33 (Database) contains some additional performance figures for the SRS, again gathered on a prototype system.

Network Overview & Number of Servers

The SRS servers make use of two data center locations, “Vienna” and “Salzburg” (approximately 300 km / 185 miles apart). The data centers are equipped with multiple, independent upstream connections to the Internet (from different service providers) and two Layer 2 crosslinks. The backend registry operator also operates a Local Internet Registry (LIR), allocates IP space from its own address pool, and operates its own Autonomous Systems (ASes). The high-level network structure is shown in Figure Q24-02 and is further detailed in Figures Q32-07 and Q32-08 as part of the answer to Question 32.


As shown, a significant focus of the network design work has been to remove any single point of failure. Each server is connected to two access routers, so that an outage of any single network component does not affect server availability and, consequently, service availability. More information about the network infrastructure at each individual location is contained in response to Question 35. A complete and detailed overview of the machinery in place for this TLD is given in Table Q32-11 of the answer to Question 32.

The server infrastructure of the gTLD’s SRS consists of the following set of machines (this list does not include the actual DNS network):

Two physically separate, dedicated servers running the SRS frontend instances and the production database, clustered in active-active (frontends) and active-standby (database) configuration. The database and the SRS frontends are segregated from each other via virtualization. In terms of scalability, should the TLD exceed 500,000 registered domains, provisions are in place to add further dedicated machines as needed.
A total of 6 physical machines provide the additional functions of the gTLD, including zone generation, DNSSEC signing, zone deployment (via Hidden Masters), backup, management, and a test instance of the SRS. The functions on those 6 “infrastructure” machines are shared among up to 4 gTLD installations, and adding more machines is planned depending on growth projections for each individual TLD. The existing infrastructure scales to at least 500,000 domain names per TLD without requiring additional servers. Services on those physical servers are again segregated from each other using virtualization.
Additionally, several other servers are involved in supplementary functionality, such as monitoring, tape backup, logging & reporting services.

All servers used for the operation of the TLD are (and will be) rack-mountable, data-center-grade machines with active maintenance contracts from the supplier.

Interconnectivity with other Registry Services

The SRS, as well as the infrastructure required to perform the other critical Registry Services, is installed on servers located in the same data center (under emergency conditions, services may be moved to servers in the backup data center). The services are therefore interconnected using either Local Area Networking (LAN) or redundant private Layer 2 links (linking the “Vienna” and “Salzburg” locations). In addition to the redundant Layer 2 links, infrastructure is in place to securely tunnel traffic between the two locations over the public Internet in the unlikely case that both private site crosslinks fail. From a security perspective, multiple firewall layers filter network traffic between the various network segments, i.e. between the public Internet, perimeter and internal networks.

Zone dissemination, i.e. transfer from the hidden primary to the public nameserver network, is performed over the public Internet; however, all such communications are cryptographically secured. Both locations have redundant upstream connectivity from independent providers with a minimum total bandwidth of 2x1 Gbit/s.
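The exact protection mechanism is not restated here, but one common way to cryptographically secure zone transfers between a hidden primary and public secondaries is TSIG-signed AXFR/IXFR; a hedged BIND-style sketch (key material, zone name and addresses are placeholders):

    # on the hidden primary
    key "xfer-key" {
        algorithm hmac-sha256;
        secret "<base64-key-material>";
    };
    zone "tirol" {
        type master;
        file "zones/tirol.zone.signed";
        allow-transfer { key "xfer-key"; };   # transfers require the TSIG key
    };

    # on a public secondary
    server 192.0.2.10 {                       # address of the hidden primary
        keys { "xfer-key"; };
    };
    zone "tirol" {
        type slave;
        masters { 192.0.2.10; };
        file "secondary/tirol.zone";
    };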

Both networks are also connected to the “Vienna Internet Exchange” (VIX), where peering relationships with many other organizations have been established. This provides an optimal routing path to the Registry Systems and services for those organizations.

In terms of data integrity and consistency, the following provisions have been put in place to ensure correct synchronization of the Registry systems:

The active and standby database servers are synced in real-time using block-device-level replication provided by the DRBD technology (an illustrative resource configuration is sketched after this list).
DNS zone servers are synchronized every 15 minutes.
For failover purposes, full machine images or snapshots of all virtual machines are copied to the standby data center once per day (please see the response to Question 37 for details).
Synchronization between the SRS and registry helpdesk systems occurs every few minutes.
Note that WHOIS data is served directly from the backend registry database, so there is no need to synchronize WHOIS data.
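For the first item above, DRBD resources are declared per replicated volume; a minimal illustrative resource definition (hostnames, devices and addresses are hypothetical) could look like this:

    resource registry_db {
        protocol C;                   # synchronous: a write completes only after both nodes acknowledge it
        on db-a {
            device    /dev/drbd0;
            disk      /dev/sdb1;      # backing block device holding the database volume
            address   10.0.1.10:7788;
            meta-disk internal;
        }
        on db-b {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.1.11:7788;
            meta-disk internal;
        }
    }

Protocol C matches the real-time, block-device-level behaviour described above: a write is not reported as complete until it has reached both nodes.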

The synchronization strategy used differs from service to service. For the SRS frontends themselves, an active-active setup with OSPF-based load-balancing is employed. The registry database uses an active-standby setup with real-time synchronization and automatic failover.
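The OSPF-based load balancing mentioned here is commonly realized by having each frontend announce a shared service address into the routing domain, so that the adjacent routers hold equal-cost paths to both machines. A hedged Quagga/FRR-style fragment for one frontend (all addresses are hypothetical):

    ! the shared EPP service address lives on a loopback on each frontend;
    ! upstream routers learn it from both machines and load-share via ECMP.
    ! a failed frontend simply stops announcing and drops out of the group.
    interface lo
     ip address 192.0.2.53/32
    !
    router ospf
     ospf router-id 10.0.1.21
     network 10.0.1.0/24 area 0.0.0.0
     network 192.0.2.53/32 area 0.0.0.0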

Resourcing Plan

It should be noted that the architecture and basic development work for the SRS software have already been completed at the time of this submission (except policy adjustments for the TLD), which reduces the time and number of personnel required to perform the necessary development and maintenance work.

The Registry Backend Operator employs 4 developers (totalling 3 FTEs) responsible for developing and maintaining the SRS software, for example implementing per-TLD policy customisations. These developers also work on the development and maintenance of RDDS, and their work is shared amongst the operation of multiple TLDs.

Additionally, 2 system engineers (2 FTEs) are responsible for performing the actual deployment of the SRS for a new TLD, including the subsequent handover of the newly installed systems to the Network Operations team.

A minimum of 8 people are fully trained to perform day-to-day and ongoing maintenance operations of the SRS systems and software.

The required hardware for the SRS is described above and all related costs are bundled with the “Software as a Service” fees that the Registry Operator pays to the Registry Backend Operator. This also includes all resources that are required to operate the hardware for the SRS, such as data center or other infrastructure expenses, maintenance contracts and hardware replacement.