
24 Shared Registration System (SRS) Performance

gTLD : .banque
Full Legal Name : GEXBAN SAS
E-mail suffix : gexban.net
Table of Contents

1 - Global description
2 - Shared Registration System (SRS) architecture
3 - SRS architecture diagram
4 - Detailed infrastructure
5 - Rate limitation
6 - Interconnectivity and synchronization with other systems
7 - Performance and scalability
8 - Resources
8.1 - Initial implementation
8.2 - On-going maintenance


------------------------
1 - Global description

As one of the critical registry functions, the SRS is part of the core of AFNIC’s back-end registry solution, as deployed to fit the needs of the .banque TLD.
It both provides services to registrars and generates the data used for the DNS publication and resolution service. In that respect, it is responsible for meeting most of the SLAs. The following description details the architecture of the SRS from both an application and an infrastructure point of view.
This architecture is the same as the one used in production by AFNIC to operate the .fr zone. It has been fully functional for the last 15 years, meeting stringent SLAs while scaling from the management of a few thousand domain names to over 2 million by late 2011.


------------------------
2 - Shared Registration System (SRS) architecture

AFNIC’s SRS is based on a three-layer architecture : front end, business logic and middleware.
These three layers are supported by the data layer, which is described in detail in Question 33 (Database Capabilities).

= Front end : Extensible Provisioning Protocol (EPP) and extranet =

The automated front-end of the SRS is EPP.
The EPP interface and implementation comply with RFCs 3735 and 5730-5734; they are described in detail in Question 25 (EPP).
An extranet web interface offers the same functions as the EPP interface.
Both these interfaces are supported by the same middleware layer.
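
To make the front end concrete, the sketch below shows the general shape of a single EPP exchange : a TLS connection on the standard EPP port, the length-prefixed framing defined in RFC 5734, and a domain:check command. It is an illustrative sketch only; the host name is a hypothetical placeholder and a real session would authenticate with a login command before issuing any query.

```python
# Illustrative sketch of one EPP exchange (RFC 5730 command, RFC 5734 framing).
# "epp.nic.example" is a placeholder, not the actual registry endpoint.
import socket
import ssl
import struct

EPP_HOST = "epp.nic.example"   # hypothetical endpoint
EPP_PORT = 700                 # EPP over TLS (RFC 5734)

CHECK_XML = """<?xml version="1.0" encoding="UTF-8"?>
<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
  <command>
    <check>
      <domain:check xmlns:domain="urn:ietf:params:xml:ns:domain-1.0">
        <domain:name>example.banque</domain:name>
      </domain:check>
    </check>
    <clTRID>probe-0001</clTRID>
  </command>
</epp>"""

def send_frame(sock, xml):
    """Prefix the XML with its total length, header included (RFC 5734)."""
    payload = xml.encode("utf-8")
    sock.sendall(struct.pack(">I", len(payload) + 4) + payload)

def recv_frame(sock):
    """Read one length-prefixed EPP frame and return its XML body."""
    total = struct.unpack(">I", sock.recv(4))[0]
    body = b""
    while len(body) < total - 4:
        body += sock.recv(total - 4 - len(body))
    return body.decode("utf-8")

context = ssl.create_default_context()
with socket.create_connection((EPP_HOST, EPP_PORT)) as raw:
    with context.wrap_socket(raw, server_hostname=EPP_HOST) as tls:
        print(recv_frame(tls))       # server <greeting>
        send_frame(tls, CHECK_XML)   # a real session sends <login> first
        print(recv_frame(tls))       # <domain:chkData> availability response
```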

= Business logic : flexible policies =

The Business logic is configurable, so that the registry systems can be adjusted to the chosen registry policies. Various policy-related parameters, such as the redemption delay, access rate limiting and penalties, can be configured in this layer.
The Business logic also incorporates a scheduler which provides semi-automated processes with human validation, in order to address specific policy needs that cannot or should not be fully automated.

= Middleware : a guarantee of evolution and scalability =

The Middleware layer guarantees consistent, registry-oriented access to all TLD data. All registry applications operate through this layer in order to centralize object management rules. It enables access from different programming languages (Java, PHP and Perl in the AFNIC solution) under the same rules, making it easy to switch from one language to another in case of application refactoring or migration.

= Data =

The Data layer is the structured data repository for domains, contacts, operations and the historization of transactions, as well as registrar and contract data. It provides all the necessary resilience mechanisms to ensure 100% uptime and full recovery and backup.
It also provides a complete toolbox for fine-tuning the various applications. This layer is described in more detail in Question 33 (Database Capabilities).


------------------------
3 - SRS architecture diagram

[see attached diagram Q24_3_SRS_architecture_diagram.pdf]
Diagram : SRS architecture diagram
Description : This diagram shows the global interaction between the Internet, the DMZ (Demilitarized Zone) and the private network zones. The topology of the network and servers is illustrated, including the dedicated IP address scheme and network flows.

This diagram does not show the additional sandbox and preproduction services. These services are offered to registrars and to the back-end development team, respectively, to stabilize developments before production delivery. They are fully iso-functional with the SRS described above.

= SRS logical diagram =

Our infrastructure provides dual Internet Service Provider (ISP) connectivity in both IPv4 and IPv6 (Jaguar and RENATER), with redundant firewall and switching infrastructure. This part of the architecture is shared by all hosted TLDs.

The networking architecture dedicates separate LANs to administration, backup and production.

Servers are hosted in different network zones : a database zone for the databases, a private zone for servers not visible on the Internet, and a public zone for external servers visible in the DMZ. Dedicated zones are also set up for monitoring servers, administration servers or desktops, and backup servers.
Each server is load balanced and the service is not impacted by the loss of one server, since each server is sized to handle the whole traffic on its own.

Servers hosting the .banque TLD are shared with up to an estimated 20 TLDs of comparable scale and use case.

= SRS physical diagram =

The IP scheme used is the following :

2001:67c:2218:1::4:0/64 for IPv6 Internet homing
192.134.4.0/24 for IPv4 Internet homing

= Production LAN =

192.134.4.0/24 for the public network IP range
10.1.50.0/24, 10.1.30.0/24 for the private network IP ranges, distributed over the zones described above.

= Backup LAN =

172.x.y.0/24 : x is different for each network zone. y is fixed to the third octet of the associated production LAN in the same zone (for example, the Private zone production LAN being 10.1."50".0/24, the Private zone backup LAN is 172.16."50".0/24).

= Administration LAN =

172.z.y.0/24 : z is equal to x+1, x being the value chosen for the corresponding Backup LAN in the same zone. y is fixed to the third octet of the associated production LAN in the same zone (for example, the Private zone production LAN being 10.1."50".0/24, the Private zone administration LAN is 172.17."50".0/24).
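
Purely as an illustration of the addressing convention described above (not of any provisioning tool actually in use), the following sketch derives the backup and administration subnets from a production LAN, using the Private zone values given in the example :

```python
# Illustration of the backup / administration LAN naming convention described above.
# The production third octet (y) is reused; backup uses 172.x, administration 172.(x+1).
import ipaddress

def derived_lans(production_cidr, backup_second_octet):
    """Return the (backup, administration) /24 networks for a 10.1.y.0/24 production LAN."""
    prod = ipaddress.ip_network(production_cidr)
    y = prod.network_address.packed[2]   # third octet of the production LAN
    backup = ipaddress.ip_network("172.%d.%d.0/24" % (backup_second_octet, y))
    admin = ipaddress.ip_network("172.%d.%d.0/24" % (backup_second_octet + 1, y))
    return backup, admin

# Private zone example from the text : production 10.1.50.0/24, backup x = 16.
print(derived_lans("10.1.50.0/24", 16))   # (172.16.50.0/24, 172.17.50.0/24)
```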

Hot standby of the production database is automatically taken into account by the SRS Oracle Transparent Network Substrate (TNS) configuration. Therefore, if the database is migrated to hot standby due to the failure of part of the system, SRS access is automatically switched to the new database.


------------------------
4 - Detailed infrastructure

The SRS modules play a central role in the back-end registry infrastructure. This is highlighted in terms of capital expenditure (CAPEX) by the fact that the SRS modules account for approximately 30% of the global CAPEX of the solution.

In the following description, “server” refers to either a physical or a virtual server.
Due to the very fast growth of performance in storage and processor technologies, the infrastructure described below could be replaced by a more powerful one, available for the same cost at the time of set-up.

At the application and system level, AFNIC’s SRS systems are shared with up to an estimated 20 TLDs of comparable scale and use case.

AFNIC has invested in a very efficient VMware vSphere virtualization infrastructure. It provides a flexible approach to recovery, both through the quick activation of a fresh server in case of local failure (cold standby) and through global failover to a mirrored infrastructure on another site.
This comes in addition to the natural redundancy provided by the load-balanced servers.

Nevertheless, internal protocols and best practices for server virtualization have shown that highly I/O-intensive (Input/Output) application servers are not good candidates for virtualization. The SRS is therefore hosted on virtualized infrastructure, with the exception of the database, which has a very high I/O rate and is hosted on dedicated physical infrastructure.

The whole SRS service is located in the primary datacenter used by AFNIC in production; the secondary datacenter serves as failover capacity.

The Front end is hosted on two load balanced virtual servers and two load balanced reverse proxies ensuring authentication of registrars.

The Business logic is hosted on two load balanced dedicated virtual servers. Scalability of these servers is ensured by quick resizing offered by virtualization technology if needed.

The Middleware is hosted on two load balanced dedicated virtual servers. It can be extended to as many servers as needed to ensure performance commensurate with the amount of traffic expected. The combined use of HAProxy load balancing and of a centralized lock mechanism ensures orderly queuing of each request in the system despite heavy load and parallelized middleware data access.
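
The lock mechanism itself is not detailed here; as a minimal sketch of the underlying idea, assuming a simple per-object keying by domain name (an assumption made purely for illustration), serializing concurrent operations on the same registry object could look like this :

```python
# Illustrative sketch only : serialize concurrent middleware operations that
# target the same registry object, so parallel workers do not interleave
# updates on one domain. The per-domain-name keying is an assumption.
import threading
from collections import defaultdict
from contextlib import contextmanager

_locks = defaultdict(threading.Lock)   # one lock per object key
_table_lock = threading.Lock()         # protects the lock table itself

@contextmanager
def object_lock(key):
    with _table_lock:
        lock = _locks[key]
    with lock:
        yield

def update_domain(name):
    with object_lock("domain:" + name):
        pass  # read-modify-write on the domain object through the middleware
```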

The scalability of all these servers is ensured by the quick resizing offered by virtualization technology, if needed.

All databases are based on Oracle technologies. The main database is logically replicated on two sites. Full local recovery processes are in place in case of loss of integrity, through the Oracle redo log functions, which provide full recovery by replaying the historized logged requests.

The whole SRS service is located in the primary Tier 3 datacenter used by AFNIC in production; the secondary datacenter serves as failover capacity. Continuity mechanisms at the datacenter level are described in Questions 34 (Geographic Diversity), 39 (Registry Continuity) and 41 (Failover Testing).

The detailed list of infrastructure involved is as follows :

This infrastructure is designed to host up to an estimated 20 TLDs of comparable scale and use case.

= Virtual servers =

EPP proxy : 2 servers
* Processor: 1 bi-core CPU
* Main memory: 8 GB of RAM
* Operating system: RedHat RHEL 6
* Disk space: 500 GB

EPP service : 2 servers
* Processor: 1 quad-core CPU
* Main memory: 16 GB of RAM
* Operating system: RedHat RHEL 6
* Disk space: 1 TB

Business logic : 2 servers
* Processor: 1 bi-core CPU
* Main memory: 16 GB of RAM
* Operating system: RedHat RHEL 6
* Disk space: 500 GB

Data Gateway : 2 servers
* Processor: 1 quad-core CPU
* Main memory: 16 GB of RAM
* Operating system: RedHat RHEL 6
* Disk space: 1 TB

= Data storage : see Question 33 (Database Capabilities) =

= Physical server =

Rate limiting database : 1 server
* Processor: 1 bi-core CPU
* Main memory: 8 GB of RAM
* Operating system: RedHat RHEL 6
* Disk space: 500 GB

Backup servers, backup libraries and the Web Whois server are shared with the global registry service provider infrastructure.

= Additional infrastructure =

Failover infrastructure : 6 servers
* 1 bi-core CPU, 8 GB of RAM, RedHat RHEL 6, 500 GB

Sandbox infrastructure : 6 servers
* 1 bi-core CPU, 8 GB of RAM, RedHat RHEL 6, 500 GB

Preproduction infrastructure : 1 server
* 1 quad-core CPU, 16 GB of RAM, RedHat RHEL 6, 1 TB


------------------------
5 - Rate limitation

To ensure the resiliency of the SRS, rate limitation and penalty mechanisms are in place.
Rate limitation and penalties are implemented directly on the front end server.

Access is rate limited through token-bucket algorithms, with per-IP rate-limiting data stored in a dedicated database; an illustrative sketch follows the list below.
Penalties are applied as follows :
* Any command that follows a login command is immediately executed, but the next one is only taken into account 2 seconds later. Subsequent commands are not penalized (unless they break one of the limitation rules).
* For the same domain name, domain:check commands cannot be chained more than 2 times every 4 seconds. Beyond this rate, a 2-second penalty is applied to the following domain:check commands (for the same domain name). For instance, it is possible, without any penalty, for a domain:check to follow a domain:create command that itself followed a first domain:check on the same domain name.
* On the other hand, a client issuing several domain:check commands on the same domain name needs to respect a 4-second delay between the first and the third call in order not to be penalized.
* Any domain:create command on an already existing domain name adds 2 seconds to the response time of that command.
* Any domain:info command on a domain name that is not in the registrar’s portfolio, and for which no auth_info is provided, adds 1 second to the response time of that command.
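
As an illustrative sketch of the token-bucket approach described above (not the production implementation, whose state is stored in the dedicated database), a per-IP check could look like the following; the capacity and refill rate shown are hypothetical values :

```python
# Illustrative token-bucket check for per-IP rate limiting; the capacity and
# refill rate are hypothetical, and production state lives in a database.
import time
from dataclasses import dataclass

@dataclass
class Bucket:
    capacity: float = 10.0       # maximum burst size (hypothetical)
    refill_rate: float = 2.0     # tokens added per second (hypothetical)
    tokens: float = 10.0
    last_refill: float = 0.0

def allow(bucket, now=None):
    """Consume one token if available; otherwise the request is delayed or penalized."""
    now = time.monotonic() if now is None else now
    elapsed = now - bucket.last_refill
    bucket.tokens = min(bucket.capacity, bucket.tokens + elapsed * bucket.refill_rate)
    bucket.last_refill = now
    if bucket.tokens >= 1.0:
        bucket.tokens -= 1.0
        return True
    return False

# One bucket per client IP, persisted in the dedicated rate-limiting database.
buckets = {}

def check_request(client_ip):
    bucket = buckets.setdefault(client_ip, Bucket(last_refill=time.monotonic()))
    return allow(bucket)
```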

The rate limiting database is hosted on one dedicated physical server. This server is not a single point of failure, since a failure of the rate limiting system does not affect the service (a standard uniform limitation is then applied instead of intelligent rate limiting).


------------------------
6 - Interconnectivity and synchronization with other systems

= Whois (RDDS) =

The Whois service is described in detail in Question 27. It is hosted on two servers directly connected to the main production database through a read-only API. Data updated by the SRS are immediately visible in the Whois, with no further synchronization needed. Rate limitation is applied to the RDDS service to avoid any load on the database due to direct Whois access. Hot standby of the production database is automatically taken into account by the Whois Oracle Transparent Network Substrate configuration. Therefore, if the SRS and database are migrated to hot standby due to the failure of part of the system, the Whois service is automatically switched to the new architecture.

= Back office / billing / Escrow =

The back-office, escrow and billing systems are hosted on a shared server. They operate directly on production data through the middleware layer to ensure data integrity, and can be considered fully synchronous applications. Hot standby of the production database is automatically taken into account by the middleware layer Transparent Network Substrate configuration. Therefore, if the SRS and database are migrated to hot standby due to the failure of part of the system, the back office and billing services are automatically switched to the new architecture.

= Monitoring =

Monitoring is operated through probes and agents scanning the systems with a 5-minute period. The monitoring system gets SNMP data from all servers described in the SRS architecture, as well as from a dedicated Oracle monitoring agent for the database. A specific probe for EPP, simulating a full domain creation, is also activated, again with a 5-minute period.

= Dispute resolution =

Any operation on domain names triggered in the context of a dispute resolution is made through a back-office tool (see the Back office / billing / Escrow section above).

= DNS publication =

DNS publication relies on a specific table of the production database, hosted on the same Oracle instance. These data are directly generated by the SRS system. Dynamic update batches are generated for each operation. These batches are used for DNS dynamic updates, and the whole dataset is used for full zone file generation, both directly from these production data. No further synchronization is needed. The details of the frequency and workflow for DNS publication are described in Question 35 (DNS) and Question 32 (Architecture). Hot standby of the production database is automatically taken into account by the DNS publication Transparent Network Substrate configuration. Therefore, if the SRS and database are migrated to hot standby due to the failure of part of the system, DNS publication is automatically switched to the new architecture.
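
As a purely illustrative sketch (the actual batch format used by the registry is not specified here), a domain creation could be turned into an nsupdate-style dynamic update batch as follows; the zone name, TTL and name server values are hypothetical placeholders :

```python
# Illustrative generation of an nsupdate-style batch for a newly created domain.
# The zone, TTL and name servers are placeholders, not the registry's actual format.
def creation_batch(domain, ns_records, zone="banque."):
    lines = ["zone " + zone]
    for ns in ns_records:
        lines.append("update add %s. 86400 IN NS %s." % (domain, ns))
    lines.append("send")
    return "\n".join(lines)

print(creation_batch("example.banque", ["ns1.registrar.example", "ns2.registrar.example"]))
# zone banque.
# update add example.banque. 86400 IN NS ns1.registrar.example.
# update add example.banque. 86400 IN NS ns2.registrar.example.
# send
```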


------------------------
7 - Performance and scalability

The Registry’s SRS offers high-level production SLAs and derives from a line of systems that has evolved over the last 15 years to successfully operate a set of French ccTLDs.

The Registry’s SRS is used to operate the .fr, .re, .yt, .pm, .tf and .wf TLDs. It is used in parallel by more than 800 registrars managing more than 2 million domain names.

AFNIC’s SRS is designed to meet ICANN’s Service-level requirements as specified in Specification 10 (SLA Matrix) attached to the Registry Agreement.

The actual current average performance of AFNIC’s SRS is :
* SRS availability : 99.4%
* SRS session-command RTT : 400 ms for 99.4% of requests
* SRS query-command RTT : 500 ms
* SRS transform-command RTT : 1.4 s over the availability period
* SRS maximum downtime : 2 hours per month

As described in Question 31 (Technical Overview) in relation to each of the phases of the TLD’s operations, the following transaction loads are expected on the SRS :
* launch phase : up to 150 queries per hour
* routine ongoing operations : up to 1,000 queries per hour

The system is designed to handle up to 50,000 domain names and up to 2 requests per second (i.e. 7,200 queries per hour).

The targeted TLD size being approximately 500 domain names after 3 years of operations, and the expected peak transaction rate being 1,000 queries per hour, this ensures that enough capacity is available to handle the launch phase, unexpected demand peaks and rapid scalability needs.

Capacity planning indicators are set up to anticipate exceptional growth of the TLD.
The technologies used enable quick upgrades in all areas :
* Servers : virtual resizing to add CPUs or disk space if the resource is available on the production ESX servers. If not, 2 additional spare ESX servers can be brought live if additional performance is needed.
* Database : the database capacity has been greatly oversized to avoid having to replace this highly capable physical server. Precise capacity planning ensures that sufficient lead time is available to acquire a new server if needed. A threshold of 40% of CPU use or of total storage capacity triggers an acquisition alert (a minimal illustration follows this list).
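
The alerting rule itself is simple; the sketch below merely restates the 40% threshold with hypothetical metric inputs, purely as an illustration :

```python
# Illustration of the 40% capacity-planning threshold described above.
# The metric values would come from the monitoring system; here they are parameters.
def capacity_alerts(cpu_use_pct, storage_use_pct, threshold=40.0):
    """Return the list of resources that have crossed the acquisition threshold."""
    alerts = []
    if cpu_use_pct >= threshold:
        alerts.append("database CPU at %.0f%% (threshold %.0f%%)" % (cpu_use_pct, threshold))
    if storage_use_pct >= threshold:
        alerts.append("database storage at %.0f%% (threshold %.0f%%)" % (storage_use_pct, threshold))
    return alerts

print(capacity_alerts(cpu_use_pct=35.0, storage_use_pct=42.0))
# ['database storage at 42% (threshold 40%)']
```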


------------------------
8 - Resources

Four categories of profiles are needed to run the Registry’s Technical Operations : Registry Operations Specialists (I), Registry Systems Administrators (II), Registry Software Developers (III) and Registry Expert Engineers (IV). These categories, their skill sets and the global availability of resources are detailed in Question 31 (Technical Overview of Proposed Registry), including the specific resource set and organisation providing 24/7 coverage and maintenance capacity.
The specific workload for SRS management is detailed below.

------------------------
8.1 - Initial implementation

The set-up is operated on the pre-installed virtualization infrastructure. It involves actions by the system, database and network administrators to create the virtual servers and install the application packages.

Then, developers, assisted by a team of experts and senior staff members, apply the proper configuration for the given TLD. Specific policy rules are configured and tested.

The initial implementation effort is estimated as follows :

Database Administrator 0.03 man.day
Network Administrator 0.03 man.day
System Administrator 0.03 man.day
Software Developer 0.10 man.day
Database Engineer 0.10 man.day
Software Engineer 0.20 man.day
DNS Expert Engineer 0.10 man.day

------------------------
8.2 - On-going maintenance

On-going maintenance of the SRS includes the integration of new policy rules, technology evolution, bug fixing, infrastructure evolution and failover testing.

Although all the defined technical profiles are needed for such on-going maintenance operations, on a regular basis the workload is mainly handled by the monitoring and development teams, for alert management and new functional developments respectively.

The on-going maintenance effort is estimated as follows, on a yearly basis :

Operations Specialist 0.40 man.day
Database Administrator 0.10 man.day
Network Administrator 0.10 man.day
System Administrator 0.10 man.day
Software Developer 0.20 man.day
Database Engineer 0.05 man.day
Network Engineer 0.05 man.day
System Engineer 0.05 man.day
Software Engineer 0.05 man.day

------------------------
gTLD : .MUTUELLE
Full Legal Name : Fédération Nationale de la Mutualité Française
E-mail suffix : mutualite.fr
Table of Contents

1 - Global description
2 - Shared Registration System (SRS) architecture
3 - SRS architecture diagram
4 - Detailed infrastructure
5 - Rate limitation
6 - Interconnectivity and synchronization with other systems
7 - Performance and scalability
8 - Resources
8.1 - Initial implementation
8.2 - On-going maintenance


------------------------
1 - Global description

As one of the critical registry functions, the SRS is part of the core of AFNIC’s back-end registry solution, as deployed to fit the needs of the .MUTUELLE TLD.
It both provides services to registrars and generates the data used for the DNS publication and resolution service. In that respect, it is responsible for meeting most of the SLAs. The following description details the architecture of the SRS from both an application and an infrastructure point of view.
This architecture is the same as the one used in production by AFNIC to operate the .fr zone. It has been fully functional for the last 15 years, meeting stringent SLAs while scaling from the management of a few thousand domain names to over 2 million by late 2011.


------------------------
2 - Shared Registration System (SRS) architecture

AFNIC’s SRS is based on a three-layer architecture : front end, business logic and middleware.
These three layers are supported by the data layer, which is described in detail in Question 33 (Database Capabilities).

= Front end : Extensible Provisioning Protocol (EPP) and extranet =

The automated front-end of the SRS is EPP.
The EPP interface and implementation comply with RFCs 3735 and 5730-5734; they are described in detail in Question 25 (EPP).
An extranet web interface offers the same functions as the EPP interface.
Both these interfaces are supported by the same middleware layer.

= Business logic : flexible policies =

The Business logic is configurable, so that the registry systems can be adjusted to the chosen registry policies. Various policy-related parameters, such as the redemption delay, access rate limiting and penalties, can be configured in this layer.
The Business logic also incorporates a scheduler which provides semi-automated processes with human validation, in order to address specific policy needs that cannot or should not be fully automated.

= Middleware : a guarantee of evolution and scalability =

The Middleware layer guarantees consistent, registry-oriented access to all TLD data. All registry applications operate through this layer in order to centralize object management rules. It enables access from different programming languages (Java, PHP and Perl in the AFNIC solution) under the same rules, making it easy to switch from one language to another in case of application refactoring or migration.

= Data =

The Data layer is the structured data repository for domains, contacts, operations and the historization of transactions, as well as registrar and contract data. It provides all the necessary resilience mechanisms to ensure 100% uptime and full recovery and backup.
It also provides a complete toolbox for fine-tuning the various applications. This layer is described in more detail in Question 33 (Database Capabilities).


------------------------
3 - SRS architecture diagram

[see attached diagram Q24_3_SRS_architecture_diagram.pdf]
Diagram : SRS architecture diagram
Description : This diagram shows the global interaction between the Internet, the DMZ (Demilitarized Zone) and the private network zones. The topology of the network and servers is illustrated, including the dedicated IP address scheme and network flows.

This diagram does not show the additional sandbox and preproduction services. These services are offered to registrars and to the back-end development team, respectively, to stabilize developments before production delivery. They are fully iso-functional with the SRS described above.

= SRS logical diagram =

Our infrastructure provides dual Internet Service Provider (ISP) connectivity in both IPv4 and IPv6 (Jaguar and RENATER), with redundant firewall and switching infrastructure. This part of the architecture is shared by all hosted TLDs.

The networking architecture dedicates separate LANs to administration, backup and production.

Servers are hosted in different network zones : a database zone for the databases, a private zone for servers not visible on the Internet, and a public zone for external servers visible in the DMZ. Dedicated zones are also set up for monitoring servers, administration servers or desktops, and backup servers.
Each server is load balanced and the service is not impacted by the loss of one server, since each server is sized to handle the whole traffic on its own.

Servers hosting the .MUTUELLE TLD are shared with up to an estimated 20 TLDs of comparable scale and use case.

= SRS physical diagram =

The IP scheme used is the following :

2001:67c:2218:1::4:0/64 for IPv6 Internet homing
192.134.4.0/24 for IPv4 Internet homing

= Production LAN =

192.134.4.0/24 for the public network IP range
10.1.50.0/24, 10.1.30.0/24 for the private network IP ranges, distributed over the zones described above.


= Backup LAN =

172.x.y.0/24 : x is different for each network zone. y is fixed to the third octet of the associated production LAN in the same zone (for example, the Private zone production LAN being 10.1."50".0/24, the Private zone backup LAN is 172.16."50".0/24).

= Administration LAN =

172.z.y.0/24 : z is equal to x+1, x being the value chosen for the corresponding Backup LAN in the same zone. y is fixed to the third octet of the associated production LAN in the same zone (for example, the Private zone production LAN being 10.1."50".0/24, the Private zone administration LAN is 172.17."50".0/24).

Hot standby of the production database is automatically taken into account by the SRS Oracle Transparent Network Substrate (TNS) configuration. Therefore, if the database is migrated to hot standby due to the failure of part of the system, SRS access is automatically switched to the new database.


------------------------
4 - Detailed infrastructure

The SRS modules play a central role in the back-end registry infrastructure. This is highlighted in terms of capital expenditure (CAPEX) by the fact that the SRS modules account for approximately 30% of the global CAPEX of the solution.

In the following description, “server” refers to either a physical or a virtual server.
Due to the very fast growth of performance in storage and processor technologies, the infrastructure described below could be replaced by a more powerful one, available for the same cost at the time of set-up.

At the application and system level, AFNIC’s SRS systems are shared with up to an estimated 20 TLDs of comparable scale and use case.

AFNIC has invested in a very efficient VMware vSphere virtualization infrastructure. It provides a flexible approach to recovery, both through the quick activation of a fresh server in case of local failure (cold standby) and through global failover to a mirrored infrastructure on another site.
This comes in addition to the natural redundancy provided by the load-balanced servers.

Nevertheless, internal protocols and best practices for server virtualization have shown that highly I/O-intensive (Input/Output) application servers are not good candidates for virtualization. The SRS is therefore hosted on virtualized infrastructure, with the exception of the database, which has a very high I/O rate and is hosted on dedicated physical infrastructure.

The whole SRS service is located in the primary datacenter used by AFNIC in production; the secondary datacenter serves as failover capacity.

The Front end is hosted on two load balanced virtual servers and two load balanced reverse proxies ensuring authentication of registrars.

The Business logic is hosted on two load balanced dedicated virtual servers. Scalability of these servers is ensured by quick resizing offered by virtualization technology if needed.

The Middleware is hosted on two load balanced dedicated virtual servers. It can be extended to as many servers as needed to ensure performance commensurate with the amount of traffic expected. The combined use of HAProxy load balancing and of a centralized lock mechanism ensures orderly queuing of each request in the system despite heavy load and parallelized middleware data access.

The scalability of all these servers is ensured by the quick resizing offered by virtualization technology, if needed.

All databases are based on Oracle technologies. The main database is logically replicated on two sites. Full local recovery processes are in place in case of loss of integrity, through the Oracle redo log functions, which provide full recovery by replaying the historized logged requests.

The whole SRS service is located in the primary Tier 3 datacenter used by AFNIC in production; the secondary datacenter serves as failover capacity. Continuity mechanisms at the datacenter level are described in Questions 34 (Geographic Diversity), 39 (Registry Continuity) and 41 (Failover Testing).

The detailed list of infrastructure involved is as follows :

This infrastructure is designed to host up to an estimated 20 TLDs of comparable scale and use case.

= Virtual servers =

EPP proxy : 2 servers
* Processor: 1 bi-core CPU
* Main memory: 8 GB of RAM
* Operating system: RedHat RHEL 6
* Disk space: 500 GB

EPP service : 2 servers
* Processor: 1 quad-core CPU
* Main memory: 16 GB of RAM
* Operating system: RedHat RHEL 6
* Disk space: 1 TB

Business logic : 2 servers
* Processor: 1 bi-core CPU
* Main memory: 16 GB of RAM
* Operating system: RedHat RHEL 6
* Disk space: 500 GB

Data Gateway : 2 servers
* Processor: 1 quad-core CPU
* Main memory: 16 GB of RAM
* Operating system: RedHat RHEL 6
* Disk space: 1 TB

= Data storage : see Question 33 (Database Capabilities) =

= Physical server =

Rate limiting database : 1 server
* Processor: 1 bi-core CPU
* Main memory: 8 GB of RAM
* Operating system: RedHat RHEL 6
* Disk space: 500 GB

Backup servers, backup libraries and the Web Whois server are shared with the global registry service provider infrastructure.

= Additional infrastructure =

Failover infrastructure : 6 servers
* 1 bi-core CPU, 8 GB of RAM, RedHat RHEL 6, 500 GB

Sandbox infrastructure : 6 servers
* 1 bi-core CPU, 8 GB of RAM, RedHat RHEL 6, 500 GB

Preproduction infrastructure : 1 server
* 1 quad-core CPU, 16 GB of RAM, RedHat RHEL 6, 1 TB


------------------------
5 - Rate limitation

To ensure the resiliency of the SRS, rate limitation and penalty mechanisms are in place.
Rate limitation and penalties are implemented directly on the front end server.

Access is rate limited through token-bucket algorithms, with per-IP rate-limiting data stored in a dedicated database.
Penalties are applied as follows :
* Any command that follows a login command is immediately executed, but the next one is only taken into account 2 seconds later. Subsequent commands are not penalized (unless they break one of the limitation rules).
* For the same domain name, domain:check commands cannot be chained more than 2 times every 4 seconds. Beyond this rate, a 2-second penalty is applied to the following domain:check commands (for the same domain name). For instance, it is possible, without any penalty, for a domain:check to follow a domain:create command that itself followed a first domain:check on the same domain name.
* On the other hand, a client issuing several domain:check commands on the same domain name needs to respect a 4-second delay between the first and the third call in order not to be penalized.
* Any domain:create command on an already existing domain name adds 2 seconds to the response time of that command.
* Any domain:info command on a domain name that is not in the registrar’s portfolio, and for which no auth_info is provided, adds 1 second to the response time of that command.

The rate limiting database is hosted on one dedicated physical server. This server is not a single point of failure, since a failure of the rate limiting system does not affect the service (a standard uniform limitation is then applied instead of intelligent rate limiting).


------------------------
6 - Interconnectivity and synchronization with other systems

= Whois (RDDS) =

The Whois service is described in detail in Question 27. It is hosted on two servers directly connected to the main production database through a read-only API. Data updated by the SRS are immediately visible in the Whois, with no further synchronization needed. Rate limitation is applied to the RDDS service to avoid any load on the database due to direct Whois access. Hot standby of the production database is automatically taken into account by the Whois Oracle Transparent Network Substrate configuration. Therefore, if the SRS and database are migrated to hot standby due to the failure of part of the system, the Whois service is automatically switched to the new architecture.

= Back office / billing / Escrow =

The back-office, escrow and billing systems are hosted on a shared server. They operate directly on production data through the middleware layer to ensure data integrity, and can be considered fully synchronous applications. Hot standby of the production database is automatically taken into account by the middleware layer Transparent Network Substrate configuration. Therefore, if the SRS and database are migrated to hot standby due to the failure of part of the system, the back office and billing services are automatically switched to the new architecture.

= Monitoring =

Monitoring is operated through probes and agents scanning the systems with a 5-minute period. The monitoring system gets SNMP data from all servers described in the SRS architecture, as well as from a dedicated Oracle monitoring agent for the database. A specific probe for EPP, simulating a full domain creation, is also activated, again with a 5-minute period.

= Dispute resolution =

Any operation on domain names triggered in the context of a dispute resolution is made through a back-office tool (see the Back office / billing / Escrow section above).

= DNS publication =

DNS publication relies on a specific table of the production database, hosted on the same Oracle instance. These data are directly generated by the SRS system. Dynamic update batches are generated for each operation. These batches are used for DNS dynamic updates, and the whole dataset is used for full zone file generation, both directly from these production data. No further synchronization is needed. The details of the frequency and workflow for DNS publication are described in Question 35 (DNS) and Question 32 (Architecture). Hot standby of the production database is automatically taken into account by the DNS publication Transparent Network Substrate configuration. Therefore, if the SRS and database are migrated to hot standby due to the failure of part of the system, DNS publication is automatically switched to the new architecture.


------------------------
7 - Performance and scalability

The Registry’s SRS offers high-level production SLAs and derives from a line of systems that has evolved over the last 15 years to successfully operate a set of French ccTLDs.

The Registry’s SRS is used to operate the .fr, .re, .yt, .pm, .tf and .wf TLDs. It is used in parallel by more than 800 registrars managing more than 2 million domain names.

AFNIC’s SRS is designed to meet ICANN’s Service-level requirements as specified in Specification 10 (SLA Matrix) attached to the Registry Agreement.

The actual current average performance of AFNIC’s SRS is :
* SRS availability : 99.4%
* SRS session-command RTT : 400 ms for 99.4% of requests
* SRS query-command RTT : 500 ms
* SRS transform-command RTT : 1.4 s over the availability period
* SRS maximum downtime : 2 hours per month

As described in Question 31 (Technical Overview) in relation to each of the phases of the TLD’s operations, the following transaction loads are expected on the SRS :
* launch phase : up to 400 queries per hour
* routine ongoing operations : up to 1,500 queries per hour

The system is designed to handle up to 50,000 domain names and up to 2 requests per second (i.e. 7,200 queries per hour).

The targeted TLD size being approximately 1,500 domain names after 3 years of operations, and the expected peak transaction rate being 1,500 queries per hour, this ensures that enough capacity is available to handle the launch phase, unexpected demand peaks and rapid scalability needs.

Capacity planning indicators are set up to anticipate exceptional growth of the TLD.
The technologies used enable quick upgrades in all areas :
* Servers : virtual resizing to add CPUs or disk space if the resource is available on the production ESX servers. If not, 2 additional spare ESX servers can be brought live if additional performance is needed.
* Database : the database capacity has been greatly oversized to avoid having to replace this highly capable physical server. Precise capacity planning ensures that sufficient lead time is available to acquire a new server if needed. A threshold of 40% of CPU use or of total storage capacity triggers an acquisition alert.


------------------------
8 - Resources

Four categories of profiles are needed to run the Registry’s Technical Operations : Registry Operations Specialists (I), Registry Systems Administrators (II), Registry Software Developers (III) and Registry Expert Engineers (IV). These categories, their skill sets and the global availability of resources are detailed in Question 31 (Technical Overview of Proposed Registry), including the specific resource set and organisation providing 24/7 coverage and maintenance capacity.
The specific workload for SRS management is detailed below.

------------------------
8.1 - Initial implementation

The set-up is operated on the pre-installed virtualization infrastructure. It involves actions by the system, database and network administrators to create the virtual servers and install the application packages.

Then, developers, assisted by a team of experts and senior staff members, apply the proper configuration for the given TLD. Specific policy rules are configured and tested.

The initial implementation effort is estimated as follows :

Database Administrator 0.03 man.day
Network Administrator 0.03 man.day
System Administrator 0.03 man.day
Software Developer 0.10 man.day
Database Engineer 0.10 man.day
Software Engineer 0.20 man.day
DNS Expert Engineer 0.10 man.day

------------------------
8.2 - On-going maintenance

On-going maintenance of the SRS includes the integration of new policy rules, technology evolution, bug fixing, infrastructure evolution and failover testing.

Although all the defined technical profiles are needed for such on-going maintenance operations, on a regular basis the workload is mainly handled by the monitoring and development teams, for alert management and new functional developments respectively.

The on-going maintenance effort is estimated as follows, on a yearly basis :

Operations Specialist 0.40 man.day
Database Administrator 0.10 man.day
Network Administrator 0.10 man.day
System Administrator 0.10 man.day
Software Developer 0.20 man.day
Database Engineer 0.05 man.day
Network Engineer 0.05 man.day
System Engineer 0.05 man.day
Software Engineer 0.05 man.day