
Three Load-Balancing Techniques for DDoS Mitigation and Web Application Firewall

I was talking to a customer several weeks ago about protection for their public-facing websites and services, and they asked, “Can you do something like a BGP anycast IP address that is present in all of your Points of Presence, but then load-balances to our different datacenters to protect us from DDoS, web application attacks, and datacenter and server outages?” And the answer is “Yes, yes, and more yes.” Even better, customers can set it up today with configuration options that exist on both UltraWAF and the proxy option for UltraDDoS Protect.

The flexibility of UltraWAF and UltraDDoS Protect.

Both solutions function as proxies, but at different levels of the OSI model. UltraDDoS Protect uses a layer 3/4 proxy that is similar in function to Destination Network Address Translation (DNAT). This is a deployment option for DDoS mitigation customers with services, such as cloud-hosted applications, where they can’t or don’t want to use a BGP-routed solution. We also support a full application-layer (OSI layers 5-7) proxy for HTTP and HTTPS. This application-layer proxy is used primarily for UltraWAF but can also be used for UltraDDoS Protect.
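
To make the distinction concrete, here is a minimal Python sketch of what a layer 3/4 proxy does: it accepts a TCP connection and copies bytes to a back end without ever parsing the HTTP inside, much like DNAT. This is purely illustrative (not how either product is implemented), and the listener port and back-end hostname are hypothetical.

```python
# Minimal sketch of a layer 3/4-style TCP relay (conceptually similar to DNAT):
# bytes are copied between the client and the back end with no awareness of
# the application protocol. Listener and back end below are hypothetical.
import socket
import threading

LISTEN = ("0.0.0.0", 8443)                   # hypothetical front-end port
BACKEND = ("www-backend.example.com", 443)   # hypothetical back end

def pipe(src, dst):
    """Copy bytes in one direction until either side closes."""
    try:
        while (chunk := src.recv(4096)):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        dst.close()

def handle(client):
    backend = socket.create_connection(BACKEND)
    # One thread per direction; the relay never inspects layers 5-7.
    threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
    threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

with socket.create_server(LISTEN) as server:
    while True:
        conn, _addr = server.accept()
        handle(conn)
```

An application-layer proxy, by contrast, terminates the connection and understands each request and response, which is what makes WAF and Bot Management inspection possible.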

Configurations inside a proxy.

Inside a proxy, there are several things that can be configured: 

  • A front end, which is a client-facing service port and a protocol. 
    • TCP, UDP, DNS, and SSL_Bridge protocols are layer 3/4. 
    • HTTP and SSL protocols are application-layer proxies at layers 5-7. “SSL” adds TLS termination on the proxy and requires a certificate. Either of these protocols can have a WAF and/or Bot Management policy applied. 
  • One or more back ends and service ports (maximum of 4), which are the authoritative servers for the application. These can be IP addresses, but best practice is to use a DNS entry. We can also translate ports on the proxy (the front-end port can differ from the back-end port). 
  • Monitoring of the back end for availability and failover. Methods range from ICMP (for the layer 3/4 proxy) to HTTP/HTTPS, depending on the front-end service. 
  • A load balancer method, which is the algorithm that decides which back end to use. This is ignored for a single back end. There are 13 algorithms (two of them are sketched in Python after this list): 
    • Destination IP Hash
    • Domain Hash 
    • Least Bandwidth 
    • Least Connection (default) 
    • Least Packets 
    • Least Requests 
    • Least Response Time 
    • LRTM 
    • Round Robin 
    • Source Destination IP Hash 
    • Source IP Hash 
    • Source IP Source Port Hash 
    • URL Hash 

  • Load Balancer Persistence Type, which is a way of consistently sending the same user to the same back-end server. For the layer 3/4 proxy, this is based on the source IP address. For application-layer proxies, it can also be cookie-based. 
  • Surge Protection and TCP Buffering, which keep the proxy from overloading the back end with retries when the back end is having problems keeping up with traffic volume. 
  • X-Forwarded-For, an HTTP header that contains the IP address from which the proxy received the request. This is only available for an application-layer proxy. 
  • When a new proxy service is created, a Virtual IP address (VIP) is automatically assigned to the asset. 
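
As a rough illustration of how two of the load-balancer methods above behave, and of source-IP persistence, here is a short Python sketch. It is not the proxy’s actual implementation, and the back-end names are hypothetical.

```python
# Illustrative sketch (not the product's implementation) of two of the
# load-balancer methods listed above: Least Connection and Source IP Hash.
# Back-end names are hypothetical.
import hashlib

BACKENDS = ["dc1.example.com", "dc2.example.com",
            "dc3.example.com", "dc4.example.com"]    # a proxy asset allows up to 4
active_connections = {b: 0 for b in BACKENDS}

def least_connection():
    """Least Connection (the default): pick the back end with the fewest
    connections currently in flight."""
    return min(BACKENDS, key=lambda b: active_connections[b])

def source_ip_hash(client_ip: str):
    """Source IP Hash: the same source address always maps to the same back
    end, which is also how layer 3/4 persistence behaves."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]

print(least_connection())               # e.g. dc1.example.com
print(source_ip_hash("198.51.100.7"))   # stable choice for this source IP
```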

Onboarding to a proxy.

Typical onboarding to a proxy involves the following: 

  • Create a copy of the existing DNS record for the service – e.g., www.foo.com gets cloned to a back-end entry. This should be something non-guessable, such as www-backend-<randomvalue>. For example, www-backend-pxep9ydk.foo.com. 
  • Test that the service is available through the front end via the VIP address (a test sketch follows this list). 
  • Optionally, set up a WAF and Bot Management policy and assign it to the asset. 
  • Use /etc/hosts entries to test that the service functions correctly and that nothing has broken. 
  • Change the DNS record for the service to point to the VIP address. 
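
For the testing steps above, one approach is to send a request straight to the VIP while presenting the production hostname, which is roughly what an /etc/hosts override accomplishes. The sketch below assumes the third-party requests library, and the VIP and hostname values are placeholders.

```python
# Rough equivalent of an /etc/hosts override for pre-cutover testing:
# connect to the VIP directly but present the production hostname.
# The VIP and hostname below are placeholders; "requests" is a third-party
# library assumed for illustration.
import requests

VIP = "198.51.100.16"      # the VIP assigned to your proxy asset
HOSTNAME = "www.foo.com"   # production name that will later point at the VIP

resp = requests.get(
    f"https://{VIP}/",
    headers={"Host": HOSTNAME},
    verify=False,          # the certificate is issued for the hostname, not the VIP
    timeout=10,
)
print(resp.status_code, resp.headers.get("Server"))
```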

Sample configurations in UltraWAF and UltraDDoS Protect.

We have two sample configurations below:

  1. Layer 3 proxy
  2. Application-layer proxy

Layer 3 proxy.

For layer 3, we have the following configured:

  • The Asset Name (Friendly name) is “Load Balancing Demo”. 
  • The assigned VIP is 156.154.121.16. 
  • There is a single Layer 3/4 front end enabled for TCP 443 (a connectivity check for this front end is sketched below). 
  • There are 4 back ends set up as a server pool with monitoring enabled. 
  • We’re using the default load-balancer method, which is “Least Connection”. 
  • Persistence is via source IP address. 
  • Surge Protection and TCP Buffering are enabled. 
[Screenshot: Ultra security portal – Layer 3 configuration]
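
Since the front end here is plain TCP on port 443 rather than the SSL protocol, TLS should pass through the proxy and be terminated by the back end. A quick availability check, then, is to complete a TLS handshake through the VIP; the sketch below uses the VIP from this example and a hypothetical hostname for SNI.

```python
# Quick availability check against the layer 3/4 front end on TCP 443:
# complete a TLS handshake through the VIP. The hostname is hypothetical
# and is sent as SNI so the back end can present the matching certificate.
import socket
import ssl

VIP = "156.154.121.16"
HOSTNAME = "www.foo.com"

ctx = ssl.create_default_context()
ctx.check_hostname = False       # connecting by IP on purpose
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((VIP, 443), timeout=10) as tcp:
    with ctx.wrap_socket(tcp, server_hostname=HOSTNAME) as tls:
        print("TLS established:", tls.version(), tls.cipher())
```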

Application-layer proxy.

For application-layer proxy, we have the following configured: 

  • Front end on HTTP port 80. 
  • 4 back-end servers as a load-balancing pool with monitoring enabled. 
  • We’re using the default load-balancer method, which is “Least Connection”. 
  • Persistence is via HTTP cookie. 
  • Surge Protection and TCP Buffering are enabled. 
  • X-Forwarded-For uses the default header name (a back-end sketch that reads this header appears below). 
[Screenshot: Ultra security portal – Application-layer proxies]
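
Because the proxy inserts X-Forwarded-For, a back-end application should read that header to recover the original client IP instead of logging the proxy’s address. Below is a minimal standard-library sketch of such a handler; it is illustrative only, and a real back end would only trust the header on requests that actually arrive from the proxy.

```python
# Minimal sketch of a back-end handler that recovers the original client IP
# from the X-Forwarded-For header added by the application-layer proxy.
# Illustrative only; a real deployment would restrict trust to the proxy's IPs.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Without the header we would only see the proxy's address.
        forwarded = self.headers.get("X-Forwarded-For")
        client_ip = forwarded.split(",")[0].strip() if forwarded else self.client_address[0]
        body = f"client ip: {client_ip}\n".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```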

DNS-based load balancing inside UltraDNS and UltraDNS2.

But wait, there’s more!

Because the back end can be a Fully Qualified Domain Name (FQDN), the DNS-based load balancing inside UltraDNS and UltraDNS2, our full-featured and distributed authoritative DNS solutions, will work. The advantage of this is that you can use DNS-based load balancing to designate active-active and active-passive datacenters and servers. Another advantage is that you can have more than the 4 back ends that a proxy asset allows you to list. The downside is that you lose the persistence mechanisms (source IP address and HTTP cookie) that you have in a proxy asset. 
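
One way to see this in action is to resolve the pool’s name repeatedly: the answers reflect whichever records the DNS-based load balancer currently considers active, and the proxy simply connects to whatever it gets back. Here is a small Python sketch using the example pool name from later in this post (keep in mind that local resolver caching can mask changes between lookups).

```python
# Sketch: the proxy's single FQDN back end is itself a DNS load-balancing
# pool, so repeated lookups can return different answers as the pool's
# health checks and failover state change. Pool name is the example from
# this post; local resolver caching can hide changes between queries.
import socket
import time

POOL_FQDN = "www-pool.foo.com"

for _ in range(3):
    _name, _aliases, addresses = socket.gethostbyname_ex(POOL_FQDN)
    print(addresses)   # A records currently returned for the pool
    time.sleep(5)
```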

In the example below, we have a single back end listed in the proxy asset. However, that back end is a load-balancing pool inside of UltraDNS. Here I have the following configured: 

  • Front end on SSL port 443. 
  • A single back-end server listed by FQDN. 
  • Load Balancer Balance Method is not used for a single back end. 
  • Persistence is not used for a single back end. 
  • Surge Protection and TCP Buffering are enabled. 
  • X-Forwarded-For uses the default header name. 
[Screenshot: UltraDNS – DNS-based load balancing]

And then I set up a SiteBacker pool on UltraDNS. 

Here I have the following configured: 

  • One SiteBacker pool for www-pool.foo.com. 
  • 4 entries, one for each of the servers/datacenters, each a CNAME that resolves to the appropriate IP address. 
    • 2 active datacenters/servers (dc1 and dc2) and 2 failover datacenters/servers (dc3 and dc4). 
  • The round-robin load-balancing algorithm selected. 
  • Failover and polling enabled. 

Additionally, UltraDNS can support sub-pools, so dc1.foo.com is another pool that then load-balances between individual servers.  

Learn more.

In this blog post, we’ve described the powerful traffic-management capabilities available through UltraDDoS Protect and UltraWAF. We believe this gives our customers a huge advantage when it comes to ensuring the reliability of their sites and services. 

To learn more about how UltraDDoS Protect and UltraWAF can transform your ability to manage traffic and protect your online experience, visit our Solutions Overview page.  

Last Updated: March 27, 2024
