Tomcat Behind Apache

 

When running Tomcat behind a load balancer or proxy, the default behavior is for Tomcat to log the proxy or load balancer IP address as the client IP. The desired behavior is to log the actual client IP. For this, Tomcat provides a valve known as the RemoteIpValve, a port of Apache's mod_remoteip module. The primary purpose of this valve is to treat the 'useragent which initiated the request as the originating useragent' for 'the purposes of authorization and logging'. Fronting Tomcat with Apache prevents direct web access to the Tomcat server (which is generally considered insecure) while still making Tomcat web applications publicly available via requests to the Apache web server. This setup has served well for deploying our (mostly eXist-driven) web applications.


Summary


Below are some of the reasons for running your Tomcat app server behind Apache: you get high availability when load is balanced between multiple Tomcat app servers, and processing and delivery of static content speeds up with Apache as the front end. Note the trade-off, though: proxying all requests through Apache httpd can cost at least 50% of Tomcat's response performance, which is why some advise running Tomcat standalone rather than connected through Apache httpd.

Running a cluster of Tomcat servers behind the Web server can be a demanding task if you wish to achieve maximum performance and stability. This article describes best practices for accomplishing that.
By Mladen Turk

Fronting Tomcat

One might ask: why put the Web server in front of Tomcat at all? Thanks to the latest advances in Java Virtual Machine (JVM) technology and the Tomcat core itself, standalone Tomcat is quite comparable in performance to native web servers. Even when delivering static content it is only about 10% slower than recent Apache 2 web servers.
The answer is: scalability.

Tomcat can serve many concurrent users by assigning a separate thread of execution to each concurrent client connection. It does that nicely, but there is a problem when the number of concurrent connections rises: the time the operating system spends managing those threads degrades overall performance, and the JVM ends up spending more time managing and switching threads than doing the real job of serving requests.

Besides connectivity there is one more significant problem, and it is caused by the applications running on Tomcat. A typical application will process client data, access the database, do some calculations and present the data back to the client. All that can be a time-consuming job that in most cases must finish inside half a second to preserve the user's perception of a working application. Simple math shows that with a 10 ms application response time you will be able to serve at most 50 concurrent users per CPU before your users start complaining. So what do you do if you need to support more users? The simplest thing is to buy faster hardware, add more CPUs, or add more boxes. Two 2-way boxes are usually cheaper than one 4-way box, so adding more boxes is generally a cheaper solution than buying a mainframe.

The first thing to do to ease the load on Tomcat is to use the Web server for serving static content such as images.

Figure 1. Generic configuration

Figure 1 shows the simplest possible configuration scenario. Here the Web server is used to deliver static content while Tomcat only does the real job: serving the application. In most cases this is all you will need. With a 4-way box and a 10 ms application time you will be capable of serving 200 concurrent users, giving 3.5 million hits per day, which is by all means a respectable number.

For that kind of load you generally do not need the Web server in front of Tomcat. But here comes the second reason for putting the Web server in front: creating a DMZ (demilitarized zone). Putting the Web server on a host inserted as a 'neutral zone' between a company's private network and the internet (or some other outside public network) gives the applications hosted on Tomcat the capability to access company private data, while securing access to other private resources.

Figure 2. Secure generic configuration

Besides having a DMZ and secure access to a private network, there can be many other factors, such as the need for custom authentication.

If you need to handle more load you will eventually have to add more Tomcat application servers, either because your client load simply cannot be handled by a single box or because you need some sort of failover in case one of the nodes breaks.
Figure 3. Load balancing configuration

A configuration containing multiple Tomcat application servers needs a load balancer between the web server and Tomcat. For the Apache 1.3, Apache 2.0 and IIS web servers you can use the Jakarta Tomcat Connector (also known as JK), because it offers both software load balancing and sticky sessions. For the upcoming Apache 2.1/2.2, use the advanced mod_proxy_balancer, a new module designed for and integrated within the Apache httpd core.

Calculating Load

When determining the number of Tomcat servers you will need to satisfy the client load, the first and major task is determining the Average Application Response Time (hereafter AART). As said before, to satisfy the user experience the application has to respond within half a second. The content received by the client browser usually triggers a couple of physical requests to the Web server (e.g. images): a web page usually consists of HTML and image data, so the client issues a series of requests, and the time in which all of this gets processed and delivered is the AART. To get the most out of Tomcat you should limit the number of concurrent requests to 200 per CPU.

So we can come up with a simple formula to calculate the maximum number of concurrent connections a physical box can handle:
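The formula itself appears to have been an image in the original and is missing from this copy. From the surrounding text (a half-second response budget, a 10 ms AART serving 50 users per CPU, and the suggested 200-requests-per-CPU cap) it can be reconstructed as a small sketch:

```python
def max_concurrent(aart_ms, cpus):
    """Maximum concurrent connections a physical box can handle.

    Reconstructed from the text: each user allows a 500 ms response
    budget, so one CPU can serve 500 / AART requests concurrently,
    capped at the suggested 200 concurrent requests per CPU.
    """
    per_cpu = min(500.0 / aart_ms, 200)
    return round(per_cpu * cpus)

print(max_concurrent(10, 1))   # 50 users per CPU, as in the text
print(max_concurrent(10, 4))   # 200 on a 4-way box (Figure 1 example)
```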

The other thing you must take care of is the network throughput between the Web server and the Tomcat instances. This introduces a new variable called Average Application Response Size (hereafter AARS), the number of bytes of all content on a web page presented to the user. On a standard 100 Mbps network card, at 8 bits per byte, the maximum theoretical throughput is 12.5 MBytes per second.

For a 20 KB AARS this gives a theoretical maximum of 625 concurrent requests per second. You can add more cards or use faster 1 Gbps hardware if you need to handle more load.
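The arithmetic behind this number can be sketched the same way (a reconstruction of the missing formula, not the original):

```python
def max_requests_by_network(throughput_mbps, aars_kb):
    """Requests per second the wire between Apache and Tomcat can sustain.

    100 Mbps / 8 bits-per-byte = 12.5 MB/s of theoretical throughput,
    divided by the Average Application Response Size (AARS).
    """
    bytes_per_sec = throughput_mbps / 8 * 1_000_000
    return round(bytes_per_sec / (aars_kb * 1000))

print(max_requests_by_network(100, 20))   # 625, as stated in the text
```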



The formulas above will give you a rudimentary estimation of the number of Tomcat boxes and CPUs you will need to handle the desired number of concurrent client requests. If you have to plan the configuration without having the actual hardware, the closest you can get is to measure the AART on a test platform and then compare the hardware vendors' SPECmarks.

Fronting Tomcat with Apache

If you need to put Apache in front of Tomcat, use Apache 2 with the worker MPM. You can use Apache 1.3 or Apache 2 with the prefork MPM for handling simple configurations like the one shown in Figure 1. If you need to front several Tomcat boxes and implement load balancing, use Apache 2 with the worker MPM compiled in.

An MPM, or Multi-Processing Module, is a core feature of Apache 2; it is responsible for binding to network ports on the machine, accepting requests, and dispatching children to handle them. MPMs must be chosen during configuration and compiled into the server. Compilers are capable of optimizing a lot of functions if threads are used, but only if they know that threads are being used. Because some MPMs use threads on Unix and others don't, Apache will always perform better if the MPM is chosen at configuration time and built into Apache.

The worker MPM offers higher scalability compared to the standard prefork mechanism, where each client connection creates a separate Apache process. It combines the best of both worlds, maintaining a set of child processes, each with its own set of threads. There are sites running 10K+ concurrent connections using this technology.
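As a sketch, a worker MPM section in httpd.conf might look like the following (the numbers are illustrative, not from the article; MaxClients must equal ServerLimit times ThreadsPerChild):

```apache
<IfModule worker.c>
    ServerLimit         16
    StartServers         2
    ThreadsPerChild     32
    MaxClients         512    # 16 processes x 32 threads each
    MinSpareThreads     25
    MaxSpareThreads    128
</IfModule>
```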


Connecting to Tomcat

In the simplest scenario, when you need to connect to a single Tomcat instance, you can use mod_proxy, which comes as part of every Apache distribution. However, using the mod_jk connector will provide approximately double the performance. There are several reasons for that, the major one being that mod_jk manages a persistent connection pool to Tomcat, thus avoiding opening and closing a connection to Tomcat for each request. The other reason is that mod_jk uses a custom protocol named AJP, thereby avoiding assembling and disassembling, for each request, header parameters that have already been processed on the Web server. You can find more details about the AJP protocol on the Jakarta Tomcat Connectors site.

For those reasons you should use mod_proxy only for low-load sites or for testing purposes. From now on I'll focus on mod_jk for fronting Tomcat with Apache, because it offers better performance and scalability.
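As a sketch, a minimal single-Tomcat mod_jk setup (paths, the application context and the worker name are illustrative) consists of a few httpd.conf directives plus a workers.properties file:

```apache
# --- httpd.conf ---
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkLogFile     logs/mod_jk.log
JkMount /myapp/* node1

# --- conf/workers.properties ---
worker.list=node1
worker.node1.type=ajp13
worker.node1.host=localhost
worker.node1.port=8009
```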

One of the major design parameters when fronting Tomcat with Apache, or any other Web server, is synchronizing the maximum number of concurrent connections. Developers often leave the default configuration values in both Apache and Tomcat and are then faced with spurious error messages in their log files. The reason is very simple: Tomcat and Apache can each accept only a predefined number of connections. If those two configuration parameters differ, usually with Tomcat having the lower configured number of connections, you will face sporadic connection errors. If the load gets even higher, your users will start receiving HTTP 500 server errors even if your hardware is capable of dealing with the load.


Determining the maximum number of connections to Tomcat in the case of the Apache web server depends on the MPM used:

MPM       Configuration parameter
Prefork   MaxClients
Worker    MaxClients
WinNT     ThreadsPerChild
Netware   MaxThreads

On the Tomcat side, the configuration parameter that limits the number of allowed concurrent requests is maxProcessors, with a default value of 20. This number needs to be equal to the MPM configuration parameter.
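For example, assuming Apache's worker MPM is configured with MaxClients 256, the matching AJP connector in Tomcat's server.xml (Tomcat 5.x attribute names, where maxProcessors applies; the other attributes are illustrative) would be sized the same way:

```xml
<!-- AJP connector sized to match Apache's MaxClients -->
<Connector port="8009" protocol="AJP/1.3"
           minProcessors="5" maxProcessors="256"
           enableLookups="false" redirectPort="8443" />
```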


Load balancing

Load balancing is one of the ways to increase the number of concurrent client connections to the application server. There are two types of load balancers you can use: hardware load balancers and software load balancers. If you are using load-balancing hardware instead of mod_jk or a proxy, it must support a compatible passive or active cookie persistence mechanism, and SSL persistence.

Mod_jk has an integrated virtual load balancer worker that can contain any number of physical workers, i.e. physical nodes. Each of the nodes can have its own balance factor, or lbfactor: the worker's work quota, i.e. how much work we expect this worker to do. This parameter usually depends on the hardware topology itself, and it makes it possible to create a cluster with different hardware node configurations. Each lbfactor is compared to all other lbfactors in the cluster, and their relationship gives the actual load distribution. If the lbfactors are equal, the workers' load will be equal as well (e.g. 1-1, 2-2, 50-50, etc.). If the first node has lbfactor 2 while the second has lbfactor 1, the first node will receive twice as many requests as the second one. This asymmetric load configuration makes it possible to have nodes with different hardware architectures.
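In workers.properties, such an asymmetric two-node balancer might be sketched like this (host names are illustrative; older JK releases spell the list attribute balanced_workers):

```apache
worker.list=loadbalancer
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2

worker.node1.type=ajp13
worker.node1.host=host1
worker.node1.port=8009
worker.node1.lbfactor=2    # receives twice the requests of node2

worker.node2.type=ajp13
worker.node2.host=host2
worker.node2.port=8009
worker.node2.lbfactor=1
```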

In the simplest load balancer topology, with only two nodes in the cluster, the number of concurrent connections on the web server side can be up to twice as high as on a particular node. But there is a catch.

The statement above means that the sum of the connections allowed on the particular nodes does not simply equal the total number of connections allowed. Each node has to allow a slightly higher number of connections than its share of the desired total, usually about 20% higher.

So if you wish to have 100 concurrent connections with two nodes, each node will have to handle a maximum of 60 connections. The 20% margin factor is empirical and depends on the Apache server used. For prefork MPMs it can rise up to 50%, while for NT or Netware its value is 0%. The reason is that each child process manages its own balance statistics, which produces this 20% error on multi-child-process web servers.

The minimum configuration for the three-node cluster shown in the example above gives a 25%-50%-25% distribution of the load, meaning that node2 gets as much load as the other two members together. It also imposes the following maxProcessors for each particular node in the case of MaxClients=200.

Using simple math the load should be 50-100-50, but we needed to add the 20% load distribution error. In case this 20% additional margin is not sufficient, you will need to set a higher value, up to 50%. Of course, the average number of connections for each particular node will still follow the load balancer distribution quota.
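The per-node numbers in this section follow directly from the lbfactors and the 20% margin; as a quick sketch (a reconstruction, not from the original article):

```python
def max_processors(max_clients, lbfactors, margin=0.2):
    """Per-node maxProcessors for a load-balanced cluster.

    Each node's share is its lbfactor divided by the sum of all
    lbfactors, inflated by the balance-statistics error margin
    (20% by default, as described in the text).
    """
    total = sum(lbfactors)
    return [round(max_clients * f / total * (1 + margin)) for f in lbfactors]

print(max_processors(200, [1, 2, 1]))   # -> [60, 120, 60]
print(max_processors(100, [1, 1]))      # -> [60, 60], the two-node example
```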



Sticky sessions and failover

One of the major problems with having multiple backend application servers is determining the client-server relationship. Once the client makes a request to a server application that needs to track user actions over a designated time period, some sort of state has to be enforced on top of the stateless HTTP protocol. Tomcat issues a session identifier that uniquely distinguishes each user. The problem with that session identifier is that it does not carry any information about the particular Tomcat instance that issued it.

Tomcat in that case adds an extra configurable mark, the jvmRoute, to that session. The jvmRoute can be any name that uniquely identifies the particular Tomcat instance in the cluster. On the other side of the wire, mod_jk will use that jvmRoute as the name of the worker in its load balancer list. This means that the name of the worker and the jvmRoute must be equal.
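For example, on the node whose mod_jk worker is named node1, the Engine element in server.xml would carry the same name (attribute values illustrative):

```xml
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">
```

On the Apache side, the balancer in workers.properties must then contain a worker named node1, so that session identifiers suffixed with .node1 are routed back to this instance.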

When you have multiple nodes in a cluster you can improve your application's availability by implementing failover. Failover means that if the elected node cannot fulfill the request, another node will be selected automatically. With three nodes you are actually doubling your application's availability. The application response time will be slower during failover, but none of your users will be rejected. Inside the mod_jk configuration there is a special parameter called worker.retries that has a default value of 3, but it needs to be adjusted to the actual number of nodes in the cluster.

If you add more than three workers to the load balancer, adjust the retries parameter to reflect that number. This will ensure that even in the worst-case scenario the request gets served as long as there is a single operable node. Of course, the request will be rejected if there are no free connections available on the Tomcat side, so you should increase the allowed number of connections on each Tomcat instance. In the three-node scenario (1-2-1), if one of the nodes goes down, the other two will have to take its load. So if the load is divided accordingly you will need to set the following Tomcat configuration:

This configuration ensures that 200 concurrent connections will always be allowed no matter which of the nodes goes down. The reason for doubling the number of processors on node1 and node3 is that they need to handle the additional load in case node2 goes down (load 1-1). Node2 also needs adjustment, because if one of the other two nodes goes down the load will be 1-2. As you can see, the 20% load error is always calculated in.
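The configuration listing itself did not survive in this copy, but the sizing rule the paragraph describes can be sketched as a small calculation (a reconstruction, not the original listing):

```python
def failover_max_processors(max_clients, lbfactors, margin=0.2):
    """Per-node maxProcessors sized to survive any single-node failure.

    For each node, take the worst load share it could see when any
    other single node is down (its lbfactor over the sum of surviving
    lbfactors), then add the 20% margin described in the text.
    """
    n = len(lbfactors)
    sizes = []
    for i in range(n):
        worst = 0.0
        for down in range(n):
            if down == i:
                continue
            surviving = sum(f for j, f in enumerate(lbfactors) if j != down)
            worst = max(worst, max_clients * lbfactors[i] / surviving)
        sizes.append(round(worst * (1 + margin)))
    return sizes

# 1-2-1 cluster, MaxClients=200: node1 and node3 double their normal 60,
# node2 is sized for the 1-2 split it sees when an outer node fails.
print(failover_max_processors(200, [1, 2, 1]))   # -> [120, 160, 120]
```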

Figure 4. Three node example load balancer
Figure 5. Failover for node2

As shown in the two figures above, setting maxProcessors depends both on the 20% load balancer error and on the expected single-node failure. The calculation must treat the node with the highest lbfactor as the worst-case scenario.


Domain Clustering model

Since JK version 1.2.8 there is a new domain clustering model that offers horizontal scalability and better performance for a Tomcat cluster.

Tomcat clustering only allows session replication to all nodes in the cluster. Once you work with more than 3-4 nodes there is too much overhead and risk in replicating sessions to all nodes, so we split the nodes into clustered groups. The newly introduced worker attribute domain lets mod_jk know to which other nodes a session gets replicated (all workers with the same value in the domain attribute). A load-balancing worker thus knows on which nodes the session is alive, and if a node fails or is taken down administratively, mod_jk chooses another node that has a replica of the session.

For example, if you have a cluster with four nodes you can make two virtual domains and replicate sessions only inside each domain. This will cut the replication network traffic in half.

Figure 6. Domain model clustering

For the above example the configuration would look like:
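The listing is missing from this copy; for the four-node, two-domain example it would look roughly like the following (hosts and names are illustrative):

```apache
worker.list=loadbalancer
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2,node3,node4

# Domain A: node1 and node2 replicate sessions to each other
worker.node1.type=ajp13
worker.node1.host=host1
worker.node1.port=8009
worker.node1.domain=A

worker.node2.type=ajp13
worker.node2.host=host2
worker.node2.port=8009
worker.node2.domain=A

# Domain B: node3 and node4 replicate sessions to each other
worker.node3.type=ajp13
worker.node3.host=host3
worker.node3.port=8009
worker.node3.domain=B

worker.node4.type=ajp13
worker.node4.host=host4
worker.node4.port=8009
worker.node4.domain=B
```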

Now assume you have multiple Apaches and Tomcats, the Tomcats are clustered, mod_jk uses sticky sessions, and you shut down one Tomcat for maintenance. All the Apaches will start opening connections to all the Tomcats; you end up with every Tomcat getting connections from every Apache process, so the number of threads needed inside the Tomcats explodes. If you group the Tomcats into domains as explained above, the connections will normally stay inside the domain and you will need far fewer threads.

Fronting Tomcat with IIS

Just like the Apache Web server for Windows, Microsoft IIS maintains a separate child process and thread pool for serving concurrent client connections. For non-server products like Windows 2000 Professional or Windows XP, the number of concurrent connections is limited to 10. This means that you cannot use workstation products for production servers unless the 10-connection limit fulfils your needs. The server range of products does not impose that 10-connection limit, but just like Apache, around 2000 connections is the point where thread context switching takes its share and lowers the effective number of concurrent connections. If you need higher load you will need to deploy additional web servers and use the Windows Network Load Balancer (WNLB) in front of the Tomcat servers.

Figure 7. WNLB High load configuration

For topologies using the Windows Network Load Balancer, the same rules apply as for Apache with the worker MPM. This means that each Tomcat instance will have to handle a connection load some 20% higher per node than its real lbfactor implies. The workers.properties configuration must be identical on each node that constitutes the WNLB, meaning that you will have to configure all four Tomcat nodes.

Apache 2.2 and new mod_proxy

For the new Apache 2.1/2.2, mod_proxy has been rewritten and has a new AJP-capable protocol module (mod_proxy_ajp) and an integrated software load balancer (mod_proxy_balancer).

Because it can maintain a constant connection pool to the backend servers, it can replace the mod_jk functionality.
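The example referred to below is missing from this copy; a sketch of such an Apache 2.2 configuration (with mod_proxy, mod_proxy_ajp and mod_proxy_balancer loaded; hosts, routes and the balancer name are illustrative) might look like:

```apache
<Proxy balancer://tccluster>
    BalancerMember ajp://host1:8009 route=node1 loadfactor=1
    BalancerMember ajp://host2:8009 route=node2 loadfactor=2
    BalancerMember ajp://host3:8009 route=node3 loadfactor=1
</Proxy>
ProxyPass /myapp balancer://tccluster/myapp stickysession=JSESSIONID
```

The route values play the same role as mod_jk worker names and must match the jvmRoute of each Tomcat instance.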

The above example shows how easy it is to configure a Tomcat cluster with the proxy load balancer. Major advantages of using the proxy are the integrated caching and that there is no need to compile an external module.

Mod_proxy_balancer has an integrated manager for dynamic parameter changes. It allows changing session routes or disabling a node for maintenance.

Figure 8. Changing BalancerMember parameters

Future development of mod_proxy will include the option to dynamically discover the particular node topology. It will also allow dynamically updating load factors and session routes.

About the Author

Mladen Turk is a Developer and Consultant for JBoss Inc in Europe, where he is responsible for native integration. He is a long-time committer on the Jakarta Tomcat Connectors, Apache Httpd and Apache Portable Runtime projects.

Links and Resources

Jakarta Tomcat connectors documentation
Apache 2.0 documentation
Apache 2.1 documentation

Apache Tomcat and the World Wide Web

Usually when running an application server such as Apache Tomcat, you bind a connector directly on port 80. This way users visiting your web application can navigate your server just by calling your domain, instead of calling your domain plus a special port (http://yourdomain.com:8080). If there is no option to bind a Tomcat connector on port 80 (some systems ban this for security purposes), there are other ways to achieve the same behavior, such as redirecting port 80 to port 8080 (Tomcat's default, or any other) using iptables or another port redirection tool. Both options are really simple procedures, but they become a real problem if you also need to run a plain HTTP server on the same machine.
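For the record, the port redirection mentioned above is a one-line firewall rule on Linux (assuming iptables; run as root, and note it redirects all inbound port-80 traffic):

```
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
```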

Apache HTTP and mod_proxy

To solve this problem we can run Apache HTTPD as a front-end proxy for Apache Tomcat and redirect traffic to the application server based on a set of rules. In this tutorial we will use mod_proxy, although there are many other options available.

This tutorial assumes that Apache Tomcat is already installed and configured with the default connector settings (port 8080) and Apache HTTP is installed too with the default listener settings (port 80).

For this tutorial we are going to assume that there are 2 different domains (tomcatserver.com and httpserver.com) pointing to the same IP address. The user expects to reach the application server when navigating to one domain and the web server when navigating to the other.

The first step is to make sure that the file httpd.conf has mod_proxy enabled (which it is by default), so in case it isn't, uncomment the following line:

LoadModule proxy_module modules/mod_proxy.so
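Note that proxying to an http:// backend also requires the HTTP protocol handler for mod_proxy, so if that module's line is commented out as well, uncomment it too:

```apache
LoadModule proxy_http_module modules/mod_proxy_http.so
```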

Taking into account that there are 2 domains, we need to use the NameVirtualHost directive and define two virtual hosts based on the different domains:

NameVirtualHost *:80

Next we define the virtual host that will redirect traffic to Tomcat. In case Tomcat has some virtual hosts defined too, we'll add a ServerAlias for each domain that needs to reach Tomcat.
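The tutorial's listing for this virtual host did not survive in this copy; a reconstruction from the surrounding text (domains and ports as stated above, mod_proxy_http assumed loaded) might look like:

```apache
<VirtualHost *:80>
    ServerAdmin root@localhost
    ServerName tomcatserver.com
    # ServerAlias for any other domain that should reach Tomcat

    ProxyRequests Off
    ProxyPreserveHost On
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
```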

The second virtual host serves the plain web server's own content:

<VirtualHost *:80>
    ServerAdmin root@localhost
    ServerName httpserver.com
    ServerAlias httpserver1.com
</VirtualHost>

Proxy rules

In this example we've assumed there are two (or more) different domains and that each domain points to a different type of server. mod_proxy allows us to define more advanced rules and to proxy content based on other criteria, such as the context path.