Introduction
This topic covers Tomcat startup speed-up and performance optimization, JVM and GC configuration, and some customizations dedicated to Magnolia CMS.
HA, zero-downtime support, load-balancer sticky sessions, and clustering considerations when deploying Magnolia CMS will be presented in another topic.
Please use this guide with caution! We are optimizing Apache Tomcat solely for Magnolia by disabling some unused features and bypassing some built-in functions, which might affect your existing project if you are using them. However, you can easily detect such cases by going through the guideline and applying only the points that suit your setup, which will still help in some way.
Part I - Faster startup
Mostly from Tomcat official documentation with some fine-tuned and specific configuration for Magnolia CMS.
Configure your web application
There are two options that can be specified in your WEB-INF/web.xml file:
Set metadata-complete="true" attribute on the <web-app> element.
Add an empty <absolute-ordering /> element.
Setting metadata-complete="true" disables scanning your web application and its libraries for classes that use annotations to define components of a web application (Servlets etc.). The metadata-complete option alone is not enough to disable all annotation scanning: if there is a ServletContainerInitializer (SCI) with a @HandlesTypes annotation, Tomcat still has to scan your application for classes that use the annotations or interfaces specified in that annotation.
The <absolute-ordering> element specifies which web fragment JARs (according to the names in their WEB-INF/web-fragment.xml files) have to be scanned for SCIs, fragments and annotations. An empty <absolute-ordering/> element configures that none are to be scanned.
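A minimal WEB-INF/web.xml applying both options might look like the sketch below (the schema version 3.1 is an assumption; match it to the Servlet version your webapp actually declares):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- metadata-complete disables annotation scanning for components;
     an empty absolute-ordering means no web fragment JARs are scanned -->
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         version="3.1"
         metadata-complete="true">
  <absolute-ordering />
</web-app>
```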
Remove unnecessary JARs
Remove any JAR files you do not need. When searching for classes, every JAR file needs to be examined to find the needed class. If a JAR file is not there, there is nothing to search.
Exclude JARs from scanning
In Tomcat 7 and later, JAR files can be excluded from scanning by listing their names or name patterns in a system property, usually configured in the conf/catalina.properties file.
Sample conf/catalina.properties entries which exclude all Magnolia-related JARs from scanning:
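The fragment below is illustrative only: the property name differs by version (tomcat.util.scan.DefaultJarScanner.jarsToSkip in Tomcat 7, tomcat.util.scan.StandardJarScanFilter.jarsToSkip in Tomcat 8+), and the magnolia-* / jackrabbit-* patterns are assumptions you should adapt to the JARs actually present in your webapp. Note that setting the property replaces the default skip list shipped in catalina.properties, so extend the existing value rather than overwrite it:

```properties
# conf/catalina.properties (fragment, Tomcat 8+)
# Skip Magnolia-related JARs during TLD/SCI/annotation scanning.
# Append these patterns to the default list already present in the file.
tomcat.util.scan.StandardJarScanFilter.jarsToSkip=\
        magnolia-*.jar,\
        jackrabbit-*.jar,\
        lucene-*.jar
```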
Disable WebSocket support
The class names to filter can be detected by looking into META-INF/services/javax.servlet.ServletContainerInitializer files in Tomcat JARs. For WebSocket support the name is org.apache.tomcat.websocket.server.WsSci, for JSP support the name is org.apache.jasper.servlet.JasperInitializer. e.g.:
<Context containerSciFilter="WsSci" />
The impact of disabling WebSocket support will depend on how many JARs were being scanned for WebSocket annotations and whether any other SCIs trigger annotation scans. Generally, it is the first SCI scan that has the biggest performance impact. The impact of additional scans is minimal.
Alternatively, remove the WebSocket JARs from Tomcat entirely:
$ rm tomcat/lib/websocket-api.jar
$ rm tomcat/lib/tomcat-websocket.jar
Entropy Source
Tomcat 7+ relies heavily on the SecureRandom class to provide random values for its session IDs and in other places. Depending on your JRE, this can cause delays during startup if the entropy source used to initialize SecureRandom is short of entropy. You will see a warning in the logs when this happens, e.g.:
<DATE> org.apache.catalina.util.SessionIdGenerator createSecureRandom
INFO: Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [5172] milliseconds.
There is a way to configure JRE to use a non-blocking entropy source by setting the following system property: -Djava.security.egd=file:/dev/./urandom
Set this in 'bin/setenv.sh' under your Tomcat folder.
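A minimal sketch of bin/setenv.sh combining the entropy fix with the heap settings recommended later in this guide (the memory values are assumptions; size them for your instance):

```shell
#!/bin/sh
# bin/setenv.sh - sourced automatically by catalina.sh on startup

# Non-blocking entropy source: avoids slow SecureRandom initialization
CATALINA_OPTS="$CATALINA_OPTS -Djava.security.egd=file:/dev/./urandom"

# Fixed heap size (assumed values; see the JVM memory section below)
CATALINA_OPTS="$CATALINA_OPTS -Xms2G -Xmx2G"

export CATALINA_OPTS
```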
Starting several web applications in parallel
With Tomcat 7.0.23+ you can configure it to start several web applications in parallel. This is disabled by default but can be enabled by setting the startStopThreads attribute of a Host to a value greater than one.
Do not use this feature if you are deploying the Magnolia author and public instances in the same Tomcat folder, or if you downloaded our bundle from the website, because parallel startup could leave the instances unable to start!
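If you run a single Magnolia instance per Tomcat, parallel startup is enabled on the Host element in conf/server.xml; the value 2 below is an assumption to tune for your deployment:

```xml
<!-- conf/server.xml (fragment): start web applications in parallel (Tomcat 7.0.23+) -->
<Host name="localhost" appBase="webapps"
      unpackWARs="true" autoDeploy="true"
      startStopThreads="2"/>
```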
Enable Resources Caching
Turn on root-context resource caching and set its maximum size according to your instance's memory, as below. Our objects are usually smaller than 10 MB each, and cacheMaxSize is specified in kilobytes (the default value is 10240, i.e. 10 megabytes):
<Resources cachingAllowed="true" cacheMaxSize="200000"/>
-- https://tomcat.apache.org/tomcat-8.5-doc/config/resources.html
Disable JARs scanner
The Jar Scanner element represents the component that is used to scan the web application for JAR files and directories of class files. It is typically used during web application start to identify configuration files such as TLDs or web-fragment.xml files that must be processed as part of the web application initialisation.
A Jar Scanner element MAY be nested inside a Context component.
In Magnolia CMS we are not using TLDs or web-fragments, so we can safely disable it:
<JarScanner scanClassPath="false" scanAllFiles="false" scanAllDirectories="false"/>
-- https://tomcat.apache.org/tomcat-8.5-doc/config/jar-scanner.html
Part II - Usage optimization options
Use Tomcat native for better production performance
According to the official Apache Tomcat documentation, the native library is highly recommended for production-grade environments. A few installation steps are required, including installing the Apache Portable Runtime (APR) library: http://apr.apache.org/
Documentation and how to install Tomcat Native is here: http://tomcat.apache.org/native-doc/
Note that the Tomcat Native source is shipped inside your downloaded Tomcat, under the 'bin' folder, as 'tomcat-native-1.x.xx-src'. There is no need to download it from the website; using the bundled copy also ensures compatibility.
Optimizing JVM Memory Allocation
Even if you're not getting any OOME messages, properly configuring your JVM's memory allocation is an essential part of getting the best performance out of Tomcat. JVM memory reallocation is an expensive process that can tie up power you want going to serving requests. Cutting down on the number of times it happens will give you a solid performance boost.
Step 1 - Eliminate Excessive Garbage Collection
Excessive garbage collection can stress your server's request-serving power. Starting the JVM with a higher maximum heap memory by using the -Xmx switch will decrease the frequency with which garbage collection occurs.
Additionally, if you don't mind lowering the total garbage collection throughput of your application, consider using the -Xincgc switch to enable incremental garbage collection. (Note that -Xincgc was deprecated in Java 8 and removed in Java 9; on modern JVMs, prefer an explicit collector choice such as G1 instead.)
Step 2 - Properly Configure Memory Reallocation
Utilizing these techniques along with the -Xms switch, which sets initial heap memory equal to the maximum heap memory, will eliminate any need for the JVM to resize or reallocate the heap memory, leaving more to be used by other memory-intensive processes.
Recommendation
The Xms and Xmx values below are recommended for evaluating and experiencing Magnolia CMS; depending on your website's demands, use higher numbers.
JAVA_OPTS="-server -Xms2G -Xmx2G"
Further reading Tomcat JVM - What You Need To Know
Tomcat JVM
Apache Tomcat is a Java servlet container, and is run on a Java Virtual Machine, or JVM. Tomcat utilizes the Java servlet specification to execute servlets generated by requests, often with the help of JSP pages, allowing dynamic content to be generated much more efficiently than with a CGI script.
If you want to run a high-performing installation of Tomcat, taking some time to learn about your JVM is essential. In this article, we'll learn how Tomcat and the JVM interact, look at a few of the different JVMs available, explain how to tune the JVM for better performance, and provide information about some of the tools available for monitoring your JVM's performance.
How Tomcat Interacts With The JVM
Utilizing servlets allows the JVM to handle each request within a separate Java thread, as each servlet is in fact a standard Java class, with special elements that allow it to respond to HTTP requests.
Tomcat's main function is to pass HTTP requests to the correct components to serve them, and return the dynamically generated results to the correct location after the JVM has processed them. If the JVM can't efficiently serve the requests Tomcat passes to it, Tomcat's performance will be negatively affected.
Choosing the Right JVM
There are many JVMs to choose from, and Tomcat runs on many of them, from open source projects such as Sun Microsystems' HotSpot or Apache Harmony, to proprietary JVMs like Azul VM.
Despite the wide variety of available JVM flavors, the majority of Tomcat users favor Sun Microsystems' HotSpot JVM, because its just-in-time compilation and adaptive optimization features are particularly suited to efficiently handling Tomcat's servlet requests.
So for the majority of Tomcat users, HotSpot is the JVM to use. However, if you are attracted to a feature that is specific to a certain JDK, there is nothing wrong with installing Tomcat on two different JVMs and running some benchmarks to see which solution is best for your needs. In the end, it's a balancing act. Choose the JVM that provides the best balance of performance and features for your site.
How to Configure Tomcat's Default JVM Preferences
Once you decide on a JVM and install it on your server, configuring Tomcat to run on it is a very simple process. Simply edit catalina.sh, found in Tomcat's bin folder, and change the JAVA_HOME environment variable to the directory of your chosen JVM's JDK. When you restart Tomcat, it will be running on your new JVM.
Optimizing your JVM for Best Performance
The better your JVM performs, the better your installation of Tomcat will perform. It's as simple as that. Getting the most out of your JVM is a matter of configuring its settings to match your real-world performance needs as closely as possible. Update your JVM to the latest version, establish some accurate benchmarks so you have a way of quantifying any changes you make, and then get down to business.
Effective Memory Management
The main thing to consider when tuning your JVM for Tomcat performance is how to avoid wasting memory and draining your server's power to process requests. Certain automatic JVM processes, such as garbage collection and memory reallocation, can chew through memory if they occur more frequently than necessary. You can make sure these processes only occur when they need to by using the JAVA_OPTS -Xmx and -Xms switches to control how the JVM handles its heap memory.
If your JVM is invoking garbage collection too frequently, use the -Xmx switch to start the JVM with a higher maximum heap memory. This will free up CPU time for the processes you really care about.
To get even more out of this change, you can include the -Xms switch. This switch makes the JVM's initial heap memory size equal to the maximum allocated memory. This means the JVM will never have to reallocate more memory, a costly process that can eat up power you want being used to serve incoming requests.
If your web applications can handle a lower garbage collection throughput, you can also experiment with the -Xincgc switch, which enables incremental garbage collection. This means that rather than halting in place to perform garbage collection tasks, the JVM will execute garbage collection in small phases.
It can be tricky to determine the most balanced configuration for your site's needs. Fortunately, there's an easy way to capture data on how your JVM is handling garbage collection. Simply use the -verbose:gc switch to generate logs you can use to help you arrive at the best solution.
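For example, GC logging can be switched on via CATALINA_OPTS; the exact flags differ by Java version, and the log file path here is an assumption:

```shell
# Java 8 and earlier: classic verbose GC logging
export CATALINA_OPTS="$CATALINA_OPTS -verbose:gc -XX:+PrintGCDetails -Xloggc:/var/log/tomcat/gc.log"

# Java 9+: the unified-logging equivalent would be
# export CATALINA_OPTS="$CATALINA_OPTS -Xlog:gc*:file=/var/log/tomcat/gc.log"
```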
Configuring Threads
Next, let's take a look at the way your JVM handles threads. There are two types of Java threads - green and native. Native threads are scheduled by your OS, while green threads are managed entirely within the user space of your Java Virtual Machine. If your JVM supports both, you should try both models to determine the best choice for your site.
Generally, native threads offer the best performance, especially if you are running a lot of I/O bound applications (which is very likely, since you are running Tomcat). However, green threads outperform native threads in some specific areas, such as synchronization and thread activation. Try both and see which option gives you the biggest performance boost.
Managing Your JVM
Tuning for performance is not a finite process. Usage situations change over time, and problems that are not immediately apparent can expose themselves over a longer period of time. There are a number of tools available to help you keep an eye on your JVM's performance.
One of the most convenient solutions is VisualVM, a tool that is packaged with the JDK and can provide you with great performance statistics. Other commonly used JVM monitoring tools included with the JDK are jconsole, jps, and jstack. Run regular tests on your JVM to make sure its configuration still suits your needs, and you can be sure that your Tomcat instances will always perform at their best!
-- source: https://www.mulesoft.com/tcat/tomcat-jvm
Compression
The Connector may use HTTP/1.1 GZIP compression in an attempt to save server bandwidth. The acceptable values for the parameter are "off" (disable compression), "on" (allow compression, which causes text data to be compressed), "force" (forces compression in all cases), or a numerical integer value (which is equivalent to "on", but specifies the minimum amount of data before the output is compressed). If the content-length is not known and compression is set to "on" or more aggressive, the output will also be compressed. If not specified, this attribute is set to "off".
Note: There is a tradeoff between using compression (saving your bandwidth) and using the sendfile feature (saving your CPU cycles). If the connector supports the sendfile feature, e.g. the NIO connector, using sendfile will take precedence over compression. The symptom will be that static files greater than 48 KB will be sent uncompressed. You can turn off sendfile by setting the useSendfile attribute of the connector, as documented in the Tomcat configuration reference, or change the sendfile usage threshold in the configuration of the DefaultServlet in the default conf/web.xml or in the web.xml of your web application.
Set compression="on" in your Connector configuration in server.xml.
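A sketch of an HTTP Connector with compression enabled; compressionMinSize and the MIME type list are assumptions to tune for your content:

```xml
<!-- conf/server.xml (fragment): gzip text responses larger than 2 KB -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000" redirectPort="8443"
           compression="on"
           compressionMinSize="2048"
           compressibleMimeType="text/html,text/css,application/javascript,application/json"/>
```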
Understanding Tomcat Connectors
HTTP, HTTPS, and HTTPD
In general, using HTTP instead of HTTPS will result in much better Tomcat performance. However, HTTP may not be right for your site. If you require the security of HTTPS, despite its slow speed compared to HTTP, you may have to consider adding additional servers closer to your users to increase speed. The problem lies in the verbose traffic HTTPS generates during requests, which increases the overall serve time for users with higher pings.
Whatever you do, using Apache HTTPD to proxy your requests should be avoided at all costs, as it will decrease your performance by nearly 50%.
Connector elements are Tomcat's links to the outside world, allowing Catalina to receive requests, pass them to the correct web application, and send back the results through the Connector as dynamically generated content.
In this article, we'll learn how Tomcat uses Connectors in its element hierarchy, take a look at some basic syntax for configuring Connectors, and explain the uses of Tomcat's two Connector types: HTTP and AJP.
How A Connector Works
Each Connector element represents a port that Tomcat will listen to for requests. By arranging these Connector elements within hierarchies of Services and Engines, a Tomcat administrator is able to create a logical infrastructure for data to flow in and out of their site.
In addition to routing user-generated requests to the appropriate Services, connectors can also be used to link Tomcat to other supporting web technologies, such as an Apache web server, to efficiently balance the load of work across the network.
The Connector element only has one job - listening for requests, passing them on to an Engine, and returning the results to its specified port.
On its own, the Connector can't function - the only information this element contains is a port to listen on and talk to, and some attributes that tell it exactly how to listen and talk.
Information about what Server the specified port is located on, what Service the connector is a part of, and what Engine connections should be passed to is provided to the Connector by its location in Tomcat's nested element hierarchy.
Nesting Connector Elements
To learn how to nest a Connector to achieve the functionality you need, let's look at a simplified Tomcat server configuration:
<Server>
<Service>
<Connector port="8443"/>
<Connector port="8444"/>
<Engine>
<Host name="yourhostname">
<Context path="/webapp1"/>
<Context path="/webapp2"/>
</Host>
</Engine>
</Service>
</Server>
There are two Connector elements here, listening for connections on ports 8443 and 8444. It is important to note that an OS will only allow one connector on each port, so every connector you define will require its own unique port.
As you can see, both Connector elements are nested inside a single generic Service element, which is in turn contained within a single Server. This arrangement tells the Connectors to listen to their specified ports on their containing server, and to pass any connections only to the Engine belonging to their containing Service, which will process the requests and pass the results back to the Connectors.
Using the current arrangement, both Connectors will pass all requests to the same Engine, which will in turn pass all these requests to both of its contained web applications. This means that each request will potentially generate two responses, one from each application.
Now let's assume that we want to change this configuration, so that instead of receiving two responses for every request received by either Connector, we want each Connector to pass requests from its port only to one specific web application. To achieve this functionality, we simply need to rearrange the element hierarchy so that it resembles something like this:
<Server>
<Service name="Catalina">
<Connector port="8443"/>
<Engine>
<Host name="yourhostname">
<Context path="/webapp1"/>
</Host>
</Engine>
</Service>
<Service name="Catalina8444">
<Connector port="8444"/>
<Engine>
<Host name="yourhostname">
<Context path="/webapp2"/>
</Host>
</Engine>
</Service>
</Server>
Great! Now we have two different Services, with two different Connectors, passing connections from two different ports on the same Server to two different Engines for processing. Although obviously more complicated in real-world situations, all Tomcat Connector-related configuration builds upon these simple rules of element hierarchy.
Types of Connectors
There are two basic Connector types available in Tomcat - HTTP and AJP. Here's some information about how they differ from one another, and situations in which you might use them.
HTTP Connectors
Although Tomcat was primarily designed as a servlet container, part of what makes it so powerful is Catalina's ability to function as a stand-alone web server. This functionality is made possible by the HTTP Connector element.
This Connector element, which supports the HTTP/1.1 protocol, represents a single Connector component listening to a specific TCP port on a given Server for connections.
The HTTP Connector has many attributes that can be modified to specify exactly how it functions, and access functions such as proxy forwarding and redirects.
Two of the most important attributes of this Connector are the "protocol" and "SSLEnabled" attributes.
The "protocol" attribute, which defines the protocol the Connector will use to communicate, is set by default to HTTP/1.1, but can be modified to allow access to more specialized protocols. For example, if you wanted to expose the connector's low-level socket properties for fine tuning, you could use the "protocol" attribute to enable the NIO protocol. Setting the "SSLEnabled" attribute to "true" causes the connector to use SSL handshake/encryption/decryption.
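For instance, a Connector using the explicit NIO protocol class with SSL enabled might be sketched as follows (the keystore path and password are placeholders; Tomcat 8.5+ also offers the newer SSLHostConfig style):

```xml
<!-- conf/server.xml (fragment): NIO connector with JSSE SSL -->
<Connector port="8443"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" scheme="https" secure="true"
           keystoreFile="conf/keystore.jks" keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS"/>
```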
HTTP Connectors can also be used as part of a load balancing scheme, in conjunction with an HTTP load balancer that supports session stickiness, such as mod_proxy. However, as AJP tends to handle proxying better than HTTP, this usage is not common.
For an exhaustive overview of HTTP Connector attributes, consult the most recent Apache Tomcat Documentation site.
AJP Connectors
AJP Connectors work in the same way as HTTP Connectors, but they use the AJP protocol in place of HTTP. Apache JServ Protocol, or AJP, is an optimized binary version of HTTP that is typically used to allow Tomcat to communicate with an Apache web server. AJP Connectors are most commonly implemented in Tomcat through the plug-in technology mod_jk, a re-write of the defunct mod_jserv plug-in with extensive optimization, support for more protocols through the jk library, and Tomcat-specific functionality. The mod_jk binaries and extensive documentation are available on the Tomcat Connector project website.
This functionality is typically required in a high-traffic production situation, where Tomcat clusters are being run behind an Apache web server.
This allows the Apache server to deliver static content and proxy requests in order to balance request loads effectively across the network and let the Tomcat servers focus on delivering dynamic content.
Want to learn more? There are many detailed articles about fronting Tomcat with Apache, load balancing, and other AJP Connector related subjects available on Apache's Tomcat Documentation site.
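As an illustration, a minimal mod_jk workers.properties connecting Apache httpd to the AJP port used later in this article (the worker name and host are assumptions):

```properties
# workers.properties (fragment) for mod_jk in Apache httpd
worker.list=magnolia
worker.magnolia.type=ajp13
worker.magnolia.host=localhost
worker.magnolia.port=8009
```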
Recommendation:
Use AJP/HTTP Rather Than HTTPS
Of course, you cannot use HTTP exclusively because HTTPS is essential for secure or confidential data. However, HTTPS should not be used where it is not necessary because it substantially increases the number of times the client and server send messages over the network.
Web Servers For Static Content
Tomcat's major strength is dynamic content generation, and it will balance loads better if it is not responsible for anything else. Dedicating a web server in front of Tomcat to serve any static content your site requires is a quick way to free up more power to serve requests.
Further reading - performance comparison between a poorly tuned and a fairly tuned instance
Poorly Tuned Instances
In this scenario, we run a JMeter test which executes 5000 requests against both Tomcat instances, without changing the AJP connector configuration from its default values and with a maximum of 10 database connections in the pool to service database requests. The AJP connector settings, database pool settings and JVM heap settings used are shown below.
1) AJP Connector configuration
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443"/>
2) Database Pool Configuration
<Context>
<Resource name="jdbc/productdb" auth="Container" type="javax.sql.DataSource"
maxTotal="10" maxIdle="30" maxWaitMillis="10000" logAbandoned="true"
username="root" password="admin" driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/products"/>
</Context>
3) JVM Settings
We have set the minimum and maximum heap size to 1 GB, as below:
export CATALINA_OPTS="-Xms1024m -Xmx1024m"
4) Results
Although JMeter provides us with some useful performance statistics, we will use JConsole to monitor the performance of the test. We can observe in Figures 2 and 3 that the maximum time to process a single request, out of the 1878 requests handled by one of the Tomcat servers, was 4858 milliseconds, and processing all 1878 requests took 373041 milliseconds in total.
In Figure 3, we can see metrics for each of the AJP threads used to process requests. In the example shown here, the thread took just 73 milliseconds to process its last request, whilst the maximum time to process any single request on this thread was 4744 milliseconds.
Figure 2: GlobalRequestProcessor Mbean Attribute Values
Figure 3: RequestProcessor Mbean Attribute Values
Optimized Tomcat Instances
In this final test scenario, we will perform some basic tuning on both Tomcat instances to the AJP connector configuration in server.xml, the connection pool configuration described in context.xml and the JVM heap size allocated to each Tomcat instance.
1) AJP Connector configuration
The AJP connector configuration below allocates two threads to accepting new connections. This should be set to the number of processors on the machine; two should suffice here. We have also allocated 400 threads to process requests (the default value is 200). The "acceptCount" is set to 200, which denotes the maximum queue length for incoming connections; the default value is 100. Lastly, we have set minSpareThreads to 20 so that there are always 20 threads running in the pool to service requests:
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" acceptorThreadCount="2" maxThreads="400" acceptCount="200" minSpareThreads="20"/>
2) Database Pool Configuration
We have modified the maximum number of pooled connections to 200 so that there are ample connections in the pool to service requests.
<Context>
<Resource name="jdbc/productdb" auth="Container" type="javax.sql.DataSource" maxTotal="200" maxIdle="30" maxWaitMillis="10000" logAbandoned="true" username="xxxx" password="xxxx" driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/products"/>
</Context>
3) JVM Settings
Since we have increased the maximum number of pooled connections and the AJP connector thread thresholds above, we should increase the heap size appropriately. We have set the minimum and maximum heap size to 2 GB, as below:
export CATALINA_OPTS="-Xms2048m -Xmx2048m"
4) Results
We can observe from the JConsole MBean metrics below that there is a significant improvement in performance. The maximum time it took to process a request is 2048 milliseconds, and the overall processing time to handle 3464 requests is 206741 milliseconds.
If we observe the results in Figure 5 from an individual AJP thread, we can see it took 46 milliseconds to process the last request, while the maximum time to process a request on this thread was 1590 milliseconds. This particular thread processed 141 requests in a total time of 5843 milliseconds.
Figure 4: GlobalRequestProcessor Mbean Attribute Values
Figure 5: RequestProcessor Mbean Attribute Values
For more details on Tomcat 8 connector parameters, please visit the Apache Tomcat configuration reference.
-- source https://www.c2b2.co.uk/middleware-blog/tomcat-performance-monitoring-and-tuning.php
References
https://wiki.apache.org/tomcat/HowTo/FasterStartUp
http://skybert.net/java/improve-tomcat-startup-time/
https://tomcat.apache.org/articles/performance.pdf
https://tomcat.apache.org/tomcat-8.5-doc/config/http.html
https://www.mulesoft.com/tcat/tomcat-performance
https://www.mulesoft.com/tcat/tomcat-jvm
http://www.monitis.com/blog/18-java-tomcat-application-optimization-tips/
http://www.theserverside.com/tip/Two-most-commonly-misconfigured-Tomcat-performance-settings
https://javamaster.wordpress.com/2013/03/13/apache-tomcat-tuning-guide/
https://www.c2b2.co.uk/middleware-blog/tomcat-performance-monitoring-and-tuning.php