This guideline will walk you through pushing Magnolia logs directly to Elasticsearch without any third-party program like fluentd. Since Elasticsearch is available as an AWS service, you can easily apply this guideline if you're hosting Magnolia CMS on the AWS cloud. By bypassing third-party intervention, we reduce system load and simplify the deployment model, saving maintenance time, running costs and system-management effort.
Introduction
Below is a typical fluentd deployment model, using the td-agent program, for collecting logs and pushing them to Elasticsearch. It has some weaknesses, including performance impact and system-overhead risks, which you will see later when we propose pushing logs directly from the running Tomcat to the Elasticsearch server.
Typical integration model
https://www.fluentd.org/guides/recipes/elasticsearch-and-s3
Apply to Magnolia deployment
Weakness
Once the Tomcat instance runs into an error, td-agent will keep reading the 'catalina.out' file because the error produces a lot of log output. Fluentd then needs to process a large volume of log output to reach the latest entries. td-agent also has to push the updated information over its channel to the fluentd log aggregator on the log collector instance. This can overload your system in terms of bandwidth, processing and memory.
Proposed model
Implement the model
Install Elasticsearch and Kibana from their official websites
Note that we will use the Elasticsearch bulk API: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html
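To see what the appender will be sending, here is a small sketch of the bulk API's request format. The index name "magnolia-logs" and the document fields are hypothetical, chosen only for illustration:

```shell
# Each bulk request is newline-delimited JSON (NDJSON): an action line
# followed by a document line, ending with a trailing newline.
# "magnolia-logs" is a hypothetical index name.
cat > bulk.ndjson <<'EOF'
{ "index" : { "_index" : "magnolia-logs" } }
{ "timeMillis" : 1588230000000, "level" : "INFO", "message" : "Server started" }
EOF

# With a cluster running locally, you would post the payload like this:
# curl -s -H 'Content-Type: application/x-ndjson' \
#      -XPOST 'http://localhost:9200/_bulk' --data-binary @bulk.ndjson

# Two lines per document: one action, one source.
wc -l < bulk.ndjson
```

Batching many such pairs into a single request is what makes the direct-push model cheap compared to per-event delivery.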
Elasticsearch installation
https://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html
Kibana installation
https://www.elastic.co/guide/en/kibana/current/targz.html
Connect them together
https://www.elastic.co/guide/en/kibana/current/connect-to-elasticsearch.html
Update your pom.xml to use integration libraries
After researching possible integration approaches, we can go with https://github.com/rfoltyns/log4j2-elasticsearch, which provides a few libraries to add to your pom.xml.
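A minimal sketch of the pom.xml additions is below. The Maven coordinates and version are assumptions based on the project's published artifacts; verify the current groupId, artifactIds and version against the project README before using them:

```xml
<!-- Hypothetical coordinates; check the log4j2-elasticsearch README
     for the current version and the module that fits your setup. -->
<dependency>
  <groupId>org.appenders.log4j</groupId>
  <artifactId>log4j2-elasticsearch-core</artifactId>
  <version>1.5.5</version>
</dependency>
<dependency>
  <!-- HTTP-client based module for async batch delivery to Elasticsearch -->
  <groupId>org.appenders.log4j</groupId>
  <artifactId>log4j2-elasticsearch-hc</artifactId>
  <version>1.5.5</version>
</dependency>
```

The project ships several delivery modules; pick the one matching your Elasticsearch version and transport preference.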
Remember to build your webapp and bundle it again.
Update your log4j2.xml for asynchronous bulk appender
Add Elasticsearch appender
Set up your log4j2.xml following their guideline by adding an Elasticsearch appender alongside your existing ones.
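A sketch of the appender configuration, based on the log4j2-elasticsearch project's documented examples, is below. The index name "log4j2" and the server URI are assumptions; element names may vary between library versions, so check the project README for your version:

```xml
<!-- Sketch only: adapt indexName, serverUris and the delivery element
     to your library version and deployment. -->
<Elasticsearch name="elasticsearchAsyncBatch">
  <IndexName indexName="log4j2"/>
  <AsyncBatchDelivery>
    <IndexTemplate name="log4j2" path="classpath:indexTemplate.json"/>
    <JestHttp serverUris="http://localhost:9200"/>
  </AsyncBatchDelivery>
</Elasticsearch>
```

The appender name ("elasticsearchAsyncBatch") is what the AppenderRef in the next step points to.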
Add a new AppenderRef to your existing appenders
<AppenderRef ref="elasticsearchAsyncBatch"/>
Example:
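A sketch of how the reference fits into the Loggers section; "sys-console" stands in for whatever appenders your existing Magnolia log4j2.xml already references:

```xml
<Loggers>
  <Root level="info">
    <!-- keep your existing appender refs ("sys-console" is a placeholder) -->
    <AppenderRef ref="sys-console"/>
    <!-- add the Elasticsearch async batch appender -->
    <AppenderRef ref="elasticsearchAsyncBatch"/>
  </Root>
</Loggers>
```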
Running and verifying results
Run Elasticsearch
Go to its 'bin' folder and run
./elasticsearch
Run Kibana
Go to its folder and run
./bin/kibana
Run your webapp
You should see console output from the Elasticsearch appender confirming that log batches are being delivered.
Then discover your logs in Kibana, for example via this link (depending on your deployment): http://localhost:5601/app/kibana#/discover?_g=()
Double-check it against your console output to verify its accuracy.
Hope this helps!