Centralized log management (Graylog, Logstash, Fluentd)
This guide explains how you can send your logs to a centralized log management system like Graylog, Logstash (inside the Elastic Stack, or ELK: Elasticsearch, Logstash, Kibana) or Fluentd (inside EFK: Elasticsearch, Fluentd, Kibana).
There are many different ways to centralize your logs (if you are using Kubernetes, the simplest way is to log to the console and ask your cluster administrator to integrate a central log manager inside your cluster).
In this guide, we will show how to send them to an external tool using the quarkus-logging-gelf extension, which can use TCP or UDP to send logs in the Graylog Extended Log Format (GELF).
The quarkus-logging-gelf extension will add a GELF log handler to the underlying logging backend that Quarkus uses (jboss-logmanager). By default, it is disabled; if you enable it but still use another handler (the console handler is enabled by default), your logs will be sent to both handlers.
Prerequisites
To complete this guide, you need:
- Roughly 15 minutes
- An IDE
- JDK 17+ installed with JAVA_HOME configured appropriately
- Apache Maven 3.9.9
- Docker and Docker Compose, or Podman and Docker Compose
- Optionally the Quarkus CLI if you want to use it
- Optionally Mandrel or GraalVM installed and configured appropriately if you want to build a native executable (or Docker if you use a native container build)
Example application
The following examples will all be based on the same example application that you can create with the following steps.
Create an application with the quarkus-logging-gelf extension. You can use the following command to create it:
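For example, with the Quarkus CLI (the group id org.acme is an illustrative choice, not mandated by this guide):

quarkus create app org.acme:gelf-logging \
    --extension='logging-gelf'
cd gelf-logging

With Maven, the equivalent is the quarkus-maven-plugin create goal, passing -DprojectGroupId=org.acme, -DprojectArtifactId=gelf-logging and -Dextensions='logging-gelf' (pin the plugin version to the Quarkus platform version you use).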
For Windows users:
- If using cmd, don't use backward slash \ and put everything on the same line
- If using Powershell, wrap -D parameters in double quotes, e.g. "-DprojectArtifactId=gelf-logging"
If you already have your Quarkus project configured, you can add the logging-gelf extension to your project by running one of the following commands (Quarkus CLI, Maven, or Gradle) in your project base directory:
quarkus extension add logging-gelf
./mvnw quarkus:add-extension -Dextensions='logging-gelf'
./gradlew addExtension --extensions='logging-gelf'
This will add the following dependency to your build file:
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-logging-gelf</artifactId>
</dependency>
implementation("io.quarkus:quarkus-logging-gelf")
For demonstration purposes, we create an endpoint that does nothing but log a sentence. You don’t need to do this inside your application.
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import org.jboss.logging.Logger;
@Path("/gelf-logging")
@ApplicationScoped
public class GelfLoggingResource {
private static final Logger LOG = Logger.getLogger(GelfLoggingResource.class);
@GET
public void log() {
LOG.info("Some useful log message");
}
}
Configure the GELF log handler to send logs to an external UDP endpoint on port 12201:
quarkus.log.handler.gelf.enabled=true
quarkus.log.handler.gelf.host=localhost
quarkus.log.handler.gelf.port=12201
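The handler uses UDP by default; as noted in the configuration reference below, you can switch to TCP by prepending tcp: to the host, for example:

quarkus.log.handler.gelf.enabled=true
quarkus.log.handler.gelf.host=tcp:localhost
quarkus.log.handler.gelf.port=12201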
Send logs to Graylog
To send logs to Graylog, you first need to launch the components that compose the Graylog stack:
- MongoDB
- Elasticsearch
- Graylog

You can do this via the following docker-compose.yml file that you can launch via docker-compose up -d:
version: '3.2'
services:
elasticsearch:
image: docker.io/elastic/elasticsearch:8.15.0
ports:
- "9200:9200"
environment:
ES_JAVA_OPTS: "-Xms512m -Xmx512m"
discovery.type: "single-node"
cluster.routing.allocation.disk.threshold_enabled: false
networks:
- graylog
mongo:
image: mongo:4.0
networks:
- graylog
graylog:
image: graylog/graylog:4.3.0
ports:
- "9000:9000"
- "12201:12201/udp"
- "1514:1514"
environment:
GRAYLOG_HTTP_EXTERNAL_URI: "http://127.0.0.1:9000/"
# CHANGE ME (must be at least 16 characters)!
GRAYLOG_PASSWORD_SECRET: "forpasswordencryption"
# Password: admin
GRAYLOG_ROOT_PASSWORD_SHA2: "8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918"
networks:
- graylog
depends_on:
- elasticsearch
- mongo
networks:
graylog:
driver: bridge
Then, you need to create a UDP input in Graylog. You can do it from the Graylog web console (System → Input → Select GELF UDP) available at http://localhost:9000 or via the API.
This curl example will create a new input of type GELF UDP; it uses the default login from Graylog (admin/admin).
curl -H "Content-Type: application/json" -H "Authorization: Basic YWRtaW46YWRtaW4=" -H "X-Requested-By: curl" -X POST -v -d \
'{"title":"udp input","configuration":{"recv_buffer_size":262144,"bind_address":"0.0.0.0","port":12201,"decompress_size_limit":8388608},"type":"org.graylog2.inputs.gelf.udp.GELFUDPInput","global":true}' \
http://localhost:9000/api/system/inputs
Launch your application; you should see your logs arriving in Graylog.
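For example, assuming the application listens on the default HTTP port 8080, you can start it in dev mode and then, from another terminal, hit the endpoint created above to emit a log event:

./mvnw quarkus:dev
curl http://localhost:8080/gelf-logging

Each request to /gelf-logging produces one "Some useful log message" entry that should show up under the udp input you created.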
Send logs to Logstash / the Elastic Stack (ELK)
Logstash comes by default with an input plugin that can understand the GELF format; we will first create a pipeline that enables this plugin.
Create the following file in $HOME/pipelines/gelf.conf:
input {
gelf {
port => 12201
}
}
output {
stdout {}
elasticsearch {
hosts => ["http://elasticsearch:9200"]
}
}
Finally, launch the components that compose the Elastic Stack:
- Elasticsearch
- Logstash
- Kibana

You can do this via the following docker-compose.yml file that you can launch via docker-compose up -d:
# Launch Elasticsearch, Logstash and Kibana
version: '3.2'
services:
elasticsearch:
image: docker.io/elastic/elasticsearch:8.15.0
ports:
- "9200:9200"
- "9300:9300"
environment:
ES_JAVA_OPTS: "-Xms512m -Xmx512m"
discovery.type: "single-node"
cluster.routing.allocation.disk.threshold_enabled: false
networks:
- elk
logstash:
image: docker.io/elastic/logstash:8.15.0
volumes:
- source: $HOME/pipelines
target: /usr/share/logstash/pipeline
type: bind
ports:
- "12201:12201/udp"
- "5000:5000"
- "9600:9600"
networks:
- elk
depends_on:
- elasticsearch
kibana:
image: docker.io/elastic/kibana:8.15.0
ports:
- "5601:5601"
networks:
- elk
depends_on:
- elasticsearch
networks:
elk:
driver: bridge
Launch your application; you should see your logs arriving inside the Elastic Stack. You can use Kibana, available at http://localhost:5601/, to access them.
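Because the pipeline above also writes every event to stdout, one way to verify that Logstash receives the logs (assuming the Compose service name logstash from the file above) is:

docker-compose logs -f logstash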
GELF alternative: Send logs to Logstash in the ECS (Elastic Common Schema) format
You can also send your logs to Logstash using a TCP input in the ECS format.
To achieve this, we will use the quarkus-logging-json extension to format the logs in JSON and the socket handler to send them to Logstash.
For this, you can use the same docker-compose.yml file as above but with a different Logstash pipeline configuration.
input {
tcp {
port => 4560
codec => json
}
}
filter {
if ![span][id] and [mdc][spanId] {
mutate { rename => { "[mdc][spanId]" => "[span][id]" } }
}
if ![trace][id] and [mdc][traceId] {
mutate { rename => {"[mdc][traceId]" => "[trace][id]"} }
}
}
output {
stdout {}
elasticsearch {
hosts => ["http://elasticsearch:9200"]
}
}
Then configure your application to log in JSON format instead of GELF:
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-logging-json</artifactId>
</dependency>
implementation("io.quarkus:quarkus-logging-json")
and specify the host and port of your Logstash endpoint. To be ECS compliant, specify the log format.
# to keep the logs in the usual format in the console
quarkus.log.console.json=false
quarkus.log.socket.enable=true
quarkus.log.socket.json=true
quarkus.log.socket.endpoint=localhost:4560
# to have the exception serialized into a single text element
quarkus.log.socket.json.exception-output-type=formatted
# specify the format of the produced JSON log
quarkus.log.socket.json.log-format=ECS
Send logs to Fluentd (EFK)
First, you need to create a Fluentd image with the needed plugins: elasticsearch and input-gelf.
You can use the following Dockerfile, which should be created inside a fluentd directory.
FROM fluent/fluentd:v1.3-debian
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--version", "3.7.0"]
RUN ["gem", "install", "fluent-plugin-input-gelf", "--version", "0.3.1"]
You can build the image or let docker-compose build it for you.
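For example, to build it manually (the image tag fluentd-gelf is an arbitrary example; the docker-compose.yml below builds the image for you if you skip this step):

docker build -t fluentd-gelf ./fluentd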
Then you need to create a Fluentd configuration file inside $HOME/fluentd/fluent.conf:
<source>
@type gelf
tag example.gelf
bind 0.0.0.0
port 12201
</source>
<match example.gelf>
@type elasticsearch
host elasticsearch
port 9200
logstash_format true
</match>
Finally, launch the components that compose the EFK Stack:
- Elasticsearch
- Fluentd
- Kibana

You can do this via the following docker-compose.yml file that you can launch via docker-compose up -d:
version: '3.2'
services:
elasticsearch:
image: docker.io/elastic/elasticsearch:8.15.0
ports:
- "9200:9200"
- "9300:9300"
environment:
ES_JAVA_OPTS: "-Xms512m -Xmx512m"
discovery.type: "single-node"
cluster.routing.allocation.disk.threshold_enabled: false
networks:
- efk
fluentd:
build: fluentd
ports:
- "12201:12201/udp"
volumes:
- source: $HOME/fluentd
target: /fluentd/etc
type: bind
networks:
- efk
depends_on:
- elasticsearch
kibana:
image: docker.io/elastic/kibana:8.15.0
ports:
- "5601:5601"
networks:
- efk
depends_on:
- elasticsearch
networks:
efk:
driver: bridge
Launch your application; you should see your logs arriving inside EFK. You can use Kibana, available at http://localhost:5601/, to access them.
GELF alternative: use Syslog
You can also send your logs to Fluentd using a Syslog input. As opposed to the GELF input, the Syslog input will not render multiline logs in one event; that's why we advise using the GELF input implemented in Quarkus.
First, you need to create a Fluentd image with the elasticsearch plugin.
You can use the following Dockerfile, which should be created inside a fluentd directory.
FROM fluent/fluentd:v1.3-debian
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--version", "3.7.0"]
Then, you need to create a Fluentd configuration file inside $HOME/fluentd/fluent.conf:
<source>
@type syslog
port 5140
bind 0.0.0.0
message_format rfc5424
tag system
</source>
<match **>
@type elasticsearch
host elasticsearch
port 9200
logstash_format true
</match>
Then, launch the components that compose the EFK Stack:
- Elasticsearch
- Fluentd
- Kibana

You can do this via the following docker-compose.yml file that you can launch via docker-compose up -d:
version: '3.2'
services:
elasticsearch:
image: docker.io/elastic/elasticsearch:8.15.0
ports:
- "9200:9200"
- "9300:9300"
environment:
ES_JAVA_OPTS: "-Xms512m -Xmx512m"
discovery.type: "single-node"
cluster.routing.allocation.disk.threshold_enabled: false
networks:
- efk
fluentd:
build: fluentd
ports:
- "5140:5140/udp"
volumes:
- source: $HOME/fluentd
target: /fluentd/etc
type: bind
networks:
- efk
depends_on:
- elasticsearch
kibana:
image: docker.io/elastic/kibana:8.15.0
ports:
- "5601:5601"
networks:
- efk
depends_on:
- elasticsearch
networks:
efk:
driver: bridge
Finally, configure your application to send logs to EFK using Syslog:
quarkus.log.syslog.enable=true
quarkus.log.syslog.endpoint=localhost:5140
quarkus.log.syslog.protocol=udp
quarkus.log.syslog.app-name=quarkus
quarkus.log.syslog.hostname=quarkus-test
Launch your application; you should see your logs arriving inside EFK. You can use Kibana, available at http://localhost:5601/, to access them.
Elasticsearch indexing consideration
Be careful that, by default, Elasticsearch will automatically map unknown fields (if not disabled in the index settings) by detecting their type. This can become tricky if you use log parameters (which are included by default), or if you enable MDC inclusion (disabled by default), as the first log will define the type of the message parameter (or MDC parameter) field inside the index.
Imagine the following case:
LOG.infov("some {0} message {1} with {2} param", 1, 2, 3);
LOG.infov("other {0} message {1} with {2} param", true, true, true);
With log message parameters enabled, the first log message sent to Elasticsearch will have a MessageParam0 parameter with an int type; this will configure the index with a field of type integer. When the second message arrives in Elasticsearch, it will have a MessageParam0 parameter with the boolean value true, and this will generate an indexing error.
To work around this limitation, you can disable sending log message parameters via logging-gelf by configuring quarkus.log.handler.gelf.include-log-message-parameters=false, or you can configure your Elasticsearch index to store those fields as text or keyword; Elasticsearch will then automatically translate the int/boolean values to a String.
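For example, in application.properties:

quarkus.log.handler.gelf.include-log-message-parameters=false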
See the following documentation for Graylog (but the same issue exists for the other central logging stacks): Custom Index Mappings.
Configuration Reference
Configuration is done through the usual application.properties file.
Configuration property fixed at build time - All other configuration properties are overridable at runtime
Configuration property | Type | Default
---|---|---
quarkus.log.handler.gelf.enabled - Determine whether to enable the GELF logging handler | boolean | false
quarkus.log.handler.gelf.host - Hostname/IP-Address of the Logstash/Graylog host. By default it uses UDP; prepend tcp: to the hostname to switch to TCP, example: "tcp:localhost" | string |
quarkus.log.handler.gelf.port - The port | int | 12201
quarkus.log.handler.gelf.version - GELF version: 1.0 or 1.1 | string |
quarkus.log.handler.gelf.extract-stack-trace - Whether to post Stack-Trace to StackTrace field | boolean |
quarkus.log.handler.gelf.stack-trace-throwable-reference - Only used when extract-stack-trace is enabled: which Throwable reference in the exception chain the Stack-Trace is taken from | int |
quarkus.log.handler.gelf.filter-stack-trace - Whether to perform Stack-Trace filtering | boolean |
quarkus.log.handler.gelf.timestamp-pattern - Java date pattern, see java.text.SimpleDateFormat | string |
quarkus.log.handler.gelf.level - The logging-gelf log level | Level |
quarkus.log.handler.gelf.facility - Name of the facility | string |
quarkus.log.handler.gelf.additional-field."field-name".value - Additional field value | string | required
quarkus.log.handler.gelf.additional-field."field-name".type - Additional field type specification. Supported types: String, long, Long, double, Double and discover. Discover is the default if not specified; it discovers the field type based on parseability | string | discover
quarkus.log.handler.gelf.include-full-mdc - Whether to include all fields from the MDC | boolean | false
quarkus.log.handler.gelf.mdc-fields - Send additional fields whose values are obtained from MDC. Names of the fields are comma-separated. Example: mdcFields=Application,Version,SomeOtherFieldName | string |
quarkus.log.handler.gelf.dynamic-mdc-fields - Dynamic MDC fields allow you to extract MDC values based on one or more regular expressions. Multiple regexes are comma-separated. The name of the MDC entry is used as the GELF field name | string |
quarkus.log.handler.gelf.dynamic-mdc-field-types - Pattern-based type specification for additional and MDC fields. Key-value pairs are comma-separated. Example: my_field.*=String,business\..*\.field=double | string |
quarkus.log.handler.gelf.maximum-message-size - Maximum message size (in bytes). If the message size is exceeded, the appender will submit the message in multiple chunks | int |
quarkus.log.handler.gelf.include-log-message-parameters - Include message parameters from the log event | boolean | true
quarkus.log.handler.gelf.include-location - Include source code location | boolean |
quarkus.log.handler.gelf.origin-host - Origin hostname | string |
quarkus.log.handler.gelf.skip-hostname-resolution - Bypass hostname resolution. If you didn't set the origin-host property and hostname resolution is disabled, the value "unknown" will be used as hostname | boolean |
This extension uses the logstash-gelf library, which allows more configuration options via system properties; you can access its documentation here: https://logging.paluch.biz/ .