Centralized Logging with Graylog on Docker

Logs are where you look when something goes wrong. When you work at enterprise scale, you need a centralized logging mechanism — you can't jump from one server to another and tail streams by hand.

For central log management, you need something like Graylog, Logstash, the ELK stack… the list goes on.

Setting up Graylog Server

Life was tough before Docker; thankfully, we can set up a Graylog server through Docker by following the instructions on its Docker page.

I prefer to use docker-compose, and something like this should work.
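As a minimal sketch, adapted from the setup Graylog documents for its Docker images — the image versions, password secret, and password hash below are placeholders you would replace with your own:

```yaml
version: "2"
services:
  mongo:
    image: mongo:3
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.16
    environment:
      # Keep the JVM heap modest -- see the protip below about RAM usage
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
  graylog:
    image: graylog/graylog:2.4
    environment:
      # Placeholders: generate your own secret and SHA-256 password hash
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      - GRAYLOG_ROOT_PASSWORD_SHA2=REPLACE_WITH_SHA256_OF_YOUR_PASSWORD
    links:
      - mongo
      - elasticsearch
    ports:
      - "9000:9000"        # web interface
      - "12201:12201/udp"  # GELF input
      - "1514:1514"        # syslog input
```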

In most cases, it just works out of the box.

Protip: Graylog uses MongoDB for settings, configuration, etc., and holds the log data in Elasticsearch. So, be careful when setting the RAM usage for the JVM.

Sending the logs

There are many ways to send logs. We use Monolog for sending custom logs and WordPress-level logs.
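Monolog ships a GELF handler for this, but the wire format is simple enough to sketch in any language. Here is a hypothetical Python example that builds a plain (uncompressed) GELF 1.1 message and fires it at a Graylog GELF UDP input — the host and port are placeholders:

```python
import json
import socket

# Placeholders: point these at your Graylog server's GELF UDP input.
GRAYLOG_HOST = "SERVER_IP_ADDRESS"
GRAYLOG_PORT = 12201  # default GELF UDP port


def build_gelf_message(short_message, host="my-app", level=6):
    """Return a GELF 1.1 payload as UTF-8 JSON bytes.

    'level' uses syslog severities (6 = informational).
    """
    payload = {
        "version": "1.1",
        "host": host,
        "short_message": short_message,
        "level": level,
    }
    return json.dumps(payload).encode("utf-8")


def send_gelf(message_bytes):
    """Fire-and-forget UDP send to the Graylog GELF input."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message_bytes, (GRAYLOG_HOST, GRAYLOG_PORT))


# Usage: send_gelf(build_gelf_message("user signed in"))
```

Since it's UDP, the send never blocks your application waiting on the log server — which is usually what you want for logging.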

For syslog (rsyslog is probably pre-installed):

  • Create a new file under the “/etc/rsyslog.d” directory – 90-graylog2.conf
  • Add the line: *.* @SERVER_IP_ADDRESS:PORT;RSYSLOG_SyslogProtocol23Format
  • Restart the service: “service rsyslog restart”
  • Voilà!
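The resulting file is a single forwarding rule; as a note on the syntax, a single @ forwards over UDP, while @@ would use TCP:

```
# /etc/rsyslog.d/90-graylog2.conf
# Forward everything (*.*) to Graylog.
# One @ = UDP, two @@ = TCP.
# RSYSLOG_SyslogProtocol23Format sends RFC 5424-style messages,
# which Graylog's syslog input understands.
*.* @SERVER_IP_ADDRESS:PORT;RSYSLOG_SyslogProtocol23Format
```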

Conclusion

We are holding more than 1.5 billion log entries — about 1.1 TB of data — on a single machine. The logs come from WordPress, custom application logs, syslog, and HAProxy across ~20 servers.

Protip: If the logs are not that critical and you don’t have a high I/O rate, you might avoid using SSD disks. That will reduce the cost, and you can hold a lot of indices on the same machine.
