OSSEC Log Management with Elasticsearch

Among the many useful features of OSSEC is its capability to send alerts to any system that can consume syslog data. This makes it easy to combine OSSEC with a number of third-party SIEMs to store, search and visualize security events. Splunk for OSSEC is one such system that works on top of the Splunk platform.

Splunk can be expensive though, particularly if you collect a lot of log data. So I’ve been working on a solution for collecting OSSEC security alerts based on Elasticsearch that provides a cost effective alternative to Splunk.


Log Management System Architecture

The OSSEC log management system I’ll discuss here relies on three open source technologies, in addition to OSSEC:

  • Logstash – Parses syslog data and forwards it to Elasticsearch
  • Elasticsearch – General-purpose indexing and data storage system
  • Kibana – Web-based user interface for Elasticsearch


Logstash is configured to receive OSSEC syslog output, parse it and forward it to Elasticsearch for indexing and long-term storage. Kibana makes it easy to submit queries to Elasticsearch and display the results in a number of user-designed dashboards. So the steps involved in building an OSSEC log management system with Elasticsearch are:

  1. Configure OSSEC to output alerts to syslog.
  2. Install and configure Logstash to receive OSSEC alerts, parse them and send the fields to Elasticsearch.
  3. Install and configure Elasticsearch to store OSSEC alerts from Logstash.
  4. Install and configure Kibana to work with Elasticsearch.

Configure OSSEC Syslog Output

To keep this article as brief as possible, I won’t go over how to install OSSEC. That is well documented on the OSSEC Project website. To configure OSSEC to send alerts to another system via syslog follow these steps:

  1. Login as root to the OSSEC server.
  2. Open /var/ossec/etc/ossec.conf in an editor.
  3. Let’s assume you want to send the alerts to a syslog server listening on UDP port 9000. Add a syslog_output block to ossec.conf right above the </ossec_config> statement (a sample block is shown after this list).
  4. Enable syslog output with this command:
/var/ossec/bin/ossec-control enable client-syslog
  5. Restart the OSSEC server with this command:
/var/ossec/bin/ossec-control restart
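
For reference, here is a minimal syslog_output block; the address 10.0.0.10 is only a placeholder for the host where Logstash will be listening, not a value from the original setup:

<syslog_output>
  <server>10.0.0.10</server>
  <port>9000</port>
</syslog_output>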

Install and Configure Logstash

Now Logstash needs to be configured to receive OSSEC syslog output on UDP port 9000, or whatever port you decide to use. The configuration file that captures and parses the syslog input is adapted from the rsyslog recipe in the Logstash cookbook, with a few OSSEC-specific tweaks derived from a blog post by Dan Parriott, my colleague on the OSSEC Project team and an early adopter of Logstash and Elasticsearch:

input {
# stdin{}
  udp {
     port => 9000
     type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_host} %{DATA:syslog_program}: Alert Level: %{BASE10NUM:Alert_Level}; Rule: %{BASE10NUM:Rule} - %{GREEDYDATA:Description}; Location: %{GREEDYDATA:Details}" }
      add_field => [ "ossec_server", "%{host}" ]
    }
    mutate {
      remove_field => [ "syslog_hostname", "syslog_message", "syslog_pid", "message", "@version", "type", "host" ]
    }
  }
}

output {
#  stdout {
#    codec => rubydebug
#  }
   elasticsearch_http {
     host => ""
   }
}

Lines [1 – 7] Every Logstash configuration file contains input, filter and output sections. The input section in this case tells Logstash to listen for syslog UDP packets on any IP address on port 9000. For debugging, you can uncomment line 2 to get input from stdin. This is handy when testing your parsing code in the filter section.

Lines [9 – 11] The filter section breaks up each incoming syslog line, which Logstash places in the input field called “message”, using the “match” directive. Logstash grok filters do the basic pattern matching and parsing. You can get a detailed explanation of how grok works on the Logstash grok documentation page. The syntax for parsing a field is %{<pattern>:<field>}, where <pattern> is the pattern to match and <field> is the name of the field the matched text is stored in.

Line [12] The syslog_timestamp, syslog_host and syslog_program fields are parsed first. The next three fields are specific to OSSEC: Alert_Level, Rule and Description. The remainder of the message is placed into Details. Here is the parsing sequence for these fields:

  1. Alert_Level – skip past the "Alert Level: " string, then extract the numeric characters that follow.
  2. Rule – skip past the "Rule: " string, then extract the numeric characters up to the " - " string.
  3. Description – skip past the " - " string, then extract any characters, including spaces, up to the "; Location: " string.
  4. Details – skip past the "; Location: " string, then extract the remaining characters, including spaces, from the original "message" field.
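
To make this concrete, here is a hypothetical OSSEC syslog alert line (the host names, agent, rule number and message are made up for illustration) along with the fields the grok pattern would extract from it:

Dec  9 10:15:32 ossec-server ossec: Alert Level: 5; Rule: 5716 - SSHD authentication failed.; Location: (web01) 10.0.0.21->/var/log/secure

  • syslog_timestamp – Dec  9 10:15:32
  • syslog_host – ossec-server
  • syslog_program – ossec
  • Alert_Level – 5
  • Rule – 5716
  • Description – SSHD authentication failed.
  • Details – (web01) 10.0.0.21->/var/log/secure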

Line [13] The host field, which contains the address of the host that sent the syslog message (the OSSEC server), is copied into the ossec_server field with the add_field directive in grok.

Lines [15 – 17] Once all the fields are parsed, the extraneous fields are trimmed from the output with the remove_field directive in the mutate section.

Lines [21 – 24] The output section sends the parsed output to Elasticsearch or to stdout. You can uncomment the stdout block with the codec => rubydebug statement to print the parsed fields to the console for debugging.

Lines [25 – 26] The elasticsearch_http directive sends the Logstash output to the Elasticsearch instance running at the IP address specified by the host field, which you should set to the address of your Elasticsearch server.

If you store the Logstash configuration in your home directory in a file called logstash.conf and Logstash is installed in /usr/local/share/logstash, then you can start running logstash like this:

/usr/local/share/logstash/bin/logstash --config ${HOME}/logstash.conf

Install and Configure Elasticsearch

The easiest way to install Elasticsearch is from RPM or DEB packages. I use CentOS most of the time, so I’ll discuss how to install from the RPM. You can install Elasticsearch as a cluster, but to keep things simple, I’ll cover installation on a single server and assume it is the same system where Logstash is installed.

With that said, here is how you install and configure Elasticsearch:

  1. Download the RPM:
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.7.noarch.rpm --no-check-certificate
  2. Login as root.
  3. Install the RPM with this command:
rpm -Uvh elasticsearch-0.90.7.noarch.rpm
  4. The RPM installs Elasticsearch in /usr/share/elasticsearch, along with the configuration files /etc/elasticsearch/elasticsearch.yml and /etc/sysconfig/elasticsearch. It also creates a service script to start, stop and check the status of Elasticsearch. Start Elasticsearch with the service command:
service elasticsearch start

By default, Elasticsearch stores its data in /var/lib/elasticsearch and its logs in /var/log/elasticsearch. You can change that in elasticsearch.yml, but for now leave them as is. However, let’s set the name of the Elasticsearch cluster to mycluster. To do that, open /etc/elasticsearch/elasticsearch.yml and set the following line as shown:

# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
cluster.name: mycluster
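
Restart Elasticsearch so the new cluster name takes effect, then verify that the node is reachable. Here is a quick sanity check, assuming Elasticsearch is listening on the default port 9200 on the local machine:

service elasticsearch restart
curl http://localhost:9200/_cluster/health?pretty

The response should report the cluster name mycluster and a green or yellow status.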

Install and Configure Kibana

At this point you are able to collect OSSEC alerts and query them with the Elasticsearch RESTful API. But there is also a web console for Elasticsearch called Kibana, which enables you to build dashboards that post queries automatically to your Elasticsearch backend. To install and configure Kibana, follow this procedure:

  1. Download Kibana:
wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.0milestone4.zip --no-check-certificate
  2. Unzip the downloaded package.
  3. Copy the src directory in the unzipped Kibana directory to your Apache web server htdocs directory or Tomcat webapps directory, depending on which web server you are using.
  4. Change the name of the copied directory to “kibana”.
  5. Open the kibana/config.js file in an editor.
  6. Change the “elasticsearch:” field value to point at your Elasticsearch system, so the line looks like this (including the comma):
elasticsearch: "",

To test the installation, open the Kibana URL (the address of the web server hosting the kibana directory) in a browser. You should see the Kibana welcome screen.


To get to the console screen, click on the Logstash Dashboard link in the Yes bullet point under Are you a Logstash User?

Query Elasticsearch with Kibana

If you let your OSSEC system run for a while you should have collected some alerts that were stored in Elasticsearch. After going to the Logstash Dashboard, you’ll see a screen that has some panels on it. The top panel queries Elasticsearch for all alerts by default.

To get specific alerts, enter a query string for one of the OSSEC fields, such as “Rule = 70001”, and the results appear in the panel called EVENTS OVER TIME, which shows counts of the events returned from Elasticsearch over time. You can run additional queries by clicking the plus icon next to the most recent query, entering the new query string and clicking the magnifying glass icon. The illustration below shows results for three queries that I entered looking for alerts for OSSEC rules 700001, 591 and 700012.

Logstash search

The alert fields are displayed in the panel below EVENTS OVER TIME. You select the fields you want to see by clicking their checkboxes in the Fields list shown in the lower left-hand corner of the illustration. In this case, I’ve selected @timestamp, Alert_Level, Rule, Description and Details.

As new alerts are stored in Elasticsearch, they will appear in the Kibana console if you refresh the screen in your browser. Alternatively, you can have the console refresh automatically by clicking the time scale menu item (labeled something like a day ago to a few seconds ago), then selecting Auto-refresh and one of the refresh intervals, which range from a few seconds to 1 day. The panels will then refresh at the interval you specified, and you should see new alerts pop up on the screen, assuming those OSSEC alerts are being generated on your OSSEC agent systems.
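
You can also query the stored alerts outside of Kibana with the Elasticsearch REST API, which is handy for checking that data is arriving. A minimal example, assuming Elasticsearch is on localhost and you are searching across all indices for alerts that matched rule 591:

curl 'http://localhost:9200/_search?q=Rule:591&pretty'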

When you get this system working, try experimenting with different queries for other OSSEC alerts. I’ve just scratched the surface of what can be done with Elasticsearch.


Article by Vic Hargrave

Software developer, blogger and family man enjoying life one cup of coffee at a time. I like programming and writing articles on tech topics. And yeah, I like coffee.


    1. I tried that, but unfortunately the omelasticsearch plugin for rsyslog would not load on my CentOS 6.3 system. Also, as you point out, Logstash is more versatile.

      1. Great post.
        Question: why do you need the OSSEC server? Can’t you send everything to Logstash?
        Isn’t it better to install OSSEC standalone and send everything by rsyslog to Logstash?

        1. OSSEC is designed to detect and process security events based on rules it applies to system logs; Logstash is not. It’s true you could fashion an agent-server log forwarding system with Logstash, but then you’d have to devise a rule engine to analyze the log data and create the alerts.

          1. I am wary of losing the clientip field as the above writer alludes to. There’s some really valuable localization information in those logs, not just dangerous things.

            I am more wondering about local analysis, and an alert on danger to the logstash server, perhaps listening on a logstash input{} port of choice, thus making data acquisition simple and specific.

            This would appear to provide what I want, please poke holes in it for me if there’s “cutout” marks obvious?

  1. I used beaver instead of syslog (it has caching and tagging capabilities, which is nice if you have more OSSEC servers), and Logstash rules can be more specific. The Kibana3 dashboard can then be very rich and faster at searching. The only missing part is correct interpretation of the parsed fields, since Elasticsearch stores everything as strings, even numbers. So a special index mapping is needed (so far no luck getting it to work completely), but the result is still better than expensive Splunk. You can see examples of my configs on GitHub.

        1. Hey vasek

          I have installed Logstash 1.4.x and ZeroMQ 3.x. I’ve used your ZeroMQ configuration but I haven’t managed to get logs from OSSEC. I also set the configuration in the OSSEC config file.

          When I start Logstash in debug mode, it gets stuck and I am not able to kill the process with ctrl + x.

          I don’t know what my mistake is. I’ve spent the whole weekend on it already.

          Please send an email to me: mehmet@mehmetince.net

  2. I have a small problem: all logs come from the central OSSEC server, so syslog_host is the same even for OSSEC agents. How can I sort the logs based on a specific agent?

    1. Good question. As it turns out, the IP address of the agent is usually returned somewhere in the “Details” field and not always in a consistent position. This makes sorting based on agent more difficult. This is a limitation of OSSEC. It may make sense to add this as a consistently placed field on the syslog output stream, but for now it does not exist.

  3. Vic,
    Great tutorial, I am just having one issue I can’t fix. I am running everything on the same server (OSSEC, Elasticsearch, Kibana). I changed config.js per your directions, but I keep getting this error:

    Could not contact Elasticsearch at http:localhost:9200. Please ensure that Elasticsearch is reachable from your system.

    I also tried putting in the hostname, and hostname/elasticsearch, all with no luck.

  4. How can I filter on the following fields?
    Src IP:
    User: root

    I’ve noticed these fields are not in the above setup/config.
    Do I need to add them to the logstash.conf ?


      1. I am no Logstash guru, but I received a config from a colleague who is using older versions of the above products. This is his filter:

        grok {
          type => ""
          add_field => [ "received_at", "%{@timestamp}" ]
          pattern => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}: Alert Level: %{NUMBER:level:int}; Rule: %{NUMBER:rule:int} - %{DATA:tmp_message}; Location: %{DATA:tmp_location}; (?:srcip: %{IP:srcip}; )?(?:user: %{NOTSPACE:user}; )?%{GREEDYDATA:details}"
          add_tag => 'grok1'
        }

        I see that srcip and user are two specific fields, but no matter how I change the logstash.conf and restart Logstash, I still see the same interface in Kibana3.
        Can it be related to caching?
        Does it take some time to show these srcip+user fields?


    1. Yes, I do know that thread and I was going to suggest it to you. That setup looks at the alerts log file, whereas my example gets the syslog feed. I’m not sure if the src_ip field is always present in the UDP stream of packets, but maybe it doesn’t matter.

      1. Works like a charm on my server!
        I would surely recommend it to anyone! (perhaps use rsyslog if the instances were separated, but that’s it).

  5. Hi,

    Great article! I am having an issue… The OSSEC server seems to be unable to send alerts to my Redis broker, which I use to collect logs from my Logstash agents. Redis is listening on 6379, and that’s what I have in the ossec.conf file:



    I’m wondering whether the OSSEC server can only send alerts to the Logstash server directly but not to Redis.

    Please advise.

    Thank you in advance.

    1. I was able to “fix” the issue – I just specified a udp port in the input section of the logstash server config.

  6. Thanks for the fantastic post!

    “Note that Logstash as of version 1.4.x is run differently than documented here.”

    Any plans to add docs for Logstash 1.4.x or a pointer to someone who has?

