Graylog Configuration: Graylog & Elasticsearch Performance Checkpoints.
1. Use correct timezone settings to avoid confusion in the browser.
a. Check and set the admin user's timezone properly. The default is UTC.
It can be set in the server.conf file on the line "root_timezone = ".
b. Set the web interface timezone in the web.conf file.
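As a sketch, step 1a could look like this in server.conf (the zone shown, Europe/Berlin, is only an illustration; use any valid IANA/Joda-Time zone ID):

```properties
# /etc/graylog/server/server.conf -- path may differ per installation
root_timezone = Europe/Berlin
```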
2. Change the global timeout for communication with Graylog server nodes. This is very useful when long-running queries are executed;
you can tackle gateway timeout errors with this. The default value is 5s.
The value can be changed in web.conf, in the "timeout.DEFAULT" setting.
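A hedged example of raising the timeout in web.conf (60s is an arbitrary illustration; pick a value that covers your longest expected queries):

```properties
# web.conf (Graylog web interface)
timeout.DEFAULT=60s
```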
3. Choose a sensible heap size: no more than 50% of available RAM.
Lucene makes good use of the filesystem caches, which are managed by the kernel; without enough filesystem cache space, performance will suffer. Also use no more than 31 GB: if the heap is below 31 GB, the JVM can use compressed pointers, which saves a lot of memory (4 bytes per pointer instead of 8).
Change the ES_HEAP_SIZE value in "/etc/sysconfig/elasticsearch".
Even when you have memory to spare, try to avoid crossing the 30.5 GB heap boundary. It wastes memory, reduces CPU performance, and makes the GC struggle with large heaps.
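For example, on a machine with 32 GB of RAM, half of RAM also stays safely below the compressed-pointer limit (a sketch; adjust to your own hardware):

```properties
# /etc/sysconfig/elasticsearch
ES_HEAP_SIZE=16g
```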
4. The best thing to do is disable swap completely on your system. If that's not an option, try to lower swappiness:
Edit /etc/sysctl.conf and set the vm.swappiness value to 1.
vm.swappiness = 1
sysctl -p
Verify that the swappiness setting has been updated:
sysctl -a | grep -i vm.swappiness
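If you can disable swap entirely, as recommended above, this is a sketch of how (requires root; you would also remove or comment out the swap entries in /etc/fstab to make it persist across reboots):

```shell
# turn off all swap devices for the current boot
sudo swapoff -a
```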
5. Set the fielddata size. By default, this setting is unbounded: Elasticsearch will never evict data from fielddata.
To prevent this scenario, place an upper limit on fielddata by adding this setting to
/etc/elasticsearch/elasticsearch.yml
indices.fielddata.cache.size: 50%
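To see how much memory fielddata is actually using before and after setting the limit, the cat API can be queried (this assumes Elasticsearch is listening on localhost:9200):

```shell
# per-node fielddata memory usage
curl -s 'localhost:9200/_cat/fielddata?v'
```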
6. Tune indexing performance.
If you are in an indexing-heavy environment, such as indexing infrastructure logs, you may be willing to sacrifice some search performance for faster indexing rates.
In these scenarios, searches tend to be relatively rare and performed by people internal to your organization. They are willing to wait several seconds for a search, as opposed to a consumer-facing search that must return in milliseconds.
The default merge throttle (indices.store.throttle.max_bytes_per_sec) is 20 MB/s. If you are doing a bulk import and don't care about search at all, you can disable merge throttling entirely.
This will allow indexing to run as fast as your disks will allow. Enter this config in your yml file: "indices.store.throttle.type": "none"
In my case, I've set it to indices.store.throttle.max_bytes_per_sec: 200mb
You can increase index.translog.flush_threshold_size from the default 200 MB to something larger, such as 1 GB. This allows larger segments to accumulate in the translog
before a flush occurs. By letting larger segments build up, you flush less often, and the larger segments merge less often. All of this adds up to less disk I/O overhead and
better indexing rates.
Set it in /etc/elasticsearch/elasticsearch.yml
index.translog.flush_threshold_size: 1gb
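Putting the settings from this section together, a bulk-import-oriented elasticsearch.yml fragment might look like this (a sketch; use either the disabled throttle or the raised byte limit, not both):

```yaml
# /etc/elasticsearch/elasticsearch.yml
# Option A: disable merge throttling entirely (bulk import, search latency irrelevant)
indices.store.throttle.type: none
# Option B: keep throttling but raise the limit (default is 20mb)
# indices.store.throttle.max_bytes_per_sec: 200mb

# Let larger segments accumulate in the translog before flushing (default 200mb)
index.translog.flush_threshold_size: 1gb
```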