Elasticsearch - Why is it recommended to use 50% of available memory for the heap?


I know Elasticsearch recommends using 50% of available memory for the heap, but this confuses me: on a 2 GB system, that means giving 1 GB to Elasticsearch and 1 GB to the OS.

Why, on an 8 GB system, do I need to reserve 4 GB for the OS rather than the 1 GB that was enough on the smaller system?

P.S. I'm debating whether this question belongs on Stack Overflow or Server Fault. If I've posted it in the wrong place, mention it in the comments and I'll move it.

Historically, most of Elasticsearch's JVM heap usage came from query filters and field data (facets/aggregations/sorting), and there weren't many safeguards in place to prevent OOM errors (this has since changed).

To perform searches, Elasticsearch uses mmap'ed files, which bypass the JVM heap and hand memory management off to the OS disk cache. The OS does an amazing job of using every teeny bit of memory available to it, and if you starve it, performance will suffer.

With recent updates, field data can be stored directly on disk via doc_values. The tradeoff is between the amount of data you can facet/aggregate/sort on and speed: doc_values end up being a little slower than a pure in-memory solution, but YMMV.

My general recommendation for post-1.x Elasticsearch is to allocate 2-4 GB to the JVM, make sure you plan ahead, and use doc_values on any high-cardinality fields you plan to aggregate on.
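
As an illustration, here is a minimal sketch of enabling doc_values in a 1.x-style mapping through the REST API, using Python's requests library. The index name, type name, and field name are hypothetical, and a local node on the default port is assumed.

    import json
    import requests

    # Hypothetical index ("logs"), type ("event"), and field ("user_id");
    # adjust the URL for your own cluster.
    mapping = {
        "mappings": {
            "event": {
                "properties": {
                    "user_id": {
                        "type": "string",
                        "index": "not_analyzed",
                        # Keep field data on disk instead of the JVM heap.
                        "doc_values": True,
                    }
                }
            }
        }
    }

    resp = requests.put("http://localhost:9200/logs", data=json.dumps(mapping))
    print(resp.status_code, resp.json())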

So the 50% number is no longer a hard rule, but even on a server with more than 64 GB of RAM you don't want a heap so large that the JVM can't use compressed oops (https://blog.codecentric.de/en/2014/02/35gb-heap-less-32gb-java-jvm-memory-oddities/).
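
To make the sizing heuristic concrete, here is a rough Python sketch of the rule of thumb above: give the heap at most half of physical RAM (the rest feeds the OS disk cache) and keep it under the compressed-oops cutoff. The ~31 GB figure is an approximation based on the linked article, not an exact JVM constant.

    # Approximate ceiling below which the JVM can still use compressed oops.
    COMPRESSED_OOPS_LIMIT_GB = 31

    def suggested_heap_gb(total_ram_gb: int) -> int:
        # At most half of RAM, and never past the compressed-oops cutoff.
        return min(total_ram_gb // 2, COMPRESSED_OOPS_LIMIT_GB)

    for ram in (2, 8, 64, 128):
        print(f"{ram} GB RAM -> suggested heap ~{suggested_heap_gb(ram)} GB")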

Here is some great reading on why you should love the disk cache: http://kafka.apache.org/documentation.html#persistence

