OPush EhCache

An [EhCache](http://ehcache.org/) instance is embedded in OPush in order to store information related to device synchronization.

This library manages seven different stores:

* *mailSnapshotStore*: the last known email state for a given device
* *monitoredCollectionService*: collections handled by push mode
* *syncedCollectionStoreService*: sync preferences for each device
* *unsynchronizedItemService*: calendar entries and contacts being sent to a device
* *mailWindowingIndexStore*: emails being sent to a device
* *mailWindowingChunksStore*: emails being sent to a device
* *syncKeysStore*: last known states for each device

Beyond these technical details, the OPush administrator has to understand what each store contains in order to manage the EhCache configuration.

To keep the application both fast and durable, data is stored in memory and on disk.

Each store is configured as follows:

* data is stored on disk for durability
* data is stored in memory for performance
* each element is dropped after one month
* elements are evicted from memory in LRU order
* each store is allocated a percentage of the EhCache memory

**Configuration**

EhCache may be configured in */etc/opush/ehcache_conf.ini*:

```
### EHCACHE MEMORY SETTINGS

# Default value: half of JVM max memory
#maxMemoryInMB=

## BY CACHE, IN PERCENT (optional parameters)

#mailSnapshotStore=30
#monitoredCollectionService=5
#syncedCollectionStoreService=5
#unsynchronizedItemService=25
#mailWindowingIndexStore=5
#mailWindowingChunksStore=25
#syncKeysStore=5

### EHCACHE STATISTICS SETTINGS
statsSamplingTimeStopInMinutes=10
statsShortSamplingTimeInSeconds=1
statsMediumSamplingTimeInSeconds=10
statsLongSamplingTimeInSeconds=60
```

By default, the EhCache memory size is half of the JVM max memory, which is defined in the Jetty configuration (*/etc/default/jetty* on Debian).
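
For example, assuming Jetty is started with a 2 GB heap (e.g. `-Xmx2048m` in the `JAVA_OPTIONS` of */etc/default/jetty*), EhCache would default to 1024 MB; the setting below overrides that default with a purely illustrative lower cap:

```
# JVM heap is 2 GB (-Xmx2048m in /etc/default/jetty), so the
# default EhCache size would be 1024 MB; cap it at 768 MB instead
maxMemoryInMB=768
```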

The EhCache memory is shared between the seven stores discussed above: each store is assigned a percentage of it, and the percentages must sum to 100%.

The default configuration, shown in the example above, reflects the relative size of each store inside OPush.
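
For instance, with *maxMemoryInMB=1024*, the default *mailSnapshotStore=30* gives that store roughly 307 MB of heap. The split below is a purely illustrative custom allocation for a deployment that favors the mail stores; the only hard requirement is that the values sum to 100:

```
# illustrative custom allocation (values must sum to 100)
mailSnapshotStore=40
monitoredCollectionService=5
syncedCollectionStoreService=5
unsynchronizedItemService=20
mailWindowingIndexStore=5
mailWindowingChunksStore=20
syncKeysStore=5
```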

In order to monitor those stores, a [CRaSH](http://obm.org/content/crash) command has been developed; its usage is explained below.

Monitoring also needs some configuration, and the default values may be adjusted to your needs:

* *statsSamplingTimeStopInMinutes*: controls how long statistics are sampled; sampling for too long may overload the server
* *statsShortSamplingTimeInSeconds*: short sampling time, used in the dashboard command (the three sampling-time parameters work like a Linux load average)
* *statsMediumSamplingTimeInSeconds*: medium sampling time, used in the dashboard command
* *statsLongSamplingTimeInSeconds*: long sampling time, used in the dashboard command
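
As an illustrative sketch only, a deployment that finds the one-second window too noisy might widen all three windows while leaving the sampling stop unchanged:

```
# illustrative, coarser sampling windows
statsSamplingTimeStopInMinutes=10
statsShortSamplingTimeInSeconds=5
statsMediumSamplingTimeInSeconds=30
statsLongSamplingTimeInSeconds=120
```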

**EhCache CRaSH command**

There are actually three commands:

* *ehcache conf*: displays the current EhCache configuration
* *ehcache stats*: displays the latest statistics
* *ehcache dashboard*: displays a dashboard of EhCache statistics, refreshed every second
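
Once connected to the OPush CRaSH shell (see the CRaSH link above for how to connect in your deployment), the commands are typed directly at the prompt; the `%` prompt below is CRaSH's default:

```
% ehcache conf
% ehcache stats
% ehcache dashboard
```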

Example of the *ehcache dashboard* command:
![EhCache dashboard](/media/images/ehcache_dashboard.png "EhCache dashboard")

In this command's output, the first two parts are static; only the statistics part is refreshed.

This table shows statistics per store:

* disk hits over one second (*statsShortSamplingTimeInSeconds* configuration)
* disk hits over ten seconds (*statsMediumSamplingTimeInSeconds* configuration)
* disk hits over one minute (*statsLongSamplingTimeInSeconds* configuration)
* the memory actually used by the store (in bytes, kilobytes, megabytes, ...)
* a memory bar with the percentage of its dedicated maximum memory the store is using

**How to configure it well**

When a memory store is full, each new element added evicts an older element from memory.
You can check whether the cache is well configured; three states can be observed (a rebalancing sketch follows the list):

1. If a store's memory space is too small
    * Active elements will be moved to disk.
    * The server will then have to search for elements on disk, which is costly, rather than in memory.
    * In this case you will see the three "DISK HITS /s" columns showing values different from 0 after a few samples.
    * --> Increase the size of this store.

2. If a store's memory space is well configured
    * Few or no active elements will be moved to disk.
    * Most of the time, the server will find elements in memory, which is really fast.
    * In this case you will see in the "DISK HITS /s" columns a number that is often 0, sometimes 1.
    * --> The store seems to have the right memory size.

3. If a store's memory space is too big
    * Many unused elements will be kept in memory.
    * Your OPush server will then use its memory to store elements that will never be used.
    * In this case you will see in the "DISK HITS /s" columns a number that is often 0 (though some hits can appear).
    * --> Decrease the store's memory until you reach the right size.
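
As a sketch of the tuning loop (the observation and the new values are illustrative only): suppose the dashboard shows sustained disk hits on *unsynchronizedItemService* while *mailWindowingChunksStore* stays idle; memory can be shifted between the two in */etc/opush/ehcache_conf.ini* while keeping the sum at 100:

```
# illustrative rebalance after observing the dashboard
mailSnapshotStore=30
monitoredCollectionService=5
syncedCollectionStoreService=5
unsynchronizedItemService=30
mailWindowingIndexStore=5
mailWindowingChunksStore=20
syncKeysStore=5
```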

**NOTE**: when OPush is stopped, every in-memory element is moved to disk.
At startup it is therefore normal to see many disk hits; wait for the cache to warm up before tuning the configuration.