On modern hardware, the rate of inserting individual Java objects (events) into the persistent storage on a single node can reach up to 100,000 objects per second. In practice, performance is affected by several factors that reduce this value:
- as a rule, a table contains at least one index field (@Id), and often more than one; inserting a record into an index is a separate insert operation, so inserting an object into a table with index fields is slower than into a plain table without indexes (see the sketch after this list)
- increasing the frame size reduces insert performance; as a rule, overall insert throughput decreases roughly linearly as the frame size grows (see the picture below)
- for tables with index fields, insert and read performance degrades slightly as the table, and therefore its indexes, grow
- the size of the allocated heap memory matters: a larger heap gives better performance with large storage sizes, since it reduces how often frames have to be evicted from memory
- inserts with ordered, incrementally increasing values of the index field are faster than inserts with unordered (random) values
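To illustrate the first point, below is a minimal sketch of the two cases being compared: a table with an @Id index field and a plain table without indexes. The class and field names are illustrative, and the import path for the annotations depends on the storage library.

```java
// Illustrative entities only; the @Id annotation comes from the storage library,
// and its import path may differ in your setup.
// import com.example.storage.annotations.Id;

// Inserting into this table also triggers a separate insert into the @Id index,
// so a persist here is slower than a persist into a table with no indexes.
public class IndexedEvent {
    @Id
    public long id;          // index field: each persist also updates the index
    public String payload;
    public long createdAt;
}

// A plain table without index fields: a persist is a single insert action.
public class PlainEvent {
    public String payload;
    public long createdAt;
}
```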
There is a clear relationship between frame size, heap size, and the overall performance of inserts and reads. The best insert performance is usually observed with small frame sizes, and we could stop there, but in practice the internal mechanisms of the cluster keep the frame definitions permanently loaded in memory, along with auxiliary data for transactions and transport communication.
The smaller the frame size, the more frame objects end up on the heap for a given storage size, and the more memory is needed. It is also critical for good insert performance to keep as many index frames as possible loaded directly in memory: if the heap is small relative to the storage size, index frames are flushed from the heap more often and re-read from disk on demand, which degrades performance. A trade-off therefore has to be found between insert speed and the amount of RAM that can be allocated to the heap. The graph gives a fairly detailed picture of these dependencies. Roughly speaking, if the heap size is not constrained, the minimum frame size can be used; if it is constrained, this must be taken into account when planning the storage size and the expected number of new records per unit of time.
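To make the trade-off concrete, the sketch below estimates how many frames a given storage size produces and how much heap their definitions alone may occupy. Both the frame sizes and the PER_FRAME_OVERHEAD_BYTES constant are illustrative assumptions, not values taken from the implementation.

```java
// Back-of-envelope estimate of the frame-size vs. heap trade-off described above.
// PER_FRAME_OVERHEAD_BYTES is a hypothetical placeholder; the real per-frame
// heap footprint depends on the storage implementation.
public class FrameHeapEstimate {
    static final long STORAGE_SIZE_BYTES = 100L * 1024 * 1024 * 1024; // ~100 GB storage
    static final long PER_FRAME_OVERHEAD_BYTES = 1024;                // assumed heap cost per frame definition

    public static void main(String[] args) {
        for (long frameSizeKb : new long[]{64, 256, 1024, 4096}) {
            long frameSizeBytes = frameSizeKb * 1024;
            long frameCount = STORAGE_SIZE_BYTES / frameSizeBytes;
            long heapForDefinitionsMb = frameCount * PER_FRAME_OVERHEAD_BYTES / (1024 * 1024);
            System.out.printf("frame=%d KB -> ~%d frames, ~%d MB of heap for frame definitions%n",
                    frameSizeKb, frameCount, heapForDefinitionsMb);
        }
    }
}
```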
The insert (persist) and read (find by id) values in the graph were obtained on an inexpensive Mac Mini (Late 2012, i7 2.3 GHz, 16 GB RAM, 1 TB SSD) with a small storage size (around 100 GB), by measuring 5,000,000 inserts and reads into/from a table with records of 3 fields and one index field (@Id), filling the identifier incrementally and using the @NoCheck annotation. Heap cleanup was enabled, with -Xmx8G.
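The sketch below shows the shape of such a measurement loop. StorageClient, persist() and findById() are hypothetical placeholders standing in for the real storage API, and IndexedEvent is the sample entity from the sketch earlier in this section; only the measurement pattern is meant to match the description above.

```java
// Minimal benchmark sketch for the measurement described above.
// StorageClient, persist() and findById() are hypothetical placeholders.
public class InsertReadBenchmark {

    /** Hypothetical client interface; substitute the real storage API here. */
    interface StorageClient {
        void persist(Object entity);
        <T> T findById(Class<T> type, long id);
    }

    static final int COUNT = 5_000_000;

    static void run(StorageClient storage) {
        long t0 = System.nanoTime();
        for (long id = 0; id < COUNT; id++) {          // incremental @Id values
            IndexedEvent e = new IndexedEvent();
            e.id = id;
            e.payload = "event-" + id;
            e.createdAt = System.currentTimeMillis();
            storage.persist(e);                        // insert (persist)
        }
        double insertSec = (System.nanoTime() - t0) / 1e9;

        long t1 = System.nanoTime();
        for (long id = 0; id < COUNT; id++) {
            storage.findById(IndexedEvent.class, id);  // read (find by id)
        }
        double readSec = (System.nanoTime() - t1) / 1e9;

        System.out.printf("inserts/s: %.0f, reads/s: %.0f%n",
                COUNT / insertSec, COUNT / readSec);
    }
}
```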
As the graph shows, the frame size has much less effect on read performance than on insert performance.