* feat(metrics): Add label for main and other listeners
Per-connection stats are now split by main vs. other listener.
Metrics are decorated with the label `listener=main` or `listener=other`.
The memcached listener is also labeled as main.
Signed-off-by: Abhijat Malviya <abhijat@dragonflydb.io>
Add functionality to use the SAVE and BGSAVE commands with a dynamic cloud storage path.
The new syntax is:
SAVE [RDB|DF] [CLOUD_URI] [BASENAME], where CLOUD_URI must start with an S3 or GCS prefix.
For example, with the working directory pointing to a local folder, executing
`SAVE DF s3://bucket/snapshots my_snapshot` saves snapshots to `s3://bucket/snapshots`
with basename `my_snapshot`.
Resolves #4660
---------
Signed-off-by: mkaruza <mario@dragonflydb.io>
The RESP protocol supports binary strings, so we now generate commands
using RESP, which allows passing binary strings.
In addition, fixed the "done" metric, which did not account for the number of shards
in cluster mode.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat(hset_family): Add support for KEEPTTL to HSETEX
If specified, the KEEPTTL option ensures that the TTL is preserved for existing members.
Signed-off-by: Abhijat Malviya <abhijat@dragonflydb.io>
The channel store uses read-copy-update (RCU) to distribute changes to all proactors. The problem is that we used memory_order_relaxed to load the new channel store pointer on each proactor, which *does not guarantee* that we fetch its latest value. The fix is to use sequential consistency to force fetching the latest value of the channel store.
io_uring allows registering a pool of predefined buffers with the kernel.
During a recv operation, the kernel then chooses a buffer from the pool, copies data into it,
and returns it to the application. This is in contrast to preallocated buffers that must be passed to
a regular Recv. So, for example, with 10,000 connections we currently preallocate 10,000 buffers,
even though we may have only 100 in-flight requests.
This PR does not retire the old approach, but extends it with a new one
via the flag `--uring_recv_buffer_cnt=N`, which specifies how many receive buffers to preallocate per thread.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
The debug compression command now runs over all keys to build a byte-frequency histogram.
Based on the histogram, it estimates potential savings from Huffman compression of the keyspace.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
fix: improve stack margin for s3 operations.
Our S3 code relies on the AWS SDK client, which is extremely stack-hungry.
This PR moves some S3 calls to one-off fibers with increased stack sizes,
which reduces stack usage for connection fibers executing snapshot save/load operations.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Add support for PUBSUB SHARDCHANNELS and PUBSUB SHARDNUMSUB, and report an error
if the subcommand is not allowed to run in non-cluster mode.
Resolves #847
Signed-off-by: mkaruza <mario@dragonflydb.io>
The bug: expiring keys during heartbeat must not preempt while writing to the journal, and we assert this with a FiberAtomicGuard. However, this atomicity guarantee was violated because the journal callback acquires a mutex that is already locked by OnJournalEntry(). The fix is to release the lock when OnJournalEntry() preempts.
Signed-off-by: kostas <kostas@dragonflydb.io>
When `KEEPTTL` is optionally supplied after the key, any existing members of the set preserve their TTL values.
Only new members get the TTL applied to them.
fix(dfly_bench): support dns resolution for cluster hosts and multiple slot ranges.
Initial parsing of the MOVED response is implemented, but slot migration is not yet supported.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Makes RedisParser::Buffer const, plus some minor changes in the dragonfly_connection code.
No functionality is changed.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat(set_family): Update TTL for existing fields in SADDEX
In SADDEX, a TTL is now also applied to existing fields: if a field
already exists in the set, its TTL is updated.
A new flag, legacy_saddex_keepttl, is introduced, which is false by
default. If this flag is set to true, SADDEX keeps the legacy behavior.
Signed-off-by: Abhijat Malviya <abhijat@dragonflydb.io>
---------
Signed-off-by: Abhijat Malviya <abhijat@dragonflydb.io>
Mainly comments and refactorings.
There are two functional differences:
1. Flush serialized entries once we have gathered at least K delayed
entries coming from tiered entities.
2. Allow loading snapshots larger than memory for tiering-enabled datastores.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Co-authored-by: Kostas Kyrimis <kostas@dragonflydb.io>
* chore: reproduce a bug related to #4663
Add various debug logs to help track the deadlock.
Add more assertions in helio and provide state time for fibers
during stacktrace printing.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
1. Fix FreeMemWithEvictionStep, which could preempt under FiberAtomicGuard.
This could happen during the return from the inner loop. Now we break
out of the guard first and then preempt in a safe place.
2. Rename LocalBlockingCounter to LocalLatch,
because it is a variation of a latch (see std::latch, for example).
3. Rename PreUpdate to PreUpdateBlocking to emphasize that it can block.
4. Fix mutation counting: count either insertions or changes to existing entries.
Previously we incremented this counter for misses as well.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
feat(rdb_load): Added a flag to ignore key expiry #3858.
Added a new flag, --rdb_ignore_expiry, to ignore key expiry when loading from an RDB snapshot. The flag is also cached in the RDBLoader object for reuse.
FormatInfoMetrics used 18KB of stack in debug mode.
Each call to append grew the stack, even though the calls were made
from scope blocks. This PR overcomes this by moving the calls into lambda functions.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
We call PerformDeletion in an atomic block, and it in turn calls SendInvalidationTrackingMessage,
which can block. We fix this by separating out the blocking logic: the invalidation messages are moved into
a designated send queue and flushed later.
In addition, the functions are renamed to make it explicit that they are atomic (i.e., non-blocking).
Signed-off-by: Roman Gershman <roman@dragonflydb.io>