Duplicacy makes a big fuss about being the first and only backup program to de-duplicate all backup sets sent to the same destination ("lock-free deduplication", they call it) - but as far as I can tell, Kopia will also do this? Is this correct? Does anyone know how they compare in this regard? Specifically, I'm trying to figure out how their de-duplication and compression performance compares when using multiple backup sets to the same destination. The UIs, and the way they're set up to work from a workflow perspective, are quite different, but I feel like behind the scenes the backup mechanism is probably similar? Many people seem to use Duplicacy, but it's quite hard to find info about Kopia.

Unfortunately, both products expose so little status information about what they're up to that it's fairly hard to compare this stuff quickly. I'm yet to figure out how to even get the entire size of a backup set (i.e. a set of snapshots) in Kopia - maybe it's not even possible? And for all its pretty graphs, Duplicacy doesn't really tell me anything useful either.

Context: long-time Duplicacy user here, and I tried Kopia recently for a month.

The approach to deduplication is indeed similar. Generally, on paper, there are a lot of compelling features that build on what Duplicacy does and move it forward:

- backup configuration is stored on the destination;
- backup history can be mounted as a virtual drive;
- a server-side component can be run to facilitate ACLs - a nice and frequently requested feature on the Duplicacy forums.

For production use, however, Kopia is not even a contender - it is still alpha. There is no stable version yet, so it is not suitable for production: only compatibility with (migration from) the previous version is promised, and, as expected of 0.x software, breaking changes can happen at any time.

During my time trying Kopia with local SFTP storage I managed to corrupt the datastore twice, and both times it failed to recover. I admit it could have been my fault, but that never happened with Duplicacy, and it should not have been that easy for a novice user to accomplish. The datastore is a rather complex multilevel structure - hence a long chain susceptible to corruption - and if it does get corrupted, it is not clear how to recover.

Other limitations:

- does not support VSS;
- does not support extended-attribute-driven exclusions (for example, to exclude all files skipped by Time Machine on a Mac) - my suggestion to implement that on the forums was met with complete silence;
- very limited support for cloud destinations: no *Drive-type services (this can be worked around with an rclone destination, but that adds a layer of complexity).

As for Arq: moving windows around causes reproducible UI corruption. I reported it to Arq a year ago with screencasts and repro steps; nothing has changed to this day.

Adding even one regex exclusion drastically increases scan time, and without regex the scan time is still very long. They recommended using folder exclusions in the tree instead, which is not sustainable. They offered a test case of 1 million files (1000 folders with 1000 files each) that in their tests takes under 2 minutes to scan; for me, CPU usage sits at 100% for 10+ minutes, after which I give up. CrashPlan (even CrashPlan!) accomplishes the same scan in 2 minutes. The response I got was "I'm working on making it faster." Looking at a spindump, the software keeps repeatedly reallocating small amounts of memory. Who designed that? Are they aware of profilers? It blows my mind.

I kept quiet for a year, giving them a chance to fix it. If they can't fix user-facing issues, how can I trust them not to mess up my backup? Pathetic. Sorry for the harshness - even thinking about this $50 piece of turd makes me mad.
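Incidentally, the 1-million-file test case mentioned above (1000 folders with 1000 files each) is easy to reproduce at a smaller scale if you want to benchmark a scanner yourself. Here is a minimal Python sketch - the directory layout, file names, and exclusion pattern are my own illustration, not Arq's actual test case - that builds such a tree and counts files with and without a regex exclusion. The regex is compiled once, outside the walk loop, which is the obvious way to keep per-file matching cheap:

```python
import os
import re
import tempfile

def make_tree(root, folders=10, files_per_folder=10):
    """Create a test tree of empty files: `folders` directories with
    `files_per_folder` files each (a scaled-down stand-in for the
    1000x1000 = 1M-file case; bump the arguments to reproduce it)."""
    for d in range(folders):
        folder = os.path.join(root, f"folder{d:04d}")
        os.makedirs(folder, exist_ok=True)
        for f in range(files_per_folder):
            open(os.path.join(folder, f"file{f:04d}.txt"), "w").close()

def scan(root, exclude_regex=None):
    """Walk the tree and count files that would be backed up,
    skipping any path matching the optional exclusion regex.
    The pattern is compiled once, not once per file."""
    pattern = re.compile(exclude_regex) if exclude_regex else None
    count = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if pattern and pattern.search(path):
                continue
            count += 1
    return count

with tempfile.TemporaryDirectory() as root:
    make_tree(root)                            # 10 x 10 = 100 files
    total = scan(root)                         # no exclusions
    kept = scan(root, r"file000[0-4]\.txt$")   # drop 5 files per folder
    print(total, kept)                         # prints: 100 50
```

Wrapping something like this around `time.perf_counter()` gives a rough baseline for how fast a plain directory walk plus one precompiled regex can be, which makes a 10-plus-minute scan of the same tree look even less defensible.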