A mounted Box.com drive and duply are a good combination for backing up data to a remote location. Duply supports encrypted backups, and verifying the encrypted archives also helps to detect bit rot. A remote drive keeps cloud-based servers free of extraneous data, which matters most when you pay per GB per month for storage.
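As a rough sketch, a duply profile for this setup might look like the following. The profile name (box), the source directory, the mount point /mnt/box, and the GPG key ID are all assumptions for illustration, not values from this post:

```shell
# ~/.duply/box/conf — hypothetical duply profile named "box"
GPG_KEY='ABC123DE'               # assumed key ID; substitute your own
GPG_PW='changeme'                # better: use gpg-agent and omit this line
SOURCE='/var/www'                # assumed directory to back up
TARGET='file:///mnt/box/backup'  # the Box drive mounted via davfs2
MAX_AGE=3M                       # purge backups older than three months
```

You would then run something like `duply box backup` on a timer (cron or a systemd timer) so uploads happen at predictable times.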
My problem started after I upgraded Ubuntu. Apparently, I ended up with the default configuration for davfs2 instead of my custom one, which caused a few errors. But I also learned a few things:
To use Box.com, you must have an appropriately sized cache and disable file locking in the davfs2 config.
To set this up, edit the config:
sudo nano /etc/davfs2/davfs2.conf
Add these two lines anywhere (preferably under the commented out section or at the bottom):
use_locks 0
cache_size 100
- File locking must be disabled because Box's WebDAV service is not compatible with it. Without locks, a file could be written on the server while it is still being uploaded to Box (and vice versa), but a timed sync/backup strategy should prevent this.
- Box.com limits individual files to around 50 MB, and duply's default volume size is 25 MB, so a 100 MB cache is safely above both limits.
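Putting it together, the relevant section of /etc/davfs2/davfs2.conf looks like this (cache_size is in MiB), with each option on its own line:

```
# /etc/davfs2/davfs2.conf
use_locks 0       # Box's WebDAV service does not support locking
cache_size 100    # MiB; above Box's ~50 MB file limit and duply's 25 MB volumes
```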
If a mounted drive cannot be written to, the davfs2 cache will grow very large.
In my case, it had grown to around 9 GB. Check what's going on with journalctl:

sudo journalctl -b | grep mount

or something like:

journalctl -u davfs2 --since today

You might see a message like "open files exceed max cache size". If so, increase the cache size in the davfs2 configuration and check whether locking is enabled, as detailed above.
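To see how big the cache actually is, check its directory on disk. The sketch below assumes the default system cache location, /var/cache/davfs2; your davfs2.conf may point elsewhere:

```shell
#!/bin/sh
# Report the size of the davfs2 cache, if one exists.
CACHE_DIR="${CACHE_DIR:-/var/cache/davfs2}"
if [ -d "$CACHE_DIR" ]; then
    du -sh "$CACHE_DIR"
else
    echo "no davfs2 cache at $CACHE_DIR"
fi
```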
The davfs2 cache can be deleted safely if a drive is unmounted.
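A minimal sketch of clearing the cache, guarded by a mount check. The mount point /mnt/box is an assumption; run it with sudo if the cache directory is root-owned:

```shell
#!/bin/sh
# Clear the davfs2 cache, but only when the drive is not mounted.
clear_davfs2_cache() {
    mnt="$1"
    cache_dir="$2"
    if mountpoint -q "$mnt"; then
        echo "refusing to clear cache: $mnt is still mounted"
        return 1
    fi
    # Delete cache contents; :? aborts if cache_dir is somehow empty/unset.
    [ -d "$cache_dir" ] && rm -rf -- "${cache_dir:?}"/*
    echo "cleared $cache_dir"
}

clear_davfs2_cache /mnt/box /var/cache/davfs2
```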
If a drive is busy, it can't be unmounted. You can force an unmount (with a chance of data corruption) using umount -f or umount -l, but it's better to identify and kill the processes using the drive:
lsof | grep '/dev/sda1' (change /dev/sda1 to the mounted drive's device name)

pkill target_process (kills the busy process by name; alternatively use kill PID or killall target_process)
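The whole unmount routine can be sketched as one script. Here /mnt/box is an assumed mount point and target_process a placeholder name; `lsof +f --` treats the argument as a filesystem and lists everything open on it:

```shell
#!/bin/sh
# Try a clean unmount; if the drive is busy, show what is holding it open.
MNT="${MNT:-/mnt/box}"                 # assumed mount point
if umount "$MNT" 2>/dev/null; then
    echo "unmounted $MNT"
else
    echo "$MNT is busy or not mounted; open files (if any):"
    lsof +f -- "$MNT" 2>/dev/null || true
    # After identifying the culprit:
    #   pkill target_process           # or: kill <PID>, killall target_process
    # Last resort, with risk of data loss:
    #   umount -l "$MNT"               # lazy unmount
fi
```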