Hi Andreas,<div>First, my apologies for writing in the jargon that governs my community of practice.</div><div> <br><div>There is a lot more to it. In your csync example you compare apples with bananas (cached vs. on-disk files), so it does not represent the problem: I am not writing about matching/synchronising two plain directories. </div>
<div><br></div><div>I am rather presenting a solution to the <u>sad fact</u> that cloud services with synchronisation do not acknowledge that not all devices have the same amount of disk space, and that some logic is required to manage this constraint. </div>
<div><br></div><div>Here is an example: </div><div>- My mobile phone has a few GB of space, compared to </div><div>- my laptop, which has 20x more, compared to </div><div>- my desktop, which has 200x more, compared </div><div>
- to my server, which tops out at 30 TB. </div><div><br></div><div>In particular I refer to the incremental, statistical nature of file usage: files used infrequently may be moved to another location in a meaningful manner, whilst still being economical about the storage media used for that purpose. This requires some logic which guarantees that I have the newest version of a file on any device which has accessed that file before; whereas if I have not accessed a file earlier, it is appropriate to store it remotely. </div>
<div><br></div><div>In plain English, I could assign the desktop and the server to become complete repositories for all files, and the laptop and mobile to become partial repositories, which always hold the newest files - and are permitted to overwrite less frequently used files if they are about to run out of their assigned disk space. In this way I save space and bandwidth using the entropy recorded in the SQLite database. </div>
<div>An example: I have never opened a given movie (1.2 GB) on my mobile, hence the file should not be synchronised to it. However, my txt-notepad has been used 150k times, so that is the first thing that gets synchronised. </div>
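<div><br></div><div>A minimal sketch of that logic (the table, column names, and sample data below are purely illustrative assumptions, not an actual schema): rank files by how often this device has opened them, and fill the device's assigned quota with the most-used files first, so the heavily-used notepad wins over the never-opened movie.</div><div><br></div>

```python
import sqlite3

# Hypothetical catalogue of files with usage statistics
# (illustrative names and numbers, not from a real implementation).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE files (
    path TEXT PRIMARY KEY,
    size_bytes INTEGER,      -- file size
    access_count INTEGER     -- how often this device opened the file
)""")
db.executemany("INSERT INTO files VALUES (?, ?, ?)", [
    ("/movies/film.mkv", 1_200_000_000, 0),    # never opened here
    ("/notes/notepad.txt", 20_000, 150_000),   # opened 150k times
    ("/photos/holiday.jpg", 4_000_000, 12),
])

def plan_sync(quota_bytes):
    """Return the paths to keep locally: most-used first, until the
    assigned disk space is exhausted; everything else stays remote."""
    keep, used = [], 0
    for path, size, _count in db.execute(
            "SELECT path, size_bytes, access_count FROM files "
            "WHERE access_count > 0 ORDER BY access_count DESC"):
        if used + size <= quota_bytes:
            keep.append(path)
            used += size
    return keep

# A mobile with a small quota keeps the notepad and the photo;
# the never-opened movie is left on the server.
print(plan_sync(quota_bytes=5_000_000))
```

<div>A real daemon would of course also compare timestamps so the newest version is fetched first, but the quota-filling order is the part the cloud services above get wrong.</div>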
<div><br></div><div>SQLite is not used for sync. It is used for the logic.</div><div><br></div><div>Is it clearer now?</div><div><div><br></div><div>/B</div><div><br><div class="gmail_quote">On 22 February 2012 17:17, Andreas Schneider <span dir="ltr"><<a href="mailto:asn@cryptomilk.org">asn@cryptomilk.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Wednesday 22 February 2012 15:27:46 Bjorn Madsen wrote:<br>
> *Thanks Klaas,*<br>
<br>
Hi,<br>
<div class="im"><br>
> As computational complexity is my favourite subject I start thinking how I<br>
> would perform a full replication the day when our data-repositories pass<br>
> 2-4 TB which is just around the corner....<br>
><br>
> On my private ubuntu 10.04 I ran $ ls -aR | wc -l and found 126037 files,<br>
> equal to 1.45 TB<br>
> I copied the full filetree from my two machines (just the paths and the<br>
> md5sum)- let's call them A & B into SQLite (appx. 9.3Mb each). In addition<br>
> I set up inotify to write any changes to the SQLite database on both<br>
> machines. Nothing more.<br>
><br>
> Now my experiment was to run the whole filetree in SQLite, perform three<br>
> join operations:<br>
><br>
</div>> 1. What is on machine A but not on machine B, as a LEFT excluding join,<br>
<div class="im">> i.e. SELECT <select_list> FROM Table_A A LEFT JOIN Table_B B ON A.Key =<br>
> B.Key WHERE B.Key IS NULL<br>
</div>> 2. What is on machine B but not on machine A, as a RIGHT excluding join,<br>
<div class="im">> i.e. SELECT <select_list> FROM Table_A A RIGHT JOIN Table_B B ON A.Key =<br>
> B.Key WHERE A.Key IS NULL<br>
</div>> 3. What is on A and B, as an inner join, i.e. SELECT <select_list> FROM<br>
<div class="im">> Table_A A INNER JOIN Table_B B ON A.Key = B.Key<br>
><br>
> With this operation I produce the lists #1 and #2 which I intend to feed to<br>
> rsync to send with ssh across the both machines (pull not push), and doing<br>
> so would be a piece of cake. I use rsync's option delete after filetransfer<br>
> as the time where a file is unavailable is unnoticeable.<br>
><br>
> However the first operation of this kind takes some time (17m4.096s on my<br>
> Intel Atom) and as our databases grow bigger, exponentially longer. In<br>
> addition the memory footprint also doesn't get prettier.<br>
<br>
</div>Did you expect something else?<br>
<br>
<br>
If I run csync on my home directory with cold caches, it needs 140.91 seconds<br>
walking 836549 files. And about 200MB to store in information about the files<br>
in memory.<br>
<br>
Comparing side A with side B takes less than a second.<br>
<br>
[stderr] 20120222 16:37:00.709 DEBUG csync.api- Reconciliation for local<br>
replica took 0.52 seconds visiting 836549 files.<br>
[stderr] 20120222 16:37:01.203 DEBUG csync.api- Reconciliation for remote<br>
replica took 0.49 seconds visiting 836549 files.<br>
<br>
So 17min vs 0.52 sec ;)<br>
<div class="im"><br>
><br>
> Now as all new files are written to the sqlite database, I can timestamp<br>
> the operation and only use incremental operations (at least until I have<br>
> performed 10^14 file operations, where it would be appropriate to recreate<br>
> the database.<br>
><br>
> This means I have a few powerful operations available:<br>
<br>
</div>17min doesn't sound powerful ... :)<br>
<div class="im"><br>
> #A I can segment the join operations to match the available memory<br>
> footprint using SELECT and an appropriate interval which would reflect the<br>
> allocated memory footprint.<br>
><br>
> #B More interestingly, I can use change propagation in the database<br>
> operation to avoid redundant operations, by only selecting files updated<br>
> since last operation or last "nirvana" when the synchronisation daemon<br>
> reinitiates its check. In my case the usage of change propagation brings my<br>
> memory footprint down to a few kb's (which I could run even on my old<br>
> phone).<br>
><br>
> #C I can measure the entropy (frequency of update, based on list #3) per<br>
> time-interval as an indicator for the age of my files, which permits that<br>
> high-entropy files (typical browser and cached stuff) rarely gets<br>
> synchronised, and that ultra-low entropy stuff could be moved to slower<br>
> drives. In practice this would permit all my drives quickly to have the<br>
> latest files I’m working on and then synchronise everything else later (if<br>
> there is space).<br>
<br>
</div>I don't think sqlite has been written for file synchronization. I don't know<br>
what you really want to achieve but 2-way file synchronization isn't done by<br>
sqlite and rsync is only a one-way file synchronizer.<br>
<span class="HOEnZb"><font color="#888888"><br>
<br>
-- andreas<br>
<br>
--<br>
Andreas Schneider GPG-ID: F33E3FC6<br>
<a href="http://www.cryptomilk.org" target="_blank">www.cryptomilk.org</a> <a href="mailto:asn@cryptomilk.org">asn@cryptomilk.org</a><br>
</font></span><div class="HOEnZb"><div class="h5"><br>
_______________________________________________<br>
Owncloud mailing list<br>
<a href="mailto:Owncloud@kde.org">Owncloud@kde.org</a><br>
<a href="https://mail.kde.org/mailman/listinfo/owncloud" target="_blank">https://mail.kde.org/mailman/listinfo/owncloud</a><br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div>Bjorn Madsen</div><div><i>Researcher Complex Systems Research</i></div><div>Ph.: (+44) 0 7792 030 720 Ph.2: (+44) 0 1767 220 828</div><div><a href="mailto:bjorn.madsen@operationsresearchgroup.com" target="_blank">bjorn.madsen@operationsresearchgroup.com</a></div>
<div><br></div><br>
</div></div></div>