creating a content system

Manuel Amador rudd-o at amautacorp.com
Wed Aug 10 23:15:12 CEST 2005


On Wed, 10-08-2005 at 10:49 +0200, Dirk Mueller wrote:
> On Wednesday 10 August 2005 10:37, Zack Rusin wrote:
> 
> > I don't think you ever want to be doing indexing on nfs shares. Ever.
> 
> Oh, you do want it if it's the NFS server that does the indexing, and if it
> can make the index available to you. That's because directory traversal is
> orders of magnitude slower than file reading over NFS. So if you have a
> properly indexed file on NFS, it's many times faster than doing the
> find / recursive grep yourself. In addition, it only needs to index once and
> can then answer the queries.
> 
> Actually, recursive directory listings are so slow over NFS that it's just a
> lot faster to create an INDEX.gz in the base directory of the NFS share and
> zgrep that file each time.
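A minimal sketch of that INDEX.gz idea (file names and paths here are
hypothetical, and the index is just a compressed list of paths, as the
quoted text suggests):

```python
import gzip
import os

def build_index(base_dir, index_path):
    """Walk base_dir once (on the server, where traversal is cheap)
    and write every file path into a gzip-compressed index file."""
    with gzip.open(index_path, "wt") as index:
        for root, _dirs, files in os.walk(base_dir):
            for name in files:
                index.write(os.path.join(root, name) + "\n")

def search_index(index_path, needle):
    """The client-side equivalent of zgrep: scan the single index
    file instead of traversing the whole NFS mount."""
    with gzip.open(index_path, "rt") as index:
        return [line.rstrip("\n") for line in index if needle in line]
```

A client reading one compressed file does a handful of NFS reads,
versus one round trip per directory for a recursive listing.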

I think the original poster meant that no client (workstation PC) should
hit the NFS server to build an index itself; rather, the daemon should
run on the NFS server, and clients should relay their queries to it and
return its responses to the user.

This is actually pretty much infeasible (in a seamlessly deployable way)
for per-user daemons.  There would need to be a system-wide
indexing/search daemon on the server, running all the time and
responding to queries (obviously filtering out results the requesting
user cannot read, via the access(2) function, and filtering out clients
which, per /etc/exports, cannot mount the filesystems in question).
That is why I think that, in corporate environments, per-user
indexing/search daemons a la Beagle are unrealistic.  It would be much
more efficient if the indexing happened a single time, with a single
daemon.


-- 
Manuel Amador                   <rudd-o at amautacorp.com>
http://www.amautacorp.com/            +593 (4) 220-7010


More information about the Klink mailing list