KMountPoint::probablySlow and cifs mount points
Aurélien Gâteau
agateau at kde.org
Mon Nov 25 16:41:48 GMT 2013
On Monday 25 November 2013 13:54:38 Mark Gaiser wrote:
> On Mon, Nov 25, 2013 at 10:45 AM, Aurélien Gâteau <agateau at kde.org> wrote:
> > On Sunday 24 November 2013 19:42:25 Mark Gaiser wrote:
> >> On Sun, Nov 24, 2013 at 5:05 PM, Albert Astals Cid <aacid at kde.org> wrote:
> >> > In Okular we just got bug
> >> > https://bugs.kde.org/show_bug.cgi?id=327846
> >> > PDF Render time is unreasonably slow over cifs on high latency (WAN)
> >> > network connections
> >> >
> >> > Basically the issue is that poppler is quite read-intensive over files,
> >> > reading them over and over, and since the file is "local but really
> >> > remote" it takes ages to render for big files.
> >> >
> >> > The only solution i can think of is doing a local copy and then working
> >> > on
> >> > that.
> >>
> >> That would work with small files (< 10 MB) but will get you into
> >> trouble for bigger files.
> >> You can't - or shouldn't - do that in an automated manner. If the user
> >> manually copies the file and then opens it in a PDF reader: fine. Just
> >> don't auto-copy the file. So you could probably give the user a popup
> >> suggesting that they copy the file to their local drive?
> >
> > I disagree with that: I believe the user should not have to worry about
> > this, and should instead trust the application to be smart and do whatever
> > is most appropriate to deliver the best result.
>
> No. Here's why (again, but in more detail).
> 1. People don't expect the file to be copied. Just to be opened.
Sure, the fact that the file has been downloaded should not be exposed to the
user: it should be downloaded in the background and removed afterwards.
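For illustration only, here is a rough sketch of how such a transparent copy
could look with the kdelibs4-era KIO API (the helper name and the ".pdf"
suffix are mine, not existing Okular code):

    // Sketch: copy a "local but really remote" document to a temporary
    // file, showing KIO's usual progress UI while the copy runs. The
    // temporary file is removed automatically when the object is deleted,
    // i.e. when the document is closed.
    #include <kio/job.h>
    #include <kio/jobuidelegate.h>
    #include <ktemporaryfile.h>
    #include <kurl.h>

    KTemporaryFile *copyToTempFile(const KUrl &url, QWidget *window)
    {
        KTemporaryFile *tmp = new KTemporaryFile;
        tmp->setSuffix(".pdf");
        if (!tmp->open()) {
            delete tmp;
            return 0;
        }
        tmp->close(); // keep the name reserved, let KIO write to it

        KIO::FileCopyJob *job = KIO::file_copy(
            url, KUrl::fromPath(tmp->fileName()),
            -1 /* default permissions */, KIO::Overwrite);
        job->ui()->setWindow(window);
        if (!job->exec()) { // blocking here only to keep the sketch short
            delete tmp;
            return 0;
        }
        return tmp; // open tmp->fileName() with the normal local code path
    }

In a real application the copy would of course run asynchronously; the point
is only that the caller never sees anything but a local path.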
> 2. If you follow this route, you are already in the "probablySlow"
> path, so it copies over a likely slow connection. Have you thought
> about the added bandwidth that takes? Not everyone is on unlimited,
> insanely fast internet lines. Some people have bandwidth caps. For the
> record: I'm not one of those.
I don't think users consider PDF files to be streamable content that can be
read progressively: streaming would make it impossible to do things like
searching for text across the document, since you need the whole document for
that. People with bandwidth caps are aware of them and will download the file
themselves before opening it.
> 3. Copying it leaves the user puzzled as to where "the smart
> application" might have copied the data to.
Not if this is done transparently.
> 4. Copying also slowly eats your disk space. When I had this nasty
> surprise with a 10GB movie I wanted to play - not copy! - I was
> welcomed by a disk-full error.
You can't compare the use case of a movie with that of a PDF. A movie is
something you expect to go through in one go. Arguably, you can also skip and
go back and forth, but that is much easier with movies than with PDFs because
movie container formats were designed for it, and even with that design it
does not work well with all network file systems.
As for disk space, it should be possible to check whether there is enough disk
space before downloading, but again movies and PDFs are quite different: 10GB
PDFs may exist, but they are very much the exception.
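Checking the free space up front is cheap; something along these lines (the
100 MB safety margin is an arbitrary number I picked for the sketch):

    // Sketch: refuse to make a local copy if the temporary directory does
    // not have room for the document plus some head room.
    #include <kdiskfreespaceinfo.h>
    #include <kio/global.h>
    #include <QDir>

    bool haveRoomFor(KIO::filesize_t documentSize)
    {
        const KDiskFreeSpaceInfo info =
            KDiskFreeSpaceInfo::freeSpaceInfo(QDir::tempPath());
        if (!info.isValid()) {
            return false; // be conservative if we cannot tell
        }
        // Keep at least 100 MB free on top of the document itself.
        return info.available() > documentSize + 100 * 1024 * 1024;
    }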
> 5. And what if I open a PDF (in this example) just to see the first
> page and then close it again? That is a needless copy.
I assume the copy is temporary and is removed once Okular is closed. Okular
could also start displaying the first pages while the file is still
downloading in the background; I have no idea whether this is possible with
PDF files.
> 6. You could even argue that you might violate intellectual property
> stuff. Insane? No! For a past employer I had to read PDF files, but
> was by no means allowed to copy them, since doing so would have been an
> IP infringement. So I had to use an application that doesn't make
> caches, temp files or copies internally and really just opens the file,
> shows the data to me and closes it. In other words: no physical copy of
> that file was allowed to be made without explicit permission from some
> management layer.
That sounds like a corner case to me, and I am not sure it matters as long as
the local copy is removed at the end. If you go down that hole you can also
worry about the application's memory being swapped out to disk, or about the
application crashing and dumping the document in a core file.
> You open up a whole can of issues if you follow the "just copy and be
> done with it" route. An app should _never_ copy a file to display it
> or open it. It should either just open it and be slow as hell or _ask_
> the user really nicely if it can copy the file to his local system for
> improved performance.
>
> All I'd like to do is very thoroughly talk you out of the idea that
> you can just automatically copy files when a connection is seemingly
> slow. If an app does that, it should ask first!
Users work with the "Don't make me think!" mantra. Asking before doing the
only sensible thing is a bad user experience. What is needed, though, is a
progress indication that allows the user to cancel the operation if it takes
too long: a progress bar and a cancel button in the status bar, for example.
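To make that concrete, a small sketch of such a widget (the class and slot
names are invented for the example, this is not existing Okular code):

    // Sketch: show the progress of a KIO copy job in the status bar and
    // let the user abort it with a cancel button.
    #include <kio/job.h>
    #include <kjob.h>
    #include <klocale.h>
    #include <QProgressBar>
    #include <QPushButton>
    #include <QStatusBar>

    class CopyProgress : public QObject
    {
        Q_OBJECT
    public:
        CopyProgress(KIO::FileCopyJob *job, QStatusBar *statusBar)
            : QObject(statusBar), m_job(job), m_bar(new QProgressBar)
        {
            m_bar->setRange(0, 100);
            QPushButton *cancel = new QPushButton(i18n("Cancel"));
            statusBar->addPermanentWidget(m_bar);
            statusBar->addPermanentWidget(cancel);

            connect(job, SIGNAL(percent(KJob*,ulong)),
                    this, SLOT(slotPercent(KJob*,ulong)));
            connect(cancel, SIGNAL(clicked()), this, SLOT(slotCancel()));
        }

    private slots:
        void slotPercent(KJob *, unsigned long percent)
        {
            m_bar->setValue(static_cast<int>(percent));
        }
        void slotCancel()
        {
            m_job->kill(KJob::EmitResult); // abort, still emit result()
        }

    private:
        KIO::FileCopyJob *m_job;
        QProgressBar *m_bar;
    };

Such an object could simply be created right after KIO::file_copy(); whether
that is nicer than the standard job tracker notification is debatable, but it
keeps the feedback inside the window the user is actually looking at.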
> >> What should be done is detecting whether the target is mounted via a slow
> >> connection. So if it is mounted via the internet then "probablySlow"
> >> should return true. On the other hand, I don't know if such checks are
> >> in place and, if they are not, how to implement them.
> >
> > Would be nice, but sounds difficult to do reliably, especially without
> > generating too much traffic.
>
> I doubt that.
> If you do a ping or that kind of network trick to figure out its
> speed, then yes, it will cause some traffic. And I don't think you
> should go for that.
>
> What is possible - technically - is detecting whether the connection is
> made through a wired internet connection, a local network connection, or
> the same over wireless. Based on the "connection speed" that
> NetworkManager is already showing you, you can make a first guess at
> whether the connection is slow or fast. If you connect to an IP within
> the private IP ranges (10.x.x.x, 192.168.x.x, etc.) then you can assume a
> fast local connection. Sure, that doesn't "need" to be true, because you
> can also have VPN setups where IPs might seem local but are in fact
> reached through the internet. There is no way to figure that out in
> a sane manner, so those would be "false positives". That doesn't
> matter much; those are the exceptions anyway. For other connections
> you can first do an nslookup (or traceroute) to see whether they are
> local or remote. Even then you can - yet again - argue that it's not
> going to work if you have a local DNS server. Again, not important,
> because those are the exceptions.
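For what it's worth, the private-address part of that heuristic is cheap to
express with plain Qt; a rough sketch (it deliberately ignores the VPN and
local-DNS caveats mentioned above):

    // Sketch: guess whether a host is on the local network by checking
    // whether it resolves to a private (RFC 1918), link-local or loopback
    // address.
    #include <QHostAddress>
    #include <QHostInfo>

    bool looksLikeLocalHost(const QString &hostName)
    {
        // Blocking lookup; a real implementation would use lookupHost().
        const QHostInfo info = QHostInfo::fromName(hostName);
        foreach (const QHostAddress &addr, info.addresses()) {
            if (addr.isInSubnet(QHostAddress::parseSubnet("10.0.0.0/8"))
                || addr.isInSubnet(QHostAddress::parseSubnet("172.16.0.0/12"))
                || addr.isInSubnet(QHostAddress::parseSubnet("192.168.0.0/16"))
                || addr.isInSubnet(QHostAddress::parseSubnet("169.254.0.0/16"))
                || addr == QHostAddress(QHostAddress::LocalHost)) {
                return true;
            }
        }
        return false;
    }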
All of that sounds quite complicated, though: what about adding what Albert
suggests until you are done writing this slow-network detection code?
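Concretely, once probablySlow() knows about cifs (which is what the bug report
asks for), the application side could stay as simple as this sketch;
isOnSlowMount() is just an illustrative wrapper:

    // Sketch: ask KMountPoint whether a path lives on a mount point that
    // is probably slow; if so, fall back to the local-copy approach.
    #include <kmountpoint.h>

    bool isOnSlowMount(const QString &localPath)
    {
        const KMountPoint::List mountPoints = KMountPoint::currentMountPoints();
        const KMountPoint::Ptr mountPoint = mountPoints.findByPath(localPath);
        return mountPoint && mountPoint->probablySlow();
    }

That would keep the heuristics in one shared place instead of every
application growing its own network-speed detection.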
Aurélien