[Parley-devel] Features in next version of Parley

Inge Wallin inge at lysator.liu.se
Wed Sep 10 00:28:40 UTC 2014

Hi and sorry for the long time since my last reply.  I had a very bad 
computer problem: my laptop crashed from overheating and took a 
lot of work down with it. And to make it worse, I had to buy a new 
computer and didn't get email to work on it for almost a week.

Anyway, you raise some interesting points below.  See comments inline. 

On Thursday, August 28, 2014 02:13:46 AM Anša Vernerová wrote:
> Hi,
> > You mean your confidence levels. :) Anyway, real testing by a 
> > learner would be worth a lot. Can you build Parley from the sources?
> I will give it a try (during the weekend).

How did it work out for you? Were you able to get it running?

> > Not sure what you mean by 'translation' here. All of the other 4
> > variations are translations of different kinds. What we have discussed 
> > is to
> > generalise the confidence concept so that you can set separate 
> > confidence levels between the same words but using different training 
> > methods - understanding a
> > spoken word vs being able to write and spell it is a simple example.
> I understand it so that the confidence levels are assigned to an item
> in the target language L2 under the following scenarios (L1 being the
> known language and L2 and L3 target languages):
> reading:             L2.text -> image/L1
>       the learner understands the intended meaning from text, but
> possibly cannot pronounce it nor express it in L3


> listening:            L2.sound -> image/L1
>      the learner understands the intended meaning from sound, but
> possibly cannot correctly spell it


> writing:              image/L1 -> L2.text
>     the learner can write down the word corresponding to a given
> meaning, but possibly not pronounce it correctly


> speaking:           image/L1 -> L2.sound
>     the learner can produce the correct sound for a given meaning, but
> possibly not write it down

Yes.  (Note that Parley doesn't cover this case, but Artikulate does.)

> So far, these can be seen as "passive" and "active" grades for the
> different aspects of a vocabulary entry (reading = "L2.text.passive",
> listening = "L2.sound.passive", writing = "L2.text.active",...) Not
> sure about "translation" either.
> In the following scenarios, in case of a failure to produce the right
> answer, the learner has to indicate which confidence level should be
> affected:
> L2.text/L2.sound -> L3.text/L3.sound     (the problem may be both on
> the L2 and on the L3 side, user clarification is necessary, then the
> confidence levels for the passive skill of L2 and the active skill of
> L3 can be updated)

This is where it gets interesting.  Is the L2 -> L3 scenario really 
worthwhile? All theories about learning languages that I have read 
indicate strongly that you need context to learn something well. And 
having 2 words that supposedly mean the same thing in 2 languages that 
you don't know sounds to me like the opposite of having context.

Parley could possibly support this technically but I doubt that it's a useful 
way of training.

> At an initial stage of learning, the learner might also want to
> practice just the following two scenarios. Should they have their own
> grades?
> dictation:            L2.sound -> L2.text
>       the learner knows how to capture the sound in writing, but
> possibly does not know what it means
> pronunciation:     L2.text -> L2.sound
>       the learner remembers how to pronounce the word, but possibly
> not its meaning

Technically I think they should. We should not design away any 
possibilities in the file format itself. But in practice I also doubt that this 
is a good way to train, and I also question whether it's actually possible.
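
As a thought experiment, the per-skill confidence levels discussed above could be modelled roughly like this. This is only an illustrative sketch -- the skill names, class, and method are my assumptions, not Parley's actual data model (Parley is C++ and stores grades in the kvtml format differently):

```python
from dataclasses import dataclass, field

# Hypothetical skill names for the training scenarios discussed above;
# illustrative only, not Parley's actual identifiers.
SKILLS = ("reading", "listening", "writing", "speaking",
          "dictation", "pronunciation")

@dataclass
class Entry:
    """A vocabulary entry with one confidence level per training skill."""
    word: str
    confidence: dict = field(
        default_factory=lambda: {s: 0 for s in SKILLS})

    def record_result(self, skill: str, correct: bool) -> None:
        # Simple Leitner-style update for illustration: step up one
        # level on a correct answer (capped at 7), reset on a wrong one.
        if correct:
            self.confidence[skill] = min(self.confidence[skill] + 1, 7)
        else:
            self.confidence[skill] = 0
```

The point of separating the levels this way is that a correct answer in a reading exercise only raises the "reading" confidence, leaving "writing" or "speaking" untouched.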

> >> Pregrades are currently 6 Leitner levels below the first visible one
> >> in the user interface. They have testing intervals from 3.5 minutes
> >> up to 8 hours. Inge implemented them.
> > 
> > Yeah... The original Leitner system had an interval of 1 day until next
> > time training a word once you got it right the first time. That was 
> > too long so you forgot almost every word until it was time to train it
> > again.
> Oh! I have parley set up so that the blocking threshold of Level 1 is
> just 4 hours, not 1 day :-) But what you are saying is that the
> Blocking threshold for Level 1 has been replaced by hard-coded
> implementation of pregrades.

Yes (sort of).  Although they are called "initial phase", not pregrades, in the 
user interface. "Pregrades" is just an internal concept that was so named 
because the long-term levels were previously named "grades".  I don't like 
either name, but at least they are only internal to the code.

What is actually happening is that if you learn a new word from scratch it 
goes from grade 0, pregrade 0 to grade 0, pregrade 1. And then when you 
train it repeatedly with intervals 3.5 minutes, 7 minutes, 15 minutes, and 
so on, it goes from pregrade 1 to pregrade 7.  The next step is grade 1, 
pregrade 0 and from there it's exactly as before, i.e. pregrades are not 
used anymore.

So you should set your grade 1 timeout back to 1 day, I think.  
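
The progression described above can be sketched roughly like this. The function and the interval table are my assumptions, approximating the doubling schedule from the mail (3.5 minutes up to 8 hours) -- Parley's actual C++ code may differ:

```python
# Hypothetical pregrade interval table in minutes, approximating the
# roughly doubling schedule described above: 3.5 min, 7 min, 15 min,
# and so on. One entry per pregrade step 1..7.
PREGRADE_INTERVALS_MIN = [3.5, 7, 15, 30, 60, 120, 240]

MAX_PREGRADE = 7

def promote(grade: int, pregrade: int) -> tuple:
    """Advance a word one step after a correct answer.

    A new word (grade 0, pregrade 0) first climbs the pregrade ladder
    up to pregrade 7; the step after that is grade 1, pregrade 0, and
    from then on only the long-term grade moves, i.e. pregrades are
    not used anymore.
    """
    if grade == 0 and pregrade < MAX_PREGRADE:
        return (0, pregrade + 1)
    return (grade + 1, 0)
```

For example, a fresh word goes (0, 0) -> (0, 1), then up the ladder to (0, 7), and the next correct answer promotes it to (1, 0), after which the long-term day-scale thresholds apply.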

> If I am right that pregrades are applied below/at Level 1, then
> getting the word right the first time still skips the pregrade part
> (an unknown word marked as right on the first occasion I see it goes
> to level 2). Which I suppose is the kind of behaviour that one would
> want. (No need to get extra practice for a word which was answered
> correctly.) It definitely sounds much more intriguing than the "three
> consecutive right answers" version.

Yes. If you already know a word there is no use training it 8 times just to 
learn it. But it still moves up through the higher levels just like the other 
words.

> > I have been searching for some research about the optimal timer 
> > intervals. Like you I have a feeling that the current ones are too short, 
> > at least for the higher levels. But I have found nothing to indicate any 
> > better values. Do you know of any?
> No, just the 80-95% rule of thumb, and the experience that I often
> reach those levels even if I skip practice for a long time and then
> come back to my collection. (Obviously, this does not hold for the low
> levels.)
> >> As you also noted compound structures are a very effective way to
> >> reinforce multiple words in one training event. I would like to track
> >> the words individually within such constructions. For example for 
> >> the sentence, "The dog chased the cat", the student should get credit 
> >> for: the, dog, cat, to chase, past tense conjugation and SVO order. I
> >> would
> >> like to see lesson plans with a threshold, that once crossed starts
> >> presenting the user with more complicated compound structures
> >> while they still see their performance improve on the basic 
> >> words.
> > 
> > This is a whole new ballpark, though. It's a very intriguing thought but I 
> > would like to read a little research about it before we get it into
> > Parley.
> > There is a risk that it becomes too complex.
> Yes, I have also been pondering this idea of tracking individual words
> and bits of grammar but practicing full sentences. I always came to
> the conclusion that, as a parley user, I prefer wasting some time by
> not optimizing, than wasting a lot of time by tagging each sentence
> with a list of words and grammar points that it contains.
> I think this would be a great idea for an innovative commercial
> product. After all, Computational Linguistics provides tools (at least
> for some languages) that could do the tagging for the user. But what
> about users who are learning uncommon languages? (I am not fond of
> automatically generated content in language learning. It is fine for
> sentences like "I am Jane/Mary/John/Adam/..." But even just for
> sentences like "John entered the car/bus/train.", one has to have some
> kind of ontology telling them that "car", "bus" and "train" are
> vehicles and that "PEOPLE enter VEHICLES", something like
> http://en.wikipedia.org/wiki/WordNet#Knowledge_structure and
> http://www.pdev.org.uk/ ... To put it simply, computers are not good
> enough (yet) at speaking human languages to produce sensible 
> sentences that students could learn from.)

There is a company called Glossika (glossika.com) that uses this concept 
in a method called "mass sentences". Basically they have a set of several 
thousand sentences that they present in the known and the target 
language and where you study by saying the sentence in the target 
language before they give you the answer.

The idea is that the sentences will give you the structure of the language 
rather than individual words. You will learn the vocabulary anyway since 
many of the sentences use the same word in different contexts.

It's an interesting idea and I am looking into whether we can incorporate it 
into Parley at some point in the future.

> BTW, do you know how often people use some of the more complex
> features of parley? E.g., how many of the files uploaded to the common
> repositories have word types, inflection, synonyms etc. filled in?
> This could give some estimate of how many users would actually use
> some more complex feature such as you are describing.

No, no clue unfortunately. But we do get some bug reports, so I think that 
at least some people use them.

To be honest I am not sure if they are worthwhile in the general sense. 
They give us a lot of work to maintain...

And thanks! I love your feedback!


> Anša
> _______________________________________________
> Parley-devel mailing list
> Parley-devel at kde.org
> https://mail.kde.org/mailman/listinfo/parley-devel
