List:       kde-i18n-doc
Subject:    Re: GCompris on its way to a public release
From:       Albert Astals Cid <aacid@kde.org>
Date:       2014-11-19 21:38:51
Message-ID: 10986405.tqZFs8ZHZn () xps

On Wednesday 19 November 2014 at 01:30:23, Bruno Coudoin wrote:
> On 18/11/2014 12:43, Burkhard Lück wrote:
> > What about http://gcompris.net/wiki/Translation_addons and
> > http://gcompris.net/wiki/Word_Lists ?
> > 
> > The legacy Gtk+ version had some resources in XML files to translate,
> > e.g. wordsgame.xml, gletters.xml, readingv.xml and hangman.xml
> > 
> > In the new GCompris this seems to be done via JSON files, e.g.
> > activities/gletters/resource/default-[lang].json
> > 
> > Do we need to integrate the translations of these resources into scripty?
> 
> Hi,
> 
> Yes, I have not mentioned that yet, but this is the right time to
> talk about it.
> 
> Besides the regular po file we have dataset files for several activities
> that are localized instead of translated. As you mentioned, these files
> were converted from XML to JSON during the port.

How do you see the translation of those datasets happening? I.e. do you want:

a) The translation is in the json
b) The translation is in the .po file

b) is certainly easier, since you just need to extract the text from the .json
to a dummy cpp at Messages.sh time and then just call
i18n(theTextInEnglishFromTheJson) and everything works.
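
For illustration, a minimal sketch of that extraction step in Python (the
script name, the input path and the assumption that the English dataset is a
flat JSON object with a "words" array are all hypothetical; the real GCompris
files may be structured differently):

#!/usr/bin/env python3
# extract_json_strings.py -- hypothetical helper called from Messages.sh.
# Assumption: default-en.json looks like {"words": ["apple", "banana", ...]}.
import json

INPUT = "activities/gletters/resource/default-en.json"  # assumed path
OUTPUT = "rc.cpp"                                        # dummy cpp for extraction

def escape(text):
    # Escape backslashes and double quotes for a C string literal.
    return text.replace("\\", "\\\\").replace('"', '\\"')

with open(INPUT, encoding="utf-8") as f:
    data = json.load(f)

with open(OUTPUT, "a", encoding="utf-8") as out:
    for word in data.get("words", []):
        # Each line becomes a translatable message the gettext tools can pick up.
        out.write('i18n("%s");\n' % escape(word))

Messages.sh would then run the usual extraction over rc.cpp together with the
other sources, and at runtime the activity would look up each English string
through i18n() to get the translated form from the catalog.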

Cheers,
  Albert

> 
> I don't know what the best i18n strategy to adopt is: keeping them in
> GCompris or moving them into scripty. If it helps in the decision, as an
> input, these files are usually pretty stable over time once done.
> 
> As it will require some work on my side and on yours to have them under
> scripty, I would like to go this way only if there is real added value.
> 
> We have not mentioned it yet, but this is an important part of GCompris:
> we also have a large dataset of voices. This represents about 200MB in 40
> languages. In the Gtk+ version it was in the same git repository as the
> code, but that was a huge mistake. It has to be put in a separate repo,
> and svn works better for this kind of workload. Do you have a suggestion
> on the best place for this?
> 
> Bruno.
