Re: What's the prerenderer do?
- From: Cyrille Chepelov <cyrille chepelov org>
- To: dia-list gnome org
- Subject: Re: What's the prerenderer do?
- Date: Mon, 4 Mar 2002 07:28:31 +0100
On Sun, Mar 03, 2002, at 07:12:11PM -0600, Lars Clausen wrote:
> It does? In what way? Gnus was warning me that there were more than two
> sections, but I could see them neither when sending nor when viewing the
> mail on the list.
Well, the first section is your message's body in ISO-8859-15, and the
second one is your signature in ISO-8859-1. Which is funny, since both
sections could be encoded with either encoding... But it's a known fact that
Gnus is a bit weak at auto-selecting the best encoding for a given bit of
text (I'm confident this will be solved sooner rather than later; I saw some
things to that effect recently on debian-user-french l d o).

Funnily enough, mutt understands this is messy when I reply to your message,
and sees that it can merge these sections. It will then attempt to send in
US-ASCII first, then latin1, latin9, KOI8-R, and if all else fails, UTF-8.
Adding this specific behaviour to Gnus is probably not that difficult.
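For what it's worth, that fallback boils down to something like the sketch
below. This is not mutt's actual code; the candidate list just mirrors the
order above, pick_charset() is a made-up name, and the buffer handling is
simplified. The idea is to try each charset with iconv() and keep the first
one that can represent the whole text.

    #include <iconv.h>
    #include <string.h>

    /* Return the first charset in the preference list that can represent
     * the given UTF-8 text; fall back to UTF-8 itself. Sketch only. */
    static const char *pick_charset(const char *utf8, size_t len)
    {
        static const char *candidates[] =
            { "US-ASCII", "ISO-8859-1", "ISO-8859-15", "KOI8-R" };
        size_t i;

        for (i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++) {
            iconv_t cd = iconv_open(candidates[i], "UTF-8");
            char outbuf[4096];
            char *in = (char *) utf8, *out = outbuf;
            size_t inleft = len, outleft = sizeof(outbuf);
            int ok;

            if (cd == (iconv_t) -1)
                continue;           /* charset not known on this system */
            /* iconv() fails with EILSEQ as soon as one character can't
             * be represented in the target charset */
            ok = (iconv(cd, &in, &inleft, &out, &outleft) != (size_t) -1);
            iconv_close(cd);
            if (ok)
                return candidates[i];
        }
        return "UTF-8";             /* everything is representable here */
    }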
>> Forget the DPS renderer. DPS on free *nix platforms is dead, even if
>> XFree 4 has swallowed some DPS stuff. Since the GYVE project more or less
>> died, nobody really uses it anymore. It was a nice experiment for me, but
>> it's a patch I haven't touched or maintained in... uh, let's say fourteen
>> months (no. It must be even more).
> Are there any files left from the DPS renderer that should be removed?
They've never been merged in the first place.
> I'm horribly confused about the unicode things. I'm leaving the prolog
> stuff (mostly) alone now (except see below), so if you could fix these
> things, my brain and I would be most thankful :) I'm thinking the FreeType
> version can be simpler than the GDK version.
> I haven't yet gotten even close to dumping single glyph outlines; I simply
> dump the whole font. I know it makes the file much larger, but it's a good
> first approximation and *much* easier. To reduce the size, I shall (soon)
> have it use the standard PS fonts when possible.
OK. Be aware, however, that the very definition of "list of standard PS fonts"
is locale-dependent (actually, it depends on whether LC_ALL matches "ja_.*"
or not).
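Something along these lines could do it; the function name and the abbreviated
font list here are only illustration, not existing Dia code:

    #include <locale.h>
    #include <string.h>

    /* Illustrative sketch: only map a face to a standard PS font when the
     * family matches and we're not under a Japanese locale. (The list is
     * abbreviated; the real one would hold the 35 standard font names.) */
    static int use_standard_ps_font(const char *family)
    {
        static const char *standard[] =
            { "Times", "Helvetica", "Courier", "Symbol" };
        const char *loc = setlocale(LC_ALL, NULL);  /* e.g. "ja_JP.eucJP" */
        size_t i;

        if (loc && strncmp(loc, "ja_", 3) == 0)
            return 0;               /* ja_*: always embed the real font */

        for (i = 0; i < sizeof(standard) / sizeof(standard[0]); i++)
            if (strcmp(family, standard[i]) == 0)
                return 1;
        return 0;
    }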
I'll give it a try this evening at folding the StringPrerenderer into your
header pass.

How hard would it be to do font dumping on a glyph basis? Would it help if
I added the code which tracks which glyph of which font is needed?
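Roughly, that tracking could be one usage bitmap per face, keyed on the
FT_Face pointer; note_glyph_used() below is only a hypothetical sketch of
that idea, not code that exists in Dia:

    #include <glib.h>
    #include <ft2build.h>
    #include FT_FREETYPE_H

    static GHashTable *used_glyphs = NULL;  /* FT_Face -> glyph usage bitmap */

    static void note_glyph_used(FT_Face face, FT_UInt glyph_index)
    {
        guint8 *bitmap;

        if (!used_glyphs)
            used_glyphs = g_hash_table_new(g_direct_hash, g_direct_equal);

        bitmap = g_hash_table_lookup(used_glyphs, face);
        if (!bitmap) {
            /* one bit per glyph in the face */
            bitmap = g_new0(guint8, (face->num_glyphs + 7) / 8);
            g_hash_table_insert(used_glyphs, face, bitmap);
        }
        bitmap[glyph_index / 8] |= 1 << (glyph_index % 8);
    }

The PS back-end could then walk that table and emit only the marked glyphs
instead of the whole font.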
> Here's a bit from the freetype mailing list:
>
>> How do I find out the glyph index of characters > 128 in TrueType fonts
>> when the character is not ASCII but in a Latin2 or CP1250 codepage?
>
> The correct way is to activate a Unicode cmap (i.e. PID,EID = 3,1), then
> convert the character code of your encoding to Unicode, and finally use
> FT_Get_Char_Index() to convert the Unicode-encoded character into a glyph
> index.
>
> This would happen in freetype_load_string() and freetype_render_string() in
> lib/font.c. We may well want to just use the Unicode cmap for all fonts
> when Unicode is on, so in freetype_add_font(), add
>
>     FT_Select_Charmap(face, ft_encoding_unicode);
>
> for each face. You know better than I how to convert chars into Unicode.
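Concretely, the freetype_add_font() change described above would amount to
something like this at face-loading time; the wrapper function is made up,
only the FT_Select_Charmap() call itself is the point:

    #include <ft2build.h>
    #include FT_FREETYPE_H

    /* Force the Unicode cmap (PID 3, EID 1 on TrueType fonts); this fails
     * only if the face has no Unicode cmap at all. */
    static int select_unicode_cmap(FT_Face face)
    {
        if (FT_Select_Charmap(face, ft_encoding_unicode) != 0)
            return -1;
        return 0;
    }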
Converting one character to Unicode is really easy: you just do something
like this:
    for (utfchar *p = start; *p; p = uni_next(p)) {
        unichar c;

        uni_get_utf8(p, &c);
        /* do something with c, which is a UCS-4 encoded Unicode character */
    }
So yes, we definitely don't want to mess with FreeType and Microsoft's
vision of encoding maps; we should just talk Unicode to them. We're very
close to always talking UTF-8 internally anyway.
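Putting the two halves together, freetype_render_string() could then walk
the UTF-8 string with those helpers and ask FreeType for each glyph
directly. This is only a sketch, assuming the uni_* helpers behave as in
the snippet above; render_utf8_run() is a made-up name:

    #include <ft2build.h>
    #include FT_FREETYPE_H

    /* utfchar, unichar, uni_next() and uni_get_utf8() are the Dia helpers
     * used in the snippet above. */
    static void render_utf8_run(FT_Face face, utfchar *start)
    {
        utfchar *p;

        for (p = start; *p; p = uni_next(p)) {
            unichar c;                            /* UCS-4 code point */
            FT_UInt glyph;

            uni_get_utf8(p, &c);
            glyph = FT_Get_Char_Index(face, c);   /* 0 means "missing glyph" */
            if (glyph == 0)
                continue;                         /* or draw a fallback box */
            if (FT_Load_Glyph(face, glyph, FT_LOAD_DEFAULT) != 0)
                continue;
            /* ... render face->glyph and advance the pen position ... */
        }
    }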
-- Cyrille
--
Grumpf.