

I have tried every method I could find in the forums, and cannot display any 8-bit character in an xterm window under kubuntu 18.04 (nor in earlier versions). All characters in the range 0x20-0x7e display as expected, but none in the range 0x80-0xfe. When I try, depending on the settings, I get either a blank or the default glyph of a question mark in a dark oval. The characters I am testing are 162 and 163 (decimal), which in western fonts should display as the cent sign and the British pound sign. I've tried a selection of characters above 128 (= 0x80), with the same result for all.

Among other things, I have:

- started xterm with different fonts invoked, all of which have the full character set;
- in addition to the simple echo -e test, used test programs that display full font grids or invoke the appropriate vt100 escape sequences: ESC ) < (to load the DEC Supplemental Character Set into G1) and Ctrl-N (shift-out, to load G1 into the "left-half" GL set);
- tried uxterm, and tried adjusting the locale.

In all cases, only the default '?' glyphs are displayed. Plenty of other people have written to the forums with similar problems, and theirs were solved by adjustments like those in the list above.

I'm running 32-bit kubuntu, rather than 64-bit. We have a custom program that invokes an xterm-based editor, using curses tools, which displays at least one character in the 128-255 range. That character appeared fine running under Sun Solaris, but displays as a blank in kubuntu Linux with ncurses. Restoring that glyph is what put me on this chase. I'd be grateful for any help, and am happy to provide any and all detail.

Your shell's locale settings are part of the problem: they tell applications running inside xterm to use UTF-8. UTF-8 encoding uses codes in the 0x80-0xff range to build up multibyte characters, which is not what you want.

The locale in effect when you start xterm affects how it interprets those same codes. If that locale tells xterm that it uses UTF-8, xterm will use UTF-8 encoding (see the locale resource) and, depending on the resource settings, may not allow you to turn it off. (This is particularly an issue when running xterm in a desktop environment, where your system locale uses UTF-8, e.g., en_US.UTF-8.) You can see what xterm is doing using the control-right-mouse menu: there is an entry "UTF-8 Encoding" which is checked when it expects UTF-8, and grayed out when you cannot change it.

If your shell initialization uses the system's locale settings, then it should be enough to do this from the command line:

LC_ALL=en_US LANG=en_US xterm

Rather than UTF-8, what you appear to be asking about are ISO-8859-1 and related encodings. Those are what the locale names without the ".UTF-8" suffix usually refer to. Here is a screenshot from vttest illustrating ISO-8859-1 (what you might expect to see, for the application you are attempting to use):

[screenshot: vttest character-set screen, ISO-8859-1]

And this is what it would show with UTF-8 encoding:

[screenshot: the same vttest screen, UTF-8 encoding]

The ncurses library checks the locale (which the calling application should have initialized), finds that those single bytes in the 0x80-0xff range do not form complete multibyte UTF-8 characters, and shows blanks. But if your locale settings (and terminal) are set consistently, you will see the expected characters.

On the other hand, your question mentions DEC Supplemental. That is different, because it would rely upon xterm's Unicode support (to use all of the available characters in the National Replacement Character Sets). NRCS (National Replacement Character Sets) is provided as a mode in xterm; the original hardware terminals used a setup selection. Latin-1 is mapped 1-1 into Unicode, but DEC Supplemental (which is much like Latin-1) is not. If your application actually uses DEC Supplemental, you might see something like this (with vttest highlighting the places where it does not match Latin-1):

[screenshot: vttest showing DEC Supplemental, differences from Latin-1 highlighted]
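The locale distinction drawn above can be checked from any shell, without starting xterm. This is a minimal sketch, assuming the standard `locale` and `od` utilities:

```shell
# Show the locale the shell will hand to child processes such as xterm.
# A ".UTF-8" suffix on LANG/LC_CTYPE means bytes 0x80-0xff are treated
# as pieces of multibyte characters, not as Latin-1 glyphs.
locale

# The two test bytes from the question: 0xa2 (cent) and 0xa3 (pound).
# Under ISO-8859-1 each is a complete character; under UTF-8 each is
# an incomplete, invalid sequence, which is why ncurses shows blanks.
printf '\xa2\xa3\n' | od -An -tx1
```

In an xterm started with `LC_ALL=en_US LANG=en_US`, the same `printf '\xa2\xa3\n'` should render the cent and pound signs directly.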

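For the DEC Supplemental route, the escape sequences described in the question can be emitted as follows. This is a sketch only, assuming an xterm whose emulation accepts VT220 character-set designators (the `ESC ) <` designator for DEC Supplemental and the shift-out/shift-in controls are documented in xterm's ctlseqs):

```shell
# Designate DEC Supplemental as the G1 character set: ESC ) <
printf '\033)<'
# Shift-out (Ctrl-N, 0x0E): invoke G1 into GL, so printable bytes
# 0x21-0x7e now select glyphs from DEC Supplemental instead of ASCII.
printf '\016'
# ... text printed here is taken from the G1 set ...
# Shift-in (Ctrl-O, 0x0F): restore G0 (ASCII) as GL.
printf '\017'
```

Note that this only produces the expected glyphs when xterm is not in UTF-8 mode; in a UTF-8 locale the single bytes are again rejected as incomplete multibyte sequences.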