| Commit message | Author | Age | Files | Lines |
| |
This infrastructure doesn't in any way guarantee that the character
we produce will sort after the one we incremented; but it does at least
make it much more likely that we'll end up with something that is a valid
character, which improves our chances.
Kyotaro Horiguchi, with various adjustments by me.
| |
These functions should take a pg_locale_t, not a collation OID, and should
call mbstowcs_l/wcstombs_l where available. Where those functions are not
available, temporarily select the correct locale with uselocale().
This change removes the bogus assumption that all locales selectable in
a given database have the same wide-character conversion method; in
particular, the collate.linux.utf8 regression test now passes with
LC_CTYPE=C, so long as the database encoding is UTF8.
I decided to move the char2wchar/wchar2char functions out of mbutils.c and
into pg_locale.c, because they work with wchar_t, not pg_wchar, and thus
don't really belong with the mbutils.c functions. Keeping them where they
were would have required importing pg_locale_t into pg_wchar.h somehow,
which did not seem like a good plan.
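
A rough sketch of that conversion pattern, assuming a POSIX-2008 platform;
the helper name and the HAVE_MBSTOWCS_L configure symbol are illustrative
here, not the actual pg_locale.c code:

    #include <stdlib.h>
    #include <locale.h>     /* locale_t, uselocale() on POSIX-2008 systems */
    #include <wchar.h>

    /*
     * Illustrative sketch only: convert a multibyte string to wide characters
     * under a caller-supplied locale.  Where mbstowcs_l() exists it is used
     * directly; otherwise the thread's locale is switched temporarily with
     * uselocale() and restored afterwards.
     */
    static size_t
    mb_to_wchar_in_locale(wchar_t *to, const char *from, size_t tolen,
                          locale_t loc)
    {
    #ifdef HAVE_MBSTOWCS_L
        return mbstowcs_l(to, from, tolen, loc);
    #else
        locale_t    save = uselocale(loc);  /* select the target locale */
        size_t      result = mbstowcs(to, from, tolen);

        uselocale(save);                    /* restore the previous locale */
        return result;
    #endif
    }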
| |
The previous functions of assign hooks are now split between check hooks
and assign hooks, where the former can fail but the latter shouldn't.
Aside from being conceptually clearer, this approach exposes the
"canonicalized" form of the variable value to guc.c without having to do
an actual assignment. And that lets us fix the problem recently noted by
Bernd Helmle that the auto-tune patch for wal_buffers resulted in bogus
log messages about "parameter "wal_buffers" cannot be changed without
restarting the server". There may be some speed advantage too, because
this design lets hook functions avoid re-parsing variable values when
restoring a previous state after a rollback (they can store a pre-parsed
representation of the value instead). This patch also resolves a
longstanding annoyance about custom error messages from variable assign
hooks: they should modify, not appear separately from, guc.c's own message
about "invalid parameter value".
| |
File encodings can be specified separately from client encoding.
If not specified, client encoding is used for backward compatibility.
Cases where the file encoding doesn't match the client encoding are slower
than matched cases, because we don't have conversion procs for other
encodings; improving their performance is left as future work.
Original patch by Hitoshi Harada, modified by me.
| |
This adds collation support for columns and domains, a COLLATE clause
to override it per expression, and B-tree index support.
Peter Eisentraut
reviewed by Pavel Stehule, Itagaki Takahiro, Robert Haas, Noah Misch
| |
elsewhere.
Similarly rename the version in mbprint.c, not because this affects anything
but just to keep the two copies in exact sync. There was some discussion of
having only one copy in src/port/ instead, but this function is so small
and unlikely to change that that seems like overkill.
Slightly editorialized version of a patch by Joseph Adams. (The bug-fix
aspect of his patch was applied separately, and back-patched.)
| |
as necessary.
Itagaki Takahiro with some changes from me
| |
provided by Andrew.
| |
update.
Per discussion.
| |
already did that on Windows, but it's needed on other platforms too when
LC_CTYPE=C. With other locales, we enforce (or trust) that the codeset of
the locale matches the server encoding so we don't need to bind it
explicitly. It should do no harm in that case either, but I don't have
full faith in the PG encoding -> OS codeset mapping table yet. Per recent
discussion on pgsql-hackers.
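
The binding described is presumably gettext's bind_textdomain_codeset(); a
minimal sketch of that pattern, with an invented wrapper name and placeholder
arguments:

    #include <libintl.h>    /* bind_textdomain_codeset(), from GNU gettext */

    /*
     * Illustrative sketch only: ask gettext to deliver translated messages
     * for a text domain in the server's encoding, rather than in whatever
     * codeset LC_CTYPE implies (unhelpful when LC_CTYPE=C).
     */
    static void
    bind_domain_to_server_encoding(const char *domain, const char *encoding)
    {
        if (bind_textdomain_codeset(domain, encoding) == NULL)
        {
            /* failure (e.g. out of memory): translations keep the default codeset */
        }
    }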
| |
conversion functions. This allows transaction rollback to revert to a
previous client_encoding setting without doing fresh catalog lookups.
I believe that this explains and fixes the recent report of "failed to commit
client_encoding" failures.
This bug is present in 8.3.x, but it doesn't seem prudent to back-patch
the fix, at least not till it's had some time for field testing in HEAD.
In passing, remove SetDefaultClientEncoding(), which was used nowhere.
| |
ENABLE_NLS is not defined, for better compatibility of the backend with
modules compiled the other way.
Per note from Tom after my previous commit.
| |
too, so that the codeset is properly mapped on the newly added PL domains.
| |
encoding conversion functions. These are not can't-happen cases because
it's possible to create a conversion with the wrong conversion function
for the specified encoding pair. That would lead to an Assert crash in
an Assert-enabled build, or incorrect conversion otherwise, neither of
which is desirable. This would be a DOS issue if production databases
were customarily built with asserts enabled, but fortunately that's not so.
Per an observation by Heikki.
Back-patch to all supported branches.
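
A rough sketch of the kind of check being added (invented function name, not
the actual backend code): the conversion function verifies that it was invoked
for the encoding pair it implements and raises an ordinary error instead of
relying on an Assert:

    #include "postgres.h"
    #include "fmgr.h"
    #include "mb/pg_wchar.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(latin1_to_utf8_sketch);

    /*
     * Illustrative sketch only: reject a mismatched pg_conversion entry with
     * a clean error rather than an Assert failure or a silently wrong
     * conversion.
     */
    Datum
    latin1_to_utf8_sketch(PG_FUNCTION_ARGS)
    {
        int     src_encoding = PG_GETARG_INT32(0);
        int     dest_encoding = PG_GETARG_INT32(1);

        if (src_encoding != PG_LATIN1 || dest_encoding != PG_UTF8)
            ereport(ERROR,
                    (errcode(ERRCODE_INTERNAL_ERROR),
                     errmsg("unexpected encoding pair in conversion function")));

        /* ... the actual byte-level conversion of the source goes here ... */

        PG_RETURN_VOID();
    }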
| |
except the caller can specify the encoding to work in; this will be needed
for pg_stat_statements. In passing, do some marginal efficiency hacking
and clean up some comments. Also, prevent the single-byte-encoding code
path from fetching one byte past the stated length of the string (this
last is a bug that might need to be back-patched at some point).
| |
use for other modules; also move pnstrdup().
Clean up code slightly.
| |
avoid this problem in the future.)
| |
Add some comments so hopefully the next poor sod doesn't fall into the
same trap. (Wrong comments are worse than none at all...)
| |
renumbering of encoding IDs done between 8.2 and 8.3 turns out to break 8.2
initdb and psql if they are run with an 8.3beta1 libpq.so. For the moment
we can rearrange the order of enum pg_enc to keep the same number for
everything except PG_JOHAB, which isn't a problem since there are no direct
references to it in the 8.2 programs anyway. (This does force initdb
unfortunately.)
Going forward, we want to fix things so that encoding IDs can be changed
without an ABI break, and this commit includes the changes needed to allow
libpq's encoding IDs to be treated as fully independent of the backend's.
The main issue is that libpq clients should not include pg_wchar.h or
otherwise assume they know the specific values of libpq's encoding IDs,
since they might encounter version skew between pg_wchar.h and the libpq.so
they are using. To fix, have libpq officially export functions needed for
encoding name<=>ID conversion and validity checking; it was doing this
anyway unofficially.
It's still the case that we can't renumber backend encoding IDs until the
next bump in libpq's major version number, since doing so will break the
8.2-era client programs. However the code is now prepared to avoid this
type of problem in future.
Note that initdb is no longer a libpq client: we just pull in the two
source files we need directly. The patch also fixes a few places that
were being sloppy about checking for an unrecognized encoding name.
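
As a small illustration of the intended client-side usage, a libpq program
can work purely with encoding names and let the library do the name<=>ID
mapping; the connection string below is just a placeholder:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=postgres");

        if (PQstatus(conn) == CONNECTION_OK)
        {
            int         enc_id = PQclientEncoding(conn);

            /* map the numeric ID back to a name instead of assuming its value */
            printf("client encoding: %s\n", pg_encoding_to_char(enc_id));

            /* and map a name to whatever ID this libpq build happens to use */
            printf("\"UTF8\" is encoding ID %d in this libpq\n",
                   pg_char_to_encoding("UTF8"));
        }
        PQfinish(conn);
        return 0;
    }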
| |
database via builtin functions, as recently discussed on -hackers.
chr() now returns a character in the database encoding. For UTF8-encoded
databases the argument is treated as a Unicode code point. For other
multi-byte encodings the argument must designate a strict ASCII character,
or an error is raised, as is also the case if the argument is 0.
ascii() is adjusted so that it remains the inverse of chr().
The two-argument form of convert() is gone, and the three-argument form now
takes a bytea first argument and returns a bytea. To cover this loss, three new
functions are introduced:
. convert_from(bytea, name) returns text - converts the first argument from the
named encoding to the database encoding
. convert_to(text, name) returns bytea - converts the first argument from the
database encoding to the named encoding
. length(bytea, name) returns int - gives the length of the first argument in
characters in the named encoding
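
For illustration, a short libpq program exercising the new functions; the
connection string and the sample text are placeholders:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=postgres");

        if (PQstatus(conn) == CONNECTION_OK)
        {
            /*
             * Round-trip text through LATIN1 (text -> bytea -> text) and
             * measure the byte string's length in characters of that encoding.
             */
            PGresult   *res = PQexec(conn,
                "SELECT convert_from(convert_to('résumé', 'LATIN1'), 'LATIN1'),"
                "       length(convert_to('résumé', 'LATIN1'), 'LATIN1')");

            if (PQresultStatus(res) == PGRES_TUPLES_OK)
                printf("round trip: %s, length in LATIN1: %s\n",
                       PQgetvalue(res, 0, 0), PQgetvalue(res, 0, 1));

            PQclear(res);
        }
        PQfinish(conn);
        return 0;
    }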
| |
"Server-side support of all encodings" around 2007/3/26.
initdb required.
| |
along with new conversions among EUC_JIS_2004, SHIFT_JIS_2004 and UTF-8.
The catalog version has been bumped.
| |
bletcherous and unsafe manipulation of global encoding setting.
Clean up libxml reporting mechanism a bit (it still looks like a
dangling-pointer crash waiting to happen, though, not to mention
being far less than sane from a localization standpoint).
| |
characters in all cases. Formerly we mostly just threw warnings for invalid
input, and failed to detect it at all if no encoding conversion was required.
The tighter check is needed to defend against SQL-injection attacks as per
CVE-2006-2313 (further details will be published after release). Embedded
zero (null) bytes will be rejected as well. The checks are applied during
input to the backend (receipt from client or COPY IN), so it no longer seems
necessary to check in textin() and related routines; any string arriving at
those functions will already have been validated. Conversion failure
reporting (for characters with no equivalent in the destination encoding)
has been cleaned up and made consistent while at it.
Also, fix a few longstanding errors in little-used encoding conversion
routines: win1251_to_iso, win866_to_iso, euc_tw_to_big5, euc_tw_to_mic,
mic_to_euc_tw were all broken to varying extents.
Patches by Tatsuo Ishii and Tom Lane. Thanks to Akio Ishida and Yasuo Ohgaki
for identifying the security issues.
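
A minimal sketch of the validation entry point; the wrapper name is invented,
and the pg_verify_mbstr() signature shown here should be checked against
pg_wchar.h rather than taken as authoritative:

    #include "postgres.h"
    #include "mb/pg_wchar.h"

    /*
     * Illustrative sketch only: verify that an incoming byte string is legal
     * in the given server encoding before letting it into the system.  With
     * noError = false, pg_verify_mbstr() ereports on invalid byte sequences
     * (including embedded zero bytes); with noError = true it returns false
     * instead.
     */
    static void
    check_incoming_string(int encoding, const char *str, int len)
    {
        (void) pg_verify_mbstr(encoding, str, len, false);
    }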
| |
up a bunch of the support utilities.
In src/backend/utils/mb/Unicode, remove the nearly duplicate copies of the
UCS_to_XXX perl script and replace them with a single version that handles
all the generic files. Update the Makefile so that it knows about all the
map files.
This produces a slight difference in some of the map files, using a
uniform naming convention and not mapping the null character.
In src/backend/utils/mb/conversion_procs create a master utf8<->win
codepage function like the ISO 8859 versions instead of having a separate
handler for each conversion.
There is an externally visible change in the name of the win1258 to utf8
conversion. According to the documentation notes, it was named
incorrectly and this changes it to a standard name.
Running the Unicode mapping perl scripts has shown some additional mapping
changes in koi8r and iso8859-7.
| |
Add comment marker for PG_ENCODING_BE_LAST.
| |
so no one noticed.
| |
John Hansen
| |
Roland Volkmann
| |
UNICODE => UTF8
ALT => WIN866
WIN => WIN1251
TCVN => WIN1258
The old codes continue to work.
| |
John Hansen