
Do we need to Upgrade to UCA Support for International Character Storage - and How?

glenn_barber
Participant

We have an app that has been around since the very early days of SQL Anywhere. It is currently under the default collation 1282Latin1. Although the primary language for users is English, there are cases where they need to store names and descriptions in languages such as Chinese, Korean, and Eastern European languages. Users want to cut and paste some of these names from Excel into a form and have the presentation preserved. I found that even in Sybase Central this does not work with our current collation. It does seem to work if I create a database with UCA.

SAP support thinks I might have to convert my entire database to UCA to support the few columns that need these characters. The catch is that there is no easy way to do it. The database wizard does not support designating a different collation, and unloading the database and loading it into a new UCA database results in a variety of truncation and alternate-key creation errors.
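For reference, when creating the target database the collations can be specified directly in SQL rather than through the wizard. A minimal sketch, assuming a DBA connection on a running server (the file path is a placeholder, and you should check the CREATE DATABASE syntax for your SQL Anywhere version):

```sql
-- Create a new database whose CHAR and NCHAR collations both use UCA.
-- UCA for the CHAR collation requires a UTF-8 database encoding.
CREATE DATABASE 'c:\\data\\newdb.db'
    COLLATION 'UCA'
    ENCODING 'UTF-8'
    NCHAR COLLATION 'UCA';
```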

Are there any more efficient solutions than redesigning tables and rebuilding everything under a UCA database by hand?

Accepted Solutions (0)

Answers (2)


VolkerBarth
Active Participant
the default collation 1282Latin1

It's "1252Latin1", correct?

What data types do you use currently? VARCHAR or NVARCHAR?

If it is the former, I guess you should be fine adding the "few" columns as NVARCHAR and storing Unicode values there.
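A minimal sketch of that approach (table and column names are hypothetical; NVARCHAR columns are stored in the database's NCHAR character set, typically UTF-8, independently of the CHAR collation):

```sql
-- Add an NCHAR-based column alongside the existing CHAR-based one;
-- its length is counted in characters, not windows-1252 bytes.
ALTER TABLE Customers ADD name_intl NVARCHAR(100);

-- Store a Unicode value; the N prefix marks an NCHAR string literal.
UPDATE Customers SET name_intl = N'北京公司' WHERE id = 42;
```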

Say, what does the following reveal?

select db_property('CharSet'),
       db_property('Collation'),
       db_extended_property('Collation', 'Properties'),
       db_property('NCharCharSet'),
       db_property('NCharCollation'),
       db_extended_property('NCharCollation', 'Properties');

----

FWIW, I would recommend "the other" SQL Anywhere forum for further discussion; IMHO it gets waaaay more attention from SQL Anywhere experts, and I guess you are already aware of it ...

glenn_barber
Participant

Yes, it's 1252.

I've tried NVARCHAR, and there is still some conversion occurring on save that renders the characters unreadable.
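One thing worth checking in that situation (a suggestion, not a confirmed diagnosis): if the client connection itself is still using windows-1252, values can be converted on the way to the server even when the target column is NVARCHAR. The connection's character set can be inspected with:

```sql
-- Show the character set the current connection is using; if this
-- reports windows-1252, non-Latin-1 characters may be converted or
-- lost before they ever reach an NVARCHAR column.
SELECT connection_property('CharSet');
```

Forcing the client to UTF-8 (for example via the CharSet=UTF-8 connection parameter, where the client library supports it) is one way to rule out client-side conversion.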

Here's the query result.

'windows-1252','1252LATIN1','CaseSensitivity=Ignore','UTF-8','UCA','CaseSensitivity=Ignore;AccentSensitivity=Ignore;PunctuationSensitivity=Primary'

Thanks for the tip on the alternate support area. This site has various issues, including not notifying on updates.

former_member182948
Active Participant
glenn_barber
Participant

I had tried a similar approach, using Sybase Central to do the unload and running the load script in the new UCA database to do the load, where I ran into problems with truncation of data that caused many of the tables to fail to load. How does this approach avoid having to change the schema to accommodate the changes in CHAR and VARCHAR lengths? As I understand it, in a UCA database the schema specifies bytes rather than characters.
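If byte-length limits are indeed what is truncating the data, one possible workaround is to widen the affected CHAR/VARCHAR columns before reloading, since UTF-8 can need up to four bytes per character. A sketch with hypothetical table and column names (check the ALTER TABLE syntax for your version):

```sql
-- In a UTF-8 database, VARCHAR lengths count bytes, so a column that
-- held 100 windows-1252 characters may need up to 400 bytes in UTF-8.
ALTER TABLE Products ALTER description VARCHAR(400);

-- Alternatively, move the column to NVARCHAR, whose length is
-- counted in characters rather than bytes.
ALTER TABLE Products ALTER description NVARCHAR(100);
```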