Are you transferring from Excel or other Unicode files? Is it truncating at about 512 characters? I had this problem, and IBM told me that the data was bad. Of course, it was not bad. It has to do with “invariant characters”. Here is what IBM told me:
“First, let me explain what I mean by an invariant character. Letters like A-Z and numbers 0-9 are invariant. Their code point is the same in all code pages. When we are dealing with Unicode data and Excel, we can process more data if the data is invariant. We basically compress out the high-order byte and store just the low-order byte. So, an A in Unicode would be 0x0041, but we could store it as 0x41. Some of the records contain characters that are outside of the invariant range. For instance, column 13 (M) of row 2 contains typographic single quotes, which are 0x2019, rather than apostrophes (which are 0x0027), and an en dash, which is 0x2013, rather than a hyphen, which is 0x002D. When this happens, the Unicode characters cannot be compressed. So when we read the data from Excel and see that the data is in a non-compressed form, we only accept 511 characters (1022 bytes). For other records, all the characters are invariant and hence can be compressed, so we end up with 1022 characters (1022 bytes). Aside from that, when the data is sent to the iSeries, you lose the non-invariant characters because they won’t translate to code page 37, so you have a data integrity problem as well.”
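If it helps anyone diagnose this, here is a minimal sketch (not part of IBM’s explanation) that scans a string for characters outside the plain ASCII range before you transfer it. The function name find_non_invariant and the sample text are made up for illustration, and treating plain ASCII (U+0000 to U+007F) as the invariant set is an assumption; the actual set the transfer uses may differ.

def find_non_invariant(text):
    """Return (offset, character, code point) for every character
    that is not plain ASCII and may be dropped or cause truncation."""
    return [(i, ch, f"U+{ord(ch):04X}")
            for i, ch in enumerate(text)
            if ord(ch) > 0x7F]

sample = "Price range – about ‘500’ dollars"   # en dash and curly quotes
for pos, ch, cp in find_non_invariant(sample):
    print(f"offset {pos}: {ch!r} ({cp}) is outside the invariant range")

Running something like this over the columns you are uploading at least tells you which rows contain the curly quotes and en dashes that trigger the 511-character limit and the code page 37 translation loss.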
If you have my problem, then there is no workaround.