Why do you think that the result you're getting is wrong?
You're passing 1 for the "writingscript" parameter, so it isn't obvious what encoding the 8-bit character you're passing in is supposed to be in. But if the 8-bit character you start with is an ASCII character (code point less than 128), or if it's Latin-1, then the equivalent Unicode code point is exactly the same numeric value as the 8-bit value, just widened to 16 bits. So I would expect to get the "actual character" back, especially since you're not specifying any particular starting encoding.
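Here's a quick sketch of that identity mapping, in Python rather than whatever toolkit your "char to text char" function comes from, just to show the encoding relationship:

    # Latin-1 (and ASCII) bytes map to Unicode code points with the same
    # numeric value, just widened: byte 0xE9 becomes U+00E9 ('é').
    b = b"\xE9"
    ch = b.decode("latin-1")
    print(hex(ord(ch)))   # 0xe9 -- same value, now a 16-bit code point
    print(ch)             # é

    # ASCII is a subset of that, so the same holds for bytes below 128.
    print(ord(b"A".decode("ascii")))   # 65, identical to the byte value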
In this regard, the "char to text char" function is trivial for converting a single 8-bit character, especially with no encoding specified. It would be a lot more useful for converting something like UTF-8 to UTF-16, although that won't do you much good with a single 8-bit character either, especially if that particular byte is part of a multi-byte UTF-8 sequence. Or it could be used to convert some other 8-bit encoding (other than Latin-1), such as Mac Roman, to UTF-16 Unicode.
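To see where a real conversion would actually change the value, compare Mac Roman against Latin-1 (again sketched in Python, since I don't know which library your function belongs to):

    # The same byte decodes to different Unicode code points depending on
    # the source encoding -- this is where an explicit conversion matters.
    b = b"\xA5"
    print(hex(ord(b.decode("latin-1"))))     # 0xa5   (U+00A5, yen sign)
    print(hex(ord(b.decode("mac_roman"))))   # 0x2022 (U+2022, bullet)

    # And a single byte taken out of a UTF-8 sequence isn't a character at all:
    print("é".encode("utf-8"))               # b'\xc3\xa9' -- two bytes, one character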