Additional information about your environment is needed, but here is some background material:
If your data is defined to the database as UTF-8 (CCSID 1208) and the CCSID of the job processing the data is not 65535, then the UTF-8 data will be automatically converted to the job CCSID when read. In this case nothing needs to be done to the display file. This, however, assumes that the job CCSID can represent the UTF-8 data being processed (a big assumption, as most users move to a Unicode encoding such as UCS-2/UTF-8/UTF-16 precisely because they need a larger character set).
If your data is defined to the database as UTF-8 (CCSID 1208) and the CCSID of the job processing the data is 65535, then the UTF-8 data will be left encoded as UTF-8. In this case gibberish will indeed be shown on the display if nothing special is done within the application program.
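As a quick check of which of the two situations above applies, you can display and, if appropriate, change the job CCSID with CL commands. This is only a sketch; CCSID 37 (US EBCDIC) is used purely as an example value, so substitute the CCSID appropriate to your locale:

```
DSPJOB  OPTION(*DFNA)   /* Display job definition attributes, including CCSID */
CHGJOB  CCSID(37)       /* Change the job CCSID from 65535 to a real CCSID    */
```

With the job CCSID set to a non-65535 value, database reads of the CCSID 1208 data are converted automatically as described above.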
If the data needs to include characters that no single job CCSID can represent (for instance Russian, Chinese, Arabic, and German concurrently), then you need to use CCSID 13488 or 1200 for the display file definition. You also need to make sure that the DB2 data is not converted to an EBCDIC job CCSID during processing. This can be done by defining the DB2 data as CCSID 13488 or 1200 (rather than 1208). Alternatively, you could define a logical file over the 1208 data, mapping the data to 13488 and/or 1200. In this case you also need a Unicode-enabled 5250 emulator such as Access for Web in order to preserve the extended character set.
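A rough sketch of what a Unicode-capable display file field might look like in DDS (record and field names are made up, and column alignment is approximate): the field is defined as graphic (G) with the CCSID keyword, so the 5250 data stream carries UCS-2 rather than an EBCDIC CCSID:

```
A          R SCREEN1
A            UNAME         50G  B  5  2CCSID(13488)
```

The same idea applies on the database side: defining the column as a graphic type with CCSID 13488 or 1200 (or mapping to one in a logical file) keeps the data in Unicode end to end instead of round-tripping through an EBCDIC job CCSID.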
Is there a reason for using UTF-8 as opposed to UCS-2/UTF-16? UTF-8 data is treated as character data, while UCS-2/UTF-16 data is treated as graphic character data. There is quite a bit of difference in data handling due to this fundamental difference in definition.
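In DB2 for i DDL terms, the distinction looks like this (table and column names are invented for illustration): UTF-8 uses the character types with CCSID 1208, while UTF-16 and UCS-2 use the graphic types:

```sql
-- UTF-8: a character type, CCSID 1208
CREATE TABLE MYLIB.CUST_UTF8  (NAME VARCHAR(100)    CCSID 1208);

-- UTF-16: a graphic type, CCSID 1200 (UCS-2 would be CCSID 13488)
CREATE TABLE MYLIB.CUST_UTF16 (NAME VARGRAPHIC(100) CCSID 1200);
```

Because one is character and the other is graphic, they map to different host variable types in programs (e.g., alphanumeric versus UCS-2/graphic fields in RPG), which is where much of the difference in data handling shows up.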