I need to process a huge file and I'm looking to use Hadoop for it. From what my friend has told me, the file would get split into blocks and processed across several nodes. But if the file is compressed, it won't be splittable and would have to be processed on a single node (so I wouldn't really get any parallelism out of MapReduce). Would it be possible to split the large file into fixed-size chunks, compress each chunk separately, and then run a MapReduce job over them? Thanks!
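
To be concrete, something like this is what I have in mind (just a rough sketch; `huge_input.dat`, the `chunks/` output directory, and the 128 MB chunk size are made-up placeholders, and I'm not worrying yet about records getting cut at chunk boundaries):

```python
import gzip
import os

# Placeholder chunk size: roughly one HDFS block.
CHUNK_SIZE = 128 * 1024 * 1024

def split_and_compress(path, out_dir):
    """Cut the big file into fixed-size pieces and gzip each piece
    separately, so every .gz file can be handled by its own mapper."""
    os.makedirs(out_dir, exist_ok=True)
    with open(path, "rb") as src:
        index = 0
        while True:
            chunk = src.read(CHUNK_SIZE)
            if not chunk:
                break
            out_path = os.path.join(out_dir, f"part-{index:05d}.gz")
            with gzip.open(out_path, "wb") as dst:
                dst.write(chunk)
            index += 1

if __name__ == "__main__":
    split_and_compress("huge_input.dat", "chunks")
```

The idea would then be to copy the `chunks/` directory into HDFS and point the MapReduce job at it, so each compressed chunk becomes one input to one mapper. Is that a reasonable approach, or is there a better way?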