I'm currently looking for the best available solution for scalable storage of large files. Some of the files are 1-2 megabytes, while others are 500-600 gigabytes. I looked at Hadoop, but it seems a bit complicated. Now I'm looking at MongoDB and its GridFS as my next file storage solution (I'm pretty impressed). But I do have a couple of questions:
What happens with GridFS when I write new files concurrently? Is there a lock for read/write operations?
Will the files be cached in RAM, and will that affect read/write performance?
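For scale: GridFS splits each file into chunks (255 KiB by default) and stores every chunk as a separate document in the `fs.chunks` collection, so a 600 GB file becomes millions of chunk documents. A rough sketch of that math (plain Python, no MongoDB server needed; the chunk size is the GridFS default):

```python
import math

# GridFS default chunk size: 255 KiB = 261,120 bytes
CHUNK_SIZE = 255 * 1024

def gridfs_chunk_count(file_size_bytes: int, chunk_size: int = CHUNK_SIZE) -> int:
    """Number of documents GridFS would write to fs.chunks for one file."""
    return math.ceil(file_size_bytes / chunk_size)

# A 2 MB file fits in a handful of chunk documents...
print(gridfs_chunk_count(2 * 1024**2))   # 9

# ...while a 600 GB file turns into roughly 2.3 million of them.
print(gridfs_chunk_count(600 * 10**9))   # 2297795
```

That chunk count is worth keeping in mind when reasoning about locking and caching: each read or write of a huge file touches millions of documents, not one.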