• Hadoop: How to handle data streams in real-time

    I've recently been working with Hadoop and I'm now using it to handle data streams in real time. I'd like to build a meaningful POC around this that I can showcase. I'm pretty limited in resources, so any help would be appreciated.

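    One possible POC building block, sketched in Java: read records from standard input and append them to a file in HDFS as they arrive, flushing so readers see the data quickly. This is only a minimal sketch; the target path is hypothetical, it assumes fs.defaultFS points at your cluster, and a production setup would more likely put Flume, Kafka or Spark Streaming in front of HDFS.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FSDataOutputStream;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class StreamToHdfs {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();          // picks up core-site.xml / hdfs-site.xml
                FileSystem fs = FileSystem.get(conf);
                Path out = new Path("/streams/events.txt");        // hypothetical target path
                try (FSDataOutputStream os = fs.create(out, true);
                     BufferedReader in = new BufferedReader(new InputStreamReader(System.in))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        os.writeBytes(line + "\n");
                        os.hflush();                               // make the written data visible to readers
                    }
                }
            }
        }
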
  • How to run Hadoop job without JobConf

    I'm trying to submit a Hadoop job without using the deprecated JobConf class. My friend told me that JobClient only supports methods that take a JobConf parameter. Does anyone know how I can submit a Hadoop job using only the Configuration class? Is there Java code for this?

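    With the newer org.apache.hadoop.mapreduce API you can configure and submit a job through Job and Configuration, with no JobConf or JobClient involved. A minimal driver sketch (the mapper and reducer here are just example token-counting classes to make it self-contained):

        import java.io.IOException;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Job;
        import org.apache.hadoop.mapreduce.Mapper;
        import org.apache.hadoop.mapreduce.Reducer;
        import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
        import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

        public class NewApiDriver {

            // Example mapper: emits each whitespace-separated token with a count of 1.
            public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
                private static final IntWritable ONE = new IntWritable(1);
                private final Text word = new Text();
                @Override
                protected void map(LongWritable key, Text value, Context ctx)
                        throws IOException, InterruptedException {
                    for (String token : value.toString().split("\\s+")) {
                        if (!token.isEmpty()) {
                            word.set(token);
                            ctx.write(word, ONE);
                        }
                    }
                }
            }

            // Example reducer: sums the counts for each token.
            public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
                @Override
                protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                        throws IOException, InterruptedException {
                    int sum = 0;
                    for (IntWritable v : values) sum += v.get();
                    ctx.write(key, new IntWritable(sum));
                }
            }

            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                Job job = Job.getInstance(conf, "new-api example");   // no JobConf / JobClient needed
                job.setJarByClass(NewApiDriver.class);
                job.setMapperClass(TokenMapper.class);
                job.setReducerClass(SumReducer.class);
                job.setOutputKeyClass(Text.class);
                job.setOutputValueClass(IntWritable.class);
                FileInputFormat.addInputPath(job, new Path(args[0]));
                FileOutputFormat.setOutputPath(job, new Path(args[1]));
                System.exit(job.waitForCompletion(true) ? 0 : 1);     // submits the job and waits
            }
        }
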
  • Tell MongoDB to pretty print output

    Is there a way to tell MongoDB to pretty print its output? Right now, everything is output on a single line and it's pretty difficult to read (especially with arrays and documents). I appreciate the help.

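    In the mongo shell, db.collection.find().pretty() formats the output. If you're reading results through the Java driver instead, a sketch along these lines prints each document with indentation (the connection string, database and collection names are just examples):

        import com.mongodb.client.MongoClient;
        import com.mongodb.client.MongoClients;
        import com.mongodb.client.MongoCollection;
        import org.bson.Document;
        import org.bson.json.JsonWriterSettings;

        public class PrettyPrint {
            public static void main(String[] args) {
                JsonWriterSettings pretty = JsonWriterSettings.builder().indent(true).build();
                try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                    MongoCollection<Document> users = client.getDatabase("test").getCollection("users");
                    for (Document doc : users.find()) {
                        System.out.println(doc.toJson(pretty));   // indented JSON instead of one long line
                    }
                }
            }
        }
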
  • Query MongoDB with LIKE

    I'm using MongoDB but I need a query like SQL's LIKE, something along the lines of: select * from users where name like '%m%'. Is there a way to do the same in MongoDB? I would appreciate any help.

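    The MongoDB equivalent of LIKE is a regular-expression match: in the shell, db.users.find({ name: /m/ }) behaves like name LIKE '%m%'. A sketch with the Java driver (database and collection names are just examples):

        import static com.mongodb.client.model.Filters.regex;

        import com.mongodb.client.MongoClient;
        import com.mongodb.client.MongoClients;
        import com.mongodb.client.MongoCollection;
        import org.bson.Document;

        public class LikeQuery {
            public static void main(String[] args) {
                try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                    MongoCollection<Document> users = client.getDatabase("test").getCollection("users");
                    // Matches any name containing "m", like SQL's name LIKE '%m%'.
                    for (Document doc : users.find(regex("name", "m"))) {
                        System.out.println(doc.toJson());
                    }
                }
            }
        }
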
  • SanDisk data life span

    Is it possible to get data corruption on a SanDisk CompactFlash card if it has been stored without power for 4 or 5 years?

  • Process large text file with ByteStrings and lazy texts

    I'm looking to process a large Unicode text file that is over 6 GB. I need to count the frequency of each unique word. I'm currently using Data.Map to track the count of each word, but it's taking way too much time and space. Here's the code: import Data.Text.Lazy (Text(..), cons, pack, append)...

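    For comparison, here's the same idea sketched in Java rather than Haskell: stream the file line by line so the 6 GB is never held in memory, and keep only the word-to-count map in RAM. The file path (args[0]) and the whitespace tokenization are simplistic placeholders.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.HashMap;
        import java.util.Map;

        public class WordFrequency {
            public static void main(String[] args) throws IOException {
                Map<String, Long> counts = new HashMap<>();
                // Only one line is held in memory at a time; the map holds one entry per unique word.
                try (BufferedReader reader = Files.newBufferedReader(Paths.get(args[0]), StandardCharsets.UTF_8)) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        for (String word : line.split("\\s+")) {
                            if (!word.isEmpty()) {
                                counts.merge(word, 1L, Long::sum);
                            }
                        }
                    }
                }
                counts.forEach((word, n) -> System.out.println(word + "\t" + n));
            }
        }
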
  • Big data: How to get started

    We've been using R for several years and now we're starting to get into Python. We've been using an RDBMS for data warehousing and R for number-crunching. Now we think it's time to get more involved with big data analysis. Does anyone know how we should get started (basically how to use...

  • How to compress large files in Hadoop

    I need to process a huge file and I'm looking to use Hadoop for it. From what my friend has told me, the file would get split across several different nodes. But if the file is compressed, it won't be split and would need to be processed on a single node (and I wouldn't be able to use...

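    The usual issue is that gzip isn't splittable, so one gzipped file ends up on a single mapper. Splittable options include bzip2, indexed LZO, or container formats such as SequenceFile or Avro with block compression. As one hedged example, this is roughly how a job using the new API can be asked to write bzip2-compressed output that a downstream job can still split:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.io.compress.BZip2Codec;
        import org.apache.hadoop.mapreduce.Job;
        import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

        public class CompressedOutputConfig {
            // Call this while setting up your job; bzip2 output remains splittable for later jobs.
            public static Job configure(Configuration conf) throws Exception {
                Job job = Job.getInstance(conf, "bzip2 output example");
                FileOutputFormat.setCompressOutput(job, true);
                FileOutputFormat.setOutputCompressorClass(job, BZip2Codec.class);
                return job;
            }
        }
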
  • Free space in HDFS

    Is there an HDFS command to see how much free space is available in HDFS? I'm able to see it through the browser using master:hdfsport, but unfortunately I can't access it and I need a command. I can see disk usage but not free space. Appreciate the help.

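    On the command line, hdfs dfs -df -h / shows capacity, used and remaining space, and hdfs dfsadmin -report gives a per-datanode breakdown. If you'd rather get the numbers programmatically, a small Java sketch using the FileSystem API:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.FsStatus;

        public class HdfsFreeSpace {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();        // reads core-site.xml / hdfs-site.xml
                try (FileSystem fs = FileSystem.get(conf)) {
                    FsStatus status = fs.getStatus();
                    System.out.println("Capacity : " + status.getCapacity());
                    System.out.println("Used     : " + status.getUsed());
                    System.out.println("Remaining: " + status.getRemaining());
                }
            }
        }
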
  • MongoDB: Checking to see if an array field contains a unique value

    I've been using MongoDB for over a month now. I have a blog post collection and each post has a tags field (which is an array). This is what it looks like: blogpost1.tags = ['tag1', 'tag2', 'tag3', 'tag4', 'tag5'] blogpost2.tags = ['tag2', 'tag3'] blogpost3.tags = ['tag2', 'tag3', 'tag4', 'tag5']...

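    Matching directly on an array field checks for containment, so in the shell db.posts.find({ tags: 'tag2' }) returns every post whose tags array contains 'tag2'. A sketch of the same query with the Java driver (database and collection names are just examples):

        import static com.mongodb.client.model.Filters.eq;

        import com.mongodb.client.MongoClient;
        import com.mongodb.client.MongoClients;
        import com.mongodb.client.MongoCollection;
        import org.bson.Document;

        public class TagContains {
            public static void main(String[] args) {
                try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                    MongoCollection<Document> posts = client.getDatabase("blog").getCollection("posts");
                    // eq on an array field matches documents whose array contains the value.
                    for (Document post : posts.find(eq("tags", "tag2"))) {
                        System.out.println(post.toJson());
                    }
                }
            }
        }
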
  • Running a big data process in R

    I've recently collected data from the Twitter Streaming API and now the JSON sits in a 10 GB text file. I'm looking to have R handle all of the big data, but I'm not sure whether it can do a few things, such as reading/processing the data into a data frame, descriptive analysis, and plotting. Can R do this? Or do I...

  • Delete millions of rows by ID in SQL

    We're trying to delete roughly 2 million rows from our PG database. We already have a list of IDs that need to be deleted, but it's turning into a slow process. This is what I tried: DELETE FROM tbl WHERE id IN (select * from ids). Basically, this is taking about 2 days to finish. Is there a...

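    Two things usually help: write the delete as a join against an indexed table of IDs (e.g. DELETE FROM tbl USING ids WHERE tbl.id = ids.id), and delete in batches so no single transaction has to touch 2 million rows at once. A hedged JDBC sketch of the batching approach (the table and column names follow the question; the chunk size is arbitrary):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.util.List;

        public class BatchDelete {
            // Deletes the given IDs in chunks so each transaction stays small.
            public static void deleteIds(String url, String user, String pass, List<Long> ids) throws Exception {
                try (Connection conn = DriverManager.getConnection(url, user, pass);
                     PreparedStatement ps = conn.prepareStatement("DELETE FROM tbl WHERE id = ?")) {
                    conn.setAutoCommit(false);
                    int inBatch = 0;
                    for (Long id : ids) {
                        ps.setLong(1, id);
                        ps.addBatch();
                        if (++inBatch == 10_000) {      // flush every 10k rows
                            ps.executeBatch();
                            conn.commit();
                            inBatch = 0;
                        }
                    }
                    ps.executeBatch();                  // flush the final partial batch
                    conn.commit();
                }
            }
        }
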
  • What framework should I use for fast Hadoop real-time data analysis?

    I'm trying to do some real-time data analysis on data in HDFS, but I'm not sure which framework I should use. I'm deciding between Cloudera, Apache and Spark. Which one would best suit me? Thanks!

  • How to cluster keys in Cassandra

    I'm pretty new to Cassandra. From what I've learned, a physical node stores the rows for a given partition key in the order induced by the clustering keys, which makes retrieving the rows in that order easy. But I'm not sure what kind of ordering is induced by clustering...

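    Clustering keys order rows within a partition, not across partitions: by default rows sort ascending by the clustering columns, and CLUSTERING ORDER BY can reverse that. A small sketch, assuming the 4.x DataStax Java driver, a node on localhost and an existing keyspace named demo (all hypothetical), just to show the table shape:

        import com.datastax.oss.driver.api.core.CqlSession;
        import com.datastax.oss.driver.api.core.cql.Row;

        public class ClusteringExample {
            public static void main(String[] args) {
                try (CqlSession session = CqlSession.builder().withKeyspace("demo").build()) {
                    // Partition key: user_id. Clustering key: event_time. Within each user's
                    // partition the rows are kept sorted by event_time (descending here).
                    session.execute(
                        "CREATE TABLE IF NOT EXISTS events ("
                        + " user_id text,"
                        + " event_time timestamp,"
                        + " payload text,"
                        + " PRIMARY KEY (user_id, event_time)"
                        + ") WITH CLUSTERING ORDER BY (event_time DESC)");

                    // Reads within one partition come back in clustering order, newest first.
                    for (Row row : session.execute(
                            "SELECT event_time FROM events WHERE user_id = 'u1' LIMIT 10")) {
                        System.out.println(row.getInstant("event_time"));
                    }
                }
            }
        }
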
  • How to create a funnel in MongoDB

    In MongoDB, I have a collection named event (it basically tracks events from mobile applications). Here's the structure of the document: { eventName:"eventA", screenName:"HomeScreen", timeStamp: NumberLong("135698658"), tracInfo: { ..., "userId":"user1", "sessionId":"123cdasd2123",...

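    There's no built-in funnel operator, so one workable approach is to compute the distinct user set for each step and intersect them in order. A hedged sketch with the Java driver; the field names follow the document in the question, everything else (database, collection, step list) is an example, and it doesn't enforce that a later event happened after the earlier one, which would require comparing the timestamps too:

        import static com.mongodb.client.model.Filters.eq;

        import com.mongodb.client.MongoClient;
        import com.mongodb.client.MongoClients;
        import com.mongodb.client.MongoCollection;
        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;
        import org.bson.Document;

        public class EventFunnel {
            public static void main(String[] args) {
                List<String> steps = Arrays.asList("eventA", "eventB", "eventC");   // funnel definition
                try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                    MongoCollection<Document> events = client.getDatabase("analytics").getCollection("event");
                    Set<String> usersSoFar = null;
                    for (String step : steps) {
                        Set<String> stepUsers = new HashSet<>();
                        // Distinct users who fired this event (tracInfo.userId as in the question).
                        events.distinct("tracInfo.userId", eq("eventName", step), String.class)
                              .into(stepUsers);
                        if (usersSoFar == null) {
                            usersSoFar = stepUsers;
                        } else {
                            usersSoFar.retainAll(stepUsers);   // keep only users who also did this step
                        }
                        System.out.println(step + ": " + usersSoFar.size());
                    }
                }
            }
        }
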
  • How to get a random record in MongoDB

    I have roughly 100 million records and I need to get a random record in MongoDB. What's the best way to do this? I already have the data ready to go, but there's no field from which I can generate a random number or obtain a random row. I would appreciate the help.

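    On MongoDB 3.2+ the $sample aggregation stage does this server-side without needing a random field: db.collection.aggregate([{ $sample: { size: 1 } }]). A Java driver sketch (database and collection names are just examples):

        import com.mongodb.client.MongoClient;
        import com.mongodb.client.MongoClients;
        import com.mongodb.client.MongoCollection;
        import com.mongodb.client.model.Aggregates;
        import java.util.Collections;
        import org.bson.Document;

        public class RandomRecord {
            public static void main(String[] args) {
                try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                    MongoCollection<Document> records = client.getDatabase("test").getCollection("records");
                    // $sample picks documents pseudo-randomly on the server side.
                    Document random = records.aggregate(Collections.singletonList(Aggregates.sample(1))).first();
                    System.out.println(random == null ? "collection is empty" : random.toJson());
                }
            }
        }
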
  • Pass mapped data to multiple reduce functions in Hadoop

    I currently have a large dataset that I need to analyze with multiple reduce functions. What I would like to do is read the dataset only once and then pass the mapped data to multiple reduce functions. Is there a way I can do this in Hadoop? Thank you!

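    A single MapReduce job only runs one Reducer class, so the usual workarounds are either to save the map output and chain several jobs off it, or to fold the different analyses into one reducer and separate their results with MultipleOutputs. A hedged sketch of the second approach; the output names and the two aggregations (sum and max) are just examples:

        import java.io.IOException;
        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Reducer;
        import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

        public class MultiAnalysisReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            private MultipleOutputs<Text, IntWritable> mos;

            // In the driver, register one named output per "reduce function", e.g.:
            //   MultipleOutputs.addNamedOutput(job, "sums", TextOutputFormat.class, Text.class, IntWritable.class);
            //   MultipleOutputs.addNamedOutput(job, "maxima", TextOutputFormat.class, Text.class, IntWritable.class);

            @Override
            protected void setup(Context ctx) {
                mos = new MultipleOutputs<>(ctx);
            }

            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                    throws IOException, InterruptedException {
                int sum = 0;
                int max = Integer.MIN_VALUE;
                for (IntWritable v : values) {          // the mapped data is read once...
                    sum += v.get();
                    max = Math.max(max, v.get());
                }
                mos.write("sums", key, new IntWritable(sum));       // ...and feeds both analyses
                mos.write("maxima", key, new IntWritable(max));
            }

            @Override
            protected void cleanup(Context ctx) throws IOException, InterruptedException {
                mos.close();
            }
        }
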
  • Getting warning message when starting Hadoop cluster

    I just started a Hadoop cluster, but I keep getting this warning message: $HADOOP_HOME is deprecated. When I add export HADOOP_HOME_WARN_SUPPRESS="TRUE" to hadoop-env.sh, I don't get the message anymore when I start the cluster. But when I run hadoop dfsadmin -report, I see the message...

  • Load small random sample of a large CSV file into R data frame

    We have a CSV file that needs to be processed but doesn't fit into memory. Is there a way we can read 20K+ random lines of it to do basic stats on our selected data frame?

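    One common trick is to pre-sample the file outside R and then read only the sampled lines into a data frame. The underlying technique is reservoir sampling, sketched here in Java: a single pass over the file, constant memory, and every data line has an equal chance of landing in the sample. The header handling and the 20,000-line sample size are assumptions from the question.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        public class ReservoirSample {
            public static void main(String[] args) throws IOException {
                int sampleSize = 20_000;                 // assumed sample size
                List<String> sample = new ArrayList<>(sampleSize);
                Random rng = new Random();
                try (BufferedReader reader = Files.newBufferedReader(Paths.get(args[0]))) {
                    String header = reader.readLine();   // assume the first line is a header
                    String line;
                    long seen = 0;
                    while ((line = reader.readLine()) != null) {
                        seen++;
                        if (sample.size() < sampleSize) {
                            sample.add(line);            // fill the reservoir first
                        } else {
                            long j = (long) (rng.nextDouble() * seen);
                            if (j < sampleSize) {
                                sample.set((int) j, line);   // replace with decreasing probability
                            }
                        }
                    }
                    System.out.println(header);
                    sample.forEach(System.out::println); // redirect to a small CSV and read.csv it in R
                }
            }
        }
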
  • Should I learn MongoDB or CouchDB for NoSQL?

    I'm pretty new to everything NoSQL-related. I've heard a ton about MongoDB and CouchDB, but I'm not sure of the differences. Can anyone help me? Which one do you recommend as a first step to learning NoSQL? Thank you very much!

