• Can R handle data that’s bigger than RAM?

    I've been using open source R, but it won't let me handle data sets that are bigger than RAM. Would it be possible to handle big data sets by applying PL/R functions inside PostgreSQL? Does anyone know?
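
    PL/R can help here because the function runs inside PostgreSQL, so only the (small) aggregated result has to fit in R's memory. A minimal sketch, assuming a hypothetical big_table with price and category columns:

        -- R's median() runs per group inside the database; the table never
        -- has to be pulled into an R session as a whole.
        CREATE OR REPLACE FUNCTION r_median(float8[]) RETURNS float8 AS $$
          median(arg1)   -- unnamed PL/R arguments arrive as arg1, arg2, ...
        $$ LANGUAGE plr;

        SELECT category, r_median(array_agg(price))
        FROM big_table
        GROUP BY category;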

  • Fetch data from HBase table in Spark

    We have this huge table in HBase that's named UserAction. It has three different column families. We're trying to fetch all of the data from one column family as a JavaRDD object. We've tried using the code below but it's not working. What else can we do? static SparkConf sparkConf = new...
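
    For what it's worth, the usual recipe is newAPIHadoopRDD with HBase's TableInputFormat, with the Scan restricted to the one family you want before it's serialized into the job configuration. A sketch (the family name "cf1" is a placeholder, sc is an existing JavaSparkContext, and exception handling is omitted):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.hbase.HBaseConfiguration;
        import org.apache.hadoop.hbase.client.Result;
        import org.apache.hadoop.hbase.client.Scan;
        import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
        import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
        import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
        import org.apache.hadoop.hbase.util.Bytes;
        import org.apache.spark.api.java.JavaPairRDD;

        Configuration conf = HBaseConfiguration.create();
        conf.set(TableInputFormat.INPUT_TABLE, "UserAction");

        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes("cf1"));          // fetch only this family
        conf.set(TableInputFormat.SCAN,
                 TableMapReduceUtil.convertScanToString(scan));

        JavaPairRDD<ImmutableBytesWritable, Result> rows =
            sc.newAPIHadoopRDD(conf, TableInputFormat.class,
                               ImmutableBytesWritable.class, Result.class);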

  • Hadoop: When do reduce tasks start?

    I'm using Hadoop and I can't figure out when reduce tasks start up. Do they actually start after a percentage of the mappers complete? Is there a fixed threshold? Thank you for the help.
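
    Short answer: reducers can be scheduled once a configurable fraction of the maps has finished (5% by default), but at that point they only copy map output; the actual reduce() calls wait until every map is done. The threshold lives in mapred-site.xml:

        <!-- Don't schedule reducers until 80% of maps have completed
             (the Hadoop 2.x default is 0.05, i.e. 5%) -->
        <property>
          <name>mapreduce.job.reduce.slowstart.completedmaps</name>
          <value>0.80</value>
        </property>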

  • How does secondary sorting work in Hadoop?

    I'm pretty new to the Hadoop/big data industry, but I'm trying to figure out how secondary sorting works in Hadoop. Why would I have to use a GroupingComparator when doing this? Does anyone know how this works?
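
    The short version: you pack both fields into a composite key and sort on both, but you still want one reduce() call per natural key, and that grouping decision is exactly what the GroupingComparator makes. Typical driver wiring (the three class names are hypothetical stand-ins for ones you'd write yourself):

        // Composite key = (natural key, secondary key)
        job.setPartitionerClass(NaturalKeyPartitioner.class);       // route by natural key
        job.setSortComparatorClass(CompositeKeyComparator.class);   // order by both keys
        job.setGroupingComparatorClass(NaturalKeyGroupingComparator.class);
        // -> one reduce() call per natural key, with values arriving pre-sorted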

  • Is there a .NET equivalent for Hadoop?

    For the past few years, I've been a C# developer, and I also know some Java. I would like to start learning Hadoop but I'm not sure where to start. Is there something along the lines of a .NET equivalent to Hadoop? Any help would be greatly appreciated.

  • How to export data from R to SQL Server quickly

    Is there a way I can export data from R to SQL Server quickly? The standard way (sqlSave) is really slow for a large amount of data. This is what I've tried so far: toSQL = data.frame(...); sqlSave(channel, toSQL, tablename="Table1", rownames=FALSE, colnames=FALSE, safer=FALSE, fast=TRUE); Thank you!
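
    One common workaround is to skip row-by-row ODBC inserts entirely: dump the frame to a flat file and bulk-load it with SQL Server's bcp utility. A sketch, with made-up paths, server, and table names:

        # Write the frame without headers, then bulk-load it
        # (-c = character data, -t, = comma delimiter, -T = trusted connection)
        write.table(toSQL, "C:\\temp\\toSQL.csv", sep = ",", quote = FALSE,
                    row.names = FALSE, col.names = FALSE)
        shell("bcp myDb.dbo.Table1 in C:\\temp\\toSQL.csv -c -t, -S myServer -T")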

  • How to cluster big data on a server

    I have thousands of points that I need plotted with Highcharts. Would there be a way to cluster the data on a server (so it shows fewer than 1,000 points, but when you zoom in, it makes Ajax calls to get the data for that zoomed-in region)? Hopefully that makes sense. I would appreciate...
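
    Server-side this is usually a bucketed aggregation (e.g. one min/max/mean point per time bucket); client-side, Highcharts can re-query on zoom through the x-axis afterSetExtremes event. A sketch, assuming a hypothetical /points endpoint and a chart variable in scope:

        // Re-fetch a downsampled slice whenever the user zooms
        xAxis: {
            events: {
                afterSetExtremes: function (e) {
                    $.getJSON('/points?min=' + e.min + '&max=' + e.max,
                        function (points) {
                            chart.series[0].setData(points);  // redraw the slice
                        });
                }
            }
        }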

  • Document database for big data

    My department has around 100 million records in a database. But roughly 65% of the records will be deleted on a daily basis, and roughly the same number of records will be added back in. We feel like a big data document database like HBase, Cassandra or Hadoop could do this for us but we're not sure...
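
    A note on the churn: any of these can hold 100 million records, but deleting 65% of them daily is the expensive part. If Cassandra is in the running, per-row TTLs expire data without a nightly delete job (keyspace, table, and columns below are invented); TTL-expired rows still produce tombstones, though, so compaction settings matter:

        -- This row disappears automatically ~24 hours after insert
        INSERT INTO mykeyspace.records (id, payload)
        VALUES (42, '{"some":"document"}')
        USING TTL 86400;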

  • What should I choose for file storage: MongoDB or Hadoop?

    For about the past month, I've been looking for the best solution to create scalable storage for big files. The file sizes vary: most are 1-2 megabytes, but some reach 500-600 gigabytes. I'm deciding between MongoDB and Hadoop and I'm not sure which way to go. I'm thinking of using MongoDB as a file...
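
    If MongoDB ends up being the pick, GridFS is the piece built for this: it chunks files across documents, so sizes aren't capped by the 16 MB document limit. The bundled mongofiles tool gives a quick feel for it (database and file names are placeholders):

        # Store, fetch, and list large files via GridFS
        mongofiles -d filestore put big_video.bin
        mongofiles -d filestore get big_video.bin
        mongofiles -d filestore list

    For files in the hundreds of gigabytes it's worth benchmarking both candidates, since HDFS was designed around very large, sequentially read files.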

  • Convert .txt file to Hadoop sequence file

    I have big data and I'm trying to store all of it in Hadoop's sequence file format. But all of the data is in a flat .txt format. Is there any way I can convert it? Thank you.
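
    A map-only MapReduce job with the stock identity Mapper and SequenceFileOutputFormat does the conversion: keys become byte offsets (LongWritable) and values the text lines (Text). A sketch with made-up paths:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Job;
        import org.apache.hadoop.mapreduce.Mapper;
        import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
        import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
        import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
        import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

        public class TxtToSeq {
            public static void main(String[] args) throws Exception {
                Job job = Job.getInstance(new Configuration(), "txt-to-seq");
                job.setJarByClass(TxtToSeq.class);
                job.setMapperClass(Mapper.class);   // identity mapper: lines pass through
                job.setNumReduceTasks(0);           // map-only job
                job.setInputFormatClass(TextInputFormat.class);
                job.setOutputFormatClass(SequenceFileOutputFormat.class);
                job.setOutputKeyClass(LongWritable.class);
                job.setOutputValueClass(Text.class);
                FileInputFormat.addInputPath(job, new Path("/data/flat_txt"));
                FileOutputFormat.setOutputPath(job, new Path("/data/seq_out"));
                System.exit(job.waitForCompletion(true) ? 0 : 1);
            }
        }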

  • Print documents in MongoDB shell

    Does anyone know of a way to print out more than 20 documents in MongoDB's shell? I've tried this: db.foo.find().limit(300) But it still prints out only 20. Then I tried this code: db.foo.find().toArray() db.foo.find().forEach(printjson) But it's printing out an expanded view of each document of the...
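
    The 20-document page is a shell display setting, not a cursor limit; raising DBQuery.shellBatchSize keeps the compact one-line-per-document output:

        DBQuery.shellBatchSize = 300;   // print up to 300 docs per page
        db.foo.find().limit(300);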

  • How to put the results of a Hive query to a CSV file

    I'm trying to put the results of a Hive query into a CSV file. This is what my command looks like: insert overwrite directory '/home/output.csv' select books from table; When I run it, it says it was successful, but I'm having issues finding the file. Is there a way I can find this file? Thank you.
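
    INSERT OVERWRITE DIRECTORY writes to that path on HDFS (as one or more files inside a directory), not to a local .csv. You can look for it with hadoop fs -ls, or sidestep it by redirecting the CLI output (the table name below is a placeholder):

        # Where the INSERT OVERWRITE output actually landed, on HDFS:
        hadoop fs -ls /home/output.csv

        # Or write a local CSV directly, turning Hive's tab delimiter into commas:
        hive -e 'SELECT books FROM my_table;' | sed 's/\t/,/g' > /tmp/output.csv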

  • Process a range of HBase rows using Spark

    We've been using HBase as a data source for Spark. We've already created an RDD from an HBase table, but we can't figure out a way to create an RDD for a range scan. Does anyone know how to do it?
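
    It's the same TableInputFormat recipe as a full scan, but with start and stop rows set on the Scan before it's serialized into the configuration. A sketch, reusing conf and sc from the column-family example above (row keys are placeholders):

        // Restrict the scan to [row-000100, row-000200) before building the RDD
        Scan scan = new Scan();
        scan.setStartRow(Bytes.toBytes("row-000100"));   // inclusive
        scan.setStopRow(Bytes.toBytes("row-000200"));    // exclusive
        conf.set(TableInputFormat.SCAN, TableMapReduceUtil.convertScanToString(scan));

        JavaPairRDD<ImmutableBytesWritable, Result> slice =
            sc.newAPIHadoopRDD(conf, TableInputFormat.class,
                               ImmutableBytesWritable.class, Result.class);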

  • How to count the lines in large files

    My partner and I usually work with files that are larger than 20 GB in size, and we often have to count the number of lines in any given file. We've been doing it with cat fname | wc -l but it takes forever. Does anyone know of a way to make it faster? We're working with a high performance...
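
    Dropping the cat already helps, since wc can read the file directly; and if GNU parallel is installed, the count can be split across cores:

        # Let wc read the file itself instead of piping through cat
        wc -l fname

        # Or chunk the file and count the pieces in parallel (GNU parallel)
        parallel --pipepart -a fname --block 512M wc -l | awk '{s+=$1} END {print s}'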

  • SSIS XMLNode as input parameter from web service

    I'm looking for a way to pass an input parameter of type XMLNode to a Web Service Task. After downloading the WSDL file, the service requires an input of type XMLNode.

  • New approaches to big data analytics

    What are the new approaches to modeling in analytics?

  • Execute MongoDB commands in a shell script

    I'm trying to execute MongoDB commands from a shell script. Here's what I've tried so far: #!/bin/sh mongo myDbName db.mycollection.findOne() show collections When I executed it, it said the connection was established, but the commands were not executed. Can someone help me out?
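
    As written, only the first line ever reaches mongo: it opens an interactive session, and the next two lines sit waiting as shell commands. Handing the work to --eval fixes that; note that helpers like show collections aren't JavaScript, so their JS equivalents are needed there:

        #!/bin/sh
        # Run everything non-interactively; db.getCollectionNames() is the
        # JavaScript equivalent of the "show collections" shell helper.
        mongo myDbName --eval '
            printjson(db.mycollection.findOne());
            printjson(db.getCollectionNames());
        '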

  • Getting error message when installing Hadoop 2.2.0

    I've been trying to install a Hadoop 2.2.0 cluster on my servers. All of the servers are 64-bit, I've downloaded Hadoop, and the configuration files are good to go. When I run ./start-dfs.sh, I keep getting this error: 13/11/15 14:29:26 WARN util.NativeCodeLoader: Unable to load...
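
    That WARN is usually harmless: Hadoop 2.2.0 ships 32-bit native libraries, so a 64-bit JVM falls back to the built-in Java classes and everything still works. If you want it gone, point Hadoop at native libraries built for your platform, e.g. in hadoop-env.sh:

        # Point Hadoop at 64-bit native libraries built for this platform
        export HADOOP_COMMON_LIB_NATIVE_DIR="$HADOOP_HOME/lib/native"
        export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"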

  • How to merge several small files into one in Hadoop

    I have several small files in my input directory that I want to merge into a single file. I need to do this without using the local file system or writing mapreds. Is there a Hadoop command I can use to do this?
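
    hadoop fs -getmerge is the usual answer, but it writes to local disk; piping -cat into -put keeps everything on HDFS apart from the bytes streaming through the client (paths are placeholders):

        # Concatenate every file in the input dir into one HDFS file;
        # "put -" reads from stdin, so nothing lands on the local file system
        hadoop fs -cat /user/me/input/* | hadoop fs -put - /user/me/merged.txt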

  • Error message when handling big data set in R

    I'm currently working with Windows 8 with 8 GB of RAM. I have a data frame of 1.8 million rows and 270 columns, on which I have to perform a GLM. I've already tried using the ff and bigglm packages to handle the data, but it hasn't worked. I keep getting this error: Error: cannot allocate...
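
    One route that keeps the frame on disk: load it with ff's chunked reader and fit with bigglm, which the ffbase package teaches to iterate over an ffdf. A sketch with an invented file name, formula, and family:

        library(ff)
        library(ffbase)    # adds bigglm support for ffdf objects
        library(biglm)

        # Disk-backed data frame; only one chunk is held in RAM at a time
        dat <- read.csv.ffdf(file = "bigdata.csv", header = TRUE)

        fit <- bigglm(y ~ x1 + x2, data = dat, family = binomial(),
                      chunksize = 100000)
        summary(fit)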

