Tableau currently has a comfortable relationship with a number of data preparation vendors, most notably Alteryx. But that hasn’t stopped the popular data visualization vendor from developing its own self-service data preparation tool, known as Project Maestro, set to be released before the end of the year. So what does that mean for Tableau’s data prep partnerships?
We explore that question in this edition of the Talking Data podcast. We look behind the news to think about how it could ripple through the world of analytics software. Will customers still look for standalone data preparation software when they can get good-enough functionality in the higher-level software they already own? Will the secondary features of data prep software, like reporting and predictive analytics, be enough to entice customers?
For now, that all remains up in the air, but we look at these questions in this podcast to see how they may play out. There are still more questions than answers in the still-hot analytics software market, and much has yet to shake out.
You push the little valve down, and the music goes round and round. Where does it come out? And what does that have to do with Power BI?
Say all you want about deep learning, machine learning and neural networks – eventually enterprises are going to generate reports and dashboards for analytics. In the end, that is where “the music” comes out.
For Microsoft, that dashboard and reporting function often takes the form of Power BI. This analytics visualizer is an important part of the company’s analytics effort, and one of the earliest examples of its rebirth as a “cloud-first” software provider.
But, although Power BI Report Server has its feet in the clouds, it is also striving to play an on-premises role. At the recent PASS Summit 2017 in Seattle, Microsoft enhanced that offering with scheduled data refresh, DirectQuery and REST API capabilities. All this was part of the discussion on this edition of the Talking Data Podcast.
Tune in to the Talking Data Podcast, as we report from PASS Summit on Power BI and other Microsoft technology initiatives.
Cloud, automation and security were chief among a slew of topics at Oracle OpenWorld 2017. In this podcast, recorded at the event, David Essex, Brian McKenna and I share impressions of the company, and react to Oracle leader Larry Ellison’s various comments on databases, machine learning and data breaches. In the cloud, Oracle may still be playing catch-up, but it also seems to be exhibiting considerable momentum, according to the Talking Data podcasters.
We closed out September with a nod to our recent Talking Data Podcast. It is a look at digital disruption, Hadoop, and S3. The stuff that dreams are made of, as Ed Burns and I encountered them at the Big Data Innovation Conference. Be our guest – take a gander! -Jack Vaughan
Data science is nothing new, but despite the fact that it’s been around for years, businesses are still looking for ways to get value from it.
There’s an inherent tension between the research mindset required to perform good data science and the results-based focus of business processes. But by bending toward each other, both sides can benefit.
In this edition of the Talking Data podcast, we recap some perspectives on how businesses can derive value from data science, as presented at the Big Data Innovation Summit in Boston. One of the keys is to make sure that data science efforts serve the business. Projects don’t need to be perfect. As soon as data science teams glean any kind of insight from data, they should report it to business teams.
But that’s just the tip of the iceberg. Listen to this podcast to learn more about how data science and business teams can better mesh their efforts and deliver real value to their enterprise.
The data side of Microsoft will be front and center at the upcoming Ignite conference, scheduled for Sept. 24 through 29 in Orlando, Fla. Sessions at the event will flesh out important details concerning SQL Server 2017 for Linux, Azure SQL Data Warehouse and the company’s most recent NoSQL database entry — Azure Cosmos DB. The Redmond giant has increasingly used SQL Server as a launching pad for analytics efforts that have come to rival the database itself; one such effort is new Python support for the data warehouse, which will also be among the topics considered at Ignite. Catch this version of the Talking Data podcast, which looks back at this summer’s Microsoft data doings, and forward to the conference.
In this podcast, we get a chance to speak with Neha Narkhede. As a development lead at LinkedIn, she helped forge the messaging system called Kafka. Now, as CTO at Confluent, she is charting the course of Kafka as a streaming engine. Part of that is exactly-once messaging, which will be among the items on tap at Confluent’s upcoming Kafka Summit. Why do they call it Kafka? We are not telling here. You must listen to the podcast to learn.
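For readers curious what exactly-once messaging looks like in practice: in Kafka it is built on idempotent producers and transactions, which are switched on through producer configuration. A minimal sketch of such a configuration follows; the `transactional.id` value here is purely illustrative, and the exact settings you need will depend on your client version and setup.

```properties
# Idempotent writes: broker deduplicates producer retries,
# so a retried send cannot create a duplicate record
enable.idempotence=true

# Setting a transactional.id enables transactions, which allow
# atomic writes across partitions (the basis of exactly-once
# semantics); "orders-producer-1" is a hypothetical example ID
transactional.id=orders-producer-1

# acks=all is required when idempotence is enabled
acks=all
```

With a configuration like this, a producer can wrap sends in begin/commit transaction calls so that downstream consumers reading with `isolation.level=read_committed` see each record exactly once.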
Data preparation as it is applied to deep learning is a topic under discussion these days, as data engineers and scientists try out new machine learning methods for prediction. In comparison with garden-variety BI, deep learning applications can require significant pre-processing. This became clear as the Talking Data crew ventured out to cover the Re.Work Deep Learning Summit 2017 in Boston. Catch the latest, and subscribe to our iTunes feed, for free home delivery of potent podcasts. Let us know: What obstacles do you foresee for deep learning data handling in your organization?
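To give a flavor of the kind of pre-processing involved: image-based deep learning models typically expect pixel data rescaled and standardized before training. The sketch below shows one common approach using NumPy; the function name and the per-channel standardization scheme are illustrative choices, not a prescription from any particular framework.

```python
import numpy as np

def preprocess_batch(images):
    """Scale a batch of 8-bit images to [0, 1], then standardize per channel.

    Expects an array of shape (batch, height, width, channels) with dtype uint8.
    """
    x = images.astype(np.float32) / 255.0          # rescale pixels to [0, 1]
    mean = x.mean(axis=(0, 1, 2), keepdims=True)   # per-channel mean
    std = x.std(axis=(0, 1, 2), keepdims=True)     # per-channel std deviation
    return (x - mean) / (std + 1e-7)               # guard against divide-by-zero

# Example: a hypothetical batch of eight 32x32 RGB images
batch = np.random.randint(0, 256, size=(8, 32, 32, 3), dtype=np.uint8)
out = preprocess_batch(batch)
```

After this step, each channel of the batch has roughly zero mean and unit variance, which tends to help gradient-based training converge.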
Deep learning is great and all, but how do you implement it in a production environment?
That’s a question that a lot of businesses are asking. Deep learning may look tempting when you consider how powerful it is. Research projects have shown it capable of replicating human vision and other types of thought processes. But it can also bring some complications to business processes.
In this edition of Talking Data we look at some of the business implications of deep learning, particularly when taking it from research projects to production environments.
About fifteen years ago, while covering application performance management, I had the great good fortune to meet Bernd Harzog. Application performance in that first distributed age was like an onion — one with layer after layer that had to be peeled away to find the root cause of bottlenecks. Harzog, then a consultant and industry analyst, knew the domain well, and patiently explained it to a particularly curious journalist.
So, when I learned Bernd had gone on to start OpsDataStore, a technology upstart intent on rolling up the diverse data sources in today’s data center into an understandable form, I was quick to connect with him for the Talking Data podcast. The topic: IT operations analytics.
It is Harzog’s argument that modern IT doesn’t need more big data as much as it needs tools that bring meaning to that data. While the topic may be a bit afield from our usual fare, we hope it is the start of an occasional series that looks at how big data is impacting some very diverse areas of interest. Listen to this edition of Talking Data. Let us know what you think about it.