This is a guest blog post by Alex Bordei, head of product management at Bigstep.
In today’s fast-paced business landscape, experimentation is key to innovating and to gaining and keeping a leading edge. Yet many enterprises pass up the opportunity to study and experiment with their data because they are laser-focused on pre-defined use cases and immediate benefit. Instead, they should consider using the cloud as a prototyping workbench and embrace the researcher mindset in order to build fully functional data laboratories.
Enterprise IT need not be rigid
Most enterprises acquire IT services through similar standard procedures: they issue a Request for Proposal (RFP) to preferred vendors, have them run a demo and perhaps build a proof of concept (POC) with their technologies. These steps are necessary because such systems cost millions of dollars, and no sane CTO would sign off on a solution without properly testing it against live data in a POC environment.
Some enterprises today use the cloud much in the same way. They buy expensive big data “appliances” that are fixed in time and space and are typically pre-configured by experts and then handed off.
However, data analytics environments are not solely production systems and should not be consumed as such. They don’t need to be ultra-rigid. Instead, they should be flexible, easy to set up and tear down, and simple to transform and scale through a self-service model. These characteristics are what enable big data experimentation.
Why experimenting with data matters
Researchers have always had this dilemma: If you don’t know what you don’t know, how do you plan for discovery? Inventions and breakthroughs in general do not simply pop up. They happen after a lifetime of painstaking research.
Research involves experimentation and the intelligent use of suppositions. You have a hunch, you develop a hypothesis, and then you try various techniques, tools and experiments to determine whether that hypothesis holds. Often the hunch turns out to be completely wrong, but along the way you discover interesting things. This process is what truly generates innovation.
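To make the hunch-to-hypothesis loop concrete, here is a minimal sketch in Python, using only the standard library and entirely hypothetical data: a hunch that weekend orders are larger than weekday orders is framed as a testable hypothesis and checked with a simple permutation test.

```python
import random
import statistics

def permutation_test(sample_a, sample_b, n_permutations=5000, seed=42):
    """Estimate a p-value for the hypothesis that sample_a's mean
    exceeds sample_b's, by repeatedly shuffling the pooled data."""
    rng = random.Random(seed)
    observed = statistics.mean(sample_a) - statistics.mean(sample_b)
    pooled = list(sample_a) + list(sample_b)
    n_a = len(sample_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical order values for the hunch "weekend orders are larger"
weekend = [120, 135, 128, 140, 132, 138]
weekday = [100, 98, 110, 105, 99, 102]
p = permutation_test(weekend, weekday)
print(f"p-value: {p:.4f}")  # a small p-value suggests the hunch is worth pursuing
```

The point is not the statistics per se, but the discipline: state the hunch as a falsifiable claim, run the experiment, and let the data decide whether to dig deeper or move on.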
Companies must teach their employees how to do research and to listen to their gut feelings. The people most likely to have good instincts are in the trenches, actually doing the work. And many tools available today let users apply basic machine learning without writing a single line of code.
Here are some ways to get started:
• Learn the scientific method: The main thing is to understand and apply the basic principles of the scientific method. If there is no objective and no clear path to success, how do you plan for it? You don’t. You set up a framework in which research happens at a steady pace.
• Adopt an agile mentality: Because software development has a large research component and therefore cannot be easily predicted, software companies have adopted agile methodologies. This development paradigm doesn’t impose fixed deadlines, but rather helps align team members so that quick course corrections can be made.
• Consider a data lake: Data lakes aggregate data from various production systems in their original format. That data is processed by algorithms and analyzed by data scientists, and various perspectives on it are then made available to the rest of the team. A cloud-based data lake allows fresh technologies to be deployed immediately on the available data, without committing to high costs or risking vendor lock-in. Cloud vendors should focus on making the environment as flexible and easy to use as possible.
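As a rough illustration of the schema-on-read idea behind data lakes, here is a minimal Python sketch using only the standard library; the source systems, directory layout and file names are all hypothetical. Raw exports land unchanged in a directory per source system, and each consumer parses them only at read time.

```python
import csv
import json
from pathlib import Path

# Hypothetical landing zone: raw files are stored in their original
# format, partitioned by source system, and interpreted only on read.
LAKE = Path("data_lake")

def ingest(source: str, filename: str, raw_bytes: bytes) -> Path:
    """Land a raw file unchanged under data_lake/<source>/."""
    dest = LAKE / source / filename
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_bytes(raw_bytes)
    return dest

# Simulated exports from two production systems, kept as-is
ingest("crm", "contacts.json", json.dumps(
    [{"id": 1, "region": "EU"}, {"id": 2, "region": "US"}]).encode())
ingest("billing", "invoices.csv",
       b"invoice_id,amount\n10,250\n11,400\n")

# Schema-on-read: each consumer parses the raw data as needed
contacts = json.loads((LAKE / "crm" / "contacts.json").read_text())
with (LAKE / "billing" / "invoices.csv").open() as f:
    invoices = list(csv.DictReader(f))

total = sum(float(row["amount"]) for row in invoices)
print(len(contacts), total)  # 2 650.0
```

Because nothing is transformed at ingestion time, a new tool or algorithm can be pointed at the same raw files tomorrow without re-exporting anything from the production systems.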
Experiments in the cloud are easier and more cost-effective than their inflexible and costly predecessors. If you adopt a research mentality, you can empower your employees to explore the data in your “laboratory,” which will lead to innovation. Who knows, this might be the beginning of the age of data discovery!