Data Matters


December 6, 2017  11:42 AM

Why not the North?

Brian McKenna

This is a guest blogpost by Ted Dunning, chief application architect, MapR Technologies.

I am a foreigner to the UK. I am an engineer.

These characteristics are what shaped the first impressions I had of the north of England over twenty years ago. I came then to consult at the university in Sheffield and was stunned by the rich history of world-class engineering in the region. The deep culture of making and building across the north struck me at the time as ideal for building new ventures based on technology and engineering.

Twenty-five years on, when I come back to visit, I am surprised to see that the start-up culture in Britain is still centred on London, with small colonies in Edinburgh, Cambridge and Oxford. The north of England is, by comparison, a start-up vacuum.

The sprouting of technological seeds like the Advanced Manufacturing Research Centre (AMRC) at the University of Sheffield shows that the soil is fertile, but that success makes the lack of other examples all the more stark.

Drawing necessarily imperfect analogies with US cities, the former steel town of Pittsburgh has suddenly become a start-up mecca for self-driving cars, but Sheffield has had no comparable result, despite scoring well in the most recent (2014) Research Excellence Framework in Computer Science and Informatics, with 47% of its submissions scoring 4*: “quality that is world-leading in terms of originality, significance and rigour”. For comparison, Oxford scored 53%, Cambridge 48% and Manchester (with its Turing-related heritage in computer science) 48%, so Sheffield is in a similar bracket of excellence.

Invention and start-ups are like a rope: they cannot be pushed. The inventors and visionaries who would pull on that rope can, however, be inspired and encouraged. The real magic of Silicon Valley is a sense of optimism and a willingness to attempt the impossible. Closely related to that optimism is a generosity of spirit and a willingness to help others for no obvious short-term return. There are stories about places like the Wagon Wheel Restaurant in Mountain View, where engineers from different companies used to share problems and solutions over beers. Unfortunately, it seems to be a common impression that this licence is somehow geographically bound.

It isn’t.

It is woven into all of our expectations of what can and cannot be done. The same sense of “yes, we can” can be applied in the north.  If that idea could turn sleepy California orchard towns like San Jose or Sunnyvale or a gritty steel town like Pittsburgh into technological powerhouses, it can do the same for Sheffield or Liverpool or Manchester.

The time to start is now.

December 6, 2017  10:58 AM

From the server to the edge: the evolution of analytics

Brian McKenna

In a guest blogpost, Peter Pugh-Jones, head of technology at SAS UK & Ireland, reflects on how the analytics industry is evolving and what organisations need in a data-driven economy.

Forty years is a long time in analytics, and in that time much has changed. In the last four decades, analytics has become part of everyday life and helped solve some of society’s biggest challenges, from helping develop specialised medications to combating crime networks and ensuring transport fleets are energy efficient.

Data analytics is playing an ever-increasing role in our businesses, economy and environment. In the beginning, data analytics was used to find the solution to an existing problem. Today this approach has been turned on its head. Now we start with the data to uncover patterns, spot anomalies and predict new opportunities.

Data now informs organisations about trends and problems they never knew existed. It shapes how people interact, share information and purchase goods, and how they are entertained and how they work. It dictates political decisions and economic cycles. Data is the raw power that helps us optimise decisions and processes to iron out inefficiencies through the use of analytics. Analytics can be utterly transformative.

On the edge

For example, General Electric Transportation (GET), a division of GE and a leader in locomotive manufacturing and maintenance, depends on the efficient running of its rail assets, with breakdowns and inefficient fuel usage threatening profits. To optimise its operations, each train has been equipped with devices that handle hundreds of data elements per second. Analytics is then applied on the small, constrained devices that sit at the network’s edge to uncover usage patterns that keep trains on track.
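
To make the idea concrete, here is a minimal sketch (in Python, with illustrative thresholds and window sizes, not GE’s actual logic) of the kind of check that can run on a small, constrained device at the edge: keep a short rolling window of recent sensor readings and flag values that deviate sharply, so only the anomalies need to travel back over the network.

    from collections import deque
    import math

    WINDOW = 60        # last 60 readings, e.g. one per second (illustrative)
    THRESHOLD = 3.0    # flag readings more than 3 standard deviations from the window mean

    readings = deque(maxlen=WINDOW)

    def check_reading(value):
        """Return True if this reading looks anomalous against the recent window."""
        anomalous = False
        if len(readings) == WINDOW:
            mean = sum(readings) / WINDOW
            variance = sum((x - mean) ** 2 for x in readings) / WINDOW
            std = math.sqrt(variance) or 1e-9   # avoid division by zero on flat signals
            anomalous = abs(value - mean) / std > THRESHOLD
        readings.append(value)
        return anomalous

Only readings that trip such a check (plus periodic summaries) would then be forwarded upstream, which is what keeps the bandwidth and processing demands on edge devices manageable.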

This ability to analyse and learn from data in transit is a game changer for all industries. Smart sensors on the production line are improving product quality by identifying faults before they happen or instantly as they occur. In turn, customer satisfaction and company competitiveness are increased.

Connected devices are now generating more data than ever before. At the same time, customer demands are rising and the complexity of modern, global supply chains is growing exponentially. To stay competitive and provide the best products and services, companies require an unprecedented level of control and the ability to positively intervene at every stage of the process.

Moving at speed

Yet most companies are not up to the task. Any inefficient processes between capture, insight and action squander valuable opportunities for the business. It’s obvious that the static analytics approach of the past is no longer tenable. The increasing volume of sensors and the limitless possibilities for the fusion of their data have changed the conversation. Analytics now needs to be applied at the right time and in the right place, for the right level of return.

Take energy consumption as an example. A single blade on a gas turbine can generate 500GB of data per day. Wind turbines constantly identify the best angles to catch the wind, and turbine-to-turbine communications allow turbine farms to align and operate as a single, maximised unit. By using analytics, data can be used to provide a detailed view of energy consumption patterns to understand energy usage, daily spikes and workload dependencies so that we can store more energy for use when the wind is light.

The challenge is that new connected devices, the Internet of Things (IoT) and artificial intelligence (AI) put an unprecedented depth of insight in the hands of organisations. This means that the analytics of the present and future has to become instantaneous. The ability to gather and analyse an ever-growing amount of data and deliver relevant results in real time will become the deciding factor in whether organisations win or lose. Analytics has to move at speed and make it possible to develop the most promising technologies.

When we speculate how analytics will be used in the future, it is clear we are on the edge of something revolutionary. It is old hat to think that analytics still resides in the server – it has been brought to the edge.

Yet it would be unrealistic to assume that all businesses can run their analytics on this scale. Most organisations are a complex patchwork of legacy systems and siloed data infrastructures which do not always speak to each other. Integration is a key part of the puzzle. Organisations require analytics platforms that understand the different states of play and can consolidate data from the edge to the enterprise, from the equipment in the field to the data centre and the cloud.

Unified, open and scalable

In recent years analytics has been made open and accessible. It is no longer the preserve of data scientists: businesses have realised considerable gains when analytics and its insights can be communicated and used at every level of the organisation. This evolution is driven by necessity. Data is growing exponentially and becoming more complex every day, and no organisation has a blank cheque for technology investment. In modern, complex data environments a business’s analytics has to be flexible. It must be able to adapt to infrastructure changes and the daily challenges of businesses.

For organisations and industries, it will mean they must have access to a single, unified platform that is constantly evolving. Organisations need a platform that can scale to their needs and is delivered flexibly to achieve the latest advances that allow them to solve problems and create new value. This means being cloud-native and having access to scalable, elastic processing and accessible open interfaces. This means an environment where organisations can easily log in, access data, build models, deploy results and share visualisations.

Organisations now need platforms that are open and integrated, leveraging future technologies to scale and provide instant insight through a consolidated data environment. Platforms that provide joined-up data and faster, more accurate insight will transform organisations’ decision-making and facilitate better integrated planning. Above all, they will allow business to make good on the opportunities that data offers.


November 22, 2017  11:16 AM

Why it’s time to move past the multi-tenant cloud model

Brian McKenna

This is a guest blogpost by Allan Leinwand, CTO, ServiceNow     

Cloud services first appeared in the late ‘90s when large companies needed a way to centralise computing, storage, and networking. Originally, cloud architecture was built on database systems designed for tracking customer service requests and running financial systems. For many years, companies like IBM, Oracle, and EMC grew and developed in this centralised ecosystem as they scaled their hardware to accommodate customer growth.

The problem is that while this approach worked for large enterprises, it didn’t really deliver a smooth experience for customers. This is because in a multi-tenant architecture customers must share the same software and infrastructure, while cloud providers only need to build and maintain a centralised system. Unfortunately, multi-tenant models are still the most widely used, preventing customers from having the experience they deserve.

The three major drawbacks of the multi-tenant model for customers are, in my view:

  • Time spent on maintenance and downtime

Multi-tenant architectures need large and complex databases that require hardware and software maintenance on a regular basis. This results in availability issues for customers, who must wait for maintenance work to be completed before they can access the cloud again. Some departments, such as sales or marketing, can cope with downtime during less busy periods, such as overnight. However, many other business-critical enterprise applications need to be continually operational.

  • Commingled data makes it difficult to isolate your data from competitors’

In a multi-tenant environment, customers must rely on the cloud provider to isolate their data from everyone else’s. This means that customers and their competitors’ data could be stored all together in a single database. The problem with this approach is that the data is not physically separate. It relies on software for separation and isolation. This has major implications for government, healthcare and financial regulations, but even more so in the event of a security breach, as it could expose everyone’s data at once.

  • In the event of an outage everyone suffers

In a multi-tenant model, issues such as outages or upgrades will impact every tenant in the shared model. In other words, when software or hardware problems arise on a multi-tenant database, it causes an outage for all customers. So, if this model is applied to run enterprise-wide business services, entire organisations will be affected. Businesses cannot rely on this shared approach for applications that are critical to their success. Upgrades should be done on the customer’s own schedule, so that the business can plan for downtime. Similarly, other software and hardware issues need to be isolated and resolved quickly.

It is clear that these data isolation and availability challenges are not good enough in today’s rapidly evolving digital landscape. Multi-tenancy is a legacy architecture that will not stand the test of time. To embrace and lead today’s technological innovations, companies need to move past the old model and look at more advanced cloud architectures.


November 20, 2017  11:19 AM

GDPR means businesses must show they are serious about cloud data privacy

Brian McKenna

This is a guest blogpost by Julian Box, CEO, Calligo.

The prospect of ambulance-chasing lawyers interesting themselves in the General Data Protection Regulation (GDPR) is now very real.

With just a few months to go before the European Union’s landmark set of regulations comes into force next May, any business storing or processing customer data in the cloud needs to consider the advantages of being able to demonstrate the steps it is taking towards compliance.

Given the rights that European citizens will have under GDPR to interrogate organisations about how their data is being handled, no-win, no-fee lawyers are likely to be very interested in any instance of non-compliant data-handling. With so many organisations hoarding data in the hope that one day some of it will be valuable, the dangers are substantial.

Mistakes are easy

On a day-to-day level it is very easy for small and mid-size organisations to fall foul of the new rules. Few realise, for example, that the CV of an unsuccessful job applicant should be deleted if no explicit consent for the file’s retention is obtained. This is because the data is no longer relevant under the terms of the GDPR.

In too many cases, businesses still lack a suitable mechanism for answering subject requests about data. We have already seen incidents where a request about personal data under existing legislation has resulted in an unedited swathe of data being transferred, compromising the privacy of many other individuals.

Even cyber risk insurance is unlikely to cover the potentially immense costs of being in breach of the GDPR, which include penalties of up to four per cent of global turnover, along with the drain of having to make financial redress to the individuals affected.

Compliance will be a real commercial differentiator

There is, however, every reason to be optimistic. As awareness grows of the obligations imposed by GDPR, businesses and supply chain partners that demonstrate the steps they have taken to achieve compliance will not only be in a better position with the regulators, they will also give themselves a significant commercial advantage. This is bound to become particularly acute for organisations entrusting substantial amounts of personally identifiable data to the cloud where they run their applications.

It is true there are already a number of standards that apply to cloud, and which organisations can insist on even though they are not specific to it, such as ISO 27001, PCI compliance and Sarbanes-Oxley Act (SOX) compliance. There are also those specifically related to the cloud, such as CSA STAR.

But to demonstrate that GDPR-compliance is being addressed directly and comprehensively, an organisation utilising a cloud provider needs to ensure that there is a legal contract defining the restrictions around the key Data Controller and Processor relationship concepts of the new regulation.

The speed of adoption and expansion of cloud has meant many organisations enjoying its benefits do not fully understand how much of its resources they are consuming, both from SaaS solutions and also from their gradual accumulation of IT and dev-ops initiatives. In the age of the GDPR this is a reckless position to be in.

As more and more tech companies embrace subscription-style services based in the cloud, the need to act in compliance with the regulation becomes ever more urgent. The GDPR demands that organisations have far better understanding and supervision of their cloud footprint (and indeed their private infrastructures and data-sets).

The point here is that while there is no single, magic tool that will sort out compliance for an organisation, there are steps that can be taken. It is a question of sorting out data governance now and building in a privacy-by-design approach to the cloud.

Find the best hands-on provider

Businesses must take informed advice from hands-on experts about what is compliant and adapt the processes and workflows accordingly, using the applications and technologies that are available from cloud-providers offering genuine performance guarantees. It is no small task for a mid-tier business, but it is perfectly achievable.

If an organisation has a cloud provider that is clearly expert in GDPR compliance and operates to best-practice standards, it will be able to demonstrate it has taken all reasonable steps and implemented the appropriate technological advances, as GDPR requires. In the event of a security breach (as opposed to a failure of compliance) this is likely to be a significant factor in the minds of regulators, reducing any penalties.

It is not just a question of living in fear of hungry lawyers or super-vigilant regulators either. There are immense cost and efficiency benefits to be derived from having better data stewardship. Everybody handling data should take note.


November 2, 2017  3:36 PM

Doing more with less – how would Alexander the Great have tackled today’s big data challenge?

Brian McKenna

This is a guest blogpost by Partha Sen, CEO and co-founder of Fuzzy Logix.

Ever heard of the legend of Alexander the Great and the Gordian Knot? According to the myth, Alexander was confronted on his travels with an ox cart tied with a knot so fiendishly difficult that no man could untangle it. Alexander’s solution was a simple one. He took his sword and cut the knot and freed the cart. Problem solved.

Fast forward to today and we have our very own Gordian knot type of challenge: regardless of your business analytics environment, you are likely under increasing pressure to deliver more for less. Your business is demanding faster insights, more efficiency and more simplicity, while at the same time asking you to spend less and deliver a faster return.

So, is there an answer to such a seemingly intractable problem? Is it just a question of trying ever harder to keep pace with the demands of doing more with less, or will this approach prove just as ineffectual as the efforts of those who tried to untie the Gordian knot? Or is there a different way to look at the challenge and, rather like Alexander with his sword, remove it in one fell swoop? To answer, let’s take a step back for a second and explore what our own ‘Gordian knot’ looks like…

If your analytics is based on SAS, you will probably have to significantly increase your investment each year to perform the same analytics on double the volume of data. It’s expensive and time-consuming to manage all that duplicate storage, datacentre space and network infrastructure for moving data, as well as the need to increase processing power. Even with all of that, the time-to-insights will not live up to your business requirements.

Traditional approaches to analytics such as SAS force you to move data from where it is stored to separate analytics servers – the data sits in one place, the models run in another – and then feed the results back into the database. This results in huge pain points, including:

  • Expensive hardware
  • Slow insight delivery, as two-thirds of the time is often spent moving the data
  • Sub-par model analysis – due to memory constraints on the analytics servers, models must be built with only what fits into memory, not the entire dataset
  • Outdated analysis – in several industry verticals, the underlying database may change rapidly compared with the snapshot moved into memory on the analytics servers.

And there’s another problem; the better the data quality, the more confidence users will have in the outputs they produce, lowering the risk in the outcomes and increasing efficiency. The old ‘garbage in, garbage out’ adage is true, as is its inverse. When outputs are reliable, guesswork and risk in decision making can be mitigated.

Many organisations, faced with this challenge, just try to run ever faster in the hope of meeting the needs of the business. It is a fruitless approach, though, doomed to eventual failure. Instead, like Alexander and his sword, they need to appreciate that the objective of achieving scalability at speed for their analytics, at a lower cost, requires a very different approach.

Let me explain. At Fuzzy Logix, we turned the problem on its head with our in-database analytics approach. We move the analytics to the data, as opposed to moving data to the analytics, and eliminate the need for separate analytics servers. We leverage the full parallelism of today’s massively parallel processing databases. With DB Lytix, data scientists are able to build models using very large amounts of data and many variables. No more sampling, no more waiting for data to move from some other place to your analytics server. These models run 10X to 100X faster than traditional analytics, bringing you expedited insights at a fraction of the cost. And with a simple software-only installation of DB Lytix alongside your existing SAS analytics, you can reap the rewards of faster analytics without making major infrastructure changes.
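
As a purely illustrative sketch of the difference between the two patterns (using Python and an in-memory SQLite table as a stand-in; this is not DB Lytix’s actual interface), compare pulling every row out to the client with pushing the computation to where the data sits:

    import sqlite3

    # Toy table standing in for data that lives in the database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE claims (amount REAL)")
    conn.executemany("INSERT INTO claims VALUES (?)", [(120.0,), (85.5,), (310.0,)])

    # Traditional pattern: move every row across the wire, then analyse client-side.
    rows = conn.execute("SELECT amount FROM claims").fetchall()
    client_side_mean = sum(r[0] for r in rows) / len(rows)

    # In-database pattern: push the computation into SQL so only the result moves.
    (db_side_mean,) = conn.execute("SELECT AVG(amount) FROM claims").fetchone()

    print(client_side_mean, db_side_mean)  # same answer, very different data movement

On a few rows the difference is invisible; on billions of rows held in a massively parallel database, keeping the computation next to the data is what removes the extraction bottleneck described above.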

OK, enough of the marketing speak – how about the reality? Here’s a real-world example: for a large US health insurer, moving the data out of the database to the SAS servers meant breaking it into 25 jobs and assembling the results – a process that took over six weeks! Using in-database analytics allowed the customer to work on the entire dataset at once and finish the analytics in less than 10 minutes.

So, how are you going to solve the intractable challenge of doing more with less? Are you going to continue doing what you have always done and hope for success? Or are you going to change tack and adopt the type of approach I’ve outlined above? To use a cliché, I think it’s a ‘no brainer’. But don’t take my word for it, because if there’s one thing that history and Alexander the Great have taught us, it is that seemingly impossible tasks require bold and different solutions.


October 31, 2017  4:39 PM

Process mining – levelling the playing field in the culture of immediacy

Brian McKenna

This is a guest blogpost by Bastian Nominacher, co-founder and co-CEO, Celonis

Over the past decade, almost every industry has been overturned by digital disruptors. Same-day delivery, restaurant food on demand and mini cabs at the touch of a button are the norm: and this “Amazon effect” has transformed customer expectations across every sector.

Most people now want more choice of product, cost-effectiveness, convenience and full visibility from the moment they place an order. When something goes wrong, they expect the issue to be resolved quickly and as painlessly as possible, and they don’t want to be passed from department to department, having to explain the problem each time. In this culture of immediacy, businesses are under increased scrutiny and must look at their internal processes and the way their organisations operate if they are to compete in a marketplace centred on technology, mobility and social media. While small deviations from standard processes may seem to have only a marginal effect, they can actually have a significant impact on a company’s efficiency.

Driving the need for optimisation

Of course, companies wanting to optimise their processes is nothing new – all businesses want to save time and money and work more efficiently; as this can give them the upper hand against competitors. However, with the increase in digitisation, more and more business processes are being driven by IT systems. While this facilitates communication both internally and externally, it makes it more difficult to control the increasingly complex processes within a company. At the same time, mountains of data are accumulating within companies. As a result, businesses are sitting on a goldmine of information but without the right tools in place, they don’t know how to make sense of this data.

Traditionally, organisations have brought in management consultants to help them understand and improve their core operations such as purchasing, logistics and production. While this practice can often uncover inefficiencies, such as a delayed invoice or a particularly expensive supplier, it is often lengthy and expensive to carry out. External consultants also typically rely heavily on the existing operations teams to collect data and provide context, often resulting in significant disruption to the organisation while the processes are being analysed.

Tracking digital footprints within the business 

Identifying issues in core processes can be like finding a needle in a haystack. But technology has evolved to a point where the algorithms being developed are powerful enough to help even the largest of enterprises sift through the massive amounts of data being collected, uncover hidden patterns, correlations and customer preferences and make more informed decisions. One such approach is “Process Mining”, a new form of big data analytics software that helps businesses bring anomalies in the data to the surface and pinpoint inefficiencies within their core operations.

The technology uses the digital traces left behind by every IT-driven operation in a company and provides complete transparency into how business processes are operating in real life. It automatically reconstructs the as-is process from the raw data and provides a real-time visualisation of the entire organisation’s business processes. Using this insight, process owners, such as CFOs, logistics managers and heads of purchasing, can see how efficient (or inefficient) their core operations are and identify any causes of delays or bottlenecks. They can then make more informed decisions about where the most pressing opportunities to drive efficiencies are and identify the root causes.
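
A hedged sketch of the underlying idea (in Python with pandas, on a made-up event log; the column names and activities are illustrative, not those of any particular product): group the digital traces by case, order them by timestamp, and the actual process variants and their frequencies fall out.

    import pandas as pd

    # Toy event log: one row per activity per case, the kind of trace an ERP system leaves.
    log = pd.DataFrame([
        ("PO-1", "Create order",  "2017-10-01 09:00"),
        ("PO-1", "Approve order", "2017-10-01 11:30"),
        ("PO-1", "Pay invoice",   "2017-10-05 16:00"),
        ("PO-2", "Create order",  "2017-10-02 10:00"),
        ("PO-2", "Change price",  "2017-10-03 14:00"),   # a deviation from the standard path
        ("PO-2", "Approve order", "2017-10-04 09:00"),
        ("PO-2", "Pay invoice",   "2017-10-09 17:00"),
    ], columns=["case_id", "activity", "timestamp"])
    log["timestamp"] = pd.to_datetime(log["timestamp"])

    # Reconstruct the as-is process: the ordered sequence of activities per case ...
    variants = (log.sort_values("timestamp")
                   .groupby("case_id")["activity"]
                   .agg(" -> ".join))

    # ... and surface which paths actually occur, and how often.
    print(variants.value_counts())

Commercial process mining tools add visualisation, conformance checking and root-cause analysis on top, but the raw material is exactly this kind of timestamped event data.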

Sourcing talent and ensuring accessibility

As the discipline is still a relatively new category within big data analytics, organisations are working to determine the best way to deploy it within their teams. The theory behind process mining is starting to be introduced into university courses to ensure that the next generation of mathematicians and data scientists is equipped with the skills needed to take full advantage of the technology. With a need to understand how best to apply it to their processes, it’s important that business leaders look for solutions that are accessible to everyday users and not just those within IT teams.

The bottom line is that staying competitive in the digital age means adopting best operational practices, and best practices must be dictated by analysing data. Those organisations that take control of their processes now will be able to get better transparency into their business and gain a greater understanding of their existing operations. In turn, this will enable them to give their customers what they want, when they want it, and truly capitalise on the promise of digital business transformation.


October 12, 2017  3:44 PM

Graphs will take applied AI to the next level

Brian McKenna

This is a guest blogpost by Emil Eifrem, CEO, Neo4j

Google’s dominant search engine has always been driven by smart software. But around 2012, the search giant quietly transformed the way users could search for information – and that’s a change that many of us are going to want to see in other Web applications, too.

What did Google do? It started using a Knowledge Graph – enhancing search results with semantic search information gathered from a wide variety of sources.

That sounds like a small step, but it was a profound one. The traditional way of storing data is ‘store and retrieve’. But that method doesn’t give you much in terms of context and connections, and for your searches and recommendations to be useful, context needs to come in.

To help improve meaning and precision, you need richer search – which is what Google started to give us. Knowledge Graphs powered by graph databases are now one of the central pillars of the future of applied AI, and graphs are becoming more and more widespread in the form of recommender systems and shopping or customer service chatbots.

eBay’s AI-powered ShopBot, for example, is built on top of a graph database. This enables the system to answer sophisticated questions like, ‘I am looking for a black Herschel bag for under £50 – find me those only.’

The software can then ask qualifying questions and quickly serve up relevant product examples to choose from. You can send the bot a photo – ‘I like these sunglasses, can you find similar models for me?’ – and it will employ visual image recognition and machine learning to figure out similar products for you.

All this is done by using natural language techniques in the background to figure out your intent (text, picture and speech, but also spelling and grammar, are parsed for meaning and context).

The recommendation engine, built with Neo4j, helps to refine the search against inventory with context; a way of representing connections based on shopper intent is key to helping the bot make sense of the world in order to help you.

That context is stored so that the ShopBot can remember it for future interactions: when a shopper searches for ‘black bags’ for example, eBay ShopBot knows what details to ask next like type, style, brand, budget or size. As it accumulates this information by traversing the graph database, the application is able to quickly select specific product recommendations.
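
As a rough illustration of the sort of query such a graph makes natural (the schema, labels and connection details below are hypothetical, not eBay’s), here is how the intent ‘black Herschel bag under £50’ might be expressed against a Neo4j product graph from Python:

    from neo4j import GraphDatabase

    # Hypothetical connection details and product schema, purely for illustration.
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    query = """
    MATCH (p:Product)-[:HAS_BRAND]->(:Brand {name: $brand}),
          (p)-[:HAS_COLOUR]->(:Colour {name: $colour})
    WHERE p.price < $max_price
    RETURN p.title AS title, p.price AS price
    ORDER BY p.price
    LIMIT 5
    """

    with driver.session() as session:
        for record in session.run(query, brand="Herschel", colour="black", max_price=50):
            print(record["title"], record["price"])

    driver.close()

Because brand, colour and price are explicit relationships and properties in the graph, each follow-up question from the shopper simply narrows the traversal rather than restarting the search.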

Tapping into human intent is the ‘holy grail’ of what we want to do with applied AI. In this discussion on conversational commerce the example is well made: in response to a statement such as ‘My wife and I are going camping in Lake Tahoe next week, we need a tent’, most search engines would react to the word ‘tent’, and the additional context regarding location, temperature, tent size, scenery and so on is typically lost. This matters, as it is this specific information that actually informs many buying decisions – and which Knowledge Graphs could help empower computers to learn.

Context matters. It is what drives sophisticated shopping behaviour and decision making generally. And just as Google did quietly but effectively five years ago, so the rest of us need to fold AI-enriched features into our systems, be they retail or recommendations, to make them that much more powerful and reactive to user need and business demand.

The author is CEO of Neo Technology, the company behind graph database Neo4j (http://neo4j.com/)


October 12, 2017  1:55 PM

The block chain supply chain. Will these chains ever meet?

Brian McKenna

This is a guest blogpost by Greg Kefer, vice-president, marketing, Infor.

Blockchain is, arguably, one of the most hyped technologies around. Despite the fact that Gartner believes that 90% of pilots will fail over the next 18 to 24 months, there is huge, sustained interest, and the business value-add of blockchain is expected to grow to more than $176 billion by 2025.

Blockchain recently passed the “peak of inflated expectations” and is now making its way into the trough of disillusionment in the most recent Gartner Hype Cycle for Emerging Technologies. Despite all the marketing hype, blockchain does represent a future technology that will mature and become commercially viable; we just don’t yet know where, or when, or how.

Supply chain automation is one of the opportunity areas that has been associated with blockchain because of the decentralised processes involved in sourcing, manufacturing, shipping and paying for goods. The distributed nature of the technology has led to a great deal of speculation about supply chain applications for blockchain. Asset tracking, payments and intelligent contracts have all been proposed as possible deployments, and there is no shortage of PR and event presentations that highlight the potential.

Despite the hype, the market is a long way from maturity in supply chain. Gartner predicts that mainstream adoption of blockchain in the supply chain at scale is likely 10 or more years away.

This reminds me of the early days of cloud. In the late 1990s the notion of renting advanced business software via the Internet saw tremendous VC investment and hype. Thousands of companies emerged, Super Bowl commercials were bought, new ideas surfaced, Y2K happened, and then reality began to set in. After most companies failed, business models were refined, and early adopters began to engage in cloud IT projects which matured the technology.

In the global supply chain arena, cloud computing represented an immense opportunity. Why? Because supply chains had become global, companies were producing and shipping goods all over the world, and they needed technology that could unite entire partner ecosystems – something enterprise software systems were never designed to do.

Despite superior IT economics and making the impossible possible, cloud still took a full decade to reach a level of mainstream adoption in the supply chain. Complexity was part of the reason. Security was also a concern. And most importantly, the notion of shifting mission-critical operations – a company’s ability to deliver its products to the market – from the relative “safety” of software systems inside the four walls to the cloud was still a scary proposition for most CIOs.

Powerful use cases began to emerge which showed the industry that connecting partners to a common network and shedding light on what is actually happening in the supply chain could unlock tremendous value.

Excellence in supply chain automation requires more than advanced software. The network is critical, and that takes time to build. The reason Facebook is so valuable is not the software code, but rather the 2 billion active users. The same is true of LinkedIn. While there is no Facebook for the supply chain yet, there are a number of network-based solutions that have spent the past two decades steadily building up a network of buyers, sellers and partners that are actively engaged in the biggest supply chains on earth. Our GT Nexus Commerce Network is a prime example.

Blockchain will face a similar challenge gaining traction in the supply chain. It will take time.

Blockchain should also not be dismissed as pure hype. Expect to see companies engaging in projects with the multitude of new blockchain focused vendors that are emerging. As use cases make their way into the mainstream, more companies will jump into blockchain which will further prove out what works and what does not.

The degree of complexity in the supply chain will likely keep mass blockchain adoption a few steps behind some other business process areas, but something will happen eventually. It is very likely that successful projects will be done in conjunction with some of the newer network-based solutions that have built-in flexibility to operate alongside new technologies. We are already seeing internet of things and advanced machine learning solutions being meshed with cloud supply chain platforms, and there is no reason blockchain wouldn’t be part of that evolution.

But companies should also not stand on the sidelines and wait. A lot of what blockchain promises already exists in some form. The ability to connect a supply chain to a single source of truth exists. Capabilities to monitor access and history exist. And solutions that are available anywhere, anytime via the web are common today.

If companies are still e-mailing purchase orders and using spreadsheets to run their operations, they will not be in a position to take advantage of blockchain. The supply chain has become a strategic advantage, and those that don’t stay in front of innovation will find themselves getting disrupted into bankruptcy before Supply-Block-Chain becomes a reality. It will be interesting to watch.


October 5, 2017  3:42 AM

Einstein, Leonardo, Coleman, ClAIre … 18c: what’s in a name?

Brian McKenna

Oracle launched what it has described as the “world’s first autonomous database” at OpenWorld this week in San Francisco.

Thomas Kurian said, in the press conference following his keynote at the event, that Oracle has been working on the new iteration of its database for a good many years, which has to mean before machine learning became as fashionable as it now is.

The 18c database has a new layer of automation infused into it, meaning, as an example, security patches to fix code flaws that are at risk of hacker exploitation are applied automatically. In both his conference keynotes, Oracle founder, chairman, and CTO Larry Ellison referred to the recent Equifax data loss as an example of why humans have to be taken out of patching in order to reduce the opportunity for error.

The automation added to the database is said to be an example of machine learning in action. Ellison said ML is as radical as the internet itself, and that his company’s new “autonomous” database is “revolutionary” – a term that he said he did not apply lightly.

This stress on machine learning has, of course, been commonplace in the world of enterprise software in recent years. A galaxy of human geniuses – Einstein for Salesforce, Leonardo for SAP to name but two – has been invoked to name a new generation of enterprise software inflected by machine learning.

But why has Oracle refrained from naming its artificial intelligence/machine learning efforts? UK, Ireland and Israel managing director Dermot O’Kelly said, in an interview with Computer Weekly, that he thought the epithet “revolutionary” was indeed appropriate for the new iteration of the database, and saw no issue with Oracle’s not giving a “fancy” name to its machine learning initiative.

“Unless you know another fully autonomous database then, yes, it is revolutionary. Our customers know immediately what an autonomous database would be – it patches itself, it upgrades, it scales ….

“We keep naming very simple, that is the way Larry likes it. Embedding AI into the applications or the database is much more important than giving it a fancy name. Not one customer I’ve spoken to at Open World has asked me what it is called. I talk to a lot of CIOs who have all done bits with AI, but having it delivered to them embedded into a product set helps them with their journey to using it”.

It is surely reasonable to perceive the naming of AI/ML enterprise software programmes after rare human geniuses as pretentious. However, Oracle’s “no name” approach betrays a pragmatism – of which the company is making a virtue – that could be said to be lacking in imagination, and so to subserve an incrementalist agenda.

Moreover, is there not a difference between automation and autonomisation – the latter being the realm of AI in its fullest sense, where computers think like humans and learn by themselves, without human input? Automation, on the other hand, is about incrementally improving operational processes and reducing human input – and so the opportunity for error – which has been the point of business computing all along, so no big change there. Also, in terms of the database, I wonder whether the approach of making it more and more automatic – which Oracle has to do, and which has business value for users – in the end risks increased commoditisation of the database part of the IT industry. It might keep you off the front page for losing customer data, but it won’t differentiate your business strategically.

These are open questions, and, as Dermot O’Kelly says, autonomy in the database realm is easier said than done.

I discuss these, and other issues provoked by OpenWorld 2017 with my TechTarget colleagues David Essex and Jack Vaughan in a couple of linked podcasts on SearchERP.com and SearchOracle.com:

Oracle AI apps now present throughout enterprise cloud suite

Machine learning and analytics among key Oracle security moves

 

 


September 22, 2017  10:01 AM

Artificial Intelligence could be about to start a processing chip arms race

Brian McKenna

This is a guest blogpost by Matt Jones, lead analytics strategist, Tessella

Recent converts to AI and machine learning’s amazing potential may be disappointed to discover that many of its more interesting aspects have been around for a couple of decades.

AI works by learning to distinguish between different types of information. An example is the use of a neural network for medical diagnosis. Inspired by the way the human brain learns, neural nets can be trained to analyse images tagged by experts (this image shows a tumour, this image does not). Over time, and with enough data, they become as good as the experts at making the judgement, and they can start making diagnoses from new scans.
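
A minimal sketch of that supervised pattern in Python (using scikit-learn’s small bundled dataset as a stand-in for expert-tagged scans; this is a toy, not a clinical model):

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Labelled examples stand in for images tagged by experts.
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A small neural network learns the distinction from the labelled data ...
    net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
    net.fit(X_train, y_train)

    # ... and is then asked to judge cases it has never seen.
    print("accuracy on unseen cases:", net.score(X_test, y_test))

With enough representative, well-labelled data, the same pattern scales up to the image-based diagnostic networks described above.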

Although not simple, it is not the complexity of the algorithms that has held these tools back until now. In fact, data scientists have been using smaller scale neural networks for many years.

Rather prosaically, the limiting factor for the past 20 years has been processing power and scalability.

Processors have improved exponentially for years (see Moore’s law). But a few years ago, NVIDIA launched GPUs with chips which were not just powerful, but capable of running thousands of tasks in parallel with an instruction set that was exposed for use by developers.

This was the step change for machine learning and other AI tools. It allowed huge amounts of data to be processed simultaneously. Neural nets, like the many synapses in your brain, process lots of information simultaneously to reach their conclusion, with calculations being performed at each node, of which there can be thousands. Before highly parallel processing capability, this was a slow process. Imagine looking at a picture and taking hours to work out what it was.

The availability of consumer-level GPUs with massive parallelisation via NVIDIA CUDA cores has meant deep neural networks can now be run in reasonable times and at reasonable cost. A grid of GPUs is considerably cheaper, and more effective, than the corresponding level of compute available via traditional CPUs.
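
To give a feel for what that parallelism buys, here is a small hedged sketch (assuming PyTorch is installed; the sizes are illustrative) that runs the same large matrix multiplication – the core operation inside a neural network layer – on the CPU and, if one is present, on a CUDA GPU:

    import torch

    # The same large matrix multiplication on CPU, and on GPU when a CUDA device is available.
    a = torch.randn(4096, 4096)
    b = torch.randn(4096, 4096)

    cpu_result = a @ b                      # executed on CPU cores

    if torch.cuda.is_available():
        a_gpu, b_gpu = a.cuda(), b.cuda()   # copy the data into GPU memory
        gpu_result = a_gpu @ b_gpu          # executed across thousands of CUDA cores
        torch.cuda.synchronize()            # GPU work is asynchronous; wait for it to finish

On recent hardware the GPU version of this single operation is typically one to two orders of magnitude faster, which is the kind of gap that turned the weeks-long neural network runs described below into hours or minutes.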

Neural nets have long been used in labs to analyse datasets, but due to compute limitations this would take weeks or even months to complete a run. They found applications where lengthy data analysis beyond human capacity (or patience) was needed, but where speed wasn’t critical, such as for predicting drug-like molecule interaction with target receptors in medicine research.

Today’s neural nets – and deep learning (large, combined neural networks) – can now do the same compute in hours or minutes. Computationally intensive AI processes which previously took hours can be applied to real time tasks, such as diagnostics and making safety critical industrial decisions.

On the shoulders of giants

This has been critical to the rapid rise of AI. Off the back of this, more commoditisation has appeared. Google, Microsoft and Facebook, amongst others, have all developed AI programming tools. These allow anyone to build their own AI on the tech giants’ platforms and run the associated processing in their data centres – examples include diagnosing disease, managing oil rig drilling, and predicting when to take aeroplanes out of service. AI became democratised.

Amongst the excitement around applying AI to a brand new set of possibilities, NVIDIA quietly cornered the market for AI processor chips.

Slightly late to the party, but never to be written off, the usual suspects are catching up. Microsoft recently launched Brainwave, a deep learning acceleration platform built on Intel FPGA technology. Google has also started building its own chip, the Tensor Processing Unit, for AI applications.

This means a chip arms race is around the corner. Expect AI announcements soon from other chip manufacturers, and an aggressive push from NVIDIA to defend its leadership.

None of this is a bad thing for us machine learning professionals. If the capacity to process data increases at an ever faster rate, it expands what we can do with AI and how fast we can do it. Better, faster, more parallelised tasks can mean ever deeper deep learning algorithms and complex neural networks. Data processing tasks which previously took minutes, hours, days, are gradually brought into the realm of real time decision making.

With AI tools and processing power readily available, the desire to harness AI is growing rapidly. The tech giants, innovative startups, and companies undergoing digital transformation all want a piece of the action. Technology advances apace, but the limiting factor now is skills, which have not been able to keep up with AI’s meteoric rise.

Truly harnessing AI requires a wide range of highly specialised skills covering advanced maths, programming languages and an understanding of the tools themselves. In most cases a degree of expertise in the subject the AI is being designed for (oncology or oil rig engineering, for example) is necessary. AI is now seen as a serious career choice, but these skills will still take the uninitiated a good few years to learn; a PhD and many years’ industry experience, which are needed for most AI roles, do not come overnight.

Meanwhile, a generation of scientists – who have spent the last 20 years in a lab patiently waiting for their meticulously designed neural network to work its way through months of data – are suddenly finding themselves in high demand.


