For all the hype around artificial intelligence (AI), and the excitement around some of its potential – personal assistants that develop a personality, robot-assisted microsurgery, and so on – it is arguably adding most value to businesses in less glamorous, but ultimately more valuable, ways, says Nuxeo's Dave Jones in a guest blogpost.
Backend tasks are few people's favourite part of running a business. They are hugely time-consuming and rarely rewarding, but vitally important. Automating these tasks is an area where AI has the potential to add incredible value for businesses.
AI and information management
Information management is an area where AI can deliver many benefits. AI allows organisations to streamline how they manage information, reduce storage costs, increase security, and deliver faster, more effective searches for content and information.
Many companies are struggling with the sheer volume of information in modern business, and their users find it difficult to locate important information that resides across multiple customer systems and transaction repositories. The key to solving this problem is having accurate metadata about each content and data asset. This makes it easy to find information quickly, and also provides context and intelligence to support key business processes and decisions.
Metadata enrichment is one area in which AI really excels. Before AI, populating and changing metadata was a laborious task – not made any easier by the fixed metadata schemas employed by many content management systems. Metadata schemas in an AI-infused Content Services Platform (CSP), by contrast, are flexible and extensible. Far more metadata is being stored and used than ever before, so the ability to use AI to process large volumes of content and create numerous, meaningful metadata tags is a potential game-changer.
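As an illustration of that kind of enrichment, here is a minimal sketch in Python. The keyword rules and field names are invented stand-ins for a real ML model or enrichment service; the point is that derived tags simply extend a flexible metadata record rather than requiring a fixed schema:

```python
# Illustrative sketch: enriching a document with AI-derived metadata tags.
# The keyword rules below stand in for a real ML model or NLP service.

def enrich_metadata(doc_text, base_metadata):
    """Return a copy of base_metadata extended with derived tags."""
    tags = set()
    rules = {
        "invoice": ["invoice", "amount due", "payment terms"],
        "contract": ["agreement", "party", "hereinafter"],
        "cv": ["curriculum vitae", "work experience", "education"],
    }
    lowered = doc_text.lower()
    for label, keywords in rules.items():
        if any(k in lowered for k in keywords):
            tags.add(label)
    enriched = dict(base_metadata)  # schema stays flexible: just add keys
    enriched["auto_tags"] = sorted(tags)
    return enriched

doc = "This Agreement is made between the Party of the first part..."
meta = enrich_metadata(doc, {"source": "legacy-share", "id": "doc-001"})
print(meta["auto_tags"])  # ['contract']
```

In a production system the rules dictionary would be replaced by a trained classifier or a cloud AI service, but the shape of the operation – content in, extended metadata out – stays the same.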
Unlocking the content in legacy systems
Another powerful way in which AI can address backend tasks is in connecting to content from multiple systems, whether on-premise or in the cloud. The content itself is left in place, but access to that content and data is still provided from the AI-infused CSP.
It also provides the ability for legacy content to make use of a modern metadata schema from the CSP – effectively enriching legacy content with metadata properties without making any changes to the legacy system at all. This is a compelling proposition in itself, but when combined with the automation of AI, even more so.
By using a CSP to pass content through an AI enrichment engine, every file currently stored can potentially gain additional metadata attributes. This injects more context, intelligence, and insight into an information management ecosystem.
Classifying content stored within legacy systems becomes much easier with an AI-driven engine. Even simple AI tools can tell the difference between a contract and a resume, but advanced engines expand this principle to build AI models based on content specific to an organisation. These deliver much more detailed classifications than generic classification ever could.
Backend AI in action
A manufacturing firm I met with recently has been automating the classification and management of its CAD drawings. There is a misconception that AI needs to be super-intelligent to add real value. In this example, though, the value of AI lies not in the intelligence required to identify what qualifies as a particular kind of design drawing, but in being 'smart enough' to recognise the documents that definitely aren't the right type – essentially sifting out the rubbish and allowing people to focus on the relevant information much faster.
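That 'smart enough to reject' pattern can be sketched as a high-confidence negative filter: discard only what the model is very sure is irrelevant, and keep everything else for human attention. A minimal illustration in Python – the scorer is a stand-in for a trained classifier, and the threshold is an assumed value:

```python
# Illustrative sketch of the 'sift out the rubbish' pattern: discard only
# documents the model is highly confident are NOT the target type, and
# leave everything else for a human to review.

def sift(documents, score_fn, reject_below=0.05):
    """Split documents into (keep_for_review, discarded).

    score_fn returns an estimated probability that a document is the
    target type; anything scoring below reject_below is confidently
    rejected, and everything else is kept.
    """
    keep, discard = [], []
    for doc in documents:
        (discard if score_fn(doc) < reject_below else keep).append(doc)
    return keep, discard

# Stand-in scorer: a real system would call a trained classifier here.
def toy_score(doc):
    return 0.9 if "drawing" in doc else 0.01

docs = ["assembly drawing rev B", "canteen menu", "drawing: bracket v2"]
keep, discard = sift(docs, toy_score)
print(keep)     # the two drawings survive for human attention
print(discard)  # the menu is confidently filtered out
```

Setting the threshold low keeps the filter conservative: a borderline document is never thrown away silently, which is exactly why the AI does not need to be 'super-intelligent' to be useful.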
Information management and associated backend tasks may not be the most glamorous AI use cases, but done well they can provide significant value to businesses all over the world.
This is a guest blogpost by David Richards, co-founder and CEO, WANdisco
At its recent Cloud Next conference Google rolled out a number of new cloud products, services and packages – all designed to improve the company’s competitive position and differentiate itself from Amazon and other peers.
As Google and its fellow American giants press on in the cloud ecosystem, rapidly expanding their already formidable market share, the question arises: where does Britain sit in the global rankings?
Gartner predicts the market for the global cloud industry will exceed $200 billion this year, yet it’s a four-horse race and very much set to stay that way.
As the industry gathers pace across the globe, it is only fair to ask whether the UK should be striving to secure a place in the front line.
The UK is rightly considered a world leader in technological innovation and has long been drawing in talented entrepreneurs looking to transform their ideas into successful businesses. Our digital tech sector is strong, growing several times faster than the overall economy – producing pioneering developments in fintech, healthtech and SaaS technology.
Notwithstanding our success elsewhere, we are falling behind in data and cloud operations. It's a sector in which we do not hold a strong market position, and one that has moved too far along for us to play catch-up.
In the same way that Silicon Valley as a whole is light years ahead of other tech ecosystems – having developed its foundations far earlier – the pace setters in cloud were out of the blocks early.
Amazon was the first to create what we now know as the Infrastructure-as-a-Service cloud computing industry, back in the early 2000s, with the launch of Amazon Web Services. What started as adapting its IT infrastructure to handle spikes in seasonal retail demand soon turned into the most dominant cloud infrastructure in the world.
Google (because of its search engine), Microsoft (trying to compete with Google's search prowess with Bing) and Alibaba (competing with Amazon in retail) came soon after and have grown into legitimate AWS rivals. The big four currently hold 89% of the market, a share projected only to increase.
As a result, UK companies entering the cloud market face a colossal uphill climb. The incumbents have ingrained themselves into the system, and any challenger stands little chance – just ask IBM or Oracle.
Simply put, the UK stands little chance of breaking through in the cloud platform market. The cloud, however, democratises storage, which in turn lowers barriers to entry in new markets such as AI. Britain can turn to its strengths and support the areas where it excels: artificial intelligence and machine learning.
AI in the UK
The UK has long been at the forefront of development in the AI industry, housing a third of Europe’s AI start-ups – twice as many as any other European country.
We are a leading hub for AI advancements in healthcare, thanks to high adoption rates in the National Health Service and close ties with top-rated medical universities trialling the latest developments in medical technology.
With the right application, AI offers a £232 billion opportunity for the UK economy over the next decade, and the government is moving in the right direction to seize the chance with both hands.
Recently, the government launched a new nationwide programme through the Alan Turing Institute for industry-funded courses at UK universities. These fellowships are laying the foundation for the next generation of researchers, business leaders and entrepreneurs in the AI industry – building a strong pipeline for the future.
Understanding one's role in the global tech ecosystem is the first step to success. The sooner Britain recognises where its strengths really lie, the easier the path to growth will be.
Whilst we can follow in the footsteps of the US and China in putting digital transformation at the top of the business agenda by embracing cloud adoption, we should not try to chase the tails of the major cloud giants in developing the newest cloud infrastructure.
Britain has long stood as a pioneering force in technology adoption and development, dating back to the days of Alan Turing.
The pedigree we hold in the digital industries carries immense weight, and allows the sector to work with strong foundations – whether it’s access to capital, developing talent or international connections.
While cloud technology stands as the hot topic of 2019, Britain will best serve the growth of its technology sector by doubling down on the expertise we already hold and propelling our AI standing to the next level.
This is a guest blog post by Rich Pugh, co-founder and Chief Data Scientist, Mango Solutions
Data is the new oil, or so we are told. In some respects, this is true – successful businesses today run on data, and, like oil, data is near-useless unless it is refined and treated in the right way. But refining is a difficult process, and, with many business executives overwhelmed by the "bigness" of modern data, it's easy to see plug-and-play business intelligence, AI or machine learning solutions as a one-stop data-to-value machine.
The problem is that all too often, these tools cannot deliver the mythical value expected of them; even if the technology finds an important and relevant correlation, businesses are unsure how to act on the information effectively and understand the full context of the finding. Insight becomes an eye-grabbing statistic in a PowerPoint presentation, or perhaps a one-off decision made based on a nugget of information, and then nothing further. It’s hard to quantify what the long-term value of this was, because the full context is missing.
That's where data science comes in – or more specifically, a company-wide culture of data science. Rather than just a tool to turn data into insight, data science is a way of blending together technology, data and business awareness to extract value, not just information, from data. While 81% of senior executives interviewed for a recent EY and Nimbus Ninety report agreed that data should be at the heart of all decision-making, just 31% had actually taken the step to restructure their organisation to achieve this. That leaves a huge majority of organisations that recognise the potential of data but have yet to find a way to embed a data-driven culture within their business.
Restructuring can sound like a difficult and intensive process, but it doesn’t have to be. It’s about following a process to harness existing resources and improve collaboration with a focus around delivering value.
So where do you start? Many companies already have pockets of data science and analytics-savvy professionals dotted around the organisation, but these can be siloed by business function. These people range from product development specialists who understand how to code and develop new analytics solutions, to team members who excel at extracting interesting pieces of insight from vast spreadsheets. By connecting them into a new Community of Practice – and encouraging ongoing collaboration and connection, as well as discussion around fundamental technologies – you have already created a data science community that sits across your business.
It's then a case of getting these people to work towards a shared view of what "best practice" looks like. This requires the team to develop a common understanding of what the business is trying to achieve and the questions you want to solve with data, and then build a structure from there for what "good" looks like. As part of this, it's important to agree on the priorities for any projects, and on the ways in which these will be communicated back to others in the business. It's not about enforcing a one-size-fits-all approach, but about fostering commonality and cohesion to ensure the team can agree on what needs to happen, and when.
Once you have your team of data science experts, it's time to engage with the business as a whole. Educating the business requires the whole data science team to be confident about what analytics can achieve for the business – and, even more importantly, what it cannot achieve that the business might be expecting. This then needs to be communicated clearly: using language that business teams will understand helps break down any preconceptions. This can be daunting, and data science teams will often find themselves faced with a huge variety of interest levels. Many who hear about the potential of data science will feel it has little bearing on their work, and discussions about its potential will go in one ear and out the other. However, there will also be people who are inspired by what data can do for them and want to get more involved. These people can be future champions for driving a data-driven culture beyond the core team.
Most importantly, the business needs to be engaged around relatable, real topics. While the data science team is educating the business, it’s also important to encourage the business to “educate” the data science team. Workshops discussing what success looks like for each business area, what the decisions that shape this success are, and what would be useful for improving those decisions help to transition data science from a magical black box spitting out insights into a process focussed on solving real business issues. From these meetings, the data science community can prioritise and execute around the core challenges that can be addressed with data.
Finally, it's about finding a way to quantify the value the data science community now brings to the business and making success a repeatable part of the business process. The individuals from the business team who were initially positive about the potential of data science can be fantastic advocates here, explaining in business terms what value a solution has brought – and how these solutions continue to transform business decision-making. This can then provide a springboard for targeting more sceptical business departments and scaling a culture of data-driven success throughout the organisation.
By adopting a data-driven culture, businesses stand a far greater chance of success in the Information Age than by investing in plug-and-play solutions and hoping for the best. By building data science solutions around real business problems, in conjunction with the whole business team, organisations are more likely to see lasting value, thanks to an ongoing culture of problem-solving with data science.
This is a guest blogpost by Emil Eifrem, CEO, Neo4j, in which he explains how an elegant solution to the mathematical puzzle to find the most efficient way of traversing the bridges in Prussia’s Koenigsberg resulted in graph theory
This month (15 April) marks the anniversary of the birth of Swiss mathematician Leonhard Euler (1707-1783). While some of us already know him for introducing huge chunks of modern mathematical terminology and for breakthrough work in mechanics, fluid dynamics, optics and astronomy, more and more of us are beginning to appreciate him for one of his lesser-known achievements: inventing graph theory.
The story begins with Euler taking on a popular problem of the day – the “seven bridges of Königsberg” problem. The challenge was to see whether it was possible to visit all four areas of that city while only crossing each bridge once.
Formalising relationships in data to reveal hidden structures
To get his answer, Euler abstracted the problem. And today we still talk of an Eulerian path, defined as one that traverses every edge in a network exactly once – just as the puzzle demands – forming a proper cycle if you start and finish at the same place.
This has since been applied, very successfully, to derive the most efficient routes in scenarios such as snow ploughing and mail delivery, and Eulerian paths are also used by other algorithms for processing data in tree structures. His key insight was that only the connections between the items in the puzzle were relevant: the land masses could be treated as nodes, and the bridges as the links between them. Publishing his solution in 1736, he created a whole new branch of mathematics – a framework that can be applied to any connected set of components. He had discovered a way to quantify and measure connected data that still works today – and because of this foundational work we can also use his way of formalising relationships in data to reveal hidden structures, infer group dynamics, and even predict behaviour.
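Euler's criterion can be checked in a few lines: a connected multigraph contains a path traversing every edge exactly once if and only if it has zero or two vertices of odd degree (zero odd vertices gives a closed cycle). A minimal sketch – the bridge list mirrors the historical Königsberg layout, and for simplicity the graph is assumed connected rather than checked:

```python
from collections import defaultdict

def eulerian_path_exists(edges):
    """Euler's criterion: a connected multigraph has a path traversing
    every edge exactly once iff it has 0 or 2 odd-degree vertices."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd in (0, 2)

# The seven bridges of Koenigsberg: four land masses A, B, C, D.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]
print(eulerian_path_exists(bridges))  # False - all four vertices have odd degree
```

All four land masses touch an odd number of bridges, so the puzzle has no solution – exactly the conclusion Euler reached without ever drawing a route.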
It wasn't until 1936, two hundred years later, when the first textbook was written on what Euler did with the Seven Bridges problem, that the concept of graphs came into circulation. It took 40 more years for network science as a discipline and applied graph analytics to begin to emerge.
Looking at real-world puzzles analytically
Fast forward to today: Euler gave us a powerful framework, but graphs, in software terms, only really became prolific in our interconnected, data-driven world. Our unprecedented ability to collect, share and analyse massive amounts of connected data – accompanied by an unrelenting drive to digitise ever more aspects of our lives, and by the growth in computing power and accessible storage – presents the perfect context for networks to emerge. These networks contain a wealth of information that we can tease out using graph techniques.
Euler is well known to generations of mathematicians and computer scientists – but even among those familiar with him, his ideas on graphs aren't as well known as they should be. Yet every graph database user is deeply indebted to him for his insights. Just as Euler's life's work was about looking at real-world puzzles analytically, so the graph software developers of today are solving highly complex, real-world problems – in areas such as AI, machine learning, sophisticated recommendations and fraud detection – and finding practical, graph-based ways to fix them.
Here’s to the man who solved the Seven Bridges problem and gave us a great way to understand and improve the world. Happy 312th birthday, Herr Euler!
This is a guest blogpost by Andrew Filev, founder and CEO, Wrike.
Since no business wants to leave money on the table, it’s tempting to try to squeeze as much productivity as possible out of your workforce to execute every campaign, programme, and project available.
The problem with this approach is that it’s unsustainable. In a survey of 1,500 workers commissioned by Wrike last year, nearly 26% of respondents said, “If my stress levels don’t change, I’ll burn out in the next 12 months.” Work is moving faster, demand on workers is increasing, and digital, while a massive enabler of growth, is also contributing to our increasing stress.
A core way to fight this stress epidemic in 2019 is for businesses to bring more intelligence to work. This will enable executives and managers to make smarter decisions about effort allocation to drive desired results – rather than wasting time and energy on what amounts to mere activity.
Assessing which work makes the biggest impact is an impossible task for many teams without business intelligence. If your company is managing work through emails and spreadsheets, project details are rarely up to date, and even when details are current, they are often too fragmented to tie to business impact.
By now, you've probably completed your strategic plan for 2019, and are wondering how you're going to achieve all your goals – and how you're going to keep your team focused on the most impactful work. The first step, in my view, is to leverage a robust collaborative work management (CWM) platform. This will allow your teams to optimise processes with automation, templates, and real-time workflow analytics. From there, the data collected within the CWM can be connected to a business intelligence (BI) solution, where it can be translated into actionable insights on project efficiency and ROI.
These insights aren't just valuable for the sanity of your workforce (though sanity is a noble reason on its own). They're essential in making your business more efficient and bringing continuous improvement – fuelled by real data and metrics – into your culture. Marketers use analytics to measure the ROI of an ad, and sales leaders measure the effectiveness of specific strategies and tactics against their revenue goals. But for many other knowledge work positions, measuring how work impacts specific company OKRs [Objectives and Key Results] isn't so clear.
The insights executives can glean from integrated CWM and BI platforms are unprecedented aids to decision-making. For example, if a high-performing business unit shows signs of bottlenecks in a particular phase of work, you know it's time to boost its headcount. Conversely, if a major initiative is shown to take a lot of work but not move the needle, it may be time to reassign that talent to more profitable projects. This may have been possible before, but only after problems had made a noticeable impact on deliverables.
Traditional BI integrations with ERP, CRM, and finance systems aren't enough to fuel these insights. Nor are rigid legacy PPM [Project Portfolio Management] systems, which were never built to manage the mix of structured, unstructured and collaborative ways in which most teams execute work today – a limitation that has confined their adoption largely to formal project managers and the PMO.
While those tools are all important, CWM software offers flexibility across teams, departments, and projects, making company-wide adoption not only possible, but far more likely. Once deployed, the data CWMs collect becomes invaluable for measuring the return on effort throughout an organisation with real-time updates about time to completion, delays, effort, and team effectiveness. It offers the ability to bring hard numbers to business operations that had previously been left to guesswork.
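As a toy illustration of that kind of measurement, the sketch below computes average schedule overrun per team from task records. The field names and figures are invented for the example; a real CWM would expose far richer data through its API or BI connector:

```python
# Illustrative sketch: turning raw task records from a work-management
# tool into a simple delay metric per team. Field names are invented.

tasks = [
    {"team": "design",    "planned_days": 5, "actual_days": 8},
    {"team": "design",    "planned_days": 3, "actual_days": 3},
    {"team": "marketing", "planned_days": 4, "actual_days": 9},
]

def delay_by_team(records):
    """Average overrun (actual minus planned days) per team."""
    totals = {}
    for t in records:
        team = t["team"]
        overrun = t["actual_days"] - t["planned_days"]
        count_so_far, sum_so_far = totals.get(team, (0, 0))
        totals[team] = (count_so_far + 1, sum_so_far + overrun)
    return {team: s / n for team, (n, s) in totals.items()}

print(delay_by_team(tasks))  # {'design': 1.5, 'marketing': 5.0}
```

Even this crude metric replaces guesswork with a number: a team consistently averaging five days over plan is a bottleneck you can see, and act on, before deliverables slip.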
Connecting work to impact with business intelligence is a critical step for any business trying to stay competitive through digital transformation, and for bringing operational excellence to B2C and B2B companies under pressure to deliver on-demand products to customers. As a business scales, you can't just throw projects at the wall and see what sticks – that will ultimately burn out teams and lead to a lot of misguided effort. BI and CWM technology is the smartest way to keep teams focused on the most important work in 2019, so their workloads are balanced and their stress levels are low.
Dominic Raab, the Secretary of State for Exiting the EU, has been in hot water for his late-dawning admission of the importance of the Dover-Calais route to the UK’s trade in goods.
Raab was speaking at a Tech Nation event in London. In an aside, he candidly admitted he had not previously fully appreciated the extent of how reliant the UK’s trade in goods is on the Calais to Dover route.
Here is the full quote:
“We want a bespoke arrangement on goods which recognises the peculiar, frankly, geographic and economic entity that is the United Kingdom. We are, and I [hadn’t] quite understood the full extent of this, but if you look at the UK, and how we trade in goods, we are super [?] reliant on the Dover-Calais crossing, and that is one of the reasons why – and there has been a lot of controversy on this – we wanted to make sure we have a specific and proximate relationship to the EU to ensure frictionless trade at the border, particularly for just-in-time manufactured goods, whether pharmaceutical goods or perishable goods like food”.
Raab's political enemies seized on these words as indicating unfitness for his role, ignorance of British geography, failure to do his homework before committing to Brexit ideologically (he is, unlike Theresa May and Philip Hammond, a dyed-in-the-wool Brexiteer), and so on.
Then again, you could say, from a Remain point of view: “better late than never”. Or, from a Brexit point of view, that this half-swallowed, fleeting “gaffe” is part of a cunning plan to persuade Tory dissenters to sign up to the Chequers plan – which is what Raab was advocating.
Or you could just say he was speaking in a relaxed and colloquial manner before a modest gathering of mostly tech entrepreneurs, whose focus is mostly software and services, not physical things, and was simply being honest.
Or, yet again, this is an example of the ostensible self-deprecation that used (at least) to be characteristic of university dons professing ignorance of something they actually know a great deal about. (“I really hadn’t appreciated the full significance of Derridean deconstruction until I applied it to that which is not said in the texts of Jane Austen. Pass the port”).
Personally, I would be less sanguine than Raab professed to be about the future of our AI sector, post-Brexit.
Perhaps Raab will be less candid in future. Politics is a murky business.
Oracle Open World 2018 seems a world away. Especially when you have had a few days' furlough in between.
Mark Hurd’s second keynote at OOW convoked a common room of former spies from the US and UK intelligence world. It was an interesting session, and I thought I’d offer a few words by way of reflection. A very good account of the session is given by Rebecca Hill over at The Register.
Joining Hurd on stage were Oracle's own chief corporate architect, Edward Screven; Michael Hayden, former director of the CIA; Jeh Johnson, former head of the Department of Homeland Security; and Sir John Scarlett, former head of our own Secret Intelligence Service, also known as MI6.
I’ll put the comments of the Americans to one side. John Scarlett said an arresting thing about how British people typically see the state as benign, whereas Americans do not – partly, he joked, as a result of the actions that provoked their rebellion in 1776. He didn’t mention the British burning down of the White House in 1814, which Malcolm Tucker said we would do again, in the film In the Loop. That might have been a joke too far.
Sir John said: “in the UK, people don’t look at the state as a fundamental threat to our liberties. In the US you have a different mentality – partly because of us”.
But are we really so insouciant about the state and its surveillance? There are many contumacious traditions on these islands. The London Corresponding Society, of which Edward Thompson writes in his magisterial The Making of the English Working Class, is one such on the radical left, in the late eighteenth century. And, on the right, there are libertarian Tory anarchists aplenty, from Jonathan Swift to David Davis, the "Brexit Bulldog". And these anti-statists are just as British as the undoubtedly fine men and women of SIS.
Scarlett did, though, have interesting things to say. For instance, about the Cold War being a riskier time than now. “There is a tendency now to say we live in a world that is more unpredictable than previously. But the Cold War was not a time of predictability and stability. There was the nuclear war scare in 1983. It was the real thing, and not many people know about it”.
However, he said he does now see a perilous return of Great Power tensions and rivalries and that technology is a great leveller in that respect, with the relationship between the US and China being at the centre of that new “great game”. He also said that there is not as yet a “sense of the rules of the game to agree on whether a cyber-attack is an act of war”. And added: “I am wary of talk of a ‘cyber 9-11′. I think that is to think in old ways”.
Later in the discussion he said he comes from “a world of security and secrecy where you protect what you really need to protect. That is critical. Attack and defence will continue to evolve, but the philosophical point is you need to be completely clear about what you really, really need to protect. You can’t protect everything”.
One small point. I was amused and interested to hear Mark Hurd pronounce SIS as "Sis". Was John Scarlett too British and polite to correct him? Or is our intelligence service affectionately known thus among the security cognoscenti on the other side of the Pond?
All in all, it was an interesting session. And it caused me to re-read the chapter on the special relationship between the UK and US intelligence communities in Christopher Hitchens’ Blood, Class and Empire. There, in ‘The Bond of Intelligence’, he writes (of an episode in Nelson Aldrich’s book Old Money):
“it is difficult to think of any more harmonious collusion between unequals, or any more friendly rivalry, than that existing between the American and British “cousins” at this key moment in a just war [the Second World War]. In later and more caricatured forms it has furnished moments of semi-affectionate confusion in several score novels and films: the American doing his damnedest to choke down the school-dinner food in his plummy colleague’s Pall Mall Club; the Englishman trying to get a scotch without ice in Georgetown. It is the foundation of James Bond’s husky comradeship with Felix Leiter, and of numerous slightly more awkward episodes in the works of John le Carré”.
This is a guest blogpost by Julian Nolan, CEO, Iprova
A technology revolution is taking place in the research and development (R&D) departments of businesses around the world. Driven by data, machine learning and algorithms, artificial intelligence (AI) is helping scientists to invent new and better products faster and more effectively. But how is this possible and why is it necessary?
Invention has long been thought of as the product of great minds: the result of well-rounded scholars and thinkers like Leonardo da Vinci and Thomas Edison making synaptic links between ideas that most people would never consider. And for hundreds of years, this has indeed been the case.
However, times are changing: information is growing exponentially, yet innovation and invention have slowed. Great minds are still behind new products and services, but the vast quantity of information now available exceeds the grasp of any single researcher or R&D team – particularly as many researchers specialise in narrow fields of expertise rather than in multiple disciplines. Developments outside those fields often go unnoticed, even when they are relevant.
As such, we find that many new patented inventions are not the result of making truly novel links between concepts, but rather a linear step forward in the evolution of a product line.
This is now changing by putting artificial intelligence at the core of the invention process itself. At Iprova we have developed a technology that uses advanced algorithms and machine learning to find inventive triggers in a vast array of sources of information, from new scientific papers to market trend research across a broad spectrum of industries.
This technology allows us to review the data in real time and make inventive connections. That's why we are able to spot advancements in medical diagnostics and sensor technology and relate them to, for example, autonomous vehicles.
According to the European Patent Office (EPO), the typical patenting process is 3–4 years. When you consider that the typical research process from conception to invention takes place over a similar amount of time, most companies are looking at a minimum of six years to bring products to market.
This is where machine learning makes a big difference. Our own technology reviews huge amounts of data and identifies new inventive signals at high speed, which means that our invention developers can take an idea and turn it into a patentable invention in only a couple of weeks — significantly reducing the overall lead time and research costs of inventions.
Thinking back to da Vinci or Edison, the only reason we still remember their names today is that their inventions were groundbreaking at the time. Others may have been working on similar creations, but their names didn't make history for one simple reason: they weren't first. Fast forward to today, and being first is all businesses care about when it comes to taking new products to market. Yet in the age of the data explosion this can only be achieved in one way – by using artificial intelligence at the invention stage itself.
This is a guest blogpost by Colin Elkins, global industry director, process manufacturing, IFS.
Global food waste, or food loss as it is classified by growers and manufacturers, is around 1.3 billion tons per year, which is one third of all food produced and twice the weight of the world’s current population!
In developing countries, waste is due to inadequate or inappropriate agriculture and storage, whereas in developed countries it occurs mainly at the point of consumption. In the US, the consumption stage accounts for a staggering 40% of all food wasted, whereas in sub-Saharan Africa only 5% of food is wasted by the consumer.
Our agricultural and production systems today are a wonder to behold: computer-based crop and animal management, GPS-based harvesting and field fertilisation techniques, robotic graders and sorters, and vision-based recognition and rejection systems are all part of the journey from our fields to our factories. The use of these and other technologies has made the manufacturing element of the supply chain the smallest source of food waste in developed countries. There is, however, one exception: fresh produce. Fruit and vegetables are amongst the highest contributors to waste, not because of poor technology or practice, but because of the perceived demands of retailers and consumers.
Retailers' desire to make their products look attractive to the consumer has driven up waste levels for growers and producers. Size, shape, colour and consistency all form part of the 'retailer specification', outside of which the produce is simply discarded and returned to the soil. Yields of less than 50% are not uncommon for minor deviations from specification.
Retailers are often cast as the main villain in creating this 'artificial' waste, but are they simply pandering to consumer demands? The consumer's insistence on perfect produce is also a key culprit.
Technology will not solve the food waste problem completely, as a cultural change is required on the part of the consumer, but it can go part of the way. Growers and producers already innovate by developing smarter farms, factories and processes to reduce waste, using the latest technologies such as AI and the Internet of Things (IoT) in conjunction with their ERP software.
Where food manufacturers can play a part in waste reduction is in the planning phase. What many consumers don't realise is the level of planning that goes into production and packaging to supply the correct amount for consumer demand. Because retailers and consumers expect shelves to be fully stocked, manufacturers tend to over-produce; when supply does outstrip demand, both the fresh produce and its packaging are often discarded.
Prediction and planning are areas that technology has already redefined in recent years, and the manufacturing industry is no different. Until fairly recently, food manufacturers were relying purely on gut feeling, sporadic weather reports and disparate systems to plan for yield and supply.
In the same way that predicting the weather is now far more data-driven than ever before, the same is true of predicting demand and matching supply for fresh food products. IoT and its network of physical devices, vehicles, sensors, appliances and other items embedded with electronics is producing vast volumes of potentially critical and valuable data for the food manufacturing industry, and the ability for these devices to connect and exchange data with the planning function is key. The future of the industry lies not simply in each smart device, but in the connectivity between them, something I refer to as 'digital string'.
These strings of data must converge at a central, visible and accessible point within the manufacturing organisation, so that planning decision-makers have this critical data delivered to them in a single dashboard. Utilising data from internal sources like sales, promotions and forecasts is normal today, but with artificial intelligence it will be possible to integrate that data with external sources like weather patterns, public events and holidays, then analyse and learn from the results to optimise the plan and reduce waste. The data is out there, ripe for the picking, and if food manufacturers can harvest it and bring it into the factory to inform planning processes, the industry can do much more to reduce waste while making significant savings.
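A minimal sketch of that planning idea follows, with purely invented numbers: fit internal sales history against an external signal (here, forecast temperature) and use the fit to set the production plan. The figures, the choice of a simple linear model and the temperature-to-demand relationship are all illustrative assumptions, not real manufacturing data.

```python
# Illustrative only: blend an internal sales baseline with an external
# signal (forecast temperature) to set a production plan.
history = [  # (avg_temp_c, units_sold) -- internal sales + external weather
    (12, 900), (15, 1050), (18, 1200), (21, 1350), (24, 1500),
]

# Fit demand = slope * temp + intercept by ordinary least squares.
n = len(history)
sx = sum(t for t, _ in history)
sy = sum(u for _, u in history)
sxx = sum(t * t for t, _ in history)
sxy = sum(t * u for t, u in history)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def planned_units(forecast_temp):
    """Production plan for a day with the given forecast temperature."""
    return round(slope * forecast_temp + intercept)

print(planned_units(27))  # plan for a hotter day than any in the history
```

A real system would of course fold in many more signals (promotions, holidays, shelf-life constraints) and a richer model, but the shape is the same: external data enters the planning function as just another feature.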
From an ideological perspective, a fundamental change is required on the part of the consumer: to reduce packaging consumption and to accept a wider spectrum of perceived 'quality' when it comes to fresh produce. Reducing food waste is everyone's responsibility, and as with many other sustainability efforts, technology is available to manufacturers and producers to help them do their bit while driving efficiencies and cost savings within their business. It's a win-win.
This is a guest blog post by Savinay Berry, VP of Cloud Services at OpenText.
Microservices are an increasingly important way of delivering an alternative to traditionally large, complex and monolithic enterprise applications. With a microservices architecture, applications can be broken down and developed through a collection of loosely coupled services that allow continuous development without having to redeploy the entire application. It is essentially a variant of service-oriented architecture (SOA).
Think like IKEA
For a simpler way to think about how a microservices architecture works, take the example of how the making and buying of furniture for the home has evolved. Traditionally, if someone needed a sofa or a dining room table, they would go out to a furniture store and buy the item for a few thousand pounds. They would get good use out of it and might buy a maintenance plan to cover any repairs.
Then along came Ingvar Kamprad, a 17-year-old Swedish boy who started a business selling replicas of his uncle Ernst's kitchen table. That company became IKEA, and it broke the traditional model of buying furniture as a complete, pre-assembled product. Instead, each item of furniture was broken up into easily assembled pieces that people could put together themselves. Manufacturing became cheaper as a result, and there was no maintenance or ongoing cost. The customer gets the modern dining room table or bookshelf they want, but at a fraction of the cost of a traditional furniture store. IKEA benefited too: the modular design simplified operating logistics, making things cheaper at the back end.
The same is now happening with software. Traditional on-premises software is expensive, with the added cost of ongoing maintenance and support. As in the IKEA analogy, software is instead broken up into easily consumable services, delivered at a lower cost.
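To make "loosely coupled services" concrete, here is a minimal sketch of one tiny service doing a single job (quoting a price) behind a small HTTP interface, with a consumer that knows nothing about it except that interface. The service name, routes and catalogue are invented for illustration; a production microservice would add real routing, error handling and deployment tooling.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical "quote" microservice: one small, independently deployable
# service with a single responsibility.
class QuoteHandler(BaseHTTPRequestHandler):
    PRICES = {"sofa": 499, "table": 199}  # illustrative catalogue

    def do_GET(self):
        item = self.path.lstrip("/")
        body = json.dumps({"item": item, "price": self.PRICES.get(item)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):  # silence per-request logging
        pass

# Run the service on any free local port, in the background.
server = HTTPServer(("127.0.0.1", 0), QuoteHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A consumer (another service, or a client) talks to it over plain HTTP.
# The two sides share nothing but this small, stable interface.
port = server.server_address[1]
quote = json.loads(urlopen(f"http://127.0.0.1:{port}/table").read())
print(quote)
server.shutdown()
```

The point of the sketch is the boundary: the quote logic can be rewritten, rescaled or redeployed without the consumer changing, which is exactly the property that lets a microservices application evolve piece by piece.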
Adoption of microservices, and cloud microservices in particular, is growing fast. Research and Markets predicts the global cloud microservices market will grow from $683.2m in 2018 to $1.88bn by 2023, driven largely by digital transformation and the retail and ecommerce sector, where customer experience is a key competitive differentiator.
A way to unleash the power of data
How does a microservices architecture work in the real world? Take the example of a large insurance company that provides homes and buildings insurance.
Previously, documents, insurance contracts and financial paperwork would be taken from the content management system and emailed to the builder. The builder would complete them and email them back. The insurance administrator would then print the email, scan the documents and put them into the content management system. This was happening on a daily basis, thousands of times a year.
Using a microservices approach, the insurance company was able to digitise the entire process. The administrator can now share documents with a couple of clicks, and the builder can edit them and have them automatically returned to the content store. There is no longer any need to print or scan documents. This is what digitising a business means in the real world. It's not just a high-level, abstract IT industry concept; it's a real application that saves time and money for enterprises.
Another example is a company in the utility sector, which will be able to use artificial intelligence (AI) to help maintenance crews search for parts. When crews search for a part, they are prompted with similar or related items, much like Amazon's recommendation engine. It is about leveraging the power of the data stored in your own systems. And while the algorithm is important, it's not the key factor. What really matters is the data; that's the true differentiator.
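As a toy illustration of that "related parts" idea, and not the utility company's actual system, even simple token overlap between a query and catalogue descriptions can surface related items. The part names are invented, and a real recommendation engine would use behavioural data and learned embeddings rather than this Jaccard-similarity stand-in.

```python
# Invented catalogue of part descriptions for illustration.
catalogue = [
    "valve seal kit 40mm",
    "pump valve seal 40mm",
    "bearing housing 60mm",
    "pump impeller bearing",
]

def tokens(text):
    """Lowercased word set for a description."""
    return set(text.lower().split())

def related(query, items, top=2):
    """Rank items by Jaccard similarity (shared words / total words)."""
    def score(item):
        q, i = tokens(query), tokens(item)
        return len(q & i) / len(q | i)
    return sorted(items, key=score, reverse=True)[:top]

print(related("valve seal 40mm", catalogue))
```

Crucially, this runs entirely on data the company already holds, which is the article's point: the differentiator is the data, not an exotic algorithm.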
Enabling innovation and transformation
But microservices aren't just about making things easier to build and maintain, and cheaper to deliver and run. They are also about what they enable enterprises to do. This approach provides the platform for businesses to completely reimagine applications and, in doing so, transform the customer experience.
A microservices architecture is more resilient and reliable. If one service goes down, the rest of the application will still be available to the customer. Microservices are also more scalable and can easily be configured to automatically scale up and down to meet peaks and troughs in demand, ensuring the customer experience is maintained without holding on to excess capacity.
Microservices also enable faster innovation, development and deployment. Developers have more freedom, because code for different services can be written in different languages. It allows for easy integration and automatic deployment of services and accelerated time-to-market for new services and products, or the ability to easily make enhancements to existing ones.
Microservices are a key enabler in the world of cloud and digital. But it’s not just about developers, code and technology. As with the cloud, it’s not simply a deployment model change – it’s a mindset and cultural change. And that’s what true digitisation provides – while inspiring new ways to work.