Data Matters


August 7, 2019  10:18 AM

The Enterprise Data Fabric: an information architecture for our times

Brian McKenna

This is a guest blogpost by Sean Martin, CTO and co-founder, Cambridge Semantics

The post-big-data landscape has been shaped by two emergent, intrinsically related forces: the predominance of cognitive computing and the unveiling of the data fabric architecture. The latter is an overlay atop the assortment of existing distributed computing technologies, tools and approaches, enabling them to interact for singular use cases across the enterprise.

Gartner describes the data fabric architecture as the means of supporting “frictionless access and sharing of data in a distributed network environment.” These decentralized data assets (and respective management systems) are joined by the data fabric architecture.

Although this architecture involves any number of competing vendors, graph technology and semantic standards play a pivotal role in its implementation. Semantic standards provide business meaning to data, while graph technology flexibly integrates data sources of any structural type; together, this tandem delivers rapid data discovery and integration across distributed computing resources.

It’s the means of understanding and assembling heterogeneous data across the fabric to make this architecture work.

Drivers

The primary driver behind the necessity of the data fabric architecture is the limitations of traditional data management options. Hadoop-inspired data lakes can co-locate disparate data successfully, but encounter difficulty actually finding and integrating datasets. The more data that disappears into them, the more difficulty organizations have governing them or achieving value. These options can sometimes excel at cheaply processing vast, simple datasets, but have limited utility when operating over complex, multiple-entity-laden data, which restricts them to only the simplest integrations.

Data warehouses can offer excellent integration performance for structured data, but they were designed for the slower pace of the pre-big-data era. They’re too inflexible and difficult to change in the face of the sophisticated and ever-increasing demands of today’s data integrations, and are poorly suited to tying together the unstructured (textual and visual) data inundating enterprises today. Cognitive computing applications like machine learning require far more data and many more intricate transformations, necessitating modern integration methods.

Semantic Graphs

The foremost benefit semantic graphs bring to the data fabric architecture is seamless data integration. This approach not only blends together various datasets, data types and structures, but also the outputs of entirely distinct toolsets and their supporting technologies. By placing a semantic graph integration layer atop this architecture, organizations can readily rectify the most fundamental differences at the data and tool levels of the underlying data technologies. Whether organizations choose different options for data virtualization, storage tiering, ETL, data quality and more, semantic graph technology can readily integrate this data for any use.

The data blending and data discovery advantages of semantic graphs are attributable to their ability to define, standardize, and harmonize the meaning of all incoming data. Moreover, they do so in terms that are comprehensible to business end users, spurring an innate understanding of relationships between data elements. The result is a rich, contextualized understanding of data’s interrelations for informed data discovery, culminating in timely data integrations for cutting-edge applications or analytics like machine learning.
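To make this concrete, here is a minimal sketch in Python using the open source rdflib library (the vocabulary and records are invented for illustration; this shows the general semantic pattern, not Cambridge Semantics’ product). Two separately managed sources are expressed against one shared vocabulary and then queried as a single graph:

    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/")
    g = Graph()

    # Source 1: a customer record from a relational CRM export
    g.add((EX.cust42, RDF.type, EX.Customer))
    g.add((EX.cust42, EX.name, Literal("Acme Ltd")))

    # Source 2: a record from a separate support-ticket system,
    # mapped onto the same shared vocabulary at load time
    g.add((EX.ticket7, RDF.type, EX.Ticket))
    g.add((EX.ticket7, EX.raisedBy, EX.cust42))
    g.add((EX.ticket7, EX.subject, Literal("Billing query")))

    # One SPARQL query now spans both sources
    query = """
        SELECT ?name ?subject WHERE {
            ?t ex:raisedBy ?c .
            ?t ex:subject ?subject .
            ?c ex:name ?name .
        }
    """
    for name, subject in g.query(query, initNs={"ex": EX}):
        print(name, "-", subject)

The point is not the particular library but the pattern: once every source speaks the same vocabulary, discovery and integration become a query rather than a bespoke ETL project.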

With the Times

Although the data fabric architecture includes a multiplicity of approaches and technologies, semantic graph technology can integrate them – and their data – for nuanced data discovery and timely data blending. This approach is adaptable to modern data management demands and makes the data fabric architecture the most suitable choice for today’s decentralized computing realities.

The knowledge graph by-product of these integrations can be quickly spun up in containers and deployed in any cloud or hybrid cloud setting that best serves germane factors such as compute functionality, regulatory compliance, or pricing. With modern pay-on-demand cloud delivery mechanisms, in which APIs and Kubernetes software enable users to automatically position their compute where needed, the data fabric architecture is becoming the most financially feasible choice for the distributed demands of the modern data ecosystem.

July 24, 2019  2:39 PM

Why empathy is key for Data Science initiatives

Brian McKenna

This is a guest blogpost by Kasia Kulma, a senior data scientist at Mango Solutions

When we think of empathy in a career, we perhaps think of a nurse with a good bedside manner, or perhaps a particularly astute manager or HR professional. Data science is probably one of the last disciplines where empathy would seem to be important. However, this misconception is one that frequently leads to the failure of data science projects – a solution that technically works but doesn’t consider the problem from the business’ point of view. After all, empathy isn’t just about compassion or sympathy, it’s the ability to see a situation from someone else’s frame of reference.

To examine the role of empathy in data science, let’s take a step back and think about the goal of data science in general. At its core, data science in the enterprise context is aimed at empowering the business to make better, evidence-based decisions. Success with a data science project isn’t just about finding a solution that works, it’s about finding one that meets the following criteria:

  • The project is completed on time, on budget, and with the features it originally set out to create
  • The project meets business goals in an implementable and measurable way
  • The project is used frequently by its intended audience, with the right support and information available

None of these are outcomes that can be achieved by a technical solution in isolation; instead, they require data scientists to approach the problem empathetically. Why? Because successful data science outcomes rely on actually understanding the business problem being solved, and on strong collaboration between the technical and business teams to ensure everyone is on the same page – all of which is essential for getting senior stakeholder buy-in.

In short, empathy factors into every stage of the process, helping create an idea of what success looks like and the business context behind it. Without this, a data scientist will not be able to understand the data in context, including technical aspects such as what defines an outlier and how it should subsequently be treated in data cleaning. The business, even with less technical understanding, will have far better insight into why data may look “wrong” than a data scientist alone could ever guess at. Finally, empathy helps build trust – critically in getting the support of stakeholders early in the process, but also in the deployment and evaluation stages.

Given the benefits, empathy is key in data science. To develop this skill, there are some simple techniques to drive more empathetic communication and successful outcomes. The three key questions that data scientists should be looking to answer are: “What do we want to achieve?” “How are we going to achieve it?” and “How can we make sure we deliver?”

What do we want to achieve?

For the first point, one approach is to apply agile development methodology to the different users of a potential solution and iterate to find the core problem – or problems – we want to solve. For each stakeholder, the data science function needs to consider what type of user they represent, what their goals are and why they want this – all in order to ensure they understand the context in which the solution needs to work. By ensuring that a solution addresses each of these users’ “stories”, data scientists are empathetically working to recognise the business context in their approach.

How are we going to achieve it?

Then it’s a case of how to go about achieving a successful outcome. One helpful way to think about it is to imagine that we are writing a function in our code: given our desired output, what are the necessary inputs? What operation does our function need to perform in order to turn one into the other? The “function” approach applies not only to the data, but also to the process of creating a solution. Data scientists should be looking at an input of “the things I need for a successful solution”, a function for “how to do it”, and then an output of the desired goal. For example, if the goal is to build a successful churn model, we need to consider high-level inputs such as sign-off from relevant stakeholders, available resources and even budget agreements that might constrain the project. Then, in the function stage, it may be time to discuss the budget and scope with senior figures, work out whether additional resources need to be hired, and handle any other items needed to drive the right output at the end. This can then be broken down into more detailed individual input-function-output processes to get desired outcomes, as the sketch below illustrates. For example, working out whether additional resources need to be hired can become a function output that in turn has a new set of relevant inputs and actions driving the solution.
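To make the analogy literal, here is a purely illustrative sketch in Python (the inputs, checks and actions are hypothetical, not a prescribed framework) of the input-function-output framing applied to planning a churn-model project:

    # Hypothetical illustration: the input -> function -> output framing
    # applied to the planning process itself, not to the data.

    def plan_churn_project(inputs):
        """Turn high-level inputs into the actions that produce the desired output."""
        actions = []
        if not inputs["stakeholder_signoff"]:
            actions.append("Discuss budget and scope with senior stakeholders")
        if inputs["available_analysts"] < inputs["analysts_needed"]:
            # This action is itself an output that becomes a new
            # input-function-output process (hiring), as described above.
            actions.append("Work out whether additional resources must be hired")
        actions.append("Agree the metric that defines a successful churn model")
        return actions

    inputs = {
        "stakeholder_signoff": False,
        "available_analysts": 1,
        "analysts_needed": 2,
    }
    for action in plan_churn_project(inputs):
        print(action)

Each action returned here can itself be decomposed the same way, which is exactly the recursive breakdown described above.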

How can we make sure we deliver?

Finally, there are questions that need to be asked in every data science project, no matter what the scope or objective. To ensure that none of them is omitted, stakeholders should form a checklist – a strategy that has been used successfully in aviation and surgery to reduce failure. For example, preparing to build a solution that suits the target environment shouldn’t be a final consideration, but a foundational part of the planning of any data science project. A good checklist for the planning stage could include:

  • Why is this solution important?
  • How would you use it?
  • What does your current solution look like?
  • What other solutions have you tried?
  • Who are the end-users?
  • Who else would benefit from this solution?

Only with this input can data scientists build a deployable model or data tool that will actually work in context, designed for its eventual users rather than for use purely in a theoretical context.

Empathy may seem an unusual skill for a data scientist; however, embracing it fits into a wider need for a culture of data science within organisations, linking business and data science teams rather than keeping them in silos. By encouraging dialogue and ensuring all data science projects are undertaken with the stakeholders in mind, data scientists have the best chance of building the most effective solutions for their businesses.


June 4, 2019  2:28 PM

Why the real value of AI in business is in automating backend tasks

Brian McKenna

For all the hype around artificial intelligence (AI), and the excitement around some of its potential – personal assistants that develop a personality, robot-assisted micro-surgery, etc. – it is arguably adding the most value to businesses in less glamorous, but ultimately more valuable, ways, says Nuxeo’s Dave Jones, in a guest blogpost.

Backend tasks are few people’s favourite part of a business. They are hugely time-consuming and rarely rewarding, but vitally important. Automating these tasks is an area where AI has the potential to add incredible value for businesses.

AI and information management

Information management is an area in which AI can deliver many benefits. AI allows organisations to streamline how they manage information, reduce storage, increase security, and deliver faster and more effective searches for content and information.

Many companies are struggling with the volume of information in modern business, and their users find it difficult to locate important information that resides across multiple customer systems and transaction repositories. The key to solving this problem is having accurate metadata about each content and data asset. This makes it easy to quickly find information, and also provides context and intelligence to support key business processes and decisions.

Metadata enrichment is one area at which AI really excels. Before AI, populating and changing metadata was a laborious task – not made any easier by the fixed metadata schemas employed by many content management systems. However, metadata schemas in an AI-infused Content Services Platform (CSP) are flexible and extensible. Far more metadata is being stored and used than ever before, so the ability to use AI to process large volumes of content and create numerous, meaningful metadata tags is a potential game-changer.

Unlocking the content in legacy systems

Another powerful way in which AI can address backend tasks is in connecting to content from multiple systems, whether on-premise or in the cloud. This ensures the content itself is left in place, while access is still provided to that content and data from the AI-infused CSP.

It also provides the ability for legacy content to make use of a modern metadata schema from the CSP – effectively enriching legacy content with metadata properties without making any changes to the legacy system at all. This is a compelling proposition in itself, but when combined with the automation of AI, even more so.

By using a CSP to pass content through an AI enrichment engine, that content can be potentially enriched with additional metadata attributes for each and every one of the files currently stored. This injects more context, intelligence, and insight into an information management ecosystem.

Using an AI-driven engine to classify content stored within legacy systems makes this much easier to do. Even simple AI tools can identify the difference between a contract and a resume, but advanced engines expand this principle to build AI models based on content specific to an organisation. These deliver much more detailed classifications than could ever be possible with generic classification.
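As a hedged illustration of the simple end of that spectrum – a toy model with invented training snippets, not any vendor’s engine – a few lines of Python with scikit-learn are enough to tell contracts from resumes and to flag low-confidence documents for human review:

    # Toy contract-vs-resume classifier; the training texts are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    docs = [
        "This agreement is entered into by and between the parties hereto",
        "The licensee shall indemnify the licensor against all claims",
        "Experienced engineer seeking new role; skills include Java and SQL",
        "Education: BSc Computer Science; employment history follows",
    ]
    labels = ["contract", "contract", "resume", "resume"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(docs, labels)

    new_doc = "The parties agree to the following terms and conditions"
    confidence = model.predict_proba([new_doc]).max()
    label = model.predict([new_doc])[0]

    # Low-confidence documents are sifted out for a human to review
    print(label if confidence > 0.6 else "needs human review")

An organisation-specific engine would be trained on its own document types, but the classify-then-route pattern is the same.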

Backend AI in action

A manufacturing firm I met with recently has been automating the classification and management of its CAD drawings. There is a misconception that AI needs to be super-intelligent to add real value. But in this example the value of AI lies not in the intelligence required to identify what qualifies as a particular kind of design drawing, but in being ‘smart enough’ to recognise the documents that definitely aren’t the right type – essentially to sift out the rubbish and allow people to focus on the relevant information much faster.

Information management and associated backend tasks may not be the most glamorous AI use cases, but done well, they can provide significant value to businesses all over the world.

Dave Jones is on the Board of Directors at the Association for Intelligent Information Management (AIIM) and is also Director of Product Marketing, Nuxeo.


April 30, 2019  11:09 AM

The UK needs to stop chasing clouds

Brian McKenna

This is a guest blogpost by David Richards, co-founder and CEO, WANdisco

At its recent Cloud Next conference Google rolled out a number of new cloud products, services and packages – all designed to improve the company’s competitive position and differentiate itself from Amazon and other peers.

As Google and its fellow American giants press on in the cloud ecosystem, rapidly expanding their already formidable market share, it raises the question of where Britain sits in the global ranks.

Gartner predicts the market for the global cloud industry will exceed $200 billion this year, yet it’s a four-horse race and very much set to stay that way.

As the industry is gathering pace across the globe, it’s only fair to ask whether the UK should be striving to secure a place in the front line?

The UK is rightly considered a world leader in technological innovation and has long been drawing in talented entrepreneurs looking to transform their ideas into successful businesses. Our digital tech sector is strong, growing several times faster than the overall economy – producing pioneering developments in fintech, healthtech and SaaS technology.

Notwithstanding our success elsewhere, we are falling behind in data and cloud operations. It’s a sector in which we do not hold a strong market position, and one that has moved too far along for us to play catch-up.

In the same way that Silicon Valley as a whole is light years ahead of other tech ecosystems – having developed its foundations far earlier – the pace setters in cloud were out of the blocks early.

Amazon was the first to create what we now know as the Infrastructure-as-a-Service cloud computing industry back in the early 2000s with the launch of Amazon Web Services. What started as an adaptation of its IT infrastructure to handle spikes in seasonal retail demand soon turned into the most dominant cloud infrastructure in the world.

Google (because of its search engine), Microsoft (trying to compete with Google’s search prowess with Bing) and Alibaba (competing with Amazon in retail) came soon after and have grown to become legitimate AWS rivals. The big four currently hold 89% of the market share, a figure that is projected only to increase.

To that end, UK companies entering the cloud market are facing a colossal uphill climb. The incumbents have engrained themselves into the system, and any challenger stands little chance – just ask IBM or Oracle.

Simply put, the UK stands little chance of breaking through in the cloud platform market. The cloud, however, democratises storage, which in turn lowers barriers to entry in new markets such as AI. Britain can turn to its strengths, and support the areas where it excels – artificial intelligence and machine learning.

AI in the UK

The UK has long been at the forefront of development in the AI industry, housing a third of Europe’s AI start-ups – twice as many as any other European country.

We are a leading hub for AI advancements in healthcare, thanks to high adoption rates in the National Health Service and close ties with top-rated medical universities trialling the latest developments in medical technology.

With the right application, AI offers a £232 billion opportunity for the UK economy over the next decade, and the government is moving in the right direction to seize the chance with both hands.

Recently, the government launched a new nation-wide programme through the Alan Turing Institute for industry-funded courses at UK universities. These fellowships are laying the foundation for the next generation of researchers, business leaders and entrepreneurs in the AI industry – building a strong pipeline for the future.

Understanding one’s role in the global tech ecosystem is the first step to success. The sooner Britain recognises where its strengths really lie, the easier the path to growth will be.

Whilst we can follow in the footsteps of the US and China in putting digital transformation at the top of the business agenda by embracing cloud adoption, we should not try to chase the tails of the major cloud giants in developing the newest cloud infrastructure.

Britain has long stood as a pioneering force in technology adoption and development, dating back to the days of Alan Turing.

The pedigree we hold in the digital industries carries immense weight, and allows the sector to work with strong foundations – whether it’s access to capital, developing talent or international connections.

While cloud technology stands as the hot topic of 2019, Britain will best serve the growth of its technology sector by doubling down on the expertise we already hold and propelling our AI standing to the next level.


April 29, 2019  3:15 PM

Data driven business is about culture, not tools

Brian McKenna

This is a guest blog post by Rich Pugh, co-founder and Chief Data Scientist, Mango Solutions

Data is the new oil, or so we are told. In some respects, this is true – successful businesses today run on data, and, like oil, data is near-useless unless it is refined and treated in the right way. But refining is a difficult process, and, with many business executives overwhelmed by the “bigness” of modern data, it’s easy to see plug-and-play business intelligence, AI or machine learning solutions as a one-stop data-to-value machine.

The problem is that all too often, these tools cannot deliver the mythical value expected of them; even if the technology finds an important and relevant correlation, businesses are unsure how to act on the information effectively and understand the full context of the finding. Insight becomes an eye-grabbing statistic in a PowerPoint presentation, or perhaps a one-off decision made based on a nugget of information, and then nothing further. It’s hard to quantify what the long-term value of this was, because the full context is missing.

That’s where data science comes in – or, more specifically, a company-wide culture of data science. Rather than just a tool to turn data into insight, data science is a way of blending together technology, data and business awareness to extract value, not just information, from data. While 81% of senior executives interviewed for a recent EY and Nimbus Ninety report agreed that data should be at the heart of all decision-making, just 31% had actually taken the step of restructuring their organisation to achieve this. That leaves a huge majority of organisations that recognise the potential of data but have yet to find a way to embed a data-driven culture within their business.

Restructuring can sound like a difficult and intensive process, but it doesn’t have to be. It’s about following a process to harness existing resources and improve collaboration with a focus around delivering value.

So where do you start? Many companies already have pockets of data science and analytics-savvy professionals dotted around their organisation, but these can be siloed by business function. These can range from product development specialists who understand how to code and develop new analytics solutions, to members of the team who excel at extracting interesting pieces of insight from vast spreadsheets. By connecting these people together into a new Community of Practice – and encouraging ongoing collaboration and connection, as well as discussion around fundamental technologies – you have already created a data science community that sits across your business.

It’s then a case of getting these people to work towards what “best practice” looks like. This requires the team to work together on a common understanding of what the business is trying to achieve and the questions you want to answer with data, and then build a structure from there for what “good” looks like. As part of this, it’s important to agree what the priorities for any projects are, and the ways in which these will be communicated back to others in the business. It’s not about enforcing a one-size-fits-all approach, but about fostering commonality and cohesion to ensure the team can agree on what needs to happen, when.

Once you have your team of data science experts, it’s time to engage with the business as a whole. Educating the business requires the whole data science team to be confident with what analytics can achieve for the business, and even more importantly, what it cannot achieve that the business might be expecting. This will then need to be communicated in a clear way – using language that the business teams will understand will help break down any preconceptions. This can be daunting, and often, data science teams will find themselves faced with a huge variety of interest levels. Many who hear about the potential of data science will feel it has little bearing on their work – and discussions about its potential will go in one ear and out the other. However, there will also be people who are inspired by what data can do for them and want to get more involved. These people can be future champions for driving a data driven culture beyond the core team.

Most importantly, the business needs to be engaged around relatable, real topics. While the data science team is educating the business, it’s also important to encourage the business to “educate” the data science team. Workshops that discuss what success looks like for each business area, which decisions shape that success, and what would be useful for improving those decisions help to transition data science from a magical black box spitting out insights into a process focused on solving real business issues. From these meetings, the data science community can prioritise and execute around the core challenges that can be addressed with data.

Finally, it’s about finding a way to quantify the value the data science community now brings to the business and make success a repeatable part of the business process. The individuals from the business team who were initially positive about the potential of data science can be fantastic advocates here, explaining in business terms what value a solution has brought – and how these solutions continue to transform business decision making. This can then provide a springboard for targeting more sceptical business departments and scaling a culture of data driven success throughout the organisation.

By adopting a data driven culture, businesses stand a far greater chance of success in the Information Age than by investing in plug-and-play solutions and hoping for the best. By building data science solutions around real business problems, in conjunction with the whole business team, organisations are more likely to see the value thanks to an ongoing culture of problem solving with data science.


April 20, 2019  9:38 AM

How an 18th century Maths puzzle solves 21st century problems

Brian McKenna

This is a guest blogpost by Emil Eifrem, CEO, Neo4j, in which he explains how an elegant solution to the mathematical puzzle of traversing each of the bridges of Prussia’s Königsberg exactly once resulted in graph theory

It’s the anniversary this month (15 April) of the birth of Swiss mathematician Leonhard Euler (1707-1783). While some of us already know him for introducing huge chunks of modern mathematical terminology and for his breakthrough work in mechanics, fluid dynamics, optics and astronomy, more and more of us are beginning to appreciate him for one of his lesser-known achievements: inventing graph theory.

The story begins with Euler taking on a popular problem of the day – the “seven bridges of Königsberg” problem. The challenge was to see whether it was possible to visit all four areas of that city while crossing each bridge only once.

Formalising relationships in data to reveal hidden structures

To get his answer, Euler was able to abstract the problem. And today, we still talk of an Eulerian path, defined as one in which every edge in a network is traversed exactly once – just as the puzzle demands – forming an Eulerian circuit if you start and finish at the same place.

This has since been applied, very successfully, to derive the most efficient routes in scenarios such as snow ploughing and mail delivery, and Eulerian paths are also used by other algorithms for processing data in tree structures. Euler’s key insight was that only the connections between the land masses were relevant, not their shapes or positions. Publishing his solution in 1736, he created a whole new subset of mathematics – a framework that can be applied to any connected set of components. He had discovered a way to quantify and measure connected data that still works today – and because of this foundational work we can also use his way of formalising relationships in data to reveal hidden structures, infer group dynamics, and even predict behaviour.
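Euler’s criterion is simple enough to check in a few lines of code. The sketch below (plain Python; the four land masses are labelled A to D purely for illustration) counts how many land masses touch an odd number of bridges – an Eulerian path exists in a connected network only if that count is 0 or 2:

    from collections import Counter

    # The seven bridges of Königsberg as edges between land masses A-D
    bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
               ("A", "D"), ("B", "D"), ("C", "D")]

    degree = Counter()
    for u, v in bridges:
        degree[u] += 1
        degree[v] += 1

    odd_nodes = [n for n, d in degree.items() if d % 2 == 1]
    print("Eulerian path possible:", len(odd_nodes) in (0, 2))  # -> False

All four land masses touch an odd number of bridges, which is why the walk the citizens of Königsberg were looking for cannot exist.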

It wasn’t until 1936, two hundred years later, that the first textbook was written on what Euler did with the Seven Bridges problem and the concept of graphs came into circulation. It took 40 more years for network science as a discipline and applied graph analytics to begin to emerge.

Looking at real-world puzzles analytically

Fast forward to today: Euler gave us a powerful framework, but graphs, in software terms, only really became prolific in today’s interconnected, data-driven world. Our unprecedented ability to collect, share and analyse massive amounts of connected data – accompanied by an unrelenting drive to digitise more and more aspects of our lives, as well as growth in computing power and accessible storage – presents the perfect context for networks to emerge. These networks contain a wealth of information that we can tease out using graph techniques.

Euler is well known to generations of mathematicians and computer scientists – but even among those who are familiar with him, his ideas on graphs aren’t as well known as they should be. Yet every graph database user is deeply indebted to him for his insights. Just as Euler’s life’s work was about looking at real-world puzzles analytically, so the graph software developers of today are solving highly complex, real-world issues in areas such as AI, machine learning, sophisticated recommendations and fraud – and finding practical, graph-based ways to fix them.

Here’s to the man who solved the Seven Bridges problem and gave us a great way to understand and improve the world. Happy 312th birthday, Herr Euler!


January 23, 2019  11:09 AM

Using business intelligence tools to prevent team burnout

Brian McKenna

This is a guest blogpost by Andrew Filev, founder and CEO, Wrike.

Since no business wants to leave money on the table, it’s tempting to try to squeeze as much productivity as possible out of your workforce to execute every campaign, programme, and project available.

The problem with this approach is that it’s unsustainable. In a survey of 1,500 workers commissioned by Wrike last year, nearly 26% of respondents said, “If my stress levels don’t change, I’ll burn out in the next 12 months.” Work is moving faster, demand on workers is increasing, and digital, while a massive enabler of growth, is also contributing to our increasing stress.

A core way to fight this stress epidemic in 2019 is for businesses to bring more intelligence to work. This will enable executives and managers to make smarter decisions on effort allocation to drive desired results – rather than wasting time and energy on what amounts to mere activity.

Assessing which work makes the biggest impact is an impossible task for many teams without business intelligence. If your company is managing work through emails and spreadsheets, project details are rarely up to date, and even when details are current, they are often too fragmented to tie to business impact.

By now, you’ve probably got your strategic plan for 2019 completed, and are wondering how you’re going to achieve all your goals – and how you’re going to keep your team focused on the most impactful ones. The first step, in my view, is to leverage a robust collaborative work management (CWM) platform. This will allow your teams to optimise processes with automation, templates, and real-time workflow analytics. From there, the data collected within the CWM can be connected to a business intelligence (BI) solution, where it can be translated into actionable insights on project efficiency and ROI.

These insights aren’t just valuable for the sanity of your workforce (though sanity is a noble reason on its own). They’re essential in making your business more efficient and bringing continuous improvement – fuelled by real data and metrics – into your culture. Marketers use analytics to measure the ROI of an ad, and sales leaders measure the effectiveness of specific strategies and tactics against their revenue goals. But for a lot of other knowledge work positions, measuring how work impacts specific company OKRs [Objectives and Key Results] isn’t so clear.

The insights that executives can glean from integrated CWM and BI platforms are unprecedented in decision making. For example, if a high-performing business unit shows signs of bottlenecks in a particular phase of work, you know it’s time to boost its headcount. Conversely, if a major initiative is shown to take a lot of work but not move the needle, maybe it’s time to reassign that talent to more profitable projects. This may have been possible before, but only after problems had made a noticeable impact on deliverables.
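As a hedged sketch of what surfacing such a bottleneck might look like (the column names and figures below are invented; real CWM and BI tools expose far richer data), a few lines of pandas are enough to show which phase of work is dragging:

    # Toy example: find the slowest phase from CWM task records (invented data)
    import pandas as pd

    tasks = pd.DataFrame({
        "phase":      ["brief", "draft", "review", "draft", "review", "brief"],
        "days_taken": [1, 3, 9, 4, 11, 2],
    })

    avg_days = tasks.groupby("phase")["days_taken"].mean()
    print(avg_days)
    print("Bottleneck phase:", avg_days.idxmax())  # -> review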

Traditional BI integrations with ERP, CRM, and finance systems aren’t enough to fuel these insights. Nor are rigid legacy PPM [Project Portfolio Management] systems, which were not built to manage the structured, unstructured and collaborative ways that most teams execute work today – a limitation that has confined their adoption primarily to formal project managers and the PMO.

While those tools are all important, CWM software offers flexibility across teams, departments, and projects, making company-wide adoption not only possible, but far more likely. Once deployed, the data CWMs collect becomes invaluable for measuring the return on effort throughout an organisation with real-time updates about time to completion, delays, effort, and team effectiveness. It offers the ability to bring hard numbers to business operations that had previously been left to guesswork.

Connecting work to impact with business intelligence is a critical step for any business trying to stay competitive through digital transformation, and for bringing excellence to operations at both B2C and B2B companies under pressure to deliver on-demand products to customers. As a business scales, you can’t throw projects at the wall and see what sticks – that will ultimately burn out teams and lead to a lot of misguided effort. BI and CWM technology is the smartest way to keep teams focused on the most important work in 2019, so their workloads are balanced and their stress levels are low.


November 9, 2018  8:51 AM

Once more on Dominic Raab at Tech Nation

Brian McKenna

Dominic Raab, the Secretary of State for Exiting the EU, has been in hot water for his late-dawning admission of the importance of the Dover-Calais route to the UK’s trade in goods.

Raab was speaking at a Tech Nation event in London. In an aside, he candidly admitted he had not previously fully appreciated the extent of how reliant the UK’s trade in goods is on the Calais to Dover route.

Here is the full quote:

“We want a bespoke arrangement on goods which recognises the peculiar, frankly, geographic and economic entity that is the United Kingdom. We are, and I [hadn’t] quite understood the full extent of this, but if you look at the UK, and how we trade in goods, we are super [?] reliant on the Dover-Calais crossing, and that is one of the reasons why – and there has been a lot of controversy on this – we wanted to make sure we have a specific and proximate relationship to the EU to ensure frictionless trade at the border, particularly for just-in-time manufactured goods, whether pharmaceutical goods or perishable goods like food”.

Raab’s political enemies seized on these words as indicating unfitness for his role, ignorance of British geography, failure to do his homework before committing to Brexit ideologically (he is, unlike Theresa May and Philip Hammond, a dyed-in-the-wool Brexiteer), and so on.

Then again, you could say, from a Remain point of view: “better late than never”. Or, from a Brexit point of view, that this half-swallowed, fleeting “gaffe” is part of a cunning plan to persuade Tory dissenters to sign up to the Chequers plan – which is what Raab was advocating.

Or you could just say he was speaking in a relaxed and colloquial manner before a modest gathering of mostly tech entrepreneurs, whose focus is mostly software and services, not physical things, and was just being honest.

Or, yet again, this is an example of the ostensible self-deprecation that used (at least) to be characteristic of university dons professing ignorance of something they actually know a great deal about. (“I really hadn’t appreciated the full significance of Derridean deconstruction until I applied it to that which is not said in the texts of Jane Austen. Pass the port”).

Personally, I would be less sanguine than he expressed himself as being about the future of our AI sector, post-Brexit.

Perhaps Raab will be less candid in future. Politics is a murky business.


November 2, 2018  4:13 PM

The Bond of Intelligence at Oracle Open World

Brian McKenna

Oracle Open World 2018 seems a world away. Especially when you have had a few days’ furlough in between.

Mark Hurd’s second keynote at OOW convoked a common room of former spies from the US and UK intelligence world. It was an interesting session, and I thought I’d offer a few words by way of reflection. A very good account of the session is given by Rebecca Hill over at The Register.

Joining Hurd on stage were Oracle’s own chief corporate architect, Edward Screven; Michael Hayden, former director of the CIA; Jeh Johnson, former head of the Department of Homeland Security; and Sir John Scarlett, former head of our own Secret Intelligence Service, also known as MI6.

I’ll put the comments of the Americans to one side. John Scarlett said an arresting thing about how British people typically see the state as benign, whereas Americans do not – partly, he joked, as a result of the actions that provoked their rebellion in 1776. He didn’t mention the British burning down of the White House in 1814, which Malcolm Tucker said we would do again, in the film In the Loop. That might have been a joke too far.

Sir John said: “in the UK, people don’t look at the state as a fundamental threat to our liberties. In the US you have a different mentality – partly because of us”.

But are we really so insouciant about the state and its surveillance? There are many contumacious traditions on these islands. The London Corresponding Society, of which Edward Thompson writes in his magisterial The Making of the English Working Class, is one such tradition on the radical left, in the late eighteenth century. And, on the right, there are libertarian Tory anarchists aplenty, from Jonathan Swift to David Davis, the “Brexit Bulldog”. And these anti-statists are just as British as the undoubtedly fine men and women of SIS.

Scarlett did, though, have interesting things to say. For instance, about the Cold War being a riskier time than now. “There is a tendency now to say we live in a world that is more unpredictable than previously. But the Cold War was not a time of predictability and stability. There was the nuclear war scare in 1983. It was the real thing, and not many people know about it”.

However, he said he does now see a perilous return of Great Power tensions and rivalries and that technology is a great leveller in that respect, with the relationship between the US and China being at the centre of that new “great game”. He also said that there is not as yet a “sense of the rules of the game to agree on whether a cyber-attack is an act of war”. And added: “I am wary of talk of a ‘cyber 9-11’. I think that is to think in old ways”.

Later in the discussion he said he comes from “a world of security and secrecy where you protect what you really need to protect. That is critical. Attack and defence will continue to evolve, but the philosophical point is you need to be completely clear about what you really, really need to protect. You can’t protect everything”.

One small point. I was amused and interested to hear Mark Hurd pronounce SIS as “Sis”. Was John Scarlett too British and polite to correct him? Or is our security service affectionately thus known among the security cognoscenti on the other side of the Pond?

All in all, it was an interesting session. And it caused me to re-read the chapter on the special relationship between the UK and US intelligence communities in Christopher Hitchens’ Blood, Class and Empire. There, in ‘The Bond of Intelligence’, he writes (of an episode in Nelson Aldrich’s book Old Money):

“it is difficult to think of any more harmonious collusion between unequals, or any more friendly rivalry, than that existing between the American and British “cousins” at this key moment in a just war [the Second World War]. In later and more caricatured forms it has furnished moments of semi-affectionate confusion in several score novels and films: the American doing his damnedest to choke down the school-dinner food in his plummy colleague’s Pall Mall Club; the Englishman trying to get a scotch without ice in Georgetown. It is the foundation of James Bond’s husky comradeship with Felix Leiter, and of numerous slightly more awkward episodes in the works of John le Carré”.



November 1, 2018  3:01 PM

How data is driving the future of invention

Brian McKenna

This is a guest blogpost by Julian Nolan, CEO, Iprova

A technology revolution is taking place in the research and development (R&D) departments of businesses around the world. Driven by data, machine learning and algorithms, artificial intelligence (AI) is helping scientists to invent new and better products faster and more effectively. But how is this possible and why is it necessary?

Invention has long been thought of as the product of great minds: the result of well-rounded scholars and thinkers like Leonardo Da Vinci and Thomas Edison making synaptic links between ideas that most people would never consider. And for hundreds of years, this has indeed been the case.

However, the times are changing, and we’re currently in a position where information is growing exponentially, yet innovation and invention have slowed. Great minds are still behind new products and services, but the vast quantity of information now available to mankind exceeds the grasp of a single researcher or R&D team — particularly as many researchers specialise in narrow fields of expertise rather than in multiple disciplines. Developments outside of those fields are often unknown, even though they may be relevant.

As such, we find that many new patented inventions are not the result of making truly novel links between concepts, but rather a linear step forward in the evolution of a product line.

This is now changing, with artificial intelligence being put at the core of the invention process itself. At Iprova we have developed a technology that uses advanced algorithms and machine learning to find inventive triggers in a vast array of information sources, from new scientific papers to market trend research, across a broad spectrum of industries.

This technology allows us to review the data in real time and make inventive connections. That’s why we are able to spot advancements in medical diagnostics and sensor technology and relate them to autonomous vehicles, for example.
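Iprova’s technology is proprietary, but as a purely hypothetical sketch of the general idea (the snippets, fields and threshold are invented, and the similarity measure is deliberately crude), cross-domain connections can be surfaced by comparing documents from different fields:

    # Toy illustration: flag cross-domain document pairs that share vocabulary.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = {
        "medical":    "low-power optical sensor detects blood oxygen changes",
        "automotive": "cabin optical sensor monitors driver alertness and fatigue",
        "retail":     "loyalty scheme pricing model for seasonal promotions",
    }

    fields = list(docs)
    vectors = TfidfVectorizer().fit_transform(docs.values())
    sim = cosine_similarity(vectors)

    for i in range(len(fields)):
        for j in range(i + 1, len(fields)):
            if sim[i, j] > 0.1:  # arbitrary threshold for this toy example
                print("Possible inventive trigger:", fields[i], "<->", fields[j])

A production system would work over millions of documents with far more sophisticated signals, but the principle – relating advances in one field to problems in another – is the same.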

According to the European Patent Office (EPO), the typical patenting process is 3–4 years. When you consider that the typical research process from conception to invention takes place over a similar amount of time, most companies are looking at a minimum of six years to bring products to market.

This is where machine learning makes a big difference. Our own technology reviews huge amounts of data and identifies new inventive signals at high speed, which means that our invention developers can take an idea and turn it into a patentable invention in only a couple of weeks — significantly reducing the overall lead time and research costs of inventions.

Thinking back to Da Vinci or Edison, the only reason we still remember their names today is that their inventions were groundbreaking at the time. Others may have been working on similar creations, but their names didn’t make history for one simple reason: they weren’t first. Fast forward to today, and being first is all businesses care about when it comes to taking new products to market. Yet, in the age of the data explosion, this can be achieved in only one way – using artificial intelligence at the invention stage itself.

