Quocirca Insights


May 9, 2017  8:58 AM

GDPR: Why print is a crucial element of endpoint security

Louella Fernandes
GDPR, MPS, Security

The EU General Data Protection Regulation (GDPR), which takes effect on 25th May 2018, could prove to be a catalyst to change the existing haphazard approach to print security.

Networked printers and multifunction printers (MFPs) are often overlooked when it comes to wider information security measures. Yet these devices store and process data, and as intelligent devices have the same security vulnerabilities as any other networked endpoint. With Quocirca’s recent research revealing that almost two thirds of large organisations have experienced a print-related data breach [1], organisations cannot afford to be complacent. The biggest incentive to rethink print security is the substantial potential fines imposed by the GDPR. Infringement can attract a fine of up to 4% of total global annual turnover or €20m (whichever is the higher).

Securing the print environment

Today’s smart MFPs have evolved into sophisticated document processing hubs that, in addition to print and copy, enable the capture, routing and storage of information. However, as intelligent, networked devices, they have several points of vulnerability. A printer or MFP is effectively an Internet of Things (IoT) device and, left unsecured, is an open door into the entire corporate network. Without the appropriate controls, information stored on the device or in transit to and from it can be accessed by unauthorised users. The risks are real: recent Quocirca research indicates that almost two thirds of large organisations have suffered a print-related data breach.

There are two key issues – the printer/MFP as an access point to the network, and the printer/MFP as a storage device for “personally identifiable information” (PII).

Mitigating the print security risk and addressing GDPR compliance

As critical endpoints, printers and MFPs must be part of an overall information security strategy. This should ensure that all networked printers and MFPs are protected at a device, document and user level. This means, for instance, that data is encrypted in transmission, hard drives are encrypted and overwritten, print jobs are only released to authorised users and devices are protected from malware.
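To make the “release only to authorised users” control concrete, here is a minimal, purely illustrative pull-printing sketch in Python. The job store and badge scheme are hypothetical; real MPS solutions implement this in hardened print servers and device firmware.

```python
import hashlib
from typing import Optional

# Hypothetical in-memory job store; real solutions hold jobs on a hardened print server.
held_jobs = {}  # job_id -> (owner_badge_hash, job_payload)

def submit_job(job_id: str, badge_id: str, payload: bytes) -> None:
    """Hold the print job until its owner authenticates at a device."""
    badge_hash = hashlib.sha256(badge_id.encode()).hexdigest()
    held_jobs[job_id] = (badge_hash, payload)

def release_job(job_id: str, presented_badge: str) -> Optional[bytes]:
    """Release the job only if the badge presented at the device matches the owner's."""
    owner_hash, payload = held_jobs.get(job_id, (None, None))
    if owner_hash == hashlib.sha256(presented_badge.encode()).hexdigest():
        del held_jobs[job_id]  # the job leaves the queue once printed
        return payload
    return None  # unauthorised: nothing is printed, the job stays held
```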

Many organisations may believe that they are covered by existing technology, but in many cases this does not protect against the latest threats. As a result, operating a large, mixed fleet of old and new devices can leave gaping security holes.

Given the complexity of print security in large organisations, particularly those with a diverse fleet, Quocirca recommends seeking guidance from vendors that understand the internal and external risks and the risk of unprotected data on printer/MFP devices. Organisations should select vendors that can address both legacy and new devices and offer solutions for encryption, fleet visibility and intelligent tracking of all device usage. This should ensure the ability to track what information is being printed or scanned, where and on what device, enabling faster breach remediation.

Managed print service (MPS) providers should be the first port of call, as they are best positioned to advise on print security technology. Advanced managed print security services, such as those offered by HP, Lexmark, Ricoh and Xerox, aim to improve resilience against hacking attempts on devices, rapidly detect malicious threats, continually monitor the print infrastructure and enhance security policies and employee awareness.

Look for comprehensive print security services that offer:

  • Assessment: A full security assessment of the printer infrastructure to identify any security gaps in the existing device fleet. This should be part of the broader Data Protection Impact Assessment (DPIA) that an organisation may conduct internally or using external providers. Recommendations can be made for ensuring all devices use data encryption, user access control and features such as hardware disk overwrite (the erasure of information stored on the MFP hard disk). Also look to use endpoint data loss prevention (DLP) tools at this stage to gain insight into what PII is likely to be transferred via an MFP (for instance, scanning personal information via the MFP to email or cloud storage).
  • Monitoring: Ongoing, proactive monitoring ensures devices are being used appropriately, in accordance with organisational policies, and helps detect breaches. More advanced print security controls use run-time intrusion detection. Integration with Security Information and Event Management (SIEM) systems can help accelerate the time to identify and respond to a data breach, which is key to GDPR compliance (see the sketch after this list). Consider third-party managed services support in order to streamline data logging and security intelligence gathering.
  • Reporting: GDPR’s demanding reporting requirements can be addressed through reporting usage by device and user. This will highlight any non-compliant behaviour or ‘gaps’ in controls so that they can be identified and addressed, and allow audit trails to be created to support the demonstration of compliance.
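As an illustration of the kind of SIEM integration mentioned under Monitoring, the sketch below forwards a print audit event as a syslog message over UDP. It is a minimal, hypothetical example: the collector host, port and field names are assumptions, not any particular vendor’s format.

```python
import socket
from datetime import datetime, timezone

SIEM_HOST, SIEM_PORT = "siem.example.internal", 514  # hypothetical syslog collector

def log_print_event(user: str, device: str, doc: str, pages: int) -> None:
    """Send a print job record to the SIEM as a syslog line (facility local0, severity info)."""
    ts = datetime.now(timezone.utc).strftime("%b %d %H:%M:%S")
    msg = f'<134>{ts} printaudit: user={user} device={device} doc="{doc}" pages={pages}'
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg.encode(), (SIEM_HOST, SIEM_PORT))

log_print_event("jsmith", "MFP-3F-01", "payroll_2017.xlsx", 12)
```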

Conclusion

GDPR is a reminder that organisations should proactively assess their security position. Organisations must move quickly to understand the legislation and put appropriate measures in place. Ultimately print security is part of a broader GDPR compliance exercise, and it is vital that organisations act now to evaluate the security of their print infrastructure.

For more information on the steps that should be taken to protect the print environment in light of GDPR, please contact Louella.Fernandes@quocirca.com

Further reading:

[1] http://quocirca.com/content/print-security-imperative-iot-era

http://www.computerweekly.com/blog/Quocirca-Insights/Quocircas-whistle-stop-tour-of-the-GDPR

May 7, 2017  9:02 AM

Email security retrospection

Rob Bamforth

Keeping anything safe and secure involves multiple considerations. Avoid putting yourself in danger, put up a protective ‘shield’, detect when that is compromised, take mitigating action.

When it comes to communication in the modern hyperconnected world, built on open protocols, it’s much harder to avoid danger. Hence shields and detection form the most significant part of the online security proposition.

Mitigation is another matter. While some of it can be accomplished financially or through after-the-event actions such as insurance claims, smarter, more holistic use of technology can also offer useful and timely support, reducing the negative impact of incidents.


Securing email

Consider email. From a business perspective, many will feel it is a double-edged sword – can’t live with it, can’t live without it. The general expectation with email is that it should work, be used sensibly to deliver important messages which arrive promptly and be addressed to the correct people! Emails should be uncluttered by lots of surrounding dross, especially malware.

The reality is, of course, all too often the exact opposite. While the internet might have robust survival built into its core packet-switching protocols, some of the applications layered over it can be flimsy at times. Email systems need enhancing, at the very least with anti-spam and anti-malware shields and data leak prevention. The major security players have built up considerable expertise in this sector, including Trend Micro, Sophos, McAfee and Symantec.

However, there is no such thing as 100% protection. Given email’s significance and business dependence on it, many could benefit from going further with additional layers of defence. This does not mean a protectionist approach of ‘closing the door’, but better monitoring of communication flow, and critically, an ability to learn, adapt and follow up in real time.

Recognised malicious software should be detected by antivirus scanners and stopped on entry to an organisation, well before delivery to its final destination inbox. But this is a moving feast: new malware appears before scanners are updated, meaning some malicious traffic will already have passed through the secure perimeter.

Email retrospection

This is where solutions that take a retrospective view, such as Retarus’ Patient Zero Detection, can add value to the internal flow of email within an organisation. In addition to perimeter protection, it uses an innovative follow-up approach based on post-delivery identification of attacks. This takes advantage of the time lag between an email entering an organisation’s email system and being accessed. It applies a trace to all incoming messages so that affected messages can be identified rapidly, without resorting to searching all inboxes.

Imagine the scenario: an email carrying a new virus that is not picked up by the perimeter protection passes into the organisation and through to inboxes without being noticed. When it is accessed, by one recipient or another, the problem will manifest itself…or perhaps not, but the damage is done.

With the Patient Zero Detection approach, each incoming email is given a unique digital fingerprint/hash code so that it can be traced. If an email containing a new virus gets through, but the virus is subsequently identified, recipients can be identified retrospectively. Administrators (and if required, recipients) can be immediately warned and appropriate action taken to prevent further impact.
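Conceptually, the fingerprint-and-lookup idea works something like the Python sketch below. This is an illustration of the principle only, not Retarus’ actual implementation; the store and function names are invented.

```python
import hashlib

# Trace store: fingerprint of each delivered message -> who received it.
delivered: dict[str, list[str]] = {}

def record_delivery(raw_message: bytes, recipients: list[str]) -> None:
    """Fingerprint every message at the gateway as it is delivered."""
    digest = hashlib.sha256(raw_message).hexdigest()
    delivered.setdefault(digest, []).extend(recipients)

def find_patients(flagged_sample: bytes) -> list[str]:
    """When a later signature update flags a sample, find who already received it."""
    return delivered.get(hashlib.sha256(flagged_sample).hexdigest(), [])
```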

This approach not only supports rapid and targeted responses to potential problems, it also allows for more forensic exploration of the flow of messages – both safe ones and those containing malware. It is not foolproof, and needs to be part of a wider email security strategy. But it does provide an additional layer of support in avoiding the worst possible impacts of malware and user behaviour.

Security strategy – a portfolio of tools and educated users

It is often said that security is everybody’s responsibility. This is true, but everybody also needs as much automated support as possible. This means not only looking at protection and detection, but also remedial action. Tackling all of email’s security challenges requires a range of tools, addressing every stage of the communications flow. That should help ensure that one side of the email ‘sword’ is sharper than the other!


May 4, 2017  9:26 AM

Quocirca’s whistle stop tour of the GDPR

Bob Tarzey

Computer Weekly has now published Quocirca’s buyer’s guide to the General Data Protection Regulation (GDPR), Dealing with data under GDPR. The guide outlines how mid-market organisations can reduce the risk of potentially big fines for mishandling personal data; either by taking themselves out of scope for the regulation or outsourcing the administration and data security requirements.

For those still trying to make sense of it all, here is Quocirca’s whistle-stop tour of the GDPR:

The GDPR applies to data controllers (organisations that collect and store personal data to support business processes) and data processors (third parties that process data on behalf of data controllers). The regulation applies to any organisation that deals with data regarding EU citizens, whether it is based in the EU or beyond its borders.

  • Personal data is anything that can be used to directly or indirectly identify a person; e.g. names, photos, email addresses, social media posts, medical records and IP addresses (so simply gathering information on devices via an IoT application may bring an organisation into scope).
  • The maximum fines are big; the greater of 4% of annual global turnover or €20 million. Fines can be levied both for data breaches and for failing to meet administrative requirements.
  • Privacy by design and by default must be built in to relevant processes and applications (in other words, the systems that process data must be secure and timely data breach detection capabilities must be in place).
  • Data Protection Impact Assessments (DPIA) of the risk to data subjects (the likes of you and me) may be required before personal data is processed, along with bi-annual Data Protection Compliance Reviews.
  • Consent must be obtained from data subjects before their personal data is processed.
  • When data is leaked, there must be timely breach notification to both data subjects and the relevant authorities (in the UK the Information Commissioner’s Office or ICO).
  • Data subjects have a right to access their data and for it to be supplied to them in a form that enables data portability.
  • Data subjects can request data erasure (the so-called right to be forgotten). This is not an absolute right; there are statutory obligations to keep certain data, and retention is allowed for legitimate research purposes.
  • Only organisations that conduct regular and systematic monitoring of data subjects on a large scale need to appoint a Data Protection Officer (DPO).

Quocirca’s buyer’s guide to the GDPR, Dealing with data under GDPR, can be viewed on Computer Weekly at this link: http://www.computerweekly.com/feature/Dealing-with-data-under-GDPR


May 3, 2017  8:48 AM

Enterprise video is not always a conference

Rob Bamforth

Has video conferencing peaked? Not really, but video is evolving. This earlier blog outlines why the traditional view of video conferencing might make it appear to have peaked, particularly in the market for expensive and specialised endpoints.

The business use of video, addressing specific needs and applications, is however definitely growing. The sector is evolving into a virtual model, centred around software and services. Video technology has become more affordable, with cameras either cheap accessories or a standard element of most desktop or mobile devices. It is also now benefiting from a cloud or subscription model for service delivery. This makes video connections simpler to access, with scalable adoption so its use can be better aligned to business needs.

Video is shifting from ‘souped up’ phone calls using specialised endpoints to something that can add value to any business process, anywhere. No longer video ‘conferencing’, but video enhanced applications using open standard technology and networks.

Some of this has been catalysed by consumer adoption of video, but enterprise video does not always follow consumer user experiences. Specialised, vertical applications are also increasingly benefiting from the incorporation and integration of video. This is not simply about the endpoints, but the end-to-end integration required to fully exploit visual media in a broader context of available data. The cloud is now fundamental to ensuring that the reach of visual media extends to match the business need.

Video in the wild

Benefits from in-office use cases are one thing, but video really comes into its own out in the field. A good example is the body-worn cameras used by the emergency services, in particular police forces, such as the Si500 video camera/microphone/radio speaker from Motorola Solutions. This endpoint is similar in concept to action cameras and dash cams, but it is clearly more rugged and is designed to enable officers to collect and stream live rich media.

The endpoint device is smart in itself. However, its real value comes from being used in combination with Motorola Solutions’ cloud-based digital evidence management service, Command Central Vault. Visual content is easily gathered and stored in a secure and compliant environment. It can then be immediately accessed and used by all parts of the operational group, wherever they are. This combines a reassuringly simple and robust user experience with the potential for smart use of big data by the organisation. Reassuring too for all who depend on such services for public safety.

While the cost of endpoints and connectivity has fallen, there still needs to be a purpose to justify the effort. Holding meetings (conferences) remotely to save travel time and money will only go so far. The real benefits from visual information come from making business processes more effective or efficient, and making participants more comfortable, safe or involved. Organisations need to look at video as a strategic consideration, not simply a communications tool.

Video conferencing may not strictly have peaked, but the use of video is entering a new phase.


May 3, 2017  8:47 AM

Has video conferencing peaked?

Rob Bamforth

In some respects, it may have. But before there are panicked responses (or flaming torches and pitchforks) from many across the video vendor sector, let’s look at why this might not be an entirely bad thing.


Around 2011, many industry analysts were predicting strong growth in enterprise video conferencing market revenues, with the consensus reaching over $5B by 2016. The reality has probably been a bit below that. There has been a dip in total market sales over a couple of the intervening years, and certain video endpoint technologies (telepresence and desktop systems) have not done as well as some hoped or expected.

This does not mean video overall is unpopular. True, it seems to be taking some people a bit longer than anticipated to ‘get over’ being on camera. There also remains a persistent uncertainty that everything will always work smoothly, especially when external video connections are involved. However, as consumers, the younger generation has casually accepted video, the older generation uses video to stay in touch with distant relatives, and those in between are getting accustomed to less formality at work. Together, this should mean that video becomes more widely acceptable as a tool in the workplace.

So why not exponential growth in revenues?

Cost reduction is potentially a reason, but adoption is also being held back – not by lack of bandwidth or usability, but by lack of access and application. However, the market is undergoing a series of changes which will improve matters. The proprietary and sometimes clumsy nature of engaging with video conferencing systems is evaporating fast. Interoperability and ease of use have been tackled on the technical front. Subscription or operational models of pricing have eased commercial adoption.

Better, smarter, high resolution cameras have become a low-cost desktop accessory and high quality audio is more commonplace. While fully immersive telepresence systems have sold well in certain quarters, investment costs make them exclusive to high end markets and a narrow group of relatively infrequent users. Elsewhere much video technology is moving into everyday, rather than exceptional, use.

However, it is not hardware advances that are driving video’s progress, but software, services and end-to-end applications. Increasingly, video capability is being delivered as a cloud-enabled service. This also allows it to be accessed and used anywhere, often ‘embedded’ to enhance applications. This is far better than being seen as a separate medium to be unified with other forms of communications.

This encourages behaviour change

Video is shifting from being focused on specialised endpoints delivering ‘souped up’ phone calls over dedicated network capacity, to being something that can seamlessly add value to any business process, anywhere. This is no longer video ‘conferencing’, but video enhanced applications using open standard technology and networks.

These applications are not simply about establishing a connection or ‘communication’, but reaching a ‘conclusion’. The idea is to get something of value accomplished. This might be a broad or horizontal application, such as ‘better collaboration’, or a vertically aligned one, such as tackling crime.

Better collaboration is not just people communicating on a pre-arranged schedule – conferencing – but all participants enjoying ad hoc rich media involvement and being able to focus on and work towards a goal. This implies access to video by anyone, anytime, anywhere, on any device. Plus it must be seamless. This is best accomplished by shifting all the complexity into the cloud and integrating smartly to provide as close to a simple, reliable and zero-touch experience as possible.

Silver linings

Many of the traditional business video conferencing players have been moving towards the cloud in the last couple of years. Other companies have focused there from the start. The recent integration of Blue Jeans Network’s onVideo and Primetime products with Workplace by Facebook indicates a strong commitment to video on its enterprise platform and increases the pervasiveness of video by embedding it in other applications and delivering it as a service.

There are others too in the enterprise video software and cloud camp, such as Vidyo and StarLeaf. And then there is Skype. Microsoft has clearly moved this proposition from a consumer play into enterprise video contention. It is also working more closely with industry stalwarts like Polycom, as well as prompting integration and interoperability programmes with the likes of Cisco and Lifesize.

Video conferencing may have reached an inflexion point, but video innovation and adoption is showing no signs of slowing. Enterprises should no longer view video as a tool for a set group to do ‘conferencing’ with dedicated equipment. It is something to be delivered anywhere and everywhere, integrated as a service to provide organisational value and individual ease of use.

To deliver this, enterprises should look to the cloud for video, but also look to those who deliver it as a solution to a business problem and not simply a way to add head and shoulders onto a phone call. Video conferencing may not strictly have peaked, but the use of video is entering a new phase.


April 20, 2017  7:27 AM

Winning the Domain Game

Bob Tarzey

Over the last quarter century the Internet has become a fundamental utility that businesses, governments and consumers rely on; being offline is less and less acceptable. And yet a 2017 Quocirca research report, Winning the Domain Game (sponsored by Neustar), shows that 72% of UK businesses face internet downtime regularly or occasionally; 61% suffer performance problems.

The problems blamed for this range from server downtime to DDoS attacks, with around one third citing the domain name system (DNS) for at least some of their internet access woes: DNS itself suffers from downtime, attacks and other inefficiencies. DNS is the Internet’s own fundamental utility, linking users with online resources by translating meaningful names (e.g. bbc.co.uk) into hard-to-remember internet protocol addresses (e.g. 212.58.246.95).
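That forward lookup is visible from any machine; as a minimal illustration, a few lines of Python show the name-to-address translation DNS performs:

```python
import socket

# Ask the configured DNS resolver for the addresses behind a name.
for info in socket.getaddrinfo("bbc.co.uk", 80, proto=socket.IPPROTO_TCP):
    print(info[4][0])  # the resolved IP address(es), varying by where and when you ask
```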

DNS problems are probably worse than these figures suggest. By its very nature, DNS is transparent to users, so the role it plays in impacting internet access may go unreported. That users do not recognise DNS issues is unsurprising, but IT managers are also likely to overlook it; 55% report poor visibility in at least one aspect of DNS management.

This is partly due to a lack of tools (the majority lack many DNS management capabilities), but also due to the complex way in which DNS services are provisioned. More than three quarters of organisations have five or more different ways of accessing DNS, ranging from in-house servers to internet registrars and service providers. Correlations within Quocirca’s research show that they tolerate this for a reason: having multiple paths to DNS improves availability. But it also degrades overall internet performance.

Due to the varied needs of users and profusion of online services, few organisations expect to end up with a single management point for all their DNS needs. However, those that have committed to a specialist DNS service provider reduce DNS complexity. This has a big impact, improving visibility in all areas and providing access to a wide range of value-added DNS features, ranging from the ability to route internet traffic to blocking unwanted content.

No organisation can manage for long without reliable internet access, so it follows that reliable DNS services are needed too. Poor management of the latter is likely to be responsible for problems with the former more often than is currently understood.

Quocirca’s report, Winning the Domain Game is free to download HERE.


April 19, 2017  3:16 PM

Open for business: Hortonworks aims for open source profitability

Bernt Ostergaard

It used to be the Hadoop Summit, but the strategic focus at Hortonworks, the enterprise-ready open source Apache Hadoop provider, has evolved. So, this year it was renamed the DataWorks Summit. The company now encompasses data at rest (the Hortonworks Data Platform, now in version 2.6), data in motion (Hortonworks DataFlow) and data in the cloud (the Hortonworks Data Cloud). Hortonworks aims to become a multi-platform and multi-cloud company. The focus is on the data in data-driven organisations. Just a few years ago Hortonworks connected with IT architects. Today it’s launching conversations with lines of business and chief marketing officers.

The company

Since its launch in 2011, backed by Yahoo, Hortonworks has grown to over a thousand employees in 15 countries, with customers in sixty countries. Its European presence is operated out of the UK, with sales staff across Northern and Central Europe. It’s a young organisation with many newly graduated employees, strong on technology but lacking business domain insights. Many have maintained their links with universities to address big data and IoT issues. Hortonworks is involved in several joint R&D projects, in what Hortonworks co-founder Owen O’Malley terms the ‘community over code’ approach. One such project is the Digitisation of Energy, which aims to connect 1 million electric car batteries to the grid to act as a sustainable energy reservoir.

Where’s the money coming from?

Sustained strong growth still eludes Hortonworks. In response, it is shifting its product focus from selling converged Hadoop systems to IT departments to selling data platforms to lines of business. Of its two main competitors, MapR remains a VC-backed private company, while Cloudera is in the IPO funnel, touting its hybrid open source software (HOSS) model, which ties open source elements with proprietary software for its enterprise-grade platform. Hortonworks may therefore be tempted to add more proprietary elements to its open source Hadoop platforms to increase profitability.

Critical to maintaining an open source focus are the fast-expanding fields of artificial intelligence and machine learning. Hortonworks is investing substantial resources in developing open source code, and sees significant revenue opportunities across all business verticals. This is exemplified by its Hadoop data lake developments that encompass data analytics, mobility and IoT using the Hadoop Distributed File System (HDFS) and persistent memory data structures. With increasing legal requirements for data to reside in specific geo-locations, computing must come to the data. This requires data tiering for ‘hot’, ‘warm’ and ‘cold’ data storage to optimise local computing power requirements.
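HDFS exposes storage policies for exactly this kind of tiering. Purely as an illustration of the underlying idea, an age-based tiering rule might look like the sketch below; the thresholds are invented, and real policies depend on workload, cost and regulation.

```python
from datetime import datetime, timedelta, timezone

# Invented thresholds; real policies depend on workload, cost and regulation.
TIERS = [(timedelta(days=7), "hot"), (timedelta(days=90), "warm")]

def storage_tier(last_accessed: datetime) -> str:
    """Choose a tier from how recently the data was touched; default to cold."""
    age = datetime.now(timezone.utc) - last_accessed
    for limit, tier in TIERS:
        if age <= limit:
            return tier
    return "cold"

print(storage_tier(datetime.now(timezone.utc) - timedelta(days=30)))  # -> warm
```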

Who’s helping?

Partners on the Hortonworks Data Platform include IBM, HPE, Dell EMC, Pivotal, Teradata and Microsoft. DataFlow partners have not been named yet, but several major carriers are Hortonworks customers and may soon become partners – especially if the Federal Communications Commission under Trump abandons its net neutrality stance and allows carriers to offer different internet QoS (quality of service) levels, in which case Hortonworks will help them develop differentiated services for their customers. Data Cloud partners are the two majors, AWS and Microsoft Azure. Hortonworks also has domain expertise alliances with Accenture, Capgemini and Deloitte to roll out industry-wide IoT and cyber security offerings.

Where’s the future for Hortonworks?

Hybrid cloud, IoT, hyper-convergence, big data and AI all point to massive data accumulation and the need for mobile and multi-tier data processing. These are all areas where Hortonworks is active. This was exemplified by an automotive case study. Mercedes, a front-runner in the automotive market, operates with five levels of development, from yesterday’s ‘assisted driving’ to today’s ‘partially automated’ and tomorrow’s ‘conditional automation’. Then follows ‘high automation’ in 2021, and finally ‘full automation’ in 2025. Today’s top-of-the-line cars generate around 500GB of data per day. In ‘full automation’ mode, data volumes will go up to 50TB a day. That requires intelligence at the edge and real-time hand-off to cloud computing processes.

Hortonworks wants to be on that journey, not just with the automotive industry, but across many other verticals. The company believes that only open source can evolve fast enough and create the standards needed to keep up with the data frenzy.


April 7, 2017  10:18 AM

Bad-bots and CNP fraud

Bob Tarzey

The use of bad-bots to further payment card not present (CNP) fraud.

According to Trustwave’s 2016 Global Security Report, 60% of cybercrime incidents target payment card data. Half involve magnetic stripe data (generally stolen via point-of-sale devices) whilst the other half involves card not present (CNP) data; data stored by organisations that transact online.

Of course, any organisation that deals with CNP data should be PCI-DSS (Payment Card Industry Data Security Standard) compliant. Followed to the letter, this should put CNP data beyond the reach of cybercriminals. Yet the real-world experience of many consumers suggests that all too often CNP data is being compromised and used fraudulently.

One of the reasons for this is that thieves do not need to rely on stealing complete and up-to-date payment card records. A CNP data record should consist of just three data items: the cardholder name, the 16-digit primary account number and the expiry date (there is also a service code with magnetic stripe data). The CVV code, which is needed to complete many CNP transactions, should never be stored.

With a substantial heist, criminals can waste a lot of time trying to use card details that are no longer valid. However, they have a few tricks up their sleeve, such as using software robots (bots) to enrich their data. These techniques are described by OWASP (the Open Web Application Security Project) in its Automated Threat Handbook: carding, card cracking and cashing out.

Carding works through long lists of payment card data, checking each card number against a target merchant’s online payment process to find which ones are still valid. There are even specialist card checking sites for this. Card cracking enables missing or out-of-date expiry dates and CVV codes to be added by testing the range of possible values (which is small) against target sites. Cashing out helps with the monetisation of completed payment card records, often using multiple micro-payments.

Any of these techniques can turn even the most PCI-DSS compliant organisation into a victim. Sites may be targeted for validation purposes, impacting performance for other users, or may be targeted for monetisation. These payment card bots are just three of a broader set of automated threats listed by OWASP that can impact online resources. Fortunately, there is a range of bad-bot mitigation techniques, which are described in a series of e-books written by Quocirca and sponsored by Distil Networks.
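One of the simpler mitigation techniques is a velocity check: flagging clients that attempt to validate many cards in a short window. The sketch below is a minimal illustration; the thresholds are invented, and production defences combine many more signals.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS, MAX_ATTEMPTS = 300, 5  # illustrative thresholds
attempts: defaultdict[str, deque] = defaultdict(deque)  # client IP -> recent attempt times

def is_suspected_carding(client_ip: str) -> bool:
    """Record a card validation attempt and flag bursts typical of carding bots."""
    now = time.monotonic()
    q = attempts[client_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()  # discard attempts outside the sliding window
    return len(q) > MAX_ATTEMPTS  # legitimate shoppers rarely try this many cards
```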

Quocirca’s Transaction Fraud eBook can be viewed at this link:

https://resources.distilnetworks.com/white-paper-reports/online-transaction-fraud-ebook

For a full list of the Cyber-security threat Series of e-books follow this link:

http://quocirca.com/content/april-2017-mitigating-payment-card-fraud


April 4, 2017  2:39 PM

The focus on IoT in the arable food chain – and the worries around its use

Clive Longbottom

The arable food chain, consisting of farms, logistics/warehousing, food processing and retail, is a complex one with a major focus on food hygiene and pest management. In research carried out by Quocirca for Rentokil Initial in late 2016, those responsible for managing these areas were asked where the internet of things (IoT), cloud computing and big data could help them.

When first asked where their focus was for technology investment, end-to-end traceability of goods was a top priority for most (see figure 1), with predictive analytics of data coming some way behind.

Both of these areas would seem to be ideal candidates for the use of IoT devices – the capability to add, for example, radio frequency identification (RFID) or near field communication (NFC) tags to foodstuffs as they move along the processing chain would make sense.

The way that multiple different types of IoT device along that chain can create data that can then be aggregated and analysed via cloud platforms would also make sense.

Figure 1
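As a purely illustrative sketch of the end-to-end traceability priority shown in figure 1, each scan of an RFID or NFC tag could be appended to a journey log; the stage names and tag IDs here are invented.

```python
from datetime import datetime, timezone

# Tag ID -> ordered list of (stage, timestamp) scan events.
trace: dict[str, list[tuple[str, datetime]]] = {}

def record_scan(tag_id: str, stage: str) -> None:
    """Append a scan event as the tagged item passes a point in the chain."""
    trace.setdefault(tag_id, []).append((stage, datetime.now(timezone.utc)))

record_scan("TAG-0042", "farm-dispatch")
record_scan("TAG-0042", "warehouse-intake")
record_scan("TAG-0042", "processing-line-3")
print(trace["TAG-0042"])  # the item's full journey, end to end
```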

However, the research also showed that few organisations were planning large-scale adoption of IoT projects in the near – or even far – future. A degree of this was undoubtedly based on a lack of knowledge of what the IoT really is (as covered in a previous blog here). It was also apparent from the research that the respondents had other worries that were keeping them away from using the IoT.

Figure 2 shows the analysis of responses where interviewees were asked to rank their top three issues when it came to implementing connected technologies. As can be seen, data privacy was the top issue for them, with a perception that the IoT would create a greater number of process vulnerabilities a close second.

Somewhat surprisingly, the costs of implementing an IoT project barely registered in respondents’ top three issues.

Figure 2

This would seem to be bad news for those technology vendors and service providers trying to push IoT systems into the market. The interviewees for this research were not technology people – and the gap between this research and other research carried out by Quocirca (such as the findings from technology people in organisations for ForeScout here) could not be more stark.

How the technology community bridges this chasm and makes sure that the business value of the IoT is seen and understood by those in the business itself will be its next challenge. Technology vendors need to try to prove to those holding the purse strings that the IoT is a valid direction across a whole value chain.

Certainly, the research does show that the market is ready for the IoT – as long as it is demonstrably fit for purpose, results in the desired business outcomes and the perceptions around its shortcomings have been dealt with.

In the specific case of the arable food chain, there is not only a business need for the IoT, but a sustainability one. Only by dealing effectively with pest and hygiene issues can a growing population’s need for foodstuffs be adequately met.

Time to stop focusing on the technology of IoT and major on the business benefits.

The full report on the findings of the research carried out for Rentokil Initial can be accessed for free here.


March 22, 2017  3:26 PM

The key to application success? Usability…

Clive Longbottom

The business has made a request to IT for something to be done. IT has done all its due diligence and has come up with a system that meets every technical requirement laid down by the business. IT acquires the software, provisions it and sits back waiting for the undoubted thanks from the business for a job well done. Instead, the business gets quite irate – what did IT think it was doing in forcing such a half-baked system on the end-users?

What tends to be the case here is that IT (and often the business as well) forgets the one really important issue – any system has to be as intuitive and transparent in use to the users as possible. Anything that is seen as getting in the way of the user will be worked around. And this working around can make the original problem worse than it was.

Amongst the more technical areas where this tends to be the case are data aggregation, analysis and reporting, along with many areas of customer relationship management (CRM) and enterprise resource planning (ERP). However, an area that should be of very high importance is one that impacts pretty much everyone in the business – document management or, more to the point, information management. Many enterprise document management (EDM) systems require documents to be placed in the system through an import or export mechanism. During such an action, the user is expected to input a lot more information on the document, such as its classification level, tags around what the document is about and so on.

Instead, users either take the default settings or just don’t bother to put the document in the system at all. Certainly, such a lack of transparent usage means that only the ‘really important’ documents are deemed worth all that trouble.

Just what is ‘really important’, though? Sure, those documents that the organisation is mandated by law to submit to a central entity are. Anything to do with mergers and acquisitions probably is as well. How about that document that Joe down the corridor has been working on, looking at the future pricing of raw materials used in the organisation’s products? How about the results of the web search that Mary has done looking at the performance of the organisation’s main competitors?

Further, what about all the extra people who are key contributors to the organisation’s value chain these days – suppliers, customers, contractors, consultants, etc – how can information be shared by and to them in an efficient and secure manner?

In comes enterprise information management (EIM). By managing information assets from a much earlier stage of their lifecycle, the business gets the control and management of the assets that it requires.

However, the system must not make usage harder for users: any extra input required by the user must be offset by the overall value that they perceive coming out from the system.

So – rather than a system that requires the user to make an explicit decision to put the information asset in the system, start with templates. Use metadata around these templates so that document classification is decided as soon as the user starts work on the document. As such, a document on a general subject – say, ‘Summary of discussions on usage of tea bags in the canteen’ – can be worked on by opening a ‘Public’ template. One on ‘Expected future pricing of raw materials from suppliers’ can be created from a template with a ‘Commercial in confidence’ classification. And so on.

As the document is worked on, versioning can be applied. Through the use of a global namespace, the documents do not need to be stored in a single, large database – they can be left where they are created with logical pointers being stored in the system to provide access to them. The documents can be indexed to provide easy search and recovery capabilities across the whole enterprise.
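A purely illustrative data model for this template-driven approach might look like the sketch below; the class and field names are hypothetical, not any vendor’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Template:
    name: str
    classification: str  # decided once, inherited by every document created from it

@dataclass
class Document:
    title: str
    template: Template
    location: str  # logical pointer: the file stays where it was created
    versions: list = field(default_factory=list)

CONFIDENTIAL = Template("Commercial in confidence", "confidential")

doc = Document("Expected future pricing of raw materials from suppliers",
               CONFIDENTIAL, location="//dept-share/purchasing/pricing.docx")
print(doc.template.classification)  # classification set at creation, with no extra user effort
```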

Those in the extended value chain can be invited to work on the documents through the provision of secure links – and their activities around the information asset logged at every stage.

At every stage, the user is helped by the system, rather than hindered. The value to the individual and the business is enhanced with very little, if any, extra work involved from the user. The business gains greater governance, risk and compliance (GRC) capabilities; the individual gains through having greater input into decisions being made.

Ease of use in any system is key. Hiding the complexities of enterprise systems is not easy, but without it being done, even the most technically competent and elegant system is bound to fail.

Quocirca has authored a report on how an EIM system must adhere to the KISS (Keep It Simple, Stupid) principle, commissioned by M-Files. The report can be downloaded here.


