Quocirca Insights


July 11, 2017  1:45 PM

The importance of Operational Intelligence in emergency situations

Clive Longbottom Profile: Clive Longbottom

At a recent roundtable event organised by Esri UK, representatives of the North Wales police force, Wessex Water and the Environment Agency talked about how they were using geographic information systems (GIS) in their work to provide Operational Intelligence (OI).

The focus of the discussions revolved around how each organisation needed to deal with emergency situations – particularly around flood events.

For the Environment Agency, the focus is shifting from a reactive response to a more proactive position. Using a mix of internet of things (IoT) devices combined with standard meteorological forecasting and satellite data, the agency aims both to avoid flooding by dealing with issues before they become problems and to respond better when flooding is unavoidable.

An example here is using topographic data to predict water runoff in upland areas, then applying analytics to better understand what that may mean further downstream. Nick Jones, a senior advisor at the Agency, described how it is also using real-time analytics for OI – for example, holding stocks of items such as mobile flood barriers, but being able to get them to the point of greatest need at the right time via a just-in-time logistics model. This allows inventory to be optimised – and avoids issues with the public finding that items that could have prevented a flood were available, but in the wrong place.
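To make the allocation idea concrete, here is a minimal Python sketch of just-in-time barrier logistics under invented assumptions – the depot stocks, site names and risk scores are illustrative only, not Environment Agency data.

```python
# Hypothetical sketch of the just-in-time allocation idea described above:
# match stocks of mobile flood barriers to the predicted points of greatest
# need. Depot names, stock levels and risk scores are illustrative only.

depots = {"Depot A": 120, "Depot B": 80}           # barrier units held
predicted_need = {                                 # site -> (risk score, units needed)
    "Riverside Town": (0.9, 100),
    "Valley Village": (0.6, 60),
    "Low-lying Farm": (0.3, 40),
}

def allocate(depots, predicted_need):
    """Send barriers to the highest-risk sites first until stock runs out."""
    remaining = dict(depots)
    plan = []
    for site, (risk, units) in sorted(predicted_need.items(),
                                      key=lambda kv: kv[1][0], reverse=True):
        for depot, stock in remaining.items():
            if stock == 0 or units == 0:
                continue
            sent = min(stock, units)
            plan.append((depot, site, sent))
            remaining[depot] -= sent
            units -= sent
    return plan

for depot, site, sent in allocate(depots, predicted_need):
    print(f"{sent} barrier units: {depot} -> {site}")
```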

Wessex Water also has to respond to such events. Floods can force sewage up into streets and gardens, or even into people’s homes. Andy Nicholson, Asset Data Manager at the company, described how it is using GIS-based OI to better prioritise where to apply its resources in emergency situations. For example, partnering with the Environment Agency to gain access to its data means that Wessex Water can track the progress of a flood – and can then both advise its customers of possible problems and allocate people and other resources to mitigate issues, potentially stopping, redirecting or slowing floodwater in the areas for which it has responsibility.

Likewise, the North Wales police force has a responsibility to citizens. Dave Abernethy-Clark, a PC with the force, described how he came up with Exodus, an OI system using GIS data to give the force a better means of dealing with vulnerable people during events such as a flood (the event could equally be a fire, civil disturbance or any other emergency). Again, the real-time and predictive use of OI can be invaluable: it is better to evacuate a vulnerable person before an event overtakes them.

However, doing this based on just basic safety assessments can mean that a vulnerable person is removed from their property where there is actually little real need to do so.  OI ensures that only those who are very likely to be impacted are identified and dealt with, minimising such upsets and saving resource costs and time.

Andy Nicholson from Wessex Water also emphasised that OI should never be a single data set approach. By pulling together multiple data sets, the end result is far more illuminating and accurate. The use of a flexible front end is also important – he discussed how one event resulted in a complex situation appraisal being shown on a screen. From this view, it looked like the event could be a major one that would impact a large number of customers. However, with a few extra filters in place, he narrowed this down to just a few core points, which allowed causality to be rapidly ascertained and the problem dealt with, with minimal impact on just a few customers.
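The filtering step can be illustrated with a minimal, hypothetical sketch – the alert fields, site names and thresholds below are invented for the example and are not Wessex Water’s data model.

```python
# Hypothetical illustration of narrowing a broad incident view with filters.
# Field names and values are invented for the example.
import pandas as pd

alerts = pd.DataFrame([
    {"site": "Pump 12", "severity": "low",  "customers_at_risk": 4,    "confirmed": False},
    {"site": "Main 7",  "severity": "high", "customers_at_risk": 35,   "confirmed": True},
    {"site": "Main 9",  "severity": "high", "customers_at_risk": 1200, "confirmed": False},
])

# Unfiltered, the event looks large; filtering to confirmed, high-severity
# alerts narrows the picture to the likely root cause.
core = alerts[(alerts.severity == "high") & (alerts.confirmed)]
print(core[["site", "customers_at_risk"]])
```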

The takeaways from the event were that OI is an increasingly valuable tool for organisations. The growing capabilities of tools and the underlying power of the platforms they run on mean that real-time OI is now possible. The opening up of different organisations’ data sets also means that other organisations can plug directly into existing useful – and often free – data.

The way that these three seemingly disparate organisations worked together to deal with an emergency event such as a flood was apparent. With all of them (plus other groups) able to work against the same underlying data and use collaborative systems built over the OI platform, the total capability to deal with an event is greatly enhanced. How much this helps all of us cannot be overestimated.

July 6, 2017  7:56 AM

The ever-growing IoT attack surface

Bob Tarzey Profile: Bob Tarzey

The theme of a recent Infosecurity Europe 2017 Strategy Talk, facilitated by Quocirca, was how to limit the ability of hackers to exploit the expanding IT attack surface created by the deployment of increasing numbers of IoT (Internet of Things) devices. The two panellists, representing the security vendors FireEye and ForeScout (the session sponsor), contended that a better integrated approach to network security was required.

Some estimates regarding future numbers of IoT devices run into the tens of billions (for example, Gartner, Juniper and McKinsey). Quocirca’s own 2016 business-focused research, European Perceptions, Preparedness and Strategies for IoT Security, based on the short-term estimates of German and UK IT managers, was more conservative. However, many may be in denial about the scale of the IoT opportunity they will be expected to enable. The need for IoT device discovery and security is pressing.

The roll-out of IoT devices may be carefully planned and application-specific, or ad hoc, as the rising tides of shadow IT and consumerisation lead lines of business and end users to deploy their own devices. Pragmatic IoT security must be generic and able to deal with all types of device, whether they have been endorsed by IT management or not. Both security products and network design have a part to play.

IoT security is a pressing issue for four reasons. First, there are data protection issues; devices may transmit sensitive information (even IP addresses may be considered personal data under the EU GDPR). Second, IoT devices can be used as ingress points to broader IT infrastructure. Third, IoT devices are being recruited to botnets for the ongoing launch of denial of service attacks. Fourth, attempts to disrupt business processes may be targeted at poorly defended IoT deployments.

This last point is perhaps the most worrying: many IoT deployments are all about better monitoring and management of critical infrastructure. Such attacks have the potential for kinetic impact beyond cyberspace – for example, causing power outages, disrupting transport systems or enabling industrial espionage.

Quocirca’s research shows that most organisations recognise the need to be able to discover and classify IoT devices. Furthermore, there is growing recognition that this must be achieved without on-device agents; the variety of devices and operating systems is too great, the device processing power is often limited and, of course, ad hoc devices will be unknown when they first connect to a network.
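As an illustration of what agentless classification involves, here is a simplified Python sketch that infers a device type from passively observed attributes; the vendor OUI table, port heuristics and device categories are invented for the example and are not any vendor’s actual rules.

```python
# A simplified sketch of agentless device classification: infer a device type
# from passively observed attributes (here just a MAC OUI and open ports),
# with no software installed on the device. The OUI table is illustrative.
OUI_VENDORS = {"00:1B:44": "SensorCo", "3C:5A:B4": "CamCorp"}

def classify(mac: str, open_ports: set) -> str:
    vendor = OUI_VENDORS.get(mac[:8].upper(), "unknown")
    if vendor == "CamCorp" or 554 in open_ports:      # RTSP suggests a camera
        return "security-camera"
    if 9100 in open_ports:                            # raw print port
        return "printer"
    if vendor == "SensorCo":
        return "iot-sensor"
    return "unclassified"

print(classify("3c:5a:b4:12:34:56", {80, 554}))       # -> security-camera
print(classify("aa:bb:cc:dd:ee:ff", {9100}))          # -> printer
```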

Gateway to the IoT

IoT gateways are turning out to be fundamental from a design perspective (see the Quocirca Computer Weekly buyer’s guide, No silver bullet for business IoT security). For planned IoT roll-outs, all devices associated with a given application can be deployed behind a single gateway where security functions can be aggregated. Gateways can also help with ad hoc device attachments, for example by isolating network segments for supply chain interactions and guest access.

IoT security needs to recognise new threats and ensure all relevant devices are protected from them. FireEye and ForeScout advocate that their integrated approaches can achieve this at the scale required for current and future IoT deployments.

ForeScout’s CounterACT technology acts as a gateway, discovering, classifying and continuously monitoring network attached devices and ensuring a given level of security. It does not require pre-installed agents to do this (although agents are available for advanced management of known devices). CounterACT can recognise hundreds of different IoT device types, ranging from sensors and probes to printers and security cameras as well as more traditional user end-points.

FireEye’s Network Threat Prevention Platform (NX Series) identifies potential threats on devices via its virtual execution engine (sandbox). It can also identify anomalous behaviours, for example, recognising and blocking the outbound call-backs made by malware to command and control (C&C) systems.

FireEye can inform CounterACT about the threats it discovers via ForeScout’s ControlFabric, a means for sharing security information. CounterACT can then scan other endpoints for the presence of the newly identified threats and enact policy-based controls depending on the severity of the threat and the priority of the device. Endpoints can be quarantined, network access limited, remedial actions taken and notifications issued.
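The following is not the vendors’ actual API – just a generic Python sketch of the policy-based response logic described above, where the action depends on threat severity and device priority; the severity levels and actions are assumptions for illustration.

```python
# Generic illustration of policy-based response: the action taken depends on
# the severity of the threat and the priority of the affected device.
def respond(threat_severity: str, device_priority: str) -> str:
    if threat_severity == "critical":
        return "quarantine"                    # isolate the endpoint entirely
    if threat_severity == "high":
        # high-priority devices keep limited access so operations can continue
        return "limit-access" if device_priority == "high" else "quarantine"
    if threat_severity == "medium":
        return "remediate"                     # e.g. push a patch or clean-up job
    return "notify"                            # log and alert only

print(respond("high", "high"))     # -> limit-access
print(respond("critical", "low"))  # -> quarantine
```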

ForeScout CounterACT and FireEye NX are complementary: the former is not equipped to recognise previously unknown threats, whilst the latter does not monitor new devices attaching to networks. The partnership is a good example of how integrated security can achieve a greater level of protection than each product could achieve alone. Other network security and advanced threat protection products are available.


July 5, 2017  9:18 AM

GDPR and the UK: sorting fact from fiction

Bob Tarzey Profile: Bob Tarzey

How will data protection enforcement change in the UK once the EU General Data Protection Regulation (GDPR) comes into force? Will the GDPR come into force at all, with the UK planning to leave the EU?

Insight can be gained from looking at how the UK Information Commissioner’s Office (ICO) has enforced the existing 1998 Data Protection Act in recent years. Quocirca will be presenting analysis of recent ICO actions at a webinar on July 11th 2017.

Join Quocirca and RiskIQ to find out how many data leaks the ICO has recorded in the last two years and how many have led to actions being taken in the form of fines as well as prosecutions, enforcement notices and undertakings.

The biggest challenge with the GDPR for many will be ensuring the new, more stringent administrative requirements are met. These will enable the ICO to act in many new areas. The webinar will discuss these and how you can avoid the ICO’s attention by better understanding how and where personally identifiable information (PII) is being processed in the name of your organisation.

More details and free registration are available HERE.

What you don’t know CAN hurt you. Are you GDPR PII compliant?

https://www.brighttalk.com/webcast/14683/267289?utm_source=RiskIQ&utm_medium=brighttalk&utm_campaign=267289

 


July 4, 2017  11:08 AM

Out of the shadows – IT as a service broker

Rob Bamforth Profile: Rob Bamforth

Organisations need to balance innovation and business improvement with good governance and efficient use of IT budgets and resources. But this is not easy. Plenty of people right across the organisation think they know what IT they need and can easily go out and buy it.

The term ‘shadow IT’ has become a popular way to describe this in recent years. But the practice of parts of the business buying IT systems, software or services without the knowledge of the IT department is not new. It has happened since the days of the first mini servers, dot matrix printers and PCs.

However, a couple of things have changed.

Bypassing the IT function

IT has become more affordable. From consumer-priced mobile devices to subscription based services in the cloud, technology is widely available and relatively inexpensive. There is also an increase in personal choice and awareness of IT potential. Individuals know what they want and will often make their own choices of personal technology, hence the rise of bring your own device (BYOD).

Lines of business too have greater understanding of what is available. Despite not quite understanding the full implications, they want to get on with digitising assets and making processes more efficient. The IT function can often come to be seen as a blockage, or lacking awareness of the business drivers. So, a line of business makes its own decisions and spends its own budget on IT. This is then outside of the control and visibility of the IT function.

Anecdotal evidence suggests that in some organisations, this ‘unofficial’ IT spending can be much larger than the central IT budget. It is something that many IT departments have to live with, but it can cause governance problems. It may also be far less efficient and more costly overall than having IT co-ordinated through a central IT function.

Encouraging innovation

However, the appetite for lines of business to procure technology to bring about a ‘digital transformation’ should be encouraged. There is clearly an unfulfilled business need, and shadow IT is a symptom of trying to address that need. Rather than being overly defensive and resistant, the IT function should see this as an opportunity to take a different approach. It can then improve relationships by fostering the following process:

  • Innovate – those closest to the business are likely to know what needs to be done, but not necessarily the best way to accomplish it. IT could support and encourage innovation in the business, rather than trying, and often failing, to do it itself.
  • Accelerate – scale up through IT and the line of business working more closely together, developing the implementation required to both meet the business need and fit within a supportable IT strategy.
  • Operate – once this approach is delivering results, shift to a production model. This can be orchestrated by IT, with the business offloading the task to be run by IT, as a service.

In this way, IT and the business move together through the ‘Cycle of Innovation’. This is detailed in Geoffrey Moore’s book, Dealing with Darwin (Invent, Deploy, Context, Offload). It describes where something non-mission critical but offering differentiation is invented, then fully deployed at scale to become mission critical and core to the business. As the differentiation diminishes, it moves from core to context. It still has to be managed at scale, but can at some point be offloaded to free-up resources for the next core innovation.

IT as a service broker

The IT function cannot deliver everything, but it is well placed to understand where value can be added and where there are others with the right capabilities. The process of managing how and what services have to be delivered needs to be co-ordinated. It involves the technical, commercial and legal integration of internal capabilities and those from external service providers. In doing this, IT can be a ‘service broker’ to the organisation, not an obstacle or limiting factor to change. Rather than a focus on hardware, software and services, bought and deployed in some combination to try to meet often misunderstood business needs, there is an opportunity to think differently about what IT involves and how it is consumed by the business.

Ultimately, IT can then be measured on the value obtained by the business, not the cost (or value) of the assets being employed. Diverse innovation is encouraged and supported within a well-managed, centrally coordinated strategy. For more details about the ‘IT as a service broker’ approach, download Quocirca’s free report here.


June 21, 2017  8:24 AM

Bad-bots in financial services

Bob Tarzey Profile: Bob Tarzey

These days, some automated financial services are taken for granted; automated teller machines (ATMs) or cash points have been around since the 1960s, internet banking since the 1990s. Few people these days write cheques made out to cash or visit banks to make transfers.

Now there is a new wave of automation underway, driven by software robots (bots) that are changing not just how financial organisations interact with customers but also how they deal with each other. Among other things, bots can provide financial advice and carry out trades. Bots are especially good at complex repetitive activities such as producing wealth assessments and gathering the data to populate price comparison sites, for example to show comparative quotes for insurance.

All this automated activity can benefit both banks and their customers. However, there is a downside: criminals are using bots too. In 2017, robbing a bank is much more likely to be perpetrated by a cybercriminal with an army of bots than a mobster with a sawn-off shotgun. So-called bad bots are hard at work cracking access credentials, seeking out vulnerabilities in websites and online applications, obfuscating targeted criminal activity with volume denial-of-service attacks and so on.

The trouble is distinguishing the good bots from the bad and working out what some of the ones in between are up to. Credential-cracking bots, which test bank accounts for the use of commonly used passwords, need to be blocked; only an organisation’s own vulnerability scanners should be allowed to probe its infrastructure; screen-scraping bots from price comparison sites may be OK, but those from unscrupulous competitors are not.

Fortunately, there are now technologies to help sort bots out based on policy. Quocirca’s Ultimate Guide to how bad bots affect financial services is free to download and provides more detail about how bots are operating in financial services and how to let the good ones through and stop the bad ones in their tracks. The full list of Quocirca’s Cyber-Security Threat Series on Mitigating Bad Bots can be viewed HERE. These e-books were sponsored by Distil Networks.
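To illustrate what policy-based bot sorting might look like, here is a hypothetical Python sketch along the lines described above; the user agents, IP addresses and thresholds are invented and are not Distil Networks’ implementation.

```python
# Hypothetical sketch of sorting bots by policy: block credential-cracking
# traffic, allow only the organisation's own vulnerability scanner, and
# decide on screen-scrapers case by case. All values are invented.
ALLOWED_SCANNER_IPS = {"10.0.0.15"}            # the organisation's own scanner
APPROVED_SCRAPERS = {"pricecompare-bot"}       # partner comparison sites

def bot_policy(user_agent: str, source_ip: str, failed_logins_per_min: int) -> str:
    if failed_logins_per_min > 20:             # credential-cracking behaviour
        return "block"
    if "scanner" in user_agent.lower():
        return "allow" if source_ip in ALLOWED_SCANNER_IPS else "block"
    if "bot" in user_agent.lower():
        return "allow" if user_agent in APPROVED_SCRAPERS else "challenge"
    return "allow"

print(bot_policy("pricecompare-bot", "203.0.113.9", 0))   # -> allow
print(bot_policy("unknown-bot", "198.51.100.7", 0))       # -> challenge
```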


June 20, 2017  11:15 AM

Passing the buck – who pays for IT?

Rob Bamforth Profile: Rob Bamforth

The word ‘investment’ is often put next to IT spending and budgets in order to make the costs sound palatable. Organisations want a ‘return on investment’, many try to measure it and some even succeed. But not many. Perhaps IT could be thought of as something to consume, not own?

The problem is that, like most industrial progress, IT is still rapidly evolving and expanding its capabilities. This means more spending on IT. According to recent Quocirca research, IT spending is flat for two in five UK companies but growing for the rest; for one in five it is growing fast. Costs are rising because IT has the potential to be used by more people for more purposes. IT departments struggle to cope with this: they often lack staff or the right skills and end up spending much of their time and effort simply supporting and managing what they already have. Innovation is hamstrung by legacy.

Simply increasing budgets (even where this is possible) is not the solution. From the research, IT capital expenditure constraints are a significant reason why it is difficult to secure IT funds, but changes in business requirements tops the list and growth in user demand is also high up. Financing needs to go hand in hand with flexibility. Clearly a different approach is required.


Looking to the cloud

For many, public cloud services provide an opportunity to shift expenses to the operational expenditure (OpEx) side of the ledger, which many chief financial officers (CFOs) appreciate. Quocirca’s research shows cloud adoption is widespread in around a fifth of companies – both public and private cloud – and around half of UK companies expect overall cloud usage to grow. The primary drivers for public cloud in particular revolve around reducing upfront outlay and adding flexibility in both cost and headcount. Even security, once a significant concern, is now thought to be adequately addressed.

There are significant challenges with cloud. A pay-as-you-go cost model seems appealing, but there may need to be significant architectural changes and costs up front. Once in place, costs can grow unpredictably. Most cloud services are delivering flexible technical capacity, with investment costs spread as OpEx, but this does not necessarily match the business requirement.

Hybrid cloud, hybrid financing

While public cloud has the flexibility to provide certain resources – storage, compute power and application platforms on demand – this does not account for all IT requirements. The end user needs other IT systems in order to access cloud services. A hybrid architecture, with some elements deployed on premise, often fits well from a technical, management and governance perspective. But it may not offer complete commercial flexibility, as it will require capital expenditure (CapEx) and upfront investment.

Increasingly there is an appetite for delivering more IT capabilities as a service. This includes the well-established managed print services (MPS). The research also shows increasing interest in desktop as a service, mobile device management and video and collaboration services.

This means there needs to be a more all-encompassing hybrid model for financing, based on consumption of IT services in a manner that makes sense to the business user, which ultimately pays for it and can measure the value. This is not something that should be tackled in the ‘shadows’, hidden away from IT. Shadow IT can be embraced and managed by the IT function, which can then act as a service broker by integrating a mix of internal and external capacity and capabilities to deliver the services required by the business.

Consumption based IT

Organisations prefer to spread costs over time and to make them dependent on usage. Spreading costs over time means that upfront CapEx can be avoided, with predictable recurring costs paid like a subscription. Rather than leaving capacity entirely variable, an initial service level should be agreed upfront. Pre-agreed and priced ‘burst through’ capacity could then be made available, measured and billed for only if used. Instead of applying this approach just to individual services from the public cloud, with the right financing it could be applied to all IT systems – moving the model from ‘owning’ IT assets to ‘consuming’ them.
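A trivial worked example (illustrative figures only) shows how such a bill might be calculated – a baseline fee for the agreed service level plus pre-priced burst capacity billed only for what is actually used.

```python
# Illustrative consumption-based billing: a pre-agreed baseline billed like a
# subscription, plus pre-priced 'burst through' capacity billed only if used.
BASELINE_FEE = 10_000      # per month, for the agreed service level (invented)
BASELINE_CAPACITY = 500    # e.g. virtual desktops included in the baseline
BURST_RATE = 30            # per additional unit per month, if used

def monthly_bill(units_consumed: int) -> int:
    burst_units = max(0, units_consumed - BASELINE_CAPACITY)
    return BASELINE_FEE + burst_units * BURST_RATE

print(monthly_bill(480))   # within baseline  -> 10000
print(monthly_bill(550))   # 50 burst units   -> 11500
```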

With this type of financial model, organisations have the technical flexibility akin to public cloud, shifting CapEx into OpEx. IT can be consumed as a service, with the value delivered much more closely aligned to costs. The details from Quocirca’s research (commissioned by Rigby Capital Ltd) and information for organisations considering how consumption based IT might work for their business, is available in this free to download report, “The next step in digital business transformation”.


June 16, 2017  5:12 PM

Connect, collect, analyse and activate NOW!

Bernt Ostergaard Profile: Bernt Ostergaard

There is a chill stirring in the corridors of corporate power. It’s the data driven company paradigm that is subverting the very foundations of corporate hierarchies and established divisions of labour. Well-established positions within the corporate power structure risk being undermined by new internal alliances, new business processes and shifting revenue streams.

Historically, it is not always the strongest businesses that survive in times of fast-paced technology change – it is the companies best able to turn change to their advantage. Today, very small companies can challenge even market leaders using pay-as-you-go IT resources and disruptive business propositions. It is cold at the top, as an analysis of S&P 500 companies (http://www.cnbc.com/2014/06/04/15-years-to-extinction-sp-500-companies.html) demonstrated; it found that the average lifespan of these top-500 US companies has dropped to 15 years.

Successful data driven companies are forging new alliances across their production, marketing, sales and IT teams to bolster customer satisfaction levels and address changed buying behaviour. At the recent TIBCO European summit in Berlin, many of the talks focused on these levers of change.


The message coming out of the TIBCO NOW Summit: it’s an API world that requires analysis and decision-making

Very few companies – and certainly not companies with many years of existence – have IT systems and applications that interoperate effortlessly. The drag of legacy is inevitable. The traditional response has been the application program interface (API). By describing a program interface and working towards standardised gateways between applications, using brokers such as Google Apigee and Red Hat 3scale, companies can exchange data between their applications. Of course, there are actually thousands of API definitions; just finding them and keeping them updated is a serious task.
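A minimal, hypothetical example of what such application-to-application exchange looks like from the calling side follows – the gateway URL, endpoint and API key are invented, not a real Apigee or 3scale service.

```python
# Hypothetical example of consuming another application's data via a managed
# API gateway: the caller only needs the gateway URL and an API key.
import requests

GATEWAY_URL = "https://api.example-gateway.com/v1/orders"   # invented endpoint
API_KEY = "replace-with-real-key"

def fetch_orders(customer_id: str) -> list:
    resp = requests.get(
        GATEWAY_URL,
        params={"customer_id": customer_id},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# orders = fetch_orders("C-1001")
```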

To get into this market, TIBCO acquired Mashery, a software provider of API services, in 2015. Mashery develops and maintains a large library of APIs and provides them as a service, and so has become indispensable for a wide range of companies with large and heterogeneous application estates, as well as companies needing to interface with customers’ and partners’ applications. There are clear and present challenges to widespread adoption of API solutions. One is security, where API-security companies like Elastic Beam point out that signature and rules-based security tools do not detect API attacks.

The other challenge is user-friendliness. If you are going to integrate lines of business and enable fishing in your data lake, you need to ensure that the tools involved are easy to use for everyone. Users outside the IT community have very little patience with slow or complicated systems, let alone systems that require reading a manual! Horizontal business tools require connectivity – the ability to find and sort the relevant data. Then follows the analytics phase, and finally identifying what actions to take.

At the TIBCO event, we met some of the customers:

Small business case study:

London Theatre Direct (LTD) (www.londontheatredirect.com) is a ticket aggregator selling tickets online to London West End performances. The company has three people in London and 30 back-office developers in Prague. It is completely AWS cloud-based and relies on Mashery to keep its APIs running. Building up this business required interfacing with a host of different ticketing systems and then creating a unified and easy-to-use selection and booking platform. The early contributors to the aggregator platform soon experienced faster ticket sales there than through their own channels, so three years into the process the tide turned and theatres now come to LTD. The company also white-labels its platform to other ticket outlets, and has expanded its operations abroad.

Large business case study:

The Mercedes Formula 1 team (www.mercedesamgf1.com) competes in the heady world of extreme motor sport. The F1 industry epitomises the constant challenge to improve efficiency, sturdiness and adaptability – not least when servers and all peripherals are transported around the world 21 times a year. During a race, decisions must be based on extracting the right data from the 3.5TB of car performance data accumulated on board. Too much data sprawl is just more noise. There is a need to economise when your download window is a 60-second pit stop for a tyre change and refuelling. To analyse the data and optimise car performance, the Mercedes team relies on TIBCO Spotfire and StreamBase. While these provide analysis of data at rest and streaming data, Mercedes is constantly looking for better automation and machine learning tools.
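As a back-of-envelope illustration of that constraint, the Python sketch below greedily fits the highest-value telemetry channels into a fixed download window; the link rate, channel sizes and value scores are all invented figures, not Mercedes data.

```python
# Illustrative sketch (all figures invented) of fitting the most valuable
# telemetry into a fixed download window, in the spirit of the pit-stop
# constraint described above.
LINK_MBPS = 1000                      # assumed link rate during the stop
WINDOW_S = 60
BUDGET_MB = LINK_MBPS / 8 * WINDOW_S  # ~7500 MB transferable in the window

channels = [                          # (name, size MB, value score) - invented
    ("tyre-temps",      400, 0.95),
    ("engine-sensors", 3000, 0.90),
    ("aero-pressure",  5000, 0.60),
    ("video",          9000, 0.30),
]

def pick(channels, budget_mb):
    """Greedily take the highest-value channels that still fit the window."""
    chosen, used = [], 0
    for name, size, _ in sorted(channels, key=lambda c: c[2], reverse=True):
        if used + size <= budget_mb:
            chosen.append(name)
            used += size
    return chosen, used

print(pick(channels, BUDGET_MB))      # -> (['tyre-temps', 'engine-sensors'], 3400)
```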

Analysing high volumes of data is also a priority in the Mercedes F1 marketing department, which has 11 million followers on Facebook. Hitherto they have all been getting the same feeds. TIBCO data scientists are now enrolled in Mercedes’ strategy planning to tailor information to the preferences of its followers. The aim is to climb higher on followers’ Facebook walls using tools such as regression analysis. Messages and icons on users’ walls can also be seen by others.

 

The data driven challenge

Data driven organisations need to bring together the two symbiotic elements of interconnection using APIs and augmentation using analysis of data at rest and streaming. The guiding principles are: cloud first, ease of use, and industrialisation. The software underpinning this drive must be pleasant and appealing to a wider group of users in the enterprise. Developers need to address the whole range of ‘personas’ who need to interact with the data. Ease of use and robust solution packages must be industry specific and maintained continuously.


May 23, 2017  7:03 AM

WannaCry? Not really. A report from the 11th Eskenzi PR IT Analyst and CISO Forum

Bob Tarzey Profile: Bob Tarzey

May 16th and 17th 2017 saw Eskenzi PR stage its 11th two-day IT Security Analyst Forum, as usual in London. The morning of the second day (a Wednesday) was the customary CISO (chief information security officer) roundtable: a coming together of 20 or so IT security leaders from blue-chip UK enterprises and public sector organisations to share their views with the analysts and the IT security vendors that sponsor the event.

The timing was interesting, as although no one knew it in the run-up, the event took place just after the WannaCry weekend (the global release of a worm carrying ransomware that infected certain unpatched and unprotected devices running Microsoft Windows). Speculation was rife on the Tuesday night that there would be gaps in the CISO line-up as some absented themselves to deal with the aftermath. Not so, they all turned up.

It was already becoming apparent that the WannaCry attack was not as bad as many had speculated. True, many of the CISOs’ weekends had been interrupted to assess any impact, but by Monday most felt their update and protection regimes had done the job. There was little sympathy for the more lackadaisical organisations that had been hit. It may be true that some organisations still have to run now-unsupported Windows XP devices, but this does not mean those devices should be left unprotected from intrusions.

The CISOs had plenty of other things to talk about. The impending GDPR (the EU General Data Protection Regulation) inevitably came up a few times. However, as Quocirca speculated in its recent Computer Weekly GDPR buyer’s guide, most CISOs felt their organisations were well on their way with their GDPR compliance plans. Generally it was considered that personal data has already been secured to deal with existing requirements, so the main new challenges were around data processing rules. There was a generally positive view of the regulation’s enhanced privacy aims and, for the record, the CISOs did not think Brexit would make much difference to UK data privacy laws.

Other issues arising included the need for CISOs to engage with their boards, to demonstrate the value of investing in IT security, especially measured against the value of data. To this end, some saw WannaCry as a wake-up call, others as a distraction (this was an attack from yesteryear, which should have been repelled unnoticed). There was a strong message to the vendors to focus more on integration within and between their products. DDoS (distributed denial of service) attacks are still a problem, but one that can be overcome.

One gripe was that more could be done to keep crime at bay on the internet (if criminals were running down the high street on a smash-and-grab raid, law enforcers would not stand by). That said, there was general acceptance that public cloud providers were building platforms that could meet enterprise security needs and that most issues arose from the users of such platforms poorly managing access rights.

The event would not happen without the security vendors who sponsor it. The first day is a kind of speed-dating where the analysts get to meet each vendor and hear its latest news and value proposition.

Corero’s SmartWall blocks DDoS attacks at scale, especially for service providers, cleaning up internet traffic for us all. Barracuda firewalls could have protected networks from WannaCry, if well managed. To that end, FireMon has broadened its multi-vendor firewall management capability to other network devices and cloud-based deployments. If you are going to store sensitive data in the cloud, then perhaps you should consider bringing your own encryption keys, as enabled by Eperi.

Even if WannaCry had managed to get past network defences, then Darktrace’s threat detection with its machine learning should recognise the abnormal behaviour on your network and block it; Cylance would do the same on individual end-points. One of your security products may have safely tested WannaCry in a Lastline sandbox (although the WannaCry kill switch aimed to prevent this).

When it comes to controlling access, MIRACL’s new Trust ZFA offering uses software tokens on devices to authenticate users and devices, including Internet of Things (IoT) devices with direct end users. When it comes to the heavyweight stuff, it could be worth turning to Belden-owned specialist Tripwire, which has a number of existing and new offerings for securing the industrial Internet of Things (IIoT).

And if your organisation does not want to do any of this itself, then NTT Security has pulled together the resources of its sister companies Dimension Data, NTT Communications and NTT Data to provide robust security across its portfolio and on to its customers.

Thanks to Eskenzi PR for another great event, the CISOs for their time and the vendors for their sponsorship. The WannaCry criminals should also be acknowledged for providing a talking point which, despite the ill intent, did not have the impact that at first seemed possible – perhaps in part due to some of the innovation on show last week.


May 9, 2017  8:58 AM

GDPR: Why print is a crucial element of endpoint security

Louella Fernandes Profile: Louella Fernandes
GDPR, MPS, Security

The EU General Data Protection Regulation (GDPR), which takes effect on 25th May 2018, could prove to be a catalyst to change the existing haphazard approach to print security.

Networked printers and multifunction printers (MFPs) are often overlooked when it comes to wider information security measures. Yet these devices store and process data, and as intelligent devices they have the same security vulnerabilities as any other networked endpoint. With Quocirca’s recent research revealing that almost two thirds of large organisations have experienced a print-related data breach1, organisations cannot afford to be complacent. The biggest incentive to rethink print security is the prospect of substantial fines under the GDPR: infringement can attract a fine of up to 4% of total global annual turnover or €20m, whichever is the higher.
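That ceiling is easy to express as a trivial worked example in Python (the turnover figures are illustrative only):

```python
# Worked example of the GDPR fine ceiling quoted above: up to 4% of total
# global annual turnover or EUR 20m, whichever is the higher.
def max_fine_eur(global_turnover_eur: float) -> float:
    return max(0.04 * global_turnover_eur, 20_000_000)

print(max_fine_eur(100_000_000))    # EUR 100m turnover -> 20m floor applies
print(max_fine_eur(2_000_000_000))  # EUR 2bn turnover  -> 80m (4%)
```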

Securing the print environment

Today’s smart MFPs have evolved into sophisticated document processing hubs that, in addition to print and copy, enable the capture, routing and storage of information. However, as intelligent, networked devices, they have several points of vulnerability. A printer or MFP is effectively an Internet of Things (IoT) device and, left unsecured, is an open door into the entire corporate network. Without the appropriate controls, information on the device or in transit can be accessed by unauthorised users. The risks are real – recent Quocirca research indicates that almost two thirds of large organisations have suffered a print-related data breach.

There are two key issues – the printer/MFP as an access point to the network, and the printer/MFP as a storage device for “personally identifiable information” (PII).

Mitigating the print security risk and addressing GDPR compliance

As critical endpoints, printers and MFPs must be part of an overall information security strategy. This should ensure that all networked printers and MFPs are protected at a device, document and user level. This means, for instance, that data is encrypted in transmission, hard drives are encrypted and overwritten, print jobs are only released to authorised users and devices are protected from malware.

Many organisations may believe that they are covered by existing technology, but in many cases this does not protect against the latest threats. Consequently, operating a large, mixed fleet of old and new devices can leave gaping security holes.

Given the complexity of print security in large organisations, particularly those with a diverse fleet, Quocirca recommends seeking guidance from vendors that understand the internal and external risks, including the risk of unprotected data on printer/MFP devices. Organisations should select vendors that can address both legacy and new devices and offer solutions for encryption, fleet visibility and intelligent tracking of all device usage. This should ensure the ability to track what information is being printed or scanned, where and on what device, thereby enabling faster breach remediation.

Managed print service (MPS) providers should be the first port of call, as they are best positioned to advise on print security technology. Advanced managed print security services (offerings include those from HP, Lexmark, Ricoh and Xerox) aim to improve resilience against hacking attempts on devices, rapidly detect malicious threats, continually monitor the print infrastructure and enhance security policies and employee awareness.

Look for comprehensive print security services that offer:

  • Assessment: A full security assessment of the printer infrastructure to identify any security gaps in the existing device fleet. This should be part of the broader Data Protection Impact Assessment (DPIA) that an organisation may conduct internally or using external providers. Recommendations can be made for ensuring all devices use data encryption, user access control and features such as hardware disk overwrite (the erasure of information stored on the MFP hard disk). Also look to use endpoint data loss prevention (DLP) tools at this stage to gain insight as to what likely PII could be transferring via an MFP (for instance scanning personal information via the MFP to email or cloud storage).
  • Monitoring: In order to monitor and detect breaches, ongoing and proactive monitoring ensures devices are being used appropriately, in accordance with organisational policies. More advanced print security controls use run-time intrusion detection. Integration with Security Information and Event Management (SIEM) systems can help accelerate the time to identify and respond to a data breach, which is key to GDPR compliance (a minimal sketch of such event forwarding follows this list). Consider third-party managed services support in order to streamline data logging and security intelligence gathering.
  • Reporting: GDPR’s demanding reporting requirements can be addressed through reporting usage by device and user. This will highlight any non-compliant behaviour or ‘gaps’ in controls so that they can be identified and addressed, and allow audit trails to be created to support the demonstration of compliance.
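As mentioned in the Monitoring point above, print and scan activity can be forwarded to a SIEM. The sketch below is a minimal, hypothetical Python example using syslog; the host, field names and event format are assumptions, not any particular vendor’s integration.

```python
# Hypothetical sketch of forwarding print and scan events to a SIEM over
# syslog so device and user activity can be correlated and audited.
import logging
import logging.handlers

siem = logging.getLogger("print-audit")
siem.setLevel(logging.INFO)
# In practice the address would be the SIEM collector, not localhost.
siem.addHandler(logging.handlers.SysLogHandler(address=("localhost", 514)))

def log_print_event(user: str, device: str, action: str, pages: int) -> None:
    # One line per job; the SIEM side parses the key=value pairs.
    siem.info("action=%s user=%s device=%s pages=%d", action, user, device, pages)

log_print_event("jsmith", "MFP-3rd-floor", "print", 12)
log_print_event("jsmith", "MFP-3rd-floor", "scan-to-email", 3)
```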

Conclusion

GDPR is a reminder that organisations should proactively assess their security position. Organisations must move quickly to understand the legislation and put appropriate measures in place. Ultimately print security is part of a broader GDPR compliance exercise, and it is vital that organisations act now to evaluate the security of their print infrastructure.

For more information on the steps that should be taken to protect the print environment in light of the GDPR, please contact Louella.Fernandes@quocirca.com

Further reading:

1 http://quocirca.com/content/print-security-imperative-iot-era

https://www.computerweekly.com/blog/Quocirca-Insights/Quocircas-whistle-stop-tour-of-the-GDPR


May 7, 2017  9:02 AM

Email security retrospection

Rob Bamforth Profile: Rob Bamforth

Keeping anything safe and secure involves multiple considerations. Avoid putting yourself in danger, put up a protective ‘shield’, detect when that is compromised, take mitigating action.

When it comes to communication and the modern hyperconnected world based on open protocols, it’s much harder to avoid danger. Hence shields and detection are the significant part of the online security proposition.

Mitigation is another matter. While some of it can be accomplished financially or through after-the-event actions such as insurance claims, smarter or more holistic use of technology can also offer useful and timely support, reducing the negative impact of incidents.


Securing email

Consider email. From a business perspective, many will feel it is a double-edged sword – can’t live with it, can’t live without it. The general expectation with email is that it should work, be used sensibly to deliver important messages which arrive promptly and be addressed to the correct people! Emails should be uncluttered by lots of surrounding dross, especially malware.

The reality is of course all too often the exact opposite. While the internet might have robust survivability built into its core packet-switching protocols, some of the applications layered over it can be flimsy at times. Email systems need enhancing, at the very least with anti-spam and anti-malware shields and data leak prevention. The major security players have built up considerable expertise in this sector, including Trend Micro, Sophos, McAfee and Symantec.

However, there is no such thing as 100% protection. Given email’s significance and business dependence on it, many could benefit from going further with additional layers of defence. This does not mean a protectionist approach of ‘closing the door’, but better monitoring of communication flow, and critically, an ability to learn, adapt and follow up in real time.

Recognised malicious software should be detected by antivirus scanners and stopped on entry to an organisation, well before delivery to its final destination inbox. But this is a moving feast: new malware appears before scanners are updated, meaning some malicious traffic will already have passed through the secure perimeter.

Email retrospection

This is where solutions that take a retrospective view, such as Retarus’ Patient Zero Detection, can add value to the internal flow of email within an organisation. In addition to perimeter protection, it uses an innovative follow-up approach based on post-delivery identification of attacks. This takes advantage of the time lag between an email entering an organisation’s email system and then being accessed. It applies a trace to all incoming messages so that this can be done rapidly, without resorting to searching all inboxes.

Imagine the scenario: an email carrying a new virus that is not picked up by the barrier protection passes into the organisation and through to inboxes without being noticed. Normally, when it is accessed by one recipient or another, the problem will manifest itself… or perhaps not, but the damage is done.

With the Patient Zero Detection approach, each incoming email is given a unique digital fingerprint/hash code so that it can be traced. If an email containing a new virus gets through, but the virus is subsequently identified, recipients can be identified retrospectively. Administrators (and if required, recipients) can be immediately warned and appropriate action taken to prevent further impact.
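This is not Retarus’ implementation, but a minimal Python sketch of the fingerprint-and-trace idea: hash each delivered message, record its recipients, and when a fingerprint is later flagged as malicious, identify the affected recipients without searching every inbox.

```python
# Sketch of fingerprint-and-trace: hash each delivered message, record who
# received it, and look up recipients retrospectively once a fingerprint is
# later identified as malicious.
import hashlib
from collections import defaultdict

delivery_index = defaultdict(set)     # fingerprint -> recipients

def record_delivery(raw_message: bytes, recipient: str) -> str:
    fingerprint = hashlib.sha256(raw_message).hexdigest()
    delivery_index[fingerprint].add(recipient)
    return fingerprint

def recipients_of(fingerprint: str) -> set:
    """Called retrospectively once a fingerprint is flagged as malicious."""
    return delivery_index.get(fingerprint, set())

fp = record_delivery(b"invoice attachment bytes ...", "alice@example.com")
record_delivery(b"invoice attachment bytes ...", "bob@example.com")
print(recipients_of(fp))   # -> {'alice@example.com', 'bob@example.com'}
```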

This approach not only supports rapid and targeted responses to potential problems, it also allows for more forensic exploration of the flow of messages – both safe ones and those containing malware. It is not foolproof, and needs to be part of a wider email security strategy. But it does provide an additional layer of support in avoiding the worst possible impacts of malware and user behaviour.

Security strategy – a portfolio of tools and educated users

It is often said that security is everybody’s responsibility. This is true, but everybody also needs as much automated support as possible. This means not only looking at protection and detection, but also at remedial action. Tackling all of email security’s challenges requires a range of tools and an approach that addresses every stage of the communications flow. That should help ensure that one side of the email ‘sword’ is sharper than the other!

