Quocirca Insights


March 22, 2016  10:14 AM

Repelling targeted attacks in the cloud

Bob Tarzey Profile: Bob Tarzey
Cloud Computing, Trend Micro

In a previous blog post, ‘The rise and rise of public cloud services‘, Quocirca pointed out that the crowds heading for Cloud Security Expo in April 2016 should be more enthusiastic than ever given the growing use of cloud-based services. The blog looked at the measures organisations can take to enable the use of cloud services whilst ensuring their data is reasonably protected; knowing what users are up to in the cloud rather than just saying no.

However, there is another side to the cloud coin. For many businesses, adopting cloud services will actually be a way of ensuring better protection of data, for example from the growing number of targeted cyber-attacks. A recent Quocirca research report, ‘The trouble at your door‘, sponsored by Trend Micro, shows that the greatest concern about such attacks is that they will be used by cybercriminals to steal personal data.

The scale of the problem is certainly worrying. Of the 600 European businesses surveyed, 62% knew they had been targeted (many of the others were unsure) and for 42%, at least one recent attack had been successful. One in five had lost data as a result; for one in ten it was a lot of data, or a devastating amount. One in six said a targeted attack had caused reputational damage to their business.

So how can cloud services reduce the risk? For a start, the greatest concern regarding how IT infrastructure might be attacked is the exploitation of software vulnerabilities. End user organisations and cloud service providers alike face the flaws that inevitably arise from the software development process. The difference is that many businesses are poor at identifying and tardy in fixing such vulnerabilities, whilst for cloud service providers, their raison d’être means they must have rigorous processes in place for scanning and patching their infrastructure.

Second, when it comes to regulated data, cloud service providers make sure they are able to tick all the check boxes and more. After personal data, the data type of greatest concern is payment card data – a very specific type of personal data. Many cloud service providers will already have implemented the relevant controls for the PCI-DSS standard that must be adhered to when storing payment card data (or of course you could simply outsource collections to a cloud-based payment services provider). They will also adhere to other data security standards such as ISO27001. Cloud service providers cannot afford to claim adherence and then fall short.

If infrastructure security and regulatory compliance are not enough, think of the physical security that surrounds the cloud service providers’ data centres. And of course, it goes beyond security to resilience and availability through backup power supplies and multiple data connections.

No organisation can fully outsource the responsibility for caring for its data, but most can do a lot to make sure it is better protected and for many a move to a cloud service provider will be a step in the right direction. Quocirca has often posed the question, “think of a data theft that has happened because an organisation was using cloud-based rather than on-premise infrastructure“: no examples have been forthcoming. Sure, data has been stolen from cloud data stores and cloud deployed applications, but these are usually the fault of the customer, for example a compromised identity or faulty application software deployed on to a more robust cloud platform.

Targeted cyber-attacks are not going to go away; in fact, all the evidence suggests they will continue to increase in number and sophistication. The good news is that cybercriminals will seek out the most vulnerable targets, and if your infrastructure proves too hard to penetrate they will move on to the next target. A cloud service provider may give your organisation the edge that ensures this is the case.

March 21, 2016  4:10 PM

What can be done with that old data centre?

Clive Longbottom Profile: Clive Longbottom
Uncategorized

Data centres used to be built with the knowledge that they could, with a degree of reworking, be used for 25 years or more. Now, it is a brave person who would hazard a guess as to how long a brand new data centre would be fit for purpose without an extensive refit.

Why? Power densities have been rapidly increasing, requiring different distribution models. Cooling has changed from the standardised computer room air conditioning (CRAC) model to a range of approaches including free air, swamp, Kyoto Wheel and hot running, while also moving from full volume cooling to highly targeted contained rows or racks.

The basic approach to IT has changed too – physical, one-application-per-box has been superseded by the more abstract virtualisation. This in turn is being superseded by private clouds, often interoperating with public infrastructure and platform as a service (I/PaaS) systems, which are also continuously challenged by software as a service (SaaS).

Even at the business level, the demands have changed.  The economic meltdown in 2008 led to most organisations realising that many of their business processes were far too static and slow to change.  Businesses are therefore placing more pressure on IT teams to ensure that the IT platform can respond to provide support for more flexible processes and, indeed, to provide what individual employees are now used to in their consumer world – continuous delivery of incremental functional improvements.

What does this mean for the data centre, then?  Quocirca believes that it would be a very brave or foolish (no – just a foolish) organisation that embarked on building itself a new general data centre now.

Organisations must start to prioritise their workloads and plan as to when and how these are renewed, replaced or relocated.  If a workload is to be renewed, is it better replaced with SaaS, or relocated onto I/PaaS?  If it is supporting the business to the right extent, would it be better placed in a private cloud in a colocation facility, or hived off to I/PaaS?

Each approach has its merits – and its problems.  What is clear is that the problem will continue to be a dynamic one, and that organisations must plan for continuous change.

Tools will be required to intelligently monitor workloads and move them and their data to the right part of the overall platform as necessary. This ‘necessary’ may be defined by price, performance and/or availability – but has to be automated as much as possible so as to provide the right levels of support to the business.

Therefore, the tools chosen must be able to deal with future predictions – when is it likely that a workload will run out of resources; what will be the best way to avoid such issues; what impact could this have on users?
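To make the idea concrete, here is a minimal sketch (illustrative only, with made-up utilisation figures and a hypothetical 90% ceiling) of the sort of trend-based forecast such a tool might make: fit a straight line to recent utilisation samples and estimate how long before a workload hits its capacity limit.

```python
# Minimal sketch: linear-trend forecast of when a workload exhausts a resource.
# The utilisation samples and the 90% ceiling are hypothetical illustration values.

def days_until_exhaustion(samples, ceiling=90.0):
    """samples: utilisation (%) measured once per day; returns days until the
    fitted trend crosses the ceiling, or None if usage is flat or falling."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    # Least-squares slope and intercept of utilisation over time.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None                            # no growth trend, no predicted exhaustion
    crossing = (ceiling - intercept) / slope   # day index at which the ceiling is hit
    return max(0.0, crossing - (n - 1))        # days from the latest sample

cpu_history = [52, 55, 57, 61, 63, 66, 70]     # last seven daily averages (%)
print(days_until_exhaustion(cpu_history))      # roughly seven days at this rate
```

A real tool would of course use far richer models and act on the prediction (rebalancing or relocating the workload) rather than just reporting it.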

These tools need to be able to move things rapidly and seamlessly – this will require use of application containers and advanced data management systems.  End-to-end performance monitoring will also be key, along with root cause identification, as the finger pointing of different people across the extended platform has to be avoided at all costs.

If it becomes apparent that the data centre that you own is changing massively, what can you do with the facility?  Downsizing is an option – but can be costly.  A smaller data centre could leave you with space that could be repurposed for office or other business usage – but this only works if the conversion can be carried out effectively.  New walls will be required that run from real floor to real ceiling – otherwise you could end up trying to cool down office workers while trying to keep the IT equipment cool at the same time.

Overall security needs to be fully maintained – is converting a part of the data centre to general office space a physical security issue?  It may make sense to turn it into space for the IT department – or it may just not be economical.

A data centre facility is constructed to do one job: support the IT systems.  If it finds itself with a much smaller amount of IT to deal with, you could find that replacing UPS, auxiliary power and cooling systems is just too expensive.  In this case, colocation makes much better sense – which leaves you with the nuclear option – an empty data centre that needs repurposing.

Repurposing a data centre is probably a good business decision.  It could be cost-effectively converted into office space – unlike where only part of it is converted, a full conversion can avoid many of the pitfalls of trying to run a data centre and an office in the same facility.  If all else fails, that data centre is valuable real estate.  If the business cannot make direct use of it, a decommissioned data centre could be a suitable addition to the organisation’s bottom line through selling it off. 

In April, DataCentreWorld will be held at the Excel Centre in London, where there will be much to discuss around the future of the data centre itself.  Registration for the event can be found here.


March 18, 2016  10:09 AM

The ‘software defined mainframe’ – smoke and mirrors, or reality?

Clive Longbottom Profile: Clive Longbottom
Uncategorized

It is a truth universally acknowledged that the mainframe’s future stopped in 1981 when the PC was invented.  The trouble was that no-one told IBM, nor did they tell those who continued to buy mainframes or mainframe applications and continued to employ coders to keep putting things on the platform.


This ‘dead’ platform has continued to grow, and IBM has put a lot of money into continuing to modernise the platform through adding ‘Specialty Engines’ (zIIPs and zAAPs), as well as porting Linux onto it.

However, there are many reasons why users would want to move workloads from the mainframe onto an alternative platform.

For some, it is purely a matter of freeing up MIPS to enable the mainframe to better serve the needs of workloads that they would prefer to keep on the mainframe.  For others, it is to move the workload from what is seen as a high-cost platform to a cheaper, more commoditised one.  For others, it is a case of wanting to gain the easier horizontal scaling models of a distributed platform.

Whatever the reason, there have been problems in moving the workloads over.  Bought applications tend to be very platform specific.  The mainframe is the bastion of hand-built code, and quite a lot of this has been running on the platform for 10 years or more.  In many cases, the original code has been lost, or the code has been modified and no change logs exist.  In other cases, the code will have been recompiled to make the most out of new compiler engines – but only the old compiler logs have been kept. 

Porting code from the mainframe to an alternative platform is fraught with danger. As an example, let’s consider a highly regulated, mainframe-heavy user vertical such as financial services. Re-writing an application from COBOL to, say, C# is bad enough – but the amount of regression testing needed to ensure that the new application does exactly what the old one did makes the task uneconomical.

What if the application could be taken and converted in a bit-for-bit approach, so that the existing business logic and the technical manner in which it runs could be captured and moved onto a new platform?

This is where a company just coming out of stealth mode is aiming to help. LzLabs has developed a platform that can take an existing mainframe application and create a ‘container’ that can then be run in a distributed environment. It still does exactly what the mainframe application did: it does not change the way the data is stored or accessed (EBCDIC data remains EBCDIC, for example).
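To see why leaving the data encoding alone matters, consider the difference at the byte level between EBCDIC and ASCII. The snippet below is purely illustrative (it is not LzLabs code) and uses Python’s built-in cp037 EBCDIC codec; a bit-for-bit move sidesteps this conversion entirely, along with the subtle sort-order and comparison bugs it can introduce.

```python
# Illustration only: the same record encoded as EBCDIC (IBM code page 037) and ASCII.
# A bit-for-bit migration keeps the EBCDIC bytes untouched, so sort orders,
# packed fields and comparisons behave exactly as they did on the mainframe.
record = "CUSTOMER 0042"

ebcdic_bytes = record.encode("cp037")   # EBCDIC, as stored on the mainframe
ascii_bytes = record.encode("ascii")    # what a naive conversion would produce

print(ebcdic_bytes.hex())  # c3e4e2e3d6d4c5d940f0f0f4f2
print(ascii_bytes.hex())   # 435553544f4d45522030303432

# Note the different byte values: 'C' is 0xC3 in EBCDIC but 0x43 in ASCII,
# and digits sort *after* letters in EBCDIC, the opposite of ASCII.
```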

It has a small proof-of-concept box that it can make available to those running mainframe apps, on which they can see how it works and try out some of their own applications. This box, based on an Intel NUC running an i7 CPU, is smaller than a hardback book, but can run workloads as if it were a reasonably sized mainframe. It is not aimed at being a mainframe replacement itself, obviously, but it provides a great experimentation and demonstration platform.

On top of just being able to move the workload from the mainframe to a distributed platform, those who choose to engage with LzLabs will then gain a higher degree of future-proofing.  Not because the mainframe is dying (it isn’t), but because they will be able to ride the wave of Moore’s Law and the growth in distributed computing power, whereas the improvements in mainframe power have tended to be a bit more linear.

The overall approach, which LzLabs has chosen to call the ‘software defined mainframe’ (SDM) makes a great deal of sense.  For those who are looking to optimise their usage of the mainframe through to those looking to move off the mainframe completely, a chat with LzLabs could be well worth it.

Of course, life will not be easy for LzLabs.  It has to rapidly prove that its product not only manages to work, but that it works every time; that it is unbreakable; that its performance is at least as good as the platform users will be moving away from.  It will come up against the fierce loyalty of mainframe idealists.  It will come up, at some point, against the might of IBM itself.  It needs to be able to position its product and services in terms that both the business and IT can understand and appreciate.  It needs to find the right channels to work with to ensure that it can hit what is a relatively contained and known market at the right time with the right messages.

The devil is in the detail, but this SDM approach looks good.  It will not be a mainframe killer – but it will allow mainframe users to be more flexible in how they utilise the platform.


March 17, 2016  9:00 AM

A managed services model for the collaborative workplace

Louella Fernandes Profile: Louella Fernandes
Uncategorized

Embracing the digital workplace

An information management service (IMS) is emerging as a key approach to enabling digital transformation. Providers are building on their traditional print heritage and evolving their offerings beyond the traditional managed print service (MPS). IMS offers a comprehensive range of services and solutions to manage enterprise-wide information – both paper and digital – to drive improved productivity, lower costs and better employee engagement.

Digital collaboration technology has revolutionised how we communicate and live our lives. Mobility has eliminated the boundaries of location and time. Now it is easy to communicate, collaborate and share information rapidly with others, no matter their location or time zone. As consumer technology permeates the workplace, employees expect to engage in new and higher levels of collaboration and information sharing at work. This is pushing organisations to accelerate digital transformation and gain better control and management of information throughout the organisation. As such, more organisations are increasing investment in digital workplace tools – such as connectivity solutions, collaboration and information management – to build their digital future.

The information management challenge

The digital workplace must enable effective information collaboration amongst employees, partners and customers. Information is the lifeblood of any organisation, whether it is email, documents, web sites, transactions or other forms of knowledge, and it continues to grow. Yet although many organisations are building an information-sharing workplace, all too often information resides in inaccessible silos, making it hard to access, share or collaborate on. In many cases, information still resides in paper format, which hampers productivity and gets in the way of effective collaboration and faster decision making.

Ensuring that employees, customers and partners have access to information at the right time, in the right place and from the right device requires a holistic approach to information capture, distribution and output. This means the use of digital workflow tools – such as document capture (for instance through a multifunction device (MFD)), video conferencing, digital documents, cloud sharing portals and multichannel communications. Certainly, those using such solutions are reaping the benefits. A forthcoming Quocirca study on digitisation trends reveals that organisations that have already adopted digital workplace tools are reporting multiple business benefits. These include increased productivity, faster business processes, improved customer satisfaction, faster decision making and increased profit – far beyond those that have yet to invest in such tools.

A new approach to information management

An information management service (IMS) provides a compelling solution for businesses that want to use a single strategic partner for areas such as information capture, collaboration and output management, with all the benefits of a traditional managed print services (MPS) model. Typically it comprises the following three key elements in order to take a broad view of the information lifecycle across an organisation.

  • Phase 1: Information capture. Information may be captured at the point of origin through multifunction printers (MFPs), business scanners or mobile devices. For instance, paper invoices or expense receipts can be scanned and routed directly to an accounts application through an MFP interface panel. 
  • Phase 2: Information management and collaboration. This may include solutions for cloud document management and sharing portals (for instance such as Dropbox, Box, OneDrive or a privately hosted platform). This enables employees to access, share and collaborate on content, while maintaining high levels of control, privacy and protection. Most advanced MFPs offer this capability direct from the user panel.  Better collaboration can also be achieved through more effective visual collaboration – such as interactive meeting rooms which employ interactive displays where information can be shared and annotated on the screen.
  • Phase 3: Information output. The need to create and deliver content that is personal, relevant and timely is paramount in today’s fast-moving digital era. Through the use of advanced customer communications management solutions, content can be created and distributed by an organisation in a multitude of formats. This ensures customers receive personalised content to their channel of choice – be it mobile, email or even paper formats. Meanwhile digital signage is becoming more prevalent in businesses of all sizes as a lower cost and dynamic alternative to static printed communications. In addition to traditional functions like meeting room presentation, businesses are using digital displays for signage such as conference room identification and scheduling, dynamic branding and information display and communications.  

Conclusion

An integrated managed services approach enables organisations to take a broader approach to integrating and managing paper and digital information.  Quocirca research has revealed that those who have moved beyond MPS and implemented information workflow tools are most confident in their overall information management strategy. 

Organisations should capitalise on this significant opportunity to improve productivity, customer satisfaction and employee engagement. Harnessing the emerging digital tools for collaboration – such as cloud document capture and sharing, and interactive visual communications – is fundamental to the success of digital transformation.

Read Quocirca’s report on The Collaborative and Connected Workplace


March 9, 2016  5:35 PM

Akamai takes on Distil Networks in bot control

Bob Tarzey Profile: Bob Tarzey

In April 2015, Quocirca wrote about the problem of bad bots (web robots) that are causing more and more trouble for IT security teams (The rise and rise of bad bots – part 2 – beyond web-scraping). Bad bots are used for all sorts of activities including brute force login attempts, online ad fraud (creating false clicks), co-ordinating man-in-the-middle attacks, scanning for IT vulnerabilities that attackers can exploit and, clustered as bot-nets, perpetrating denial of service attacks.

Blocking such activity is desirable, but not if it also blocks good bots, such as web crawlers (that keep search engines up to date), web scrapers that populate price comparison and content aggregation web sites and the bots of legitimate vulnerability scanning and pen testing services from vendors such as Qualys, Rapid7 and WhiteHat.

Distil Networks has emerged as a thought leader in this space with appliances and web services to identify bot-like activity and add bots to black lists (blocked) or white lists (allowed). The service also recognises that whether a bot is good or bad may depend on the target organisation: some news sites may welcome aggregator bots, others may not, and policy can be set accordingly.
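A heavily simplified sketch of the white list/black list idea, with per-site overrides, is shown below. It is not Distil’s implementation – the signatures, site names and actions are invented, and real services fingerprint far more than a user-agent string – but it illustrates how the same aggregator bot can be welcome on one site and blocked on another.

```python
# Minimal sketch of per-site bot policy: default white/black lists plus overrides.
# Signatures and site names are hypothetical; real services use behaviour,
# IP reputation, JavaScript challenges and more, not just the user-agent.

DEFAULT_POLICY = {
    "googlebot": "allow",       # search engine crawler
    "aggregatorbot": "allow",   # hypothetical comparison-site scraper
    "stuffbot": "block",        # hypothetical credential-stuffing tool
}

SITE_OVERRIDES = {
    "news.example.com": {"aggregatorbot": "block"},  # this site refuses aggregators
}

def decide(site: str, user_agent: str) -> str:
    """Return 'allow', 'block' or 'challenge' for a request claiming this user-agent."""
    ua = user_agent.lower()
    policy = {**DEFAULT_POLICY, **SITE_OVERRIDES.get(site, {})}
    for signature, action in policy.items():
        if signature in ua:
            return action
    return "challenge"   # unknown automation gets a CAPTCHA or JS challenge

print(decide("news.example.com", "AggregatorBot/2.1"))  # block (site override)
print(decide("shop.example.com", "AggregatorBot/2.1"))  # allow (default policy)
```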

As of February 2016 Distil has a formidable new competitor. The web content distribution and security vendor Akamai has released a new service called Bot Manager, which is linked to Akamai’s Client Reputation Service (released in 2015) that helps to detect bots and assess their behaviour in real-time.

Akamai accepts that it aims to capitalise on the market opened up by Distil and others. Akamai will of course hope to make fast progress into the bot protection market through the loyalty of its large customer base, many of whom will see the benefit of adding closely linked bot protection to other Akamai services including its Prolexic DDoS mitigation and Kona web site protection.

Akamai says it has already identified 1,300 good bots. It was also keen to point out that it believes it has taken responding to bots to a new level. This includes:

  • Silent denial where a bot (and its owner) does not know it has been blocked
  • Serving alternate content (such as sending competitors false pricing information)
  • Limiting the activity of good bots to certain times to limit impact on performance for real users
  • Prioritising good bots for different partners
  • Slowing down aggressive bots (be they good or bad)

The control of Bot Manager, and how it responds, is down to individual customers, which can take action on different groups of bots based on either Akamai’s or their own classification. They can take this to extremes; for example, if your organisation wanted to stop its content being searchable by Google, you could block its web crawler.
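As a rough illustration (and emphatically not Akamai’s API), a response policy of the kind described in the list above might be expressed as a simple mapping from bot category to action; every name, category and rate limit here is hypothetical.

```python
# Hypothetical sketch of a per-category bot response policy of the kind described
# above. Categories, actions and limits are illustrative only, not Akamai's API.

RESPONSES = {
    "bad-known":    {"action": "silent-deny"},                     # bot never learns it was blocked
    "bad-scraper":  {"action": "serve-alternate-content"},         # e.g. decoy pricing pages
    "good-partner": {"action": "allow", "priority": "high"},
    "good-crawler": {"action": "allow", "window": "00:00-06:00"},  # off-peak only
    "aggressive":   {"action": "throttle", "requests_per_min": 30},
}

def response_for(category: str) -> dict:
    # Unclassified automation is slowed down rather than blocked outright.
    return RESPONSES.get(category, {"action": "throttle", "requests_per_min": 10})

print(response_for("bad-scraper"))   # {'action': 'serve-alternate-content'}
print(response_for("unknown-bot"))   # {'action': 'throttle', 'requests_per_min': 10}
```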

Distil and Akamai do not have the market to themselves; other bot protection products and services include Shape Security’s Botwall and the anti-web scraping service ShieldSquare. Bot blocking capabilities are also built into Imperva’s Incapsula DDoS services and F5’s Application Security Manager. Bad bots and good bots alike are going to have to work harder and harder to get the access they need to carry out their work.


March 2, 2016  10:19 AM

How’s the innovation of your digital transformation going?

Clive Longbottom Profile: Clive Longbottom
Uncategorized

I want to take a holistic look at a current paradigm shift, running a couple of ideas up your flag pole to see if you salute them.  I may be pushing the envelope, but I trust that moving you out of your comfort zone will be seen as empowering.

Hopefully, you cringed at least twice during that introduction. The world is full of meaningless clichés, and there are two more that seem to be permeating the IT world that I would like to consign to Room 101.

Firstly, in 2003, IBM carried out some research that showed that CEOs were heavily into ‘innovation’.  The term was not defined to the survey respondents, and my belief was (and still is) that the use of the term is like asking someone “Do you believe in good?”  Who is going to say “No”?

To define innovation, we need to go all the way back to its Latin roots.  It comes from ‘in novare’ – to make new.  Innovation is all about making changes, finding a different way to do something that you are already doing.

So – how’s it going in your organisation? Over the past week, how many new ways of buying paperclips have you found? How many new ways of writing code and patching systems, of purchasing applications and of paying your staff have you introduced?

If you are eternally innovating, then your organisation will be in eternal change – a chaotic set up that is unsustainable and unsurvivable.

There are three ways that an organisation can change: improvement, innovation and invention. Improvement is doing what the organisation already does with less cost, less risk and more productivity. Innovation introduces the means to do something in a different way (which may mean incremental extra costs while the new way of doing things beds in). And then there is the biggie – invention: bringing in a product or service that the organisation has never offered before.

Balancing these three areas is what is key – for many organisations, a focus on improvement may have far greater payback than trying to be truly innovative.  For organisations in verticals such as pharmaceuticals, invention is far more important than improvement or innovation: the race to the 25 or so new molecular or chemical entities (NME/NCEs) cannot be won through just innovating existing processes.

Further, throwing technology at all of this is not the way to do it either.  And this brings me to my next term that needs deep investigation – ‘digital transformation’.  Reading the technology media and much analyst output would have you believe that your organisation will die next week if it isn’t going through some form of digital transformation.

However, what does this mean? Replacing one technical platform with another – such as a move from client server to cloud? Moving from on-premise applications to software as a service (SaaS)? Sticking two fingers up at your boss as they ask you to implement a strategic digital transformation project?

As is often the case, this terminology is an attempt to place technology at the centre of the organisation. A determination not to let IT be relegated to a position of a facilitator to the business is self-serving and can actually be harmful to the business itself.

Back in the days when customer relationship management (CRM) and enterprise resource planning (ERP) were all the rage, the number of companies that I saw who pretty much stated that they had ‘done’ CRM as they had bought Siebel, or ‘done’ ERP as they had bought SAP was frightening.

No strategic changes in business processes had been planned for or implemented – many of the companies just implemented the software and then changed their business processes to meet the way that the software worked.  Unsurprisingly, they then wondered why they struggled.

It is important to remember what technology is there for – it is there simply to enable an organisation to better carry out its business. The secret to a successful organisation is not in the technology it chooses and implements – it is in how well it chooses and implements technology to support its changing needs; in how the business can flexibly modify or replace its processes to meet market forces and so be successful in what it does. If it can do this with an abacus, baked bean cans and pieces of wet string, then so be it – you do not need to be seen as the uber-nerd who introduced a scale-out supercomputer with a multi-lambda optical global network.

So, fine – if you want to tick off two terms on your cliché bingo card, then make sure that you are innovative in your digital transformation. Just make sure that you sit down with the business and understand what it needs in that balance of improvement, innovation and invention, and provide it with the IT platform that enables that to happen for as long as possible.

Just think – providing the organisation with a technical platform that actually supports it: one that is flexible enough to embrace the future and enable rapid change.  That would be truly innovative.


March 1, 2016  6:55 AM

There’s money in improving WAN connectivity to the likes of AWS, Azure and Google

Bernt Ostergaard Profile: Bernt Ostergaard
Uncategorized

WAN capacity pains are hot

The global WAN infrastructure capacity debate will be a feature of the upcoming Cloud Expo Europe event in April (www.cloudexpoeurope.com). And hopefully the debate will also explore the financial advantages to be gained by infrastructure providers from closer cooperation with cloud service providers.

This year’s Mobile World Congress (MWC) also highlighted many of the network capacity challenges – mostly from the carrier perspective. This led Mark Zuckerberg to remark that he may need to use laser transmissions from light aircraft and some 10,000 old school hotspots to open up access to Facebook in India. Both parties, infrastructure and content providers, agree that:

  • 2016 will see fast take-up of cloud services and a shift from private cloud to hybrid managed and public cloud computing; enterprises will massively shift from private-cloud-only deployments to hybrid private-public cloud solutions.
  • There will be fewer and bigger cloud SPs. The mega-players like AWS, Google and Azure, with global infrastructures, will take a bigger slice of the market.
  • We can expect 30% annual growth in Internet traffic volumes – more so on the mobile side.
  • Commoditisation, standardisation and virtualisation spell a richer communication environment to accommodate the many varieties of cloud services.

‘If it computes, it connects – more every day’

To keep abreast of cloud computing – where scalable and elastic IT-enabled capabilities are delivered “as a service” using Internet technologies – telco infrastructure providers must co-operate more and share resources with cloud service providers.

They share a common interest in developing open, modular and expandable network architectures to keep up with customer demands for more bandwidth and ubiquitous network access. However, the carrier infrastructure providers’ litany of pains is well known:

  • The net neutrality regulations imposed on them invalidate attempts to charge specifically for additional bandwidth or higher quality connections across the Internet.
  • The huge growth in video communication across the Internet has strained their infrastructure everywhere, and the investments in last mile broadband connections must rely on customers buying quad service bundles.
  • The emergence of software-defined WAN routing, which provides channel bonding across fixed line, cellular and Internet connections, reduces the demand for enterprise Quality of Service products like MPLS.

A New Deal between carriers and clouds

The way forward requires a rethink in the infrastructure provider community. Carriers must understand what the cloud providers are trying to deliver, and help them get there. The storage and computing revenues that many cloud providers rely on require secure and responsive WANs, and that means faster roll-out of high-speed infrastructure – not just on the heavy traffic routes, but also widespread, high-speed 3G and 4G LTE mobile access. Many of these global cloud SPs are willing to invest in the design and roll-out of extended WAN networks.

Improving core WAN performance

Improving Internet transport capabilities and security comes up against the fundamental Internet Border Gateway Protocol (BGP), which is running out of steam. Fast-expanding network traffic volumes mean longer maintenance windows are required to update routing tables. Errors in this updating process may lead to route hijacking, where a hacker uses BGP to announce illegitimate routes directing users to bogus web pages. BGP was never designed with security in mind. BGP routing table overflow has caused sudden outages in services such as eBay, Facebook, LinkedIn and Comcast. Software-defined networking (SDN) will certainly alleviate some of these issues when SDN controllers can read and revalidate routing policies as fast as changes happen and then configure routers in real time.
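A toy example of the sort of anomaly detection involved is sketched below; the prefixes and AS numbers are invented and real BGP monitoring is far more involved, but the principle is to compare observed announcements with the origin AS expected for each prefix and flag mismatches of the kind seen in route hijacks.

```python
# Toy illustration of flagging suspicious BGP announcements: compare the origin AS
# seen in an announcement with the AS expected to originate that prefix.
# Prefixes and AS numbers below are invented for the example.

EXPECTED_ORIGINS = {
    "203.0.113.0/24": 64500,    # documentation prefix, hypothetical owner AS
    "198.51.100.0/24": 64501,
}

announcements = [
    {"prefix": "203.0.113.0/24", "origin_as": 64500},   # legitimate
    {"prefix": "198.51.100.0/24", "origin_as": 64999},  # possible hijack
]

for ann in announcements:
    expected = EXPECTED_ORIGINS.get(ann["prefix"])
    if expected is not None and expected != ann["origin_as"]:
        print(f"ALERT: {ann['prefix']} announced by AS{ann['origin_as']}, "
              f"expected AS{expected}")
```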

Monetising traffic analytics

Better real-time network management will not only improve the cloud experience; it may also improve infrastructure economics. The ability to use big data analytics to identify anomalies in BGP routing behaviour will also allow infrastructure operators to monetise the user traffic behaviour they log but seldom use. Thousands of analytics companies have sprung up around the globe analysing users’ web site behaviour. However, digital service providers are also very interested in understanding and adjusting their access network performance in real time. Faster SDN infrastructure build-outs will improve traffic flows and cloud performance. That could in turn become a win-win for carriers and cloud providers.


February 26, 2016  12:32 PM

Simplifying meeting room management – the AV/IT challenge

Rob Bamforth Profile: Rob Bamforth
Collaboration, Intel, Microsoft, Miracast, television, WiDi

Can’t get the projector or video conferencing working? Cables, adaptors or remotes missing? It is a common problem and it might seem like these issues get in the way of progress in most meetings, but spare a thought for those having to manage this environment. Being the over-stretched technical expert called in just to find a cable or click a button for those unwilling or unable to follow an instruction sheet (which was probably tidied away by the previous night’s cleaners…) can be a bit of a thankless task.

Things were simpler a decade or two ago, when audio-visual (AV) meant finding a spare bulb for the overhead projector or an acetate sheet for the printer – and it was straightforward to ask someone designated as office manager. Then along came laptops and cheap, reliable projectors, and pretty much anyone who needed to could press a key to switch to an external display.

Today, both AV and IT have progressed enormously and, along the way, things have become much more complicated.

Low cost flat panel displays can be placed pretty much anywhere that workers might congregate, and almost anybody can now have a device they would like to present from – no longer just laptops, but also tablets and smartphones. The options for cables and connectors have therefore proliferated – chances are adaptors will be forgotten, mislaid or lost in a vacuum cleaner somewhere.

Smart wireless connectivity would be a great answer, and suppliers of professional AV systems have come up with several options, but with too much variety, and purchasing decisions often made in facilities or workplace resources departments, consistency and ease of manageability (especially remote manageability) are often missing.

Many rooms and AV systems already are, or will have to be, connected to the network. Screen sharing, remote participants, unified communications and video conferencing tools are becoming more widely deployed as organisations seek the holy grail of productive collaboration, and individuals are increasingly accepting of being on camera and sharing data with colleagues. However, here again there are many options, and users quickly lose confidence after a bad experience.

Keeping meeting room technology under control and working effectively is increasingly a complex IT task, with a bit of asset management thrown in, but few organisations would be looking to put in more people just to support it, despite user frustrations with un-integrated or unusable expensive screens and equipment.

One answer might come from the approach Intel has taken with its Unite collaboration technology. The software can be incorporated by hardware companies into an AV ‘hub’ which allows simple and secured connection from either in-room or remote participants – wirelessly or over the network, so AV cables and adaptors should be a thing of the past.

Helping participants make meetings more productive is one thing, but because the hardware used in the Unite hub has to be based on Intel’s vPro range of processors, remote management and security is built in from the start. Whilst this level of performance might seem like overkill in what is on the face of it an AV collaboration hub, it does open up some very interesting opportunities.

Unite is already capable of supporting sharing and collaboration in meetings, but Intel has made the platform flexible so that its functionality can be extended. This means further integration of IT and AV capabilities, offering a more unified communication and meeting room experience by combining with conferencing systems and incorporating room booking or other facilities management needs.

A powerful in-room hub also offers the opportunity to extend to incorporate practical applications of the Internet of Things (IoT). These could include managing in-room controls – lighting, environmental controls, blackout blinds etc. – but also tagging and tracking valuable assets like projection systems, screens and cameras or ones easily lost such as remote controls. This might be useful for security, but also could be used to check temperature, system health etc. for proactive maintenance monitoring.
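As a simple illustration of the proactive maintenance idea (hypothetical code, not Intel Unite functionality), a hub could periodically poll its tracked devices and flag any running outside their expected temperature range; the device list, readings and thresholds below are all invented.

```python
# Hypothetical illustration of proactive meeting-room asset monitoring:
# poll each tracked device for a temperature reading and flag outliers.
# Device names, readings and thresholds are invented for the example.

ROOM_ASSETS = {
    "projector-4F-01": {"max_temp_c": 70},
    "display-4F-02":   {"max_temp_c": 55},
}

def read_temperature(device_id: str) -> float:
    # Placeholder for a real sensor or management-agent query.
    fake_readings = {"projector-4F-01": 78.5, "display-4F-02": 41.0}
    return fake_readings[device_id]

for device_id, limits in ROOM_ASSETS.items():
    temp = read_temperature(device_id)
    if temp > limits["max_temp_c"]:
        print(f"Maintenance alert: {device_id} running at {temp}C "
              f"(limit {limits['max_temp_c']}C)")
```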

Many organisations are adding ever more sophisticated AV technology to open ‘huddle’ spaces as well as conventional meeting rooms, and keeping on top of managing it all, with even fewer resources and more demanding users, is an increasing challenge now often faced by IT managers. They need something to integrate the diverse needs of AV, IT and facilities management and help address the problem. For more thoughts on how to make meeting rooms smarter and better connected, download this free-to-access Quocirca report.


February 19, 2016  12:49 PM

Is the outlook cloudy for the IoT?

Bob Tarzey Profile: Bob Tarzey
BT, Cloud Computing, Google, Internet of Things, Microsoft, Virgin Media

In a recent Quocirca research report, The many guises of the IoT (sponsored by Neustar), 37% of the UK-based enterprises surveyed said the IoT was already having a major impact on their organisation and another 45% expected there would be an impact soon. The remaining 18% were sceptical about the whole IoT thing.

The numbers reported in another 2015 Quocirca survey of UK enterprises regarding attitudes to public cloud services (From NO to KNOW, sponsored by Digital Guardian) were along similar lines: 32% were enthusiasts, 58% had various cloud initiatives and 10% said they were avoiding such services.

Whilst the two data sets cannot be correlated as they involved different sets of respondents, at the very least there must be a bit of overlap between the organisations that are enthusiastic about the IoT and those that feel the same about public cloud services. However, Quocirca expects there is a strong alignment between the two as organisations that seek to exploit the latest innovations tend to do so on a broad front.

That said, the teams involved within an individual organisation will be different. Those looking at IoT, as Quocirca’s research shows, will be looking to improve existing and introduce new processes for managing supply chains, controlling infrastructure and so on. Those looking at cloud will be seeking new ways to deliver IT to their organisation or, perhaps, responding to initiatives taken elsewhere in the business (i.e. managing shadow IT).

So, is there any overlap between these teams? Should the IoT team be heading to events like Cloud Expo Europe in April 2016 to seek inspiration? The answer is surely yes. To build IoT applications requires many of the things public cloud platforms can offer. The top concerns for those steaming ahead with IoT deployments, identified in Quocirca’s research, are that networks will be overwhelmed by data and that they will be unable to analyse all the data collected. Both are scalability issues that can be addressed with public cloud platforms.

For any IoT application, there will be a need for the large scale and often long term storage of data and the need for, sometimes intermittent, processing power to analyse it. Cloud service providers can provide both the storage and flexible computing capacity to support this. Furthermore, more than a third of the organisations Quocirca surveyed already expect to roll IoT applications out on a national scale; using a cloud platform to process the data can also mean using the provider’s secure wide area networks to transmit it.

It is not surprising then that most cloud service providers now have IoT offerings. This includes Microsoft’s Azure IoT Suite, which comes with preconfigured options for deploying IoT end points (sensors and so on) and gathering data from them. AWS IoT offers a similar capability to connect devices and securely collect and process data. The Google Cloud Platform provides “the tools to scale connections, gather and make sense of [IoT] data”.
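As a hint of what this looks like in practice, the sketch below shows a device pushing readings to a cloud ingestion endpoint over MQTT, a protocol the major cloud IoT services support. It assumes the paho-mqtt library (1.x-style constructor); the broker hostname, topic and credentials are placeholders, not any particular provider’s endpoint.

```python
# Minimal sketch of a sensor pushing readings to a cloud IoT ingestion endpoint
# over MQTT. Assumes the paho-mqtt library (1.x constructor); the hostname,
# topic and credentials below are hypothetical placeholders.
import json
import time

import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="pump-station-17")
client.tls_set()                                   # cloud endpoints expect TLS
client.username_pw_set("device-id", "device-key")  # placeholder credentials
client.connect("iot.example-cloud.net", 8883)
client.loop_start()

while True:
    reading = {"sensor": "flow-rate", "value": 42.7, "ts": time.time()}
    client.publish("sites/plant-a/telemetry", json.dumps(reading), qos=1)
    time.sleep(60)   # one reading a minute keeps WAN and storage costs predictable
```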

That’s just what three of the biggest public cloud service providers are up to with the IoT. There will be many more offerings from other providers; of particular relevance may be providers in the UK that have strong local networks, such as Virgin Media and BT, both of which have IoT initiatives. Who knows what else may be discovered by those with IoT ambitions at shows such as Cloud Expo Europe, where there will be ready access to innovative vendors and informative conference presentations. It is perhaps no coincidence that it is co-located with a sister show, Smart IoT London.


February 16, 2016  7:47 AM

The challenge of transatlantic data security

Bob Tarzey Profile: Bob Tarzey
European Union, GDPR, safe-harbour, TTIP

US companies that operate in the European Union (EU) need to understand what drives European organisations when it comes to data protection. This applies both to commercial organisations that want to trade in Europe and to IT suppliers that need to ensure the messaging around their products and services resonates with local concerns.

A recent Quocirca report, The trouble at your door; Targeted cyber-attacks in the UK and Europe (sponsored by Trend Micro), shows the scale of cybercrime in Europe. Of 600 organisations surveyed, 369 said they had definitely been the target of a cyber-attack during the previous 12 months. For 251, these attacks had been successful; 133 had had data stolen (or were unsure whether it had been stolen), 54 said it was a significant amount of data and 94 reported serious reputational damage. The reality is almost certainly worse; many of the remainder were uncertain whether they had been a victim or not. Cybercriminals are the top concern for European businesses, above hacktivists, industrial espionage and nation state attackers.

This shows that European businesses have plenty to worry about with regard to data security – even before the added complications of the seemingly ever-changing EU data protection laws. The new EU General Data Protection Regulation (GDPR) is looming and seems likely to come into force in early 2018. The good news for any business trading in Europe is that the GDPR provides a standard way of dealing with personal data in all EU states (the current Data Protection Directive only provides guidance, from which many EU states deviate). The bad news is the new stringencies that come with the regulation: fines of up to €20m or 4% of a non-compliant organisation’s revenue, requirements to report breaches ‘without undue delay’ and the ‘right to erasure’ (often referred to as the ‘right to be forgotten’).
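To put that fine ceiling in perspective, the final text sets it at the higher of the two figures, so for larger businesses the turnover-based cap dominates; a trivial worked example (turnover figures invented):

```python
# Worked example of the GDPR fine ceiling: the higher of EUR 20m or 4% of
# worldwide annual turnover. The turnover figures below are invented.
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * annual_turnover_eur)

print(max_gdpr_fine(2_500_000_000))   # EUR 100m for a 2.5bn turnover business
print(max_gdpr_fine(300_000_000))     # the EUR 20m floor still applies
```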

Given the scale of crime and the pressure to protect customer privacy, it is not surprising that protecting customers’ personal data is the highest priority in Europe, more so than payment card data (the processing of which can be outsourced) and intellectual property (which is less regulated). US businesses trading in Europe need to adapt their processes to take account of the new regulation and the changing Safe Harbour arrangements that are in place between the EU and USA following a successful 2015 court challenge to the status quo.

The attack vectors of greatest concern for European organisations are exploited software vulnerabilities and compromised user identities. Protection against these threats is reflected in the measures put in place to help prevent targeted cyber-attacks in the first place and to stop them once in progress. User identities can be protected by improved awareness around safe email and web use whilst infrastructure can be protected through software scanning and update regimes, all of which top the list of deployed security measures.

Addressing concerns about secure infrastructure should play well for US cloud service providers that get across the message that their platforms are more likely to be kept up to date, have vulnerabilities fixed at an early stage and generally will be better managed than is the case with much in-house infrastructure. The higher up the stack the cloud service goes, the better, so these benefits apply more to application level software-as-a-service (SaaS) than more basic infrastructure-as-a-service (IaaS). The caveat is that with new doubts about Safe Harbour, US providers really need to put in place European infrastructure to satisfy data protection concerns, a move many are now making.

All this said, European businesses know that sooner or later they will have to deal with a first, or for many another, successful breach of their systems and a potential data loss. So assistance with after-the-event measures will also go down well. Malware clean-up technology tops the list of deployed measures, but the value of being able to identify compromised systems, data and users is also understood. Of course, all of these should be in place to assist with the execution of breach response plans, which should also include processes for informing compromised data subjects and data regulators, as well as plans for good media relations. Less than half of European businesses have such a plan in place, but there is a willingness to implement them, perhaps with some help and advice from those with the skills and services to offer.

The volume of trade between the US and EU is huge, especially when it comes to technology. Talks to establish the Transatlantic Trade Investment Partnership (TTIP) should make it even easier for US companies to trade with those countries that remain in the EU (the UK may leave following an in/out vote later in 2016). TTIP will provide common trading rules on both sides of the North Atlantic, but it will not change the need for US-companies to be savvy about local EU data protection concerns.


