ThreatQuotient ups the ante for dealing with security incidents
The hardware and software that constitute the average organisation’s IT infrastructure generate millions of events a day, which are recorded in log files. This is known as machine data. Nearly all such events are benign and of little interest to IT operators. However, some represent anomalies that may indicate problems arising. Dealing with such incidents was the subject of a 2017 Quocirca research report sponsored by Splunk – Damage Control: The Impact of Critical IT Incidents.
Recognising incidents is one thing; understanding what they mean and prioritising how they are dealt with is another. This requires enriching the machine data with information from other sources. Splunk’s operational intelligence platform does this for IT incidents in general, but also specifically for security incidents, which Quocirca’s report identifies as the top concern for IT managers.
When it comes to dealing with security incidents the process is known as security information and event management (SIEM). Here Splunk has several competitors including Micro Focus’s ArcSight, LogRhythm, IBM’s QRadar and McAfee’s Enterprise Security Manager.
SIEM tools enrich machine data to provide context. However, any one tool may not provide all the insight needed to deal with and prioritise all security incidents. Some organisations use multiple operational intelligence and SIEM tools; furthermore, the sources available for enriching and guiding the process of dealing with security incidents are myriad. These include:
- Threat intelligence feeds that indicate what a security incident might mean – for example, is there known criminal activity that is leading to certain types of events? Providers of threat intelligence feeds include Digital Shadows, CrowdStrike, Recorded Future and FireEye’s iSIGHT.
- Databases of known malware and scams, such as VirusTotal, Spamhaus and Malware Domain List.
- Vulnerability management tools which know about current software bugs, the threats they represent and fixes available, such as Qualys and Tenable.
Bringing together all the information from these sources and applying it to security incidents is a daunting task. That is the challenge ThreatQuotient has taken on with its ThreatQ platform. All the organisations listed above are among the 70-plus partners that integrate with ThreatQ.
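To illustrate the kind of enrichment described above, here is a minimal sketch in Python. The indicator data, field names and scoring rule are all invented for illustration – real platforms such as ThreatQ consume live feeds and apply far richer scoring.

```python
# Minimal sketch of enriching a security event with threat intelligence.
# The indicator data below is illustrative, not from any real feed.

THREAT_FEED = {
    "203.0.113.42": {"source": "example feed", "tag": "botnet C2", "severity": 9},
    "198.51.100.7": {"source": "example feed", "tag": "scanner", "severity": 3},
}

def enrich_event(event: dict) -> dict:
    """Attach any known indicator context to a raw firewall event."""
    intel = THREAT_FEED.get(event.get("src_ip"))
    enriched = dict(event)
    enriched["intel"] = intel  # None if the source IP is unknown
    # A simple prioritisation rule: score by feed severity, default low.
    enriched["priority"] = intel["severity"] if intel else 1
    return enriched

event = {"src_ip": "203.0.113.42", "action": "blocked", "port": 445}
print(enrich_event(event)["priority"])  # → 9
```

The point is the shape of the problem: each extra source is another lookup to merge, which is why aggregating 70-plus integrations into one view is valuable.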
ThreatQ was first released in 2013 and launched in Europe in 2016, where ThreatQuotient now has operations in the larger countries and a growing customer base. This week it is upping the ante with the release of a new interface called ThreatQ Investigations.
ThreatQ Investigations supplements ThreatQ’s existing tabular interface with a graphical tool that shows core incidents with links to all the sources of information that may help deal with them. With a few clicks an operator may be guided from an anomalous event on a firewall to news of a recently detected surge in activity by a criminal gang seeking to exploit a newly found software vulnerability. ThreatQ Investigations aims not just to empower individual operators but to improve collaboration across the teams that come together, often war-room style, to deal with security incidents.
As cybercrime becomes ever more widespread and the actors involved diversify, targeted organisations must become more sophisticated and timely in their ability to detect and respond. ThreatQuotient and the tools its ThreatQ platform brings together can help achieve this.
Bob Tarzey is a freelance analyst and writer, formerly of Quocirca.
The position of Chief Information Security Officer (CISO) has become well established in recent years, but where is it heading next? For many it is often perceived as an inward directed role more accustomed to saying ‘no’ than anything else. But is this really fair and does it represent the modern CISO?
Most organisations are under intense pressure to be flexible as well as secure to protect their own assets as well as the privacy of customer data. Going forward, a more pragmatic approach has to blend the agile needs of the business with the continuing challenges of security.
All organisations like to base success on results. Part of the challenge is that when some initially look at what this means for security, it is often about preventing bad things from happening, rather than doing good things for the organisation. While this may still be true, it is not a great yardstick for encouraging the best behaviours and attitudes. It runs the risk of fostering inaction and retrenchment, rather than moves in a positive direction.
The term ‘Next Gen CISO’ might not be entirely new, but it surfaced again in a recent discussion with LogMeIn CISO, Gerry Beuchelt. This revolved around the evolving relationship between business and security and how by changing behaviours CISOs can add real value to the business as well as keeping it safe.
So what are the attributes of a Next Gen CISO?
The first attribute that a Next Gen CISO needs is to be outward looking. It helps of course to be acutely aware of the challenges faced by other organisations and changes in the market landscape. However, the outward looking CISO needs to be much more than that. They need to be able to engage with, and understand, their organisation’s customers. This should involve working alongside the sales force and channel partners. Why? To understand and appreciate the commercial challenges of any organisational security issue it really helps to see the impact it has on customers.
Risk in context
This outward perspective assists with another attribute for the Next Gen CISO: awareness of the business reward/risk spectrum. Good CISOs will already understand the risks being faced by their organisation and be aware of their vulnerabilities. But it is rarely their responsibility to decide whether those risks are worth accepting, given the consequential impact on the business. Nor should those who do have the business responsibility take that decision without being fully aware of the facts.
The Next Gen CISO should be able to present the risks and consequences of different actions (or inaction) in the context of the consequences they will have on the business. It is no good simply presenting information about speed of patching, number of phishing attacks or level of malware exposure. These may be relevant performance indicators within the security function, but mean little in the context of the business overall. Neither should CISOs hide or diminish the risks being faced. The important thing is to make clear and transparent the impacts that different security issues will have on specific aspects of the business. This is about putting security and risk into a clear and understandable business context.
As well as reaching out to customers and fellow C-level staff, the Next Gen CISO needs to be able to engage with employees from across the organisation. Security is not a pinpoint issue that affects only certain individuals or business processes. All roles have some element of security and risk for which they have to accept some responsibility. It might have seemed fine at one time to focus this in the hands of one individual, but that is too onerous. The risk then is the default reaction is that individual would be overly defensive and too often say “no”.
The Next Gen CISO needs to be able to understand business processes and empathise with the challenges faced by those that undertake them. This helps spread involvement and understanding of the importance of security and what everyone needs to do, to the widest possible audience. By reaching out and engaging with fellow employees, the Next Gen CISO is also extending their threat intelligence and impact assessment information network.
Building understanding and changing behaviour towards security across the organisation then becomes a realistic goal. But this is rarely accomplished with tick box assessments or tedious training courses. Computer based training can play a part in building awareness, but risks downplaying the importance of specific security threats. A Next Gen CISO will enthuse and engage using more pervasive training models. These will include simulation and live role play to ensure the security message hits home and remains embedded in the organisational culture.
The CISO role may be built around information and security. But it is delivered through a passion for protection that aligns and fits closely to the needs of the business. The Next Gen CISO needs hybrid attributes to which many management roles should aspire. That and an ability to assess the value of technical aspects, with a realisation that success will depend on human ones.
Quocirca’s Global Print 2025 report reveals that print manufacturers are set to lose their influence on customer relationships in favour of IT service providers that deliver print services as part of a broader offering. Businesses are increasingly looking for suppliers that can demonstrate IT expertise and be strategic partners to both IT and various lines of business (LOB). Building IT services capabilities would present print manufacturers with an opportunity in the small and mid-sized business (SMB) market, helping to offset declining legacy revenue. However, to do so, manufacturers must ensure the right mix of channel and technology partnerships.
Changing channel dynamics
The changing ways SMBs wish to purchase, consume and pay for their IT is redefining the role of the channel, fundamentally changing business models and relationships. While print channel partners are gradually transitioning to a managed print services (MPS) model, extending this to other aspects of IT will be the key to sustaining growth.
While printing is not set to disappear any time soon – overall 64% of businesses expect to still rely on printing by 2025 – digitisation efforts are also accelerating, and security is a top concern. This convergence demands a new breed of supplier that can support the business transformation needs of SMBs.
In the SMB market, print vendors have an opportunity to offset diminishing revenues from traditional hardware-centric business models by advancing services portfolios. The Global Print 2025 study reveals that by 2025, 26% of SMBs expect their organisations to have the deepest relationship with IT service providers, increasing from 23% today. This is at the expense of print manufacturers, which see their influence drop from 27% today to 13% in 2025. A further 17% of SMBs expect a stronger relationship with MPS providers in 2025, up from 14%.
The evolving technology needs of SMBs
SMBs are diverse, ranging in scale and ambition from fast-growth start-ups to stable, medium-sized businesses. SMB technology investment plans vary depending on business focus and size, but according to the Global Print 2025 report, IT security and cloud top the agenda.
Just like larger companies, SMBs are interested in deploying new technology, but are constrained by budget and limited IT expertise. This lack of expertise is good news for suppliers which understand their customers’ business and industry needs and have the technical expertise to deliver a broader array of solutions and services. SMBs are increasingly adopting low-cost, cloud-based services and managed services to reduce operational costs, remain competitive and improve efficiency. Consequently they are placing increased demands on suppliers.
Quocirca’s Global Print 2025 research reflects these changing requirements. In organisations with 100-249 employees, 57% are looking for a provider that can be a strategic partner to both IT and LOB – this rises to 60% in organisations with 500-999 employees. Over half of SMBs expect a supplier to have strong IT security expertise, rising to 65% in SMBs with 500-999 employees. Other top requirements are industry specific expertise, business process automation capabilities and providing analytic insight.
Can the channel shift gear?
Although some print-focussed channel partners have successfully made the transition to managed print services (MPS), the majority remain focused on hardware-centric transactional sales. When it comes to IT services, traditional print partners often lack the skills, experience and capabilities to be credible providers. There may not be the incentives or knowledge in place to sell broader IT solutions, or to provide the consultative sales approach that is a core capability of many broader IT service providers. As a result, many print channel partners may view a move to IT services as high risk, requiring too much investment and time.
SMBs do not typically look to traditional print channel partners or print vendors as a source of innovative services beyond print. They are more likely to turn to existing IT service providers focused on business outcomes, rather than speeds and feeds.
So how can the print channel step up its game, and build IT services credibility and reputation? Consider the following recommendations:
- Change the conversation. Channel partners must change their expertise, shifting from the outdated print-centric reselling model to embrace a new role as trusted and strategic advisors to their customers. They must change the nature of the conversation they have with SMBs, engaging with the influential business decision-makers responsible for strategy. The conversation must be around how to drive efficiency and productivity – not just about technology or products. As businesses turn to them for guidance and support, channel partners will need to be able to deliver consultative services and expertise. This also means tapping into adjacencies such as digitisation and security, which are increasingly part of the broader printing proposition.
- Partner for IT expertise. Partnering with accredited and experienced IT service providers gives print channel partners access to a broader product portfolio and a direct route into the IT services market, supported by specialist technology sales and support resources. For instance, this enables partners to potentially offer print security services and solutions as part of a broader managed security service offering. For manufacturers or large channel organisations, acquiring IT providers can be an effective means of gaining specialised expertise to develop and augment IT services in-house. It is also a direct means of accessing experience in selling or supporting IT services. Some manufacturers, including Konica Minolta, Ricoh and Sharp, have already made the shift, expanding their managed IT service capabilities largely through acquisition.
- Become specialised. The shift to margin-rich services means developing industry specific expertise. Invest in the skills needed to deploy and connect a range of technologies – both across hardware and software – and consider developing vertical specific offerings.
- Focus on delivering business outcomes. As SMB purchasing decisions are increasingly influenced by non-IT decision makers, channel partners will need to expand their influence to multiple stakeholders. For larger businesses, the channel needs to focus on building skills in delivering business outcomes to LOB buyers, while retaining a strong relationship with the IT department.
- Monetise solutions. Channel partners that invest in software development to expand their offerings should consider monetising and building the resultant intellectual property (IP) through delivering applications. Building a portfolio of applications wrapped around the core business – for example, MPS or document workflow – should lead to new opportunities. Consider including assessment or consulting services as well as integration services.
The channel must shift gears and change its business model in order to increase engagement with SMBs. Repositioning as an IT services and solutions provider may seem high risk, but by developing credible converged IT offerings, channel partners can increase their relevance, differentiate themselves and build longer-term, more profitable relationships.
Despite the opportunities for technology to be really disruptive, it is surprising how often it simply digitally replicates existing processes. There is one common business process where the results could be described as patchy – meetings.
Meetings tend not to divide opinion that much. Most would say they have too many, they go on for too long and often appear to accomplish little. Decades of training courses and humorous videos have had some impact, but clearly not enough. Surely technology should be able to make it easier for people to work and collaborate efficiently and effectively in meetings?
Tools supporting collaboration (or claiming to) often try to impose a new working agenda of their own, or YACS (yet another communications stream). This might be innovative messaging based on timelines, mirroring those that many have become accustomed to in their personal lives via social media. It might include more visual interaction with video and screen sharing. But in most cases the focus is more on the media and ‘unification’ than their use. This is more like unified plumbing than unified communications from the individual’s perspective.
Some tools might offer significant improvements, especially if all potential users can be compelled to switch over to them or be encouraged by grass roots adoption. The problem is this rarely occurs smoothly. There are often issues in the edge cases – individuals, processes and data – where the new super collaboration system doesn’t fit well. So people move back to their trusty default approaches, typically email and more meetings.
Moves to reduce email sound good in principle, but the reality often disappoints. However, since meetings probably occupy even more working hours than email, surely it would be a good idea to shift the emphasis to them?
There are many important tasks that occur during meetings: sharing information, discussion, decisions and allocating actions. Information and communication are important, but only on the path towards beneficial outcomes, i.e. decisions and action. Otherwise the outcome is more, and more, meetings…
Before and After, as well as During
So where is the effort actually expended in meetings? It is useful to consider the whole process as meetings have a ‘before’ and ‘after’ as well as ‘during’. While there has been a lot of technology applied to ‘during’, not enough attention has been applied to the efficiency opportunities across the entire process.
‘Before’ has to mean more than sharing a calendar invite, with obtuse conference call codes or directions to a far-flung location, via a fire-and-forget email. To get the right people together at the right time, even with decent remote audio or video conferencing equipment, requires some intelligent juggling and scheduling.
This requires time and effort. But since the information about potential meeting participants is often already there, the intelligence employed could be ‘artificial’. More auto-scheduling effort to streamline and simplify arrangements would pay dividends in terms of time saved and would be appreciated.
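The core of such auto-scheduling is unglamorous set logic. The sketch below illustrates it under heavily simplified assumptions (calendars reduced to sets of busy hours in a single working day); real scheduling assistants handle time zones, recurring events and preferences.

```python
# Illustrative sketch of auto-scheduling: given each participant's busy
# hours, find the first hour in the working day when everyone is free.
# Calendars are simplified to sets of busy hour slots (24-hour clock).

def first_common_slot(busy_calendars, day_hours=range(9, 17)):
    """Return the first working hour when no participant is busy."""
    for hour in day_hours:
        if all(hour not in busy for busy in busy_calendars):
            return hour
    return None  # no common free hour today

calendars = [
    {9, 10, 14},   # participant A's busy hours
    {9, 11, 15},   # participant B
    {10, 11, 13},  # participant C
]
print(first_common_slot(calendars))  # → 12
```

Even this toy version shows why machines do the juggling better than email ping-pong: the search is mechanical once the calendar data is available.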
Even during meetings, for many the use of technology has focused on the medium of sharing. Despite this, getting connected (the video adaptor challenge, followed by the function key shuffle) and getting remote colleagues involved (does anyone know the dial in code or where the remote is, or how to contact support?) seem to take more time than they should.
Capturing information during meetings and sharing it accurately afterwards – jogging the memories of those present and informing non-participants – would be hugely beneficial in steering towards these positive outcomes. Technology to record audio and intelligently transcribe it to text would make sharing and searching simpler, and is readily available. The key is to seamlessly integrate this into the collaboration tools that participants are already using for their meetings.
Shifting beyond collaboration
This involves a shift in thinking from the unified communications and conferencing industry. Most have already made the jump towards a focus on collaboration. This is necessary, but not sufficient. The next step is to recognise that the long-entrenched models of how people work together will be hard to change. The whole lifecycle of meetings needs to be enhanced, and where possible, automated.
Meetings might seem tedious and wasteful, but few organisations are going to replace them entirely with virtual timelines, shared repositories or interactive online realities. There is a need to look at the elements of greatest inefficiency, apply technology to make incremental improvements, assess the results and then repeat.
This looks a lot like the agile and DevOps approaches now being used in software development. These are yielding great results in terms of both speed and quality. Isn’t that an outcome all organisations would like to see for meetings as well? Look for tool vendors that are moving beyond the audio and visual media. The ones that are extracting meaning and understanding from how people are communicating are putting real business value into collaboration.
Despite the potential of ‘digital transformation’ and IT in general, many organisations find the reality a little disappointing. Development takes longer than expected, quality is lacking and what is delivered often fails to match the business requirements. Despite the continuous, rapid innovation and evolution of technology, with faster, cheaper and more readily available hardware, the software process seems to still struggle to deliver.
There have been many attempts to improve matters from computer aided software engineering (CASE) tools and methodologies to object orientation, 4th generation languages and tools for ‘citizen development’. So far, no silver bullets in either tools, methods, or superhero developers. One problem is that the software challenge has expanded and become increasingly monolithic. Roles have become specialised and the process structured, typically around a traditional engineering model with well-defined stages. Fine in principle, but inflexible and slow to adapt.
A break with the model was inevitable, and the Agile ‘manifesto’ shifted things (typically) towards fragmentation, blurring of teams and speed. It favours: individuals and interactions over processes and tools; working software over comprehensive documentation; customer collaboration over contract negotiation; and responding to change over following a plan.
Agile often leads in turn to more fluid and continuous development and deployment processes, generically termed DevOps. Based on how Agile and DevOps are sometimes presented with almost alternative programming lifestyle enthusiasm, it might seem easy to believe they require some sort of cult following.
Certainly, the word ‘culture’ or phrase ‘cultural shift’ occurs very frequently when talking to enthusiasts. So much so, that many organisations become nervous of the almost revolutionary zeal. They wonder how they will be able to cope with adopting the apparently wholesale change.
While there are significant differences in its approach, DevOps should not be seen as an all-or-nothing dangerously disruptive force. It requires a certain way of thinking and organising teams, but this does not have to change and challenge the entire organisation. It might, over time, have more broad-reaching impacts, but these are based on its results, not its initiation.
Adopting a DevOps strategy
The primary requirement is commitment. Management must be bought into the process, and set out the strategic vision and goals. The starting point is – what are we trying to achieve?
While there are always clear end benefits to aim for – increase revenue, reduce costs, mitigate risk – it is the intermediate drivers that shape and derive value from DevOps. Being more responsive to customers, improving software quality, delivering services that meet business needs. The focus for DevOps adoption has to be decided upfront.
Some tasks are far better suited to the potential benefits of DevOps than others. Well-established software and systems that record and manage well-defined data requirements are unlikely to benefit the most. A better place to start would be aspects of customer or external engagement, where changes in market, social, political or personal context open up opportunities for innovation. The questions are: what will be required, and does anybody have any clear views yet?
So, next is an acceptance that this will require some experimentation. Some may call this ‘fail fast’, but a much better way to view it is ‘learn fast’. The experimentation is being done for a purpose based on the strategy. Small steps are taken, tested, refined and retried in pursuit of the goal.
Doing this requires a tightly-coupled effort based around different skills: design, programming, quality assurance, build, deployment and customer feedback. Rather than distinct stages or disparate teams, these are provided by close collaborators aiming to simplify and streamline the process.
The culture for accomplishing this does not need to pervade the entire organisation, nor does it have to be applied to every challenge. DevOps has to be done with serious intent and commitment – from the top as well as those directly involved – but only for one simple reason. To address a problem hotspot and get the results that the strategy was put in place to achieve.
The reality is that those who put DevOps in place for sound reasons do enjoy the benefits. The frequency of software delivery for assessment by the user is increased and ‘good enough’ initial solutions can be honed to be better. Plus, with direct feedback the whole team feels and takes on responsibility for quality.
DevOps is not a panacea or a cult requiring blind commitment, but nor is it a passing fad. Used pragmatically it moves software development away from either exacting engineering or casual coding to poised production at pace. Getting better business value from software should be what ‘digital transformation’ is all about. Find the right project, set suitable goals and give DevOps a go. Who knows, your business culture might change once it’s benefited from the results?
There is little argument that cloud is having a major impact on how organisations design, provision and operate their IT platforms. However, there still seem to be major arguments as to what an overall cloud platform actually should be.
For example, should the end result be 100% private cloud, 100% public cloud or something in between? Where do existing systems running on physical one-server or clustered systems fit in to this model? Should any private cloud be housed in an owned facility, in a colocation facility or operated by a third party on their equipment? Who should be the main bet when it comes to public cloud services – and how should these services be integrated together, either with other public cloud services, or with existing functions running on owned platforms? How about the cost models lying behind different approaches – which one will work best not only at an overall organisation level, but at a more granular, individual workload-based model?
Once a basic platform has been decided upon, then the fun starts. How should data be secured, both from an organisation’s intellectual property point of view, and also from a legal point of view when it comes to areas such as GDPR? How do organisations make sure that users are provided with the best means of accessing functions, without having to remember a multitude of different usernames and passwords, through the use of single sign-on (SSO) systems?
How can data leakage/loss prevention (DLP) and digital rights management (DRM) help to ensure that information is secure no matter where it resides – not only when the organisation has direct control of it on its own network, but also as that information passes from its control to other parts of the hybrid cloud and to partners, suppliers and customers on completely separate networks? It is also important to look at how other security approaches, such as encryption of data at rest and on the move, along with hardware, application and database security fit in with this.
Once a cloud platform is in place, it can lead to a completely different approach in how functions are created, provisioned and managed. Many organisations have found that a hybrid cloud is an ideal platform to support and make the most of DevOps, with continuous development and continuous delivery being far easier than using cascade/waterfall project approaches against a less flexible physical or virtualised platform.
However, effective DevOps requires effective orchestration – tooling has to be carefully chosen to ensure that the right capabilities are in place to enable the right levels of intelligence in how functions are packaged, provisioned and monitored in a contextually aware manner such that the overall platform maintains the desired performance and capabilities the organisation demands.
This then brings in a need for better technical approaches in how functions are packaged: the rise of microservices provided via containers is taking over from large, monolithic packages and virtual machines (VMs).
Systems management becomes a different beast: not only do IT staff need to monitor and maintain functions that they own within their fully managed private cloud environment, but they also must monitor in a contextually aware manner those functions that are running in a public cloud environment. Ensuring that root cause analysis is rapidly identified and that remediation can be carried out in real time, with workloads being replatformed and redirected as required is a key requirement of a hybrid cloud platform.
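The monitor-and-remediate loop described above can be sketched very simply. The metrics, thresholds and actions below are placeholders of my own choosing, not a real tool's API; production hybrid-cloud management applies far richer, contextually aware analysis.

```python
# Toy sketch of a monitor-and-remediate decision for a hybrid cloud workload.
# Thresholds and actions are illustrative placeholders only.

def check_health(metrics: dict) -> str:
    """Classify a workload's state from two illustrative metrics."""
    if metrics["error_rate"] > 0.05:
        return "failing"
    if metrics["latency_ms"] > 500:
        return "degraded"
    return "healthy"

def remediate(workload: str, state: str) -> str:
    """Pick a remediation action; 'replatform' moves the workload elsewhere."""
    actions = {"failing": "replatform", "degraded": "scale-out", "healthy": "none"}
    return f"{workload}: {actions[state]}"

metrics = {"error_rate": 0.08, "latency_ms": 120}
print(remediate("billing-api", check_health(metrics)))  # → billing-api: replatform
```

The hard part in practice is not the decision rule but gathering trustworthy metrics across private and public environments and acting on them fast enough to matter.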
A key area that many have struggled with is that, although they know at a technical level that cloud is a better way forward, they have found it difficult to sell the idea to the organisation in ways that the business can understand. However, a simple approach known as a Total Value Proposition provides technical people with a better means of getting their point across to the business – and so acquiring the funding required to implement such a major change in technical platform.
Hybrid cloud is the future. No matter where you are in your journey to this, there are many pitfalls to avoid, and many areas of additional possible value that are too easy to miss out on.
A new BCS book, “Evolution of Cloud Computing: How to plan for change” is available that covers all these areas – and more.
Reviews of organisational security can be viewed in many positive ways, but all too often with trepidation or resignation. The rise of phishing, where spoof, but increasingly credible, messages try to obtain sensitive information, is a particularly troublesome challenge. It exploits one of the weakest links in digital security, people, to get them to click on something they should not.
The latest (4th) annual State of the Phish report from Wombat Security Technologies is an interesting overview of the issue. This article quotes some of its data. It is based on feedback from infosec professionals, technology end users and Wombat’s customers. The report outlines the breadth and impact of phishing and what organisations might try to do to address it.
The scale of the challenge
The scale of the issue is staggering. Around half of organisations are seeing an increase in the rate of attacks, and the approaches are diversifying. Over half are experiencing spear phishing – targeting specific individuals, roles or organisations – and 45% are experiencing phishing by phone calls or text messages (‘vishing’ and ‘smishing’).
To combat this, companies can go beyond basic training courses and awareness campaigns, and simulate actual phishing attacks to see how employees behave and how they might respond to specific phishing styles. Those in the survey typically use four different styles of email templates to assess how end users react:
- Cloud emails, relating to accessing documents from cloud storage or using other cloud services.
- Commercial emails such as simulated shipping confirmations or wire transfer requests.
- Corporate emails, which look like the sort of messages normally circulated internally such as ‘memos’ from IT or HR departments.
- Consumer emails related to social media, retail offers, or bonus and gift card notifications.
Although individuals are starting to get wise to the issue and average click rates fell between 2016 and 2017, sophisticated attacks can be very effective. Two particular simulated corporate templates had click rates of almost 100% – one pretending to include an updated building evacuation plan, the other a database password reset alert. Other highly deceptive messages concerned corporate email improvements or corporate voicemails from an unknown caller. The only consumer template with a high click rate concerned online shopping security updates.
The impact of phishing attacks seems to be growing, or at least is being more widely recognised, and it weighs heavily on IT professionals. Almost half of organisations cited malware infection as a consequence of phishing; compromised accounts and loss of data were the other most significant responses. Respondents also noted a greater burden on IT, for example through helpdesk calls, as well as potentially more far-reaching business consequences in terms of cost, time and disruption.
Addressing the issue
The key to mitigating the risk is to change end user behaviour. This is much more than simply making users aware of the consequences, although many organisations do have severe punishments. ‘Repeat offenders’ will face counselling in three quarters of organisations, removal of access to systems in around a quarter and in one in ten organisations it may result in being sacked. (The research was conducted in the US, UK and Germany, and employment regulations may differ significantly.)
As phishing seems not only to be inevitable (around three quarters of organisations say they are experiencing phishing attacks), but also very damaging, the most pragmatic approach would be to tackle the issue head on. Better to avoid any of those unfortunate consequences and address the problem at source by highly targeted training. This would benefit enormously from ongoing reinforcement since both the threats and the workforces having to deal with them will change over time.
This is where training using regular simulated attacks appears to help greatly. The majority of organisations are doing this very frequently with 40% training quarterly and 35%, monthly. Why? Well, an increasing number, now over three quarters of organisations, measure their own susceptibility to phishing attacks. Over half say they have been able to quantify a reduction based on their training activities. Given those metrics and the growing risks and potentially significant business disruption and impact, who wouldn’t do more to avoid being caught in the phishing net?
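The arithmetic behind such a metric is simple. As a rough illustration (all figures and field names below are invented, not taken from the Wombat report), click rates per campaign can be aggregated and compared year on year to quantify the effect of training:

```python
# Hypothetical simulated-phishing campaign results; counts are illustrative only.
campaigns = [
    {"category": "corporate", "year": 2016, "sent": 500, "clicked": 120},
    {"category": "corporate", "year": 2017, "sent": 500, "clicked": 85},
    {"category": "consumer",  "year": 2016, "sent": 500, "clicked": 95},
    {"category": "consumer",  "year": 2017, "sent": 500, "clicked": 70},
]

def click_rate(rows):
    """Overall click rate (clicked / sent) across a set of campaign rows."""
    sent = sum(r["sent"] for r in rows)
    clicked = sum(r["clicked"] for r in rows)
    return clicked / sent

by_year = {}
for year in (2016, 2017):
    by_year[year] = click_rate([r for r in campaigns if r["year"] == year])

# Percentage-point reduction, which training activity may (in part) explain.
reduction = by_year[2016] - by_year[2017]
print(f"2016: {by_year[2016]:.1%}, 2017: {by_year[2017]:.1%}, drop: {reduction:.1%}")
```

Trivial as it looks, having this number per template category is what lets an organisation see which phishing styles its people still fall for.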
This is the third in a series of articles exploring the challenges and opportunities facing the audio visual (AV) industry, which has its annual European flagship event, ISE2018, in February. This article looks at how innovative and immersive technologies can improve the user experience.
A deeper user experience?
Technology companies used to talk about ‘intuitive’ interfaces and ease of use as if these were ends in themselves, but it is the overall outcome that really matters. Many still spend too much time and effort making individual elements ‘easier to use’ without improving the user experience and the effectiveness of the entire process.
The result is that individuals have dozens of separately ‘easy to use’ applications and businesses have too many disconnected and siloed repositories of data. This leads to poor use of information; data is incomplete, inaccurate or delivered late from a process perspective. From an individual point of view users feel overloaded and overburdened.
The relationship between data and those who have to use it needs to change. Users need more effective tools to deal with the mass of data they face, and recent innovations in visualisation offer some welcome opportunities.
Immersive visual technologies have always generated a lot of hype. While much of the innovation is new and disruptive, many elements have been around for some time. The difference now is that compute power, storage and network capabilities have grown to a level that delivers a high quality experience and removes disconcerting delays. There are several key immersive technologies:
- Virtual Reality (VR) is probably the best known and longest established immersive experience. It is also perhaps the most intrusive through the use of enclosed headsets, which have only in recent years dropped in size and cost. VR replaces the real world with an entirely digital one.
- 360 (360 degree imagery) has become possible through advances in both camera and drone technologies. The experience is immersion in images or videos previously captured in 360 degrees, which can be displayed stereoscopically if desired. Headsets are not always required, as handheld mobile devices can be almost as effective.
- Augmented Reality (AR) has moved from sci-fi film screens into people’s hands and was widely popularised by the Pokemon Go game. AR overlays digitally created content onto the real world. It can work using live camera feeds on mobile device displays, as images overlaid onto smart glasses, or even as projected holograms. Its flexibility and lower level of intrusion make AR a great fit for many application areas.
- Mixed Reality (MR) is often confused or conjoined with AR. The differences are slight, but important. MR offers a more seamless experience where digital content is not simply overlaid, but it can also react to, and interact with, the real world. There is more sensor and input interpretation used in order to make this happen.
An opportunity being partially addressed
All immersive technologies are being well marketed. The number of specialised immersive devices such as headsets is growing, but remains small relative to other areas of visual technology. Around 12 million VR headsets were sold in 2017, and probably a similar number again shipped as low cost cardboard headsets, which still offer an immersive experience. Entertainment and gaming play a big part, and the installed base of Sony PlayStation VR headsets has passed 2 million.
As an opportunity, this is not something that the AV industry will have to itself. While the early VR industry from the 1990s was very specialised and thus fraught with investment challenges, the current immersive sector is much more mainstream, with significant investments from major IT vendors.
Microsoft launched its HoloLens in 2015 as a gaming oriented consumer product. Now it is businesses that have picked up what is widely regarded as a mixed reality headset. It is even certified for use as basic protective eyewear and there is a hardhat accessory currently in production.
Google’s approach has been more oriented on VR and AR, but it has been promoting business use in an assistive way. It prefixes typical user experience scenarios with the term “Help me”. These are oriented on helping the user to learn, create, operate or sell. As an approach this is very pragmatic, focusing much more on the outcome and less on the technology. Google has also been a strong advocate of ‘immersion for all’ through the promotion of cardboard headsets.
Apple’s executives reportedly held meetings with AR device parts suppliers at CES2018 and may be planning to follow up with an update to its ARKit software later this year. If hardware, such as an AR headset, were to arrive next year that would not be a massive surprise. It would also have a big impact, but Apple tends to delay or even cancel products if it does not feel they are ready. Its focus on software and content will in any event give a boost to the AR ecosystem.
Making an immersive experience acceptable and enjoyable
The broad market drive is good, but many technologies flounder through insufficient focus on the user. This is where the experience of the AV sector could be vital. If there is lag or display inconsistency (such as for people who wear glasses), or even a tendency to cause motion sickness, most users will find the experience unacceptable except for short periods of use. Some companies, such as MagicLeap, are trying to reduce the impact: its Lightwear goggles promise a more natural and comfortable overlay of the digital and physical worlds. It is not yet known how these approaches will be adopted for business use.
The delivery of visually stunning performance on large screen technology is something very familiar to the AV sector, and some have put immersive technologies around users in the physical world. Barco’s multi-sided cave display provides an immersive environment that fully surrounds people, so they can stand inside a VR or 360 space. Realfiction has taken this open immersion a step further with its DeepFrame augmentation of reality through holographic projection. This removes the need for specialised eyewear or goggles to deliver visual information in a way that sparks viewer engagement.
Delivering on outcomes
In many cases visual appeal will not be enough. Achieving business goals from engaging or immersive access to visually stimulating information will often depend upon what can then be done with the information and, perhaps most importantly, with whom. What is required is a shared immersive experience that permits not only access to information but also effective collaborative tools for interaction across a disparate group or team.
It is also vitally important for collaboration technology to enable the sharing of information across multiple devices. Teams bring laptops, tablets and smartphones to meetings to take notes and to reference work in real-time. With the immersive collaboration tools, such as Oblong’s Mezzanine, teams can share data instantly for everyone to see and understand.
The most sophisticated and productive collaboration rooms provide participants with the means to share data simultaneously, utilise intuitive controls such as gesture and/or touch, and instantly access information to make collective decisions. When collaborators share a unified workspace, regardless of their location, meetings are more engaging, enlightening and immersive.
These are the outcomes that most businesses are looking to achieve: better use of data and more effective teams. To get more insight into how different immersive technologies are being applied by a significant number of innovators across the AV sector to improve both user experiences and business outcomes, visit ISE2018 in Amsterdam in February.
Cloud computing is promising much – but is failing in many areas as users get to grips with some of its more complex areas. An example here is when organisations start to look at how best to use multiple cloud platforms across a private and public environment – what is known as a ‘hybrid cloud’.
In essence, such a cloud sounds easy: an organisation maintains certain workloads on its own equipment, using public cloud where and when suitable to meet the needs of a process.
However, the devil is in the detail. The first problem arises with the choice of technical cloud platform. Using different cloud technologies can make workload and data mobility difficult. For example, using an OpenStack private cloud and a Microsoft Azure public cloud means that compromises must be made in certain areas.
However, Microsoft has now addressed this with the launch of Azure Stack – a highly engineered hardware/software system that organisations can use to create a private cloud that is highly compatible with the Azure Public Cloud.
Great – a major step forward. However, the next problem comes with performance. Having an Azure Stack instance in an organisation’s own datacentre and then trying to connect through to the Azure Public Cloud means that data (and, in many cases, application workloads) has to traverse a wide area network (WAN) connection.
If this is a public internet connection, then performance will generally be bounded by a ‘best efforts’ constraint. If it is a dedicated link, then costs may be a problem.
Again, Microsoft has tried to address this, introducing its own dedicated connections into the Azure Public Cloud. These connections are offered under the ExpressRoute banner, and provide ultra-low latency, high bandwidth paths into Microsoft’s cloud facilities.
Wonderful – except that these connections do not terminate in a general customer’s premises. To achieve suitable scalability within cost constraints, Microsoft has struck deals with other facility providers to offer points of presence (PoPs) for ExpressRoute terminations. End customers then need to strike a deal with these PoP providers, who will then provide dedicated connections using quality of service technologies to connect into the end customer’s facility – so there is still the issue of this last link that must be monitored and managed outside of the Microsoft environment.
Such complexity is hard to avoid, but can lead to problems for those aiming for a seamless logical hybrid cloud platform. Luckily, there is one way around all of this: the use of a colocation provider that is also an ExpressRoute termination point.
Here, the end customer takes space within the colocation facility and places their Azure Stack equipment within it. Using intra-facility connectivity speeds, they then connect through to the ExpressRoute environment, giving an Azure Stack/Azure Public Cloud experience that is essentially one consistently performing platform.
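To illustrate why that last link matters, the following sketch models round-trip latency for the three connectivity options described above. All figures are invented for comparison purposes only, not measured values for any provider; the point is how a chatty workload needing many sequential round trips magnifies per-hop differences:

```python
# Illustrative round-trip latency figures (assumptions, not measurements) for
# traffic between an Azure Stack instance and the Azure Public Cloud.
PATHS = {
    "public internet (best efforts)": {"rtt_ms": 40.0, "jitter_ms": 15.0},
    "ExpressRoute via PoP + managed last mile": {"rtt_ms": 8.0, "jitter_ms": 2.0},
    "Azure Stack colocated at ExpressRoute PoP": {"rtt_ms": 2.0, "jitter_ms": 0.5},
}

def chatty_transaction_ms(path, round_trips=20):
    """Worst-case elapsed time for a workload needing sequential round trips."""
    p = PATHS[path]
    return round_trips * (p["rtt_ms"] + p["jitter_ms"])

for name in PATHS:
    print(f"{name}: {chatty_transaction_ms(name):.0f} ms for 20 round trips")
```

Even with these made-up numbers, the shape of the result holds: every round trip pays the full path cost, so placing the private cloud equipment next to the termination point compresses the whole transaction, not just a single hop.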
By choosing the colocation provider carefully, it is possible to further open up the options around connectivity. Good colocation providers will not only have dedicated, high performance networks underpinning connectivity between their own and other facilities, but will also offer direct connectivity into the main public clouds.
By ensuring that a colocation provider has the right portfolio of services, end user organisations can ensure that they maintain flexibility over how they continue their journey into a fully-functional hybrid cloud, and can make the most of the opportunities that such a platform can provide.
Quocirca has written a short report, commissioned by NGD, on how colocated Azure Stack using ExpressRoute can provide the best option for organisations wanting to utilise a hybrid cloud. It can be downloaded for free here.
This is the second in a series of articles exploring the challenges and opportunities facing the audio visual (AV) industry, which has its annual European flagship event, ISE2018, in February. This article looks at how artificial intelligence (AI) and sensor data driven automation might transform the use and perceptions of AV.
IT connected AV systems have become pervasive. Low cost flat panel displays are replacing projection and can be placed anywhere that workers or customers might congregate. Many already are, or will have to be, connected to the network. Screen sharing, remote participants, unified communications and video conferencing tools are becoming more widely deployed. Organisations are expecting productive collaboration and individuals are increasingly expecting to be on camera and sharing data with colleagues. They will, however, still quickly lose confidence after a bad experience.
AV technology is now as sophisticated as anything else on the IT network, but there remain some fundamental usage challenges. Cabling difficulties or getting a screen or video collaboration system working should no longer be an issue, but total system complexity might be. This raises the question: could intelligence embedded in the AV systems themselves make for simpler and more effective usage?
Intelligent audio visual
Keeping meeting room and display technology under control and working effectively is increasingly a complex IT task, with some asset management challenges thrown in. However, few organisations would be looking to deploy more people just to help support or augment this, despite potential user frustrations from un-integrated or unusable expensive AV displays and equipment. Artificial and automated intelligence needs to be applied.
Automation is already playing increasingly important roles in other areas of connectivity. Networks are becoming smarter, with software defined approaches allowing for the intelligent centralisation of control alongside distribution of impact or power to the edge. Sensors and the internet of things (IoT) are gathering masses of data available for use in machine learning and automation.
The combination of smart edge devices and smart networks means that once-manual processes can now be intelligently automated. For AV, this can be applied to take the already important user experience to new levels.
Joined up meetings
Since a large element of AV is about supporting people conveying information to other people, an obvious area in which to apply AV intelligence is meetings. Many organisations will find that much of the information shared and discussed during meetings is lost or forgotten. Collaboration tools and repositories help, but only if everyone is sufficiently aware and disciplined to remember to use them.
This challenge can be addressed with a bit of joined up thinking. For example, Ricoh and IBM have together created the Intelligent Workplace Solution. Rather than just providing an interactive whiteboard, it includes IBM’s Watson as an active participant, capturing meeting notes and action items, including side conversations that might otherwise be missed. It logs attendance using smart badges, allowing participants to keep track of the meeting content, and any participant, whether present or located remotely, can easily control what is on screen through simple voice commands.
The intelligence can also be used to augment other aspects of more complex meetings. For example, the system can translate speakers’ words into several other languages. These can then be displayed live on screen or in a transcript. Applying automation to integrate the visual, audio and network elements in this way improves the experience for the participants. It also makes the overall meeting process much more efficient.
Smarter command and control
In many mission critical settings AV is already widely used to view live content. This may come from numerous cameras, industrial sources, social media feeds and computer visualisations. Videowalls or large screens, complemented by smaller displays, are often used to allow large numbers of information feeds to be monitored simultaneously. Increasingly, this centralised command and control approach is becoming a constraint rather than an ideal solution.
Firstly, there is a risk from having a single location and single point of failure. But also getting all the right individuals to one place to see and absorb the information is a challenge. Expertise may be widely spread; many employees will choose to work remotely or need to be mobile and so the control ‘centre’ needs to be distributed.
This more sophisticated model for command and control requires more intelligence in the AV network. Information must be shared with those who need to see it, but too much information could easily overload networks. This is especially true given that video content quality has advanced through high definition (HD) and is now increasingly 4K. Information needs to be shared intelligently.
Higher definition makes it possible to apply advanced recognition systems to the higher quality images, and intelligent analytics can be applied for more automated monitoring. This means that the total volume of data does not need to be shared and manually monitored. Smart applications of this type of technology will make command and control systems more effective. They will also allow more worker flexibility and ensure that individuals (and networks) are not overwhelmed by unnecessary data.
Wearable and IoT data
Technology innovations such as sensors, IoT and wearable devices will increasingly impact the AV world. They will add further intelligence and live data feeds to systems already dealing with masses of video and audio content. Coping with the vast array of sources, let alone the volumes of data, will be an increasing challenge.
As the data variety and volumes increase, intelligent systems are required to automatically capture and analyse this ‘big data’ live. Then, human operators can react quickly and appropriately. For example, combining conventional surveillance cameras or audio capture systems with machine learning or artificial intelligence systems will help to automatically detect anomalous data or abnormal situations.
Human operators, no matter how good or well trained, become tired or lose focus over time, especially if the task is monotonous. Augmenting their decision making with automated systems will improve their responses. This is applicable to security, but also to any type of application where change monitoring is required. This could be changes in an industrial process indicating failure or a drop in quality, or patient vital signs in healthcare. New data sources and AV need to be well integrated within the overall system in order to fully realise the benefits.
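As a rough sketch of the idea (not a feature of any particular AV or monitoring product), a simple rolling z-score check shows how automated analysis can pre-filter a live feed so that human operators only see flagged events rather than the full data stream:

```python
import statistics
from collections import deque

def detect_anomalies(readings, window=10, threshold=3.0):
    """Flag readings that deviate strongly from the recent rolling window.

    A basic z-score test: each value is compared against the mean and
    standard deviation of the preceding `window` readings. Only values
    more than `threshold` standard deviations away are surfaced.
    """
    recent = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(readings):
        if len(recent) >= window:
            mean = statistics.fmean(recent)
            stdev = statistics.stdev(recent)
            if stdev > 0 and abs(value - mean) / stdev > threshold:
                flagged.append((i, value))
        recent.append(value)
    return flagged

# A steady signal with one abnormal spike, standing in for a process
# reading or a patient vital sign.
feed = [20.0, 20.2, 19.9, 20.1, 20.0, 20.3, 19.8, 20.1, 20.0, 20.2, 48.0, 20.1]
print(detect_anomalies(feed))  # only the spike at index 10 is flagged
```

Real systems use far more sophisticated models, but the principle is the same: the machine watches everything continuously, and people are only interrupted when something genuinely changes.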
Automatic for the people
In all situations, the ability to rapidly visualise and comprehend the implications of changes or trends in masses of data highlights how critical AV can be to the user experience, and how it needs to be integrated into a broader IT systems approach. Many organisations are already evaluating how AI and IoT can augment and automate some of their business processes. Many of these processes now rely heavily on AV infrastructure, and especially on the use of video. It will become increasingly important to think about this holistically as an integrated AV/IT architecture, and not one where the visual elements and displays are added as a potentially expensive afterthought. To get more insight into how intelligence is being applied in many different ways across the AV sector, visit ISE2018 in Amsterdam in February.