High performance computing (HPC) used to only be within the reach of those with extremely deep pockets. The need for proprietary architectures and dedicated resources meant that everything from the ground up needed to be specially built.
This included the facility the HPC platform ran in – the need for specialised cooling and massive power densities meant that general purpose datacentres were not up to the job. Even where the costs of the HPC platform itself were just within reach, the extra cost of building a specialised facility meant HPC remained out of reach for organisations that simply wanted an extra bit of ‘oomph’ from their technology platform.
Latterly, however, HPC has moved from highly specialised hardware to more of a commoditised approach. Sure, the platform is not just a basic collection of servers, storage and network equipment, but the underlying components are no longer highly specific to the job.
This more standardised HPC platform, built on commodity CPUs, storage and network components, is within financial reach. That still leaves the small issue of how an organisation can countenance building a dedicated facility for a platform that may be out of date in just a couple of years.
For those with a more generic IT platform, colocation has become a major option for many. Offloading the building and its maintenance has obvious merit, especially for an organisation that is struggling to understand whether its own facility will grow or shrink in the future as equipment densities improve and more workloads move to cloud platforms.
However, the use of colocation for HPC is not so easy. The power, emergency power and cooling requirements needed for HPC will be beyond all but certain specialist colocation providers.
Hyper-dense HPC equipment needs high power densities – far more than your average colocation facility provides. For example, the average power per rack for a ‘standard’ platform rarely exceeds 8kW per rack – indeed, the average in colocation facilities is more like 5kW.
Now consider a dense HPC platform with energy needs of, say, 12kW per rack. Can the colocation facility provide that extra power? Will it charge a premium price for routing more power to your system – even before you start using it? Will the multi-cabled power aggregation systems required provide power redundancy, or just more weak links in an important chain?
Also consider the future for HPC. What happens as density increases further? How about 20kW per rack? 30kW? 40kW? Can the colocation facility provider give guarantees that not only will it be able to route enough power to your equipment – but also that it has access to enough grid power to meet requirements?
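To make these numbers concrete, a rough back-of-envelope check shows how quickly rack density translates into grid demand. The sketch below is illustrative only – the PUE (power usage effectiveness) figure of 1.5 is an assumption for the sake of the example, not a measurement of any particular facility:

```python
# Rough colocation power-budget check for a dense HPC deployment.
# All figures are illustrative assumptions, not vendor data.

def facility_draw_kw(racks: int, kw_per_rack: float, pue: float = 1.5) -> float:
    """Total facility draw: IT load multiplied by Power Usage
    Effectiveness (PUE) to cover cooling and distribution losses."""
    return racks * kw_per_rack * pue

# 20 racks at 12kW/rack with a PUE of 1.5 needs 360kW of grid power;
# the same 20 racks at 40kW/rack would need 1.2MW.
print(facility_draw_kw(20, 12))   # 360.0
print(facility_draw_kw(20, 40))   # 1200.0
```

At 40kW per rack, even a modest 20-rack deployment approaches the megawatt range – a question worth putting to any prospective colocation provider.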
What happens if there is a problem with grid power? With a general colocation facility, there will be some form of immediate failover power supply (generally battery, but sometimes flywheel or possibly – but very rarely – supercapacitors), which is then replaced by auxiliary power from diesel generators. However, such immediate power provision is expensive, particularly when there is a continuous high draw, as is required by HPC. Make sure that the provider not only has an uninterruptible power supply (UPS) and auxiliary power system in place, but that it is also big enough to provide power to all workloads running in the facility at the same time, along with overhead and enough redundancy to deal with any failure within the emergency power supply system itself. Also, make sure that it is not ‘just a bunch of batteries’: look for in-line power systems that smooth out any issues with the mains power, such as spikes, brown-outs and so on.
Remember that a lot of power also gets turned into heat. Hyper-dense HPC platforms, even where they are using solid state drives instead of spinning disks, will still produce a lot of heat. The facility must be able to remove that heat effectively.
Taking an old-style approach of volume cooling – where the air filling the facility is kept at a low temperature and sweeps through the equipment to remove the heat, which is then extracted outside the facility – is unlikely to be good enough for HPC. Even hot and cold aisles may struggle if the cooling is not engineered well enough.
A colocation facility provider that supports HPC will understand this and will have highly targeted means of applying cooling to equipment where it is most needed.
HPC is moving to a price point where many more organisations can now consider it for their big data, IoT, analysis and other workloads. There are colocation providers out there who specialise in providing facilities that can support the highly specialised needs of an ultra-dense HPC platform. It makes sense to seek these providers out.
Quocirca has written a report on the subject, commissioned by NGD and Schneider. The report is available for free download here: http://www.nextgenerationdata.co.uk/white-papers/new-report-increasing-needs-hpc-colocation-facilities/
According to the GSMA, there are nearly 5 billion active individual mobile phone contracts on the planet at the moment. Sure – many of these will still be for individuals who have more than one device, but it is still felt that by 2020, around 75% of the world’s population will have some form of mobile device.
With global and local handset manufacturers moving from providing low-end, voice-only handsets for emerging markets to making cheap smartphones available, a whole new approach is possible in how such markets can operate at a social and economic level.
As mobile connectivity increases in these countries and the use of 4G and 5G overtakes the old 2G and 3G connections originally put in place for the major conurbations, relatively high-speed, universal wireless connectivity becomes the norm. The smartphone can become a personal hotspot through which an individual’s other devices connect to the greater world as needed. But what sort of things could this bring in?
Firstly, consider health. Low-cost wearable sensors could be provided to monitor such things as blood pressure, blood sugar levels and so on. For patients who have been seen by a travelling doctor and diagnosed with, say, a fever, cheap, disposable digital thermometers can measure and send back data via the mobile device on a regular basis, so that the doctor can respond on a more ‘as needed’ basis.
The same goes for pregnancy – rather than hoping that nothing untoward will happen between visits when the doctor/midwife just happens to be in the area, wearables can send back data as needed so that the health of the mother can be monitored centrally on a regular basis. Many issues can then be dealt with directly over the mobile device, via voice or video call; other areas through the sending of links to the phone; others by scheduling a visit from a lower-skilled local healthcare professional. Only where a real emergency is obvious does the doctor have to go to the patient directly.
Now consider the economic basis.
As these smartphones all have browser capabilities, individuals can now cooperate and trade with each other far more easily. A farmer in one area of the country can use cloud-based systems to find customers in other areas – or can input details of crop availability to food processing companies that may wish to buy the crops. Issues such as the appearance of a pest – locusts, say – or an impending drought can be quickly logged, so that tracking can be initiated and the problem dealt with far more effectively. The farmer can also keep a closer eye on what is happening across their farm through internet of things (IoT) devices connected to the mobile device.
Small, local farmers can let villagers know when they will be in the area with specific crops, and what price they would like for them. They can then take orders and adjust prices as necessary to ensure that the entire crop is sold at a good margin in the minimum number of journeys required.
Farmers can also become cooperatives. They can come together to provide a more complete offer – one lorry can pick up supplies from multiple farms and deliver packages of, say, maize, milk, meat, vegetables and fruit to markets, or even directly to customers. Smartphones can provide mapping and geo-analytical systems to ensure that the lorries take the optimum route, minimising the costs of fuel and stress on the vehicle itself.
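As a simple illustration of the kind of route planning such a mapping system performs, the sketch below uses a nearest-neighbour heuristic over hypothetical farm coordinates. Commercial mapping services use far more sophisticated solvers and real road data; this only shows the principle:

```python
import math

# Illustrative nearest-neighbour routing sketch: visit each farm pickup
# point starting from a depot, always heading to the closest unvisited
# stop. A simple heuristic, not what a production routing engine does.

def distance(a, b):
    """Straight-line distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def plan_route(depot, stops):
    """Greedy route: repeatedly append the nearest remaining stop."""
    route, remaining = [depot], list(stops)
    while remaining:
        nearest = min(remaining, key=lambda s: distance(route[-1], s))
        remaining.remove(nearest)
        route.append(nearest)
    return route

farms = [(4, 4), (1, 0), (0, 3)]  # hypothetical pickup coordinates
print(plan_route((0, 0), farms))  # [(0, 0), (1, 0), (0, 3), (4, 4)]
```

Even this naive heuristic cuts out obvious back-tracking; real systems would add road networks, vehicle capacity and time windows on top.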
By coming together as a cooperative, it also provides the farmers with greater collective bargaining power when dealing with downstream food processing and wholesale companies. Offers of crops can be sent to multiple different prospective customers at the same time, getting them to compete with each other to gain delivery of the crops to themselves.
Individuals can create their own businesses. Goods that sell well to richer foreigners, such as ethnic art and jewellery, can be advertised directly via the web, using the mobile device as a means of entering the goods into cloud-based retail systems. On the sale of an item, the monies paid by the customer can be cleared via a service such as PayPal into an easily accessible account; the individual can then arrange for the item to be picked up by a courier, or sent for first-stage delivery to a more central place via train, boat or plane as required.
For the countries involved, the rise in personal mobile device ownership must be seen as a major chance for individual, local and central innovation. However, contract prices need to be managed to ensure that the cost equation to the individual is obvious.
Governments may need to provide community systems, where a few mobile devices are made available to a community on the understanding that they will be lent to individuals on an as-needed basis. However, this is a minor issue, as the figures show such major growth in device ownership. Where real help will be required is in creating and providing low-cost access to the cloud-based services involved. It may be that data contracts are subsidised under a country’s health budget, as the returns in this area can be so significant. Healthcare-based cloud services can be funded the same way – or via foreign aid or non-governmental organisation (NGO) funding projects. If the device and data contracts are covered in this way, the individual and their community can then work on building the additional services themselves.
In the early stages, governments may find that grants or prizes for individuals and groups that create innovative cloud-based services – ones that help a specific group of people or address a specific general need – will drive innovation in how mobile devices can be used.
A mobile device-first approach to social and economic success will be different to that which has already occurred in more mature markets. It is far more of an opportunity, as there is little technology already existing that must be considered. Such an environment gives massive opportunities to those involved.
There were plenty of amazing products launched and on display at ISE2017 in Amsterdam in early February. But in the background buzz there was a common theme of an industry in transition. While many talked about convergence between AV and IT, some fear the risk that it will actually be more of a ‘collision’. This will have a consequential impact on jobs and revenues.
None of this restrained the exuberance of showcasing the best of the audio visual (AV) sector. The event brought in a record attendance of more than 73,000. In many quarters, there was also a more upbeat assessment of the new opportunities that might be created as the AV and IT sectors move closer together. There was also an acknowledgement that this would require some work.
Now the dust has settled and the exhibition paraphernalia is dismantled for another year, it is possible to take a pragmatic view of where the opportunities may lie.
The AV industry is undoubtedly undergoing change, but the IT sector is by no means static or settled. There has been a significant and ongoing shift towards the utility or ‘as-a-service’ model, which some find unsettling for both job security as well as data security. There has also been the liberation of IT into the hands of consumers. Mobile, wearables and the internet of things (IoT) have seen IT shift from the easily managed desktop into a voracious hydra of access options. Great for users and customers, but adding to the already challenging IT operational burden.
Is now a good time then for IT to work more closely with AV?
Historically, the focus of AV could be characterised as the experience within the room and an increasingly spectacular ability to convey information. For many, that meant presentations and over the years, the technology that this encompasses has grown in capability and usability. It has also become more connected.
This is where the overlap with IT, with its focus ‘beyond’ the room and across the network, becomes more apparent.
AV is all about the user experience and supporting media-rich communication. With recent advances in large touch screen and interactive displays systems – mirroring the advances in mobile IT with tablets and smartphones – this user experience has expanded into the important, but often elusive, area of collaboration.
This is high on the agenda for IT. The word ‘collaboration’ has been added onto the end of the term Unified Communications, and peppered liberally across many PowerPoint presentations. Making it a reality that delivers its anticipated value has proved difficult.
Making collaboration a reality
IT is very used to tackling the challenges of integration, security and resilience. It has also been unifying the communications plumbing with the help of major IT vendors. But turning this into seamless simple experiences that people delight in using every day is rarely a core competency. Here is where a closer relationship with AV would be beneficial to both sectors – collaboration rather than collision.
Tools for enhancing communications by unifying or incorporating different strands of media, such as video, are only one of the areas where the AV world is moving away from point products towards solutions, building broader relationships in open ecosystems of partners. The industry is now showcasing integrated systems aimed at specific business problems – not just collaboration, but also omni-channel commerce solutions for retail, tools for education and smart buildings, as well as the more obvious sectors focused around entertainment.
This was evident at ISE2017, not only in the way that halls were oriented around these business topics as themes, but also in that the discussions and presentations on stands and in the conference had moved on from form and features to addressing business needs and challenges. With this positive attitude, the AV industry does not need to fear convergence with IT, but can embrace it as something that will be good for both sectors.
In October 2016 Quocirca reported on a new breed of digital rights management (DRM) tools that has emerged in recent years. These tools have security built in to their core and are designed to support the growing use of cloud stores and mobile computing (DRM 2.0). The post looked in detail at three vendors: Vera, FinalCode and Fasoo. Some others were mentioned in passing, including Seclore, a California-based vendor with origins in India and some major European customers.
Perhaps the most striking thing about Seclore is its claimed DRM market share for its Rights Management product which it says is second only to Microsoft (the latter embeds DRM in certain of its other offerings). Seclore says its own directly managed customer base of 470 enterprise customers accounts for 4.5 million end users. However, via OEM partners it claims another ten thousand customers with 8-9 million users.
In many cases partners are using Seclore Rights Management to extend the scope of existing content management or productivity products, ensuring protection continues beyond the scope of the base product. For example, Citrix ShareFile enables security to ‘follow the file’ through integrating Seclore; this was required to extend DRM to cloud and mobile use. Seclore has been integrated with IBM’s FileNet content management for the same reasons.
It is not just content management systems. Data loss prevention (DLP) systems were originally designed to deal with content moving around within an organisation’s network and to police what left it. This has become too limited an approach with the growing use of cloud stores, and Seclore claims that both Symantec’s and McAfee’s DLP products are being extended with its product to enable the external use of DLP.
As well as being designed to address the need for external sharing, Seclore ensures it remains independent of device types and document formats to support as wide a range of use cases as possible.
Another intriguing initiative is that, wherever possible, it aims to inherit rights and policies from the original systems – for example, SAP and Microsoft SharePoint – rather than having to re-write them. However, policy can be modified, and Seclore Rights Management also enables policy to change as documents progress through a workflow, for example as financial results move from confidential to public domain. These capabilities are key to making Seclore’s OEM partnerships work.
If, like many, your organisation has reached the point where the management of rights needs to be extended to cloud stores and mobile users, then Seclore should be added to the list of products for consideration. Better still, it may be possible to simply upgrade some of your existing technology if there is an existing Seclore integration that allows you to do so.
The impact of self-driving technology, whether it be Uber-style driverless ride sharing vehicles, automated long-haul lorry driving or drone transport, will be felt across all transport sectors.
The next 20 years will see a steady uptake of driving automation. It will increase real-time communications in order to minimise travel time and cost. Legislators must grapple with standardisation, liability and security issues, while the industry adds more and more driver-assistance services under the hood without significantly increasing the price point. But what about connectivity requirements, and will driverless cars actually reduce travel time?
Automation in the works
High-end cars today are more than semi-autonomous. Many hundreds of metres of wiring connect sensors to computers that directly interface with the engine and steering systems. The development of car automation technology is a multi-billion-dollar race – a mix of competition and co-operation between the IT and automotive industries.
Major players on the IT side include Google with its Waymo driverless car technology, and Amazon, Microsoft and Apple with their navigation technologies. On the car manufacturing side, GM has acquired Cruise Automation to create a range of driverless cars, and Mercedes is developing its Car-to-X technology that lets the car exchange information with the surrounding infrastructure, such as traffic lights, and with other connected vehicles. Ford is partnering with Amazon to provide its driverless cars with Alexa, Amazon’s smart voice assistant technology, allowing drivers to communicate with the car systems by voice.
Automated traffic infrastructures
In a fully automated road traffic scenario (something the airlines are pretty close to in the sky), the speed and course of driverless vehicles is optimised by a city-wide computing system. That requires fast and secure active-to-active WAN connectivity between cars and traffic management systems. The automated – and ultimately driverless – cars will need network connection capabilities to handle in-car IoT communication between sensors and computers, as well as external wireless 4G LTE and WiFi connectivity. The cars may also need satellite connectivity in rural environments.
Advanced navigation systems already have network connectivity to check weather and traffic conditions ahead. Intelligent mapping systems like HERE supply information to help control self-driving cars, which are equipped with street-scanning sensors to measure traffic and road conditions. This location data can in turn be shared with other map users.
Ultimately, driving cars will be left entirely to computers – in cars without steering wheels. We will all be passengers or freight. Mobile connectivity must be maintained using dedicated roadside Wi-Fi networks as well as the existing mobile data services. The ability to switch, select and bond with constantly changing wireless base stations will be crucial for success. This is where SD-WAN routers from vendors like Peplink, that can handle multiple connections as a single virtual connection, are needed across a wide range of mobile environments.
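The core idea behind such multi-link routing can be sketched simply: score each available uplink and prefer the best at any moment. The scoring function and figures below are hypothetical illustrations, not how Peplink or any other vendor actually weights its links:

```python
# Minimal sketch of the link-selection idea behind multi-WAN routing:
# score each available uplink and send new traffic over the best one.
# Scoring formula and link figures are invented for illustration.

def link_score(latency_ms: float, loss_pct: float, bandwidth_mbps: float) -> float:
    """Higher is better: prefer high bandwidth, penalise latency and loss."""
    return bandwidth_mbps / (latency_ms * (1 + loss_pct))

links = {
    "4G LTE":        {"latency_ms": 60,  "loss_pct": 0.02, "bandwidth_mbps": 40},
    "roadside-wifi": {"latency_ms": 15,  "loss_pct": 0.10, "bandwidth_mbps": 100},
    "satellite":     {"latency_ms": 600, "loss_pct": 0.01, "bandwidth_mbps": 25},
}

best = max(links, key=lambda name: link_score(**links[name]))
print(best)  # roadside-wifi
```

In practice a bonding router re-evaluates continuously as the vehicle moves, failing traffic over as links appear and disappear – the satellite link scoring worst here, but remaining the fallback of last resort in rural areas.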
With the driver gone, next to go may be the privately-owned car. The Singapore government estimates that replacing today’s 700,000 private vehicles with network-connected, driverless vehicles would reduce the Singapore car pool to 300,000. It would simultaneously reduce transport times and the need for parking spaces. It would generally lower pollution levels and improve road safety.
Reduced travel time?
The Singapore scenario, and similar assessments of driverless traffic, factor in the advantages of much higher traffic density and the reduced need for parking spaces. With central management of in-city transport, users will buy transportation services – not vehicles. What these scenarios do not factor in is the increase in traffic if transport becomes as easy as using your smartphone. When every child, or every disabled, elderly or drunk person, can order driverless transport, we risk a physical traffic volume explosion. Just look at the traffic increases the smartphone caused. So maybe queuing is not going away just because we automate it.
Consumerisation of mobile technology has had many benefits. It has driven down the prices of devices and improved the user experience, to the benefit of non-technical users. Awareness, and crucially acceptance, of mobile devices has soared. All of this might seem good for the business, but there is a significant drawback. Consumer attitudes to technology can lead to a throwaway culture, with the obvious impact on wastage. This is not sustainable.
Mobile technology uses precious materials which are becoming scarce, expensive to find or politically harder to gain access to. Devices also include hazardous waste materials, and a lot of energy is consumed during production. No surprise that governments are increasingly introducing legislation to decrease waste, lower carbon emissions and penalise polluters.
This is being felt by organisations already, and will have further impact in future as regulations tighten. A much larger direct effect of throwaway technology is the disruptive impact on business processes. This has commercial and not just environmental consequences. Low cost consumer devices might seem simple and cheap to replace, but device failures and unexpected changes affect and interrupt the business process.
While few working environments are really ‘hazardous’, they can be unpredictable and unforgiving if devices are mishandled. So, a better approach is to make mobile device design fit for purpose and sufficiently durable. This means considering not only the device itself, but also the peripherals, accessories and software that will be used with it over its life.
More durable ‘whole life’ design should ensure devices can be maintained in the field. It could also take into account that devices should be compatible over several generations with replaceable components such as batteries and other ecosystem elements such as printers, scanners, cases etc. This would keep costs down and address some of the environmental concerns about wastage from replacing items that still work but have been made obsolete because of changes to the core device.
Whole life design would also offer better support for business continuity where workers rely on mobile devices. If devices are not sufficiently durable there is always the risk of something breaking. Failure of any single element of the system causes downtime, aborted processes and user frustration.
A different economic model, which takes the whole life approach, has been suggested from the work of the Ellen MacArthur Foundation. This followed the “cradle to cradle” concepts established by William McDonough and Michael Braungart. It has at its core the term ‘circular economy’, and its approach is to replace the current largely linear approach of ‘take, make and dispose’ with one in which resources circulate at high value, avoiding or reducing the need for new resources.
There are many environmental benefits to a more circular economy and a ‘circular’ or sustainable approach to mobile devices – from reducing greenhouse gas emissions and other pollutants to relieving pressure on raw materials and energy consumption. Such circularity could also be directly beneficial to businesses and the mobile workforce:
- Durable design. Products are built to last and survive the day-to-day knocks and challenges of an active working environment. Products are increasingly built for energy efficiency, with simpler in-field replaceable components, e.g. batteries.
- Sustainable Supply Chain. Products are designed for in-field upgrade and re-use at end of life. This provides opportunities for remanufacturing and refurbishment across the supply chain.
- Recycling and Recovery. Manufacturers operate comprehensive warranties and take-back programmes at the device end of life, making it easy to return and responsibly recycle hardware.
- Product life extension. Manufacturers are able to extend the product life through software and firmware updates. This allows for long-term compatibility with peripherals and accessories, ensuring all elements of the mobile device ecosystem remain in active use for as long as possible.
The key principle is that while a more circular approach offers environmental benefits, it also provides benefits to the business: direct cost savings in the total cost of ownership of devices, and indirect savings from the reduction of disruption to the mobile business process. This approach to mobile device durability is further explored, with guidelines for how to build a sustainable mobile device strategy, in the report “All mobile, still working, becoming sustainable”.
The year is 1750 – just before the industrial revolution. The overall population of the UK is around 6.5m. London has a population of 675,000: the second most populated city is the sea port of Bristol, with 45,000.
Roll on to 1850, when the industrial revolution has passed its peak. The UK population is now 26m. London has a population of over 2.5m. Liverpool, Manchester, Birmingham, Leeds and Sheffield have all overtaken Bristol, all with populations of over 150,000. Whereas London has remained at around 10% of the country’s population, the next 5 cities have moved from being around 2% of the population to around 6%.
The transformation of the northern cities pulled in people from the outlying areas. Moving to the cities ‘paved with gold’ was seen as the way to become rich via working in the new mills. What it really led to was the London that moved from Hogarth to Dickens, and northern cities where malnutrition, illness and the poor houses led to high rates of death in these expanding, coal-fired conurbations. The countryside suffered – there were fewer workers available to work in the fields, and this led to less fresh food being available to feed the growing population of the cities, leading to diseases such as rickets and scurvy. Even where workers remained outside of the cities, they were increasingly pulled in to working down the mines to fuel the growth of the cities.
Is an equivalent happening as technology creates the digital revolution?
The wrong focus
The focus to date has been on the intelligent city. This has a degree of sense, in that it provides constraints around a vision. Those creating the intelligent city can focus on specific boundaries, a specific population of people and specific desired outcomes. However, if the desired value of the intelligent city is forthcoming, then that city becomes more attractive as a place to live than other cities, towns and villages around it. This has been seen in cities where hyperspeed internet has been introduced, and where integrated citizen services have improved accessibility of certain services. The massive growth of cities such as Pune in India (10-year population growth 40%) and Shenzhen in China (25%) shows how people are still being sucked in to high-growth centres.
London now has a population of around 8m – more than the whole UK population back in 1750. Even with massive improvements in technology, it is struggling – many organisations find that advertised internet speeds are rarely (if ever) achieved; housing costs are driving people to live outside the city and travel in; transport and utilities are struggling to cope with demand; homelessness is on the increase. London is far from being an intelligent city.
This is where the internet of things (IoT) may be able to help. Instead of focusing on an individual city, governments, organisations and communities should start to focus on citizens across the whole of the country, and even beyond. Each citizen has their own needs, whether they be a city banker or a country farmer. Prioritising one above the other leads to increasing friction and feelings of ‘us and them’ between individuals and groups. Providing a level playing field leads to a more cohesive community, which then leads to greater success as a country.
As a starter, providing good levels of internet access to villages means that more people can work from these areas, moving away from the large second-home model that is prevalent in the UK. Many of these homes are empty for large parts of the year, meaning that few fully local businesses can survive. Enabling people to spend more time in the villages can revitalise such local businesses – the butchers, bakers and greengrocers, for example. However, the government’s target of a minimum of 10Mb/s as a universal service obligation (USO) by 2020 is not going to help much here.
The UK is already way down the global internet speed rankings. With some countries already working towards USOs of 1Gb/s, and some cities, such as Chattanooga in the US, already committed to a USO of 10Gb/s by 2030, 10Mb/s is looking a little like wet string and baked bean cans.
As consumers move increasingly to a digital economy, the need for faster broadband speeds is pressing. Sure – basic browsing, buying and information seeking can be done on 10Mb/s, but telephony, music, video conferencing and HD TV streaming will be constrained by such speeds. 5G could help here by providing high speed connectivity without the need for dependence on ageing copper and aluminium infrastructure.
The broader use of the internet of things (IoT) also needs good connectivity. Farmers can use IoT devices on their farms to optimise the use of the value chains from the farm to the fork – but the data being gathered by thousands of devices on the farm needs to be dealt with adequately. Some of that can be managed through intelligent filtering at the farm itself, but true high speed internet will help enormously in the capabilities to aggregate, analyse and report on the data.
Public transport can also benefit from such connectivity – citizens can have full itineraries created and managed in real time, linking buses, taxis, trains and so on, lowering the need to own a car. Indeed, as autonomous vehicles come through, solid connectivity to the clouds where vehicles exchange and act on data becomes a necessity.
The right skills at the right time
Identifying required skills in real time can also be better enabled. Need someone to come to you and fix your printer? Maybe there is a mobile engineer not too far away at the moment – GPS tracking and mobile work ticketing can mean that the expert is there in a matter of minutes or hours. Likewise, need someone to fix your machinery on the farm? Don’t wait until tomorrow – either have the IoT pre-identify the problem before it becomes a major issue and call in an engineer to swap things out, or get the nearest engineer on site as soon as possible – without needing to pick up the phone.
The whole of the UK can benefit from an IoT based on better connectivity – it can move beyond today’s narrowly focused intelligent city projects to an intelligent, and far more productive, country model. If countries were then to use connectivity in a positive manner to break down the bureaucratic and nationalistic walls between nations, then we may – just may – be able to move toward the intelligent planet.
Mobile devices put access to IT right into the hands of people while they are out and about performing their work tasks. For many this is not just about ‘being in touch’ or getting access to useful data. IT tasks performed using the mobile device are critical to the business process.
These tasks could include a courier getting delivery jobs or the recipient’s signature or a railway guard checking and selling tickets. It might involve an engineer in the field performing maintenance, identifying failure and scheduling spare parts. Or it might be a retail worker checking stock and inventory. In each case, the mobile worker is reliant on the technology.
Once, these tasks would have involved pen, paper, forms and considerable delays. Now the transactions are instantaneous, with paper largely replaced by scanners, sensors and code readers. Physical output will only sometimes be necessary, for part of a service interaction or for those requiring a receipt. This means that all elements – hardware, software and network connection – are important to the continuity of the business process.
Is consumer mobile technology sufficiently up to the task?
Not all situations are hazardous, but some – outdoor workplaces such as building sites, or damp and dusty locations – will be. Others will involve open interaction in public places: on transport, in shops, or across large campuses with indoor and outdoor spaces, such as hospitals, universities and factories. Most working locations can be unpredictable and unforgiving if devices are mishandled. For many of these working environments, consumer mobile technology is not sufficiently durable. Failure of any single element of the system causes downtime, aborted processes and user frustration.
Individuals will feel a connection to mobile devices of their own and will (probably) take a little more care with them. Workplace devices are a different matter. Many workers will have other things to consider. They may be working outside, or in cold conditions where they need gloves. It might not always be possible to pay complete attention to looking after the device. It’s not that employees are being careless, but their primary focus needs to be the task in hand, not the device. The device needs to be sufficiently durable to take care of itself.
The research in Quocirca’s report, “All mobile, still working, becoming sustainable”, covers the adoption of sustainable mobile device strategies. It shows life expectancy is one of the top three buying criteria, after product cost and cost of ownership, for mobile devices. However, too much attention to upfront device cost savings by using cheaper or consumer devices risks introducing increased costs over a longer period. Even the oft-used wider perspective of total cost of ownership risks ignoring consequent cost increases for the business process.
Sustainable mobile device strategy
This is where a sustainable mobile device strategy will be beneficial, not only for the environment, but also for its cost impact on the business. This comes from a mix of hard costs and soft costs, since mobile devices themselves are part of a broader ecosystem. In many mobile use cases in logistics, retail, transportation and field services there will be important ancillary components. These could include vital peripherals – scanners, printers etc. – as well as accessories – protective and carry cases, vehicle mounting points etc. These may well be affected by changing or updating broken devices.
A sustainable approach means that these elements should still be in use over several generations of the primary device. This itself should be sufficiently durable to survive longer. Devices should also be update-able and upgradeable in the field; this includes changing batteries. These simple hardware improvements extend reliable working, and avoid one of the largest soft costs – interruptions to the business process. Consumer-oriented mobile devices are rarely this flexible, robust or designed with longevity of peripheral support in mind.
More consistency and longer working with the same device means less retraining required for users and avoids frustration. Furthermore, happier users are more likely to take a little more care of the tools they have become familiar with. It is therefore better for both the environment and the bottom line to take a longer look at the use of mobile devices over the entire business process, rather than simply trying to make an upfront saving on device cost. These approaches towards a more sustainable mobile device strategy are explored further in the Quocirca report, “All mobile, still working, becoming sustainable”.
Load balancing (LB) is now popping up on the corporate security agenda. LB is no longer just about managing traffic flows across enterprise routers and servers. In the age of the cloud and software-defined networking (SDN), the LB off-loading function has serious possibilities for deflecting DDoS attacks by shifting attack traffic from the corporate server to a public cloud provider. Next-generation software load balancers with advanced dashboard capabilities can also provide deep analytics down to the individual application. This is exemplified in the next-generation SDN load balancing just announced by AVI Networks.
Companies increasingly rely on their WAN access for business-critical application performance, and servicing their on-line customers. Previously, that would indicate the need for specialised hardware and significant redundant capacity – just think of retail traffic spikes on Black Fridays! It would also be expensive to upgrade. With SDN, this all becomes a software issue on standardised X86 hardware.
We also continue to see increases in the number and size of DDoS (distributed denial of service) attacks, with the heaviest attacks now surpassing 600Gbps, according to Akamai. This type of cybercrime represents about 25% of corporate cybercrime costs. Building significant hardware-based DDoS avoidance capacity is very costly and requires high maintenance levels. Software load balancers with cloud offload can provide much lower cost and elastic protection. To demonstrate scalability in software, AVI Networks recently scaled applications from zero to one million SSL transactions per second in under ten minutes on the Google cloud.
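The offload idea itself is simple to express. The sketch below shows the decision a software load balancer might make: serve traffic from the on-premise pool under normal load, and spill connections to an elastic public cloud pool when request rates spike beyond local capacity. The threshold and pool names are illustrative assumptions, not any vendor’s actual configuration.

```python
# Hedged sketch of cloud offload for DDoS absorption: below the assumed
# on-premise capacity, traffic stays local; above it, new connections
# are steered to an elastic cloud pool that can soak up the spike.

ON_PREM_CAPACITY_RPS = 50_000  # assumed safe on-premise throughput

def choose_pool(current_rps):
    """Pick a server pool based on the current inbound request rate."""
    if current_rps <= ON_PREM_CAPACITY_RPS:
        return "on-prem-pool"
    return "cloud-scrub-pool"
```

In practice the redirection would happen via DNS, anycast or the balancer’s own control plane, but the elastic principle is the same: let the public cloud absorb what the corporate data centre cannot.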
SDN in the data centre
With SDN, enterprise data centres can rely on a converged X86 server base. They can virtualise their WAN access channels by bonding fixed and wireless connections using SDWAN routers (see https://www.computerweekly.com/blog/Quocirca-Insights/Dismantling-data-centre-and-WAN-silos). And now they can deploy software defined load balancing to ensure their application performance, as well as elastically expand (or contract) network capacity as needed.
To do that requires data centre integration, virtualisation and convergence, as well as hybrid cloud management. Furthermore, to be on the leading edge, companies will want to containerise these functions to allow data centres to deploy business applications more rapidly, with reduced development overhead, lower costs, and increased business agility.
The diagram depicts load balancing across a hyper-converged infrastructure (source: AVI Networks).
SDN in the converged data centre
The next-generation data centre, using products such as those from Big Switch Networks, creates a distributed data centre architecture: bare metal hardware that is virtualised, using containers and hybrid cloud extensions. SDN is still not one-size-fits-all. Inevitably, IT departments looking at the next generation of SDN load balancers need to ensure:
- Compatibility with the major public cloud providers.
- Compatibility with virtualisation platforms such as VMware and OpenStack.
- X86 compatibility – for example, Intel bare metal servers.
- Alignment of automation, management and orchestration with tools such as Chef, Ansible and Puppet.
- Support for SDN controllers from providers such as Cisco, HPE and Juniper (Contrail).
- Support for container technologies such as Kubernetes, Red Hat OpenShift, Mesosphere DC/OS and Marathon.
- REST API-based dashboards to manage and orchestrate the hybrid cloud environment.
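To illustrate the last point, per-application analytics typically arrive from such dashboards as JSON over a REST API. The sketch below parses a hypothetical response and flags the applications with the worst 95th-percentile latency – the delay bottlenecks. The endpoint shape and field names are assumptions for illustration, not any vendor’s actual API.

```python
import json

# Hypothetical per-application metrics, as a REST API might return them
SAMPLE_RESPONSE = """
{"apps": [
  {"name": "checkout",  "p95_latency_ms": 480, "rps": 1200},
  {"name": "search",    "p95_latency_ms": 95,  "rps": 4500},
  {"name": "catalogue", "p95_latency_ms": 130, "rps": 2100}
]}
"""

def slow_apps(body, threshold_ms=200):
    """Return the applications whose 95th-percentile latency exceeds
    the threshold - the candidate delay bottlenecks."""
    data = json.loads(body)
    return [a["name"] for a in data["apps"] if a["p95_latency_ms"] > threshold_ms]
```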
Load Balancing as IT insurance
IT departments stand and fall by their ability to deliver business continuity at ever lower price points. They need to justify their own existence every day! Call it insurance in the broader sense. Providing elastic allocation of compute resources, and using the ability of major public clouds to soak up DDoS attacks to ensure business continuity, can be viewed as an insurance policy. IT faces line-of-business demands for more agility to support DevOps plans, and for the ability to provide different corporate constituencies with deeper analytics into the performance of individual apps, to determine where the delay bottlenecks are. Providing user-friendly and flexible business continuity options that deter lines of business from going off-piste will also curry favour with the company board as it attempts to implement governance, risk and compliance (GRC) policies.
Consumerisation and collaboration bring many positive changes to the enterprise. Employees can now use the devices they prefer. Through social media, they have also become used to sharing and communicating more readily with friends and colleagues. However, these changes also introduce security risks – just who and what have you got connected to the network?
If it was simply a matter of traditional IT products and regular employees, that would be complicated enough. Now all manner of smart devices and itinerant visitors are connected.
In a concerted industry effort to tackle these types of issues head on, cyber security is for the first time going to form part of the conference discussions at the forthcoming audio visual (AV) event, Integrated Systems Europe (ISE), in Amsterdam in February. The industry’s two main associations, CEDIA and InfoComm, have assembled an array of experts to discuss cyber security and the associated risks at a morning conference on Friday 10th February.
This initiative is to be welcomed as the security aspects of IT rise to the fore. Technology is not only pervasive in working environments, but also an integral element of our home lives as consumers. Widespread use can breed complacency. Organisations need to have the tools, systems and processes in place for technology to be used safely and securely in the workplace.
In many organisations cloud based services are simple to buy to extend a project without bothering IT. Employees are also used to bringing or wearing their own devices. This trend towards ‘shadow IT’ and BYOD (Bring Your Own Device) has few technical boundaries. So, when a meeting room screen needs to be connected or a video feed is required it is equally easy to buy consumer AV devices or services.
This ‘BYOAV’ (Bring Your Own AV) might seem innocuous, but AV technology, consumer and enterprise, has followed the same trends as many other technologies: cost reduced (so easily affordable), network ready, often wirelessly (so always accessible), and open (so should be interoperable). But it also introduces security issues that are often invisible.
AV equipment is frequently placed in locations where presenting and sharing involves third parties, either as recipients or co-presenters. Guest access to Wi-Fi networks is expected too and should be secured or managed, but connections to AV equipment are more lax. Older systems may still rely on VGA connectors and cables. Sophisticated modern AV installations and low-cost consumer options are increasingly wireless. Even if they include security, the chances are high that it will be different to devices from other manufacturers in other rooms. It will also most likely be different to what is already in place elsewhere in enterprise IT.
Some control and consistency will need to be imposed, but historically, AV installations have been part of office management and facilities, often with little involvement from IT. Current AV equipment is highly sophisticated. Its potential impact both on fixed and wireless networks and security, means that AV needs to be incorporated and integrated into the IT management function.
AV also needs to be considered as part of overall enterprise security. Decades ago, some companies worried about the ability for snoopers to pick up the signals from monitors from a car parked outside of offices. Today badly protected wireless devices and networks pose risks. So too do big bright screens that can be photographed surreptitiously by mobile devices.
Snooping by visual means or via an unprotected wireless network both constitute security risks when using AV. So too does the way that users – employees and third parties – authenticate to use or access AV systems. Dial-in codes, logins and guest access should all be treated in the same rigorous way as any other IT security. As it becomes increasingly simple to seamlessly share content electronically, so it has to be managed.
This has to include a combination of polices and processes as well as tools, but the first step is to understand the scale of the problem. To do this requires co-operation and integration between those involved in AV and IT. It starts with better understanding of the current capabilities of products available and the direction of innovation.
The AV industry has undergone much recent innovation. With large display technology becoming much more affordable, screens are popping up everywhere. These include ad hoc meeting spaces and huddle rooms as well as more formal conference rooms. These are being made accessible by companies from Google and Intel to Barco, Sony and AMX. These companies also attempt to apply security and control through their own, different, systems.
Each is all very well in isolation. In mixed environments, with so many other elements to consider, IT security needs to seamlessly consolidate diverse technologies. If the measures that keep AV systems secure become too complicated or restrictive, users will simply bypass them.
In addition to AV/IT integration, IT security managers need to extend security training and best practices to include visual and audible components. Unwanted data leakage is not just what is sent over the network, but may also be what is seen and heard.
AV security now needs to be taken seriously within IT. Given the current focus on collaboration and collaborative tools, IT managers would benefit from engaging with AV professionals. This could include a visit to major trade shows, such as ISE, and perhaps taking time to look in on the conference on cybersecurity.