Quocirca Insights


March 16, 2017  8:33 AM

Bad-bots, the new charlatans of healthcare

Bob Tarzey

Healthcare providers have many challenges, but if you stick with the mainstream, you can usually still expect a reassuring bedside manner from healthcare professionals; you have to actively seek out charlatans in the 21st Century! However, healthcare professionals are busy and consultations are often hurried. Anything that can help them save time is welcome and, as in many other industries, the healthcare sector is turning to automation.

In healthcare, automation often takes the form of software robots (or bots) that can automate certain tasks. Admin bots make appointments, provide access to clinical records, answer billing queries and process payments. Chat-bots can deal with routine ailments, freeing healthcare professionals to deal with more complex ones. Artificial intelligence (AI) will see the field move forward apace with advanced symptom checkers like Babylon Health, and there are already a number of healthcare projects based around artificial intelligences like IBM’s Watson and Google’s DeepMind.

However, there is a downside: charlatans may find their way back into mainstream healthcare in the form of automated threats, or bad-bots. These bots can be used to gain access to online healthcare systems, either via brute-force entry of personal accounts or by seeking out and exploiting software vulnerabilities. Once in, the criminals that drive the bad-bots steal valuable data (a full US Medicare record sells for around $500) or perpetrate insurance and payment card fraud.

These bad-bots may not be harming patients by dishing out poor medical advice like the charlatans of old, but their effects can be just as harmful. They impact the availability of healthcare applications, invade privacy and undermine confidence in what should be a brave new round of automation in the sector, one which frees healthcare professionals to deal with complex problems.

Fortunately, there are ways to identify, control and, when necessary, block bots, which are now estimated to be responsible for 46% of all online interactions. Quocirca has written a series of e-books on the problem in conjunction with Distil Networks, a provider of direct bot detection and mitigation technology. The latest e-book in the series, The ultimate guide to how bad-bots affect healthcare, can be viewed HERE.

March 15, 2017  3:33 PM

Information management – when metadata is king

Clive Longbottom

Consider a document.  It makes no odds whether it is a Microsoft Word document, an Adobe PDF file, an Autodesk file or whatever.  Just what can you find out about it?

Well, every file has a digital fingerprint associated with it:  an operating system can look at more than just the file extension to identify just what type of file it really is.  Within the zeroes and ones of the binary content of the file on the disk is a ‘wrapper’, a set of details that describe what the file is.
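
As a minimal sketch of how that wrapper can be inspected (the signature table here is illustrative, not exhaustive), a few leading bytes are often enough to identify the real file type regardless of its extension:

```python
# A few well-known file signatures ("magic numbers") found at the start
# of a file's binary content. Illustrative only; real detectors know hundreds.
MAGIC_NUMBERS = {
    b"%PDF": "Adobe PDF",
    b"PK\x03\x04": "ZIP container (used by .docx, .xlsx, etc.)",
    b"\x89PNG": "PNG image",
}

def identify_file(path):
    """Identify a file by its leading bytes rather than by its extension."""
    with open(path, "rb") as f:
        header = f.read(8)
    for magic, description in MAGIC_NUMBERS.items():
        if header.startswith(magic):
            return description
    return "unknown"
```

This is essentially what an operating system or indexing engine does before deciding how to parse a file's contents.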

Once the wrapper is understood, the contents of the file can be indexed, so that systems can search this index as well as the actual document contents. For example, on my Windows device, a ‘search’ for the term ‘information management’ would pull this document (and many other files) up in the Windows File Explorer.

However, although this has uses, there are some problems.  Much metadata is mutable.  As an example, open a Microsoft Word document.  Click on the ‘File’ tab and then look at the right-hand pane marked ‘properties’.  You should see an author marked there.  However, if you click on the ‘properties’ marker itself, you can choose ‘advanced properties’ – here, you can change the author to anything you want.

Likewise, much of the metadata associated with the document can be changed.  Someone with very basic knowledge and the right tools can change the content of a document, along with its dates, and make it look, to all intents and purposes, like the original document.  As such, should a conflict arise between the actual creator of the file and a recipient who has then changed it, it becomes a case of one person’s word against another’s.
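
To illustrate just how easily such metadata can be rewritten, the sketch below back-dates a file's timestamps using nothing more than the standard library (the filename and dates are invented for the example):

```python
import os
import time

# Create a file now...
with open("report.txt", "w") as f:
    f.write("original content")

# ...then back-date its access and modification times by a year, making it
# appear untouched since then. No special tools or privileges required.
a_year_ago = time.time() - 365 * 24 * 3600
os.utime("report.txt", (a_year_ago, a_year_ago))  # (access time, modified time)
```

Changing the content itself is no harder with a hex editor, which is why trusting in-file or filesystem metadata alone is so risky.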

However, if immutable metadata is used, then things change.  By storing the file with extra information where all modifications are logged, such content changing is no longer possible.  By ensuring that the original author is logged and held against the document, along with all dates and times that the document has had an action taken against it (opened, edited, emailed, printed, whatever), full governance, risk and compliance (GRC) needs should be covered.

Let’s just start with document classification.  By assigning a simple set of metadata tags, such as ‘Public’, ‘Commercial’ and ‘Private’ to documents, a lot of process flows can be made more intelligent.  A Public document can be left unencrypted and moved along a process flow with very little interruption.  It can also be passed through email systems without too much scrutiny, apart from a content check to ensure that certain types of data or alphanumeric strings aren’t found within the document for data loss prevention purposes.  A Private document may need to be encrypted, and can only be made available to certain named individuals or discrete roles within the organisation.  The credentials of the sender and receiver of such a Private document should also be checked before it can be sent as an attachment to an email.
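
As an illustration of how such tags can drive process flows mechanically (the tag names and handling rules below are hypothetical), an email gateway might derive its outbound checks directly from the classification:

```python
# Hypothetical handling rules keyed by a document's classification tag.
HANDLING_RULES = {
    "Public":     {"encrypt": False, "dlp_scan": True, "named_recipients_only": False},
    "Commercial": {"encrypt": True,  "dlp_scan": True, "named_recipients_only": False},
    "Private":    {"encrypt": True,  "dlp_scan": True, "named_recipients_only": True},
}

def outbound_checks(classification):
    """Return the checks an email gateway should apply before sending."""
    rules = HANDLING_RULES[classification]
    checks = []
    if rules["dlp_scan"]:
        checks.append("scan content for sensitive strings")
    if rules["encrypt"]:
        checks.append("encrypt attachment")
    if rules["named_recipients_only"]:
        checks.append("verify sender and recipient credentials")
    return checks
```

The point is that once the tag is held as trustworthy metadata, the process logic becomes simple and auditable.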

Enterprise information management (EIM) systems make extensive use of metadata, as it enables so much more to be done.  It can do away with folder and file constraints, as pointers to the documents are metadata in themselves and the documents can reside anywhere.  Rather than taking an old-style enterprise content management (ECM) approach of pushing files into a relational database as binary large objects (BLObs), EIM content can stay where it is, using the EIM index and global namespace (the database of the pointers and all the metadata held on the files) to find the files themselves.

With EIM, when an individual searches for something, the system searches the metadata.  When they want to read or edit a document, the pointer shows the path to the file and enables access to it.
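
A toy version of such an index (the field names and paths are invented for illustration) shows the principle: search touches only the metadata, and the stored pointer resolves to wherever the file actually lives:

```python
# A toy global namespace: document IDs mapped to metadata, including a
# pointer (path or URL) to wherever the file actually resides.
index = {
    "doc-001": {"title": "Q3 forecast", "author": "A. Author",
                "tags": ["finance"], "pointer": "/nas/finance/q3.docx"},
    "doc-002": {"title": "Site plan", "author": "B. Builder",
                "tags": ["facilities"], "pointer": "https://box.example.com/f/123"},
}

def search(term):
    """Search the metadata only; the files themselves are never touched."""
    term = term.lower()
    return [doc_id for doc_id, meta in index.items()
            if term in meta["title"].lower()
            or any(term in tag.lower() for tag in meta["tags"])]

def locate(doc_id):
    """Resolve a document ID to the path or URL where the file resides."""
    return index[doc_id]["pointer"]
```

Because the index is small relative to the content it describes, it is cheap to mirror across locations for availability.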

This provides a much more flexible information management approach and, by copying the metadata store across multiple different locations, provides a level of high availability without the expense of dedicated systems using synchronised content databases.

A metadata-driven EIM system also improves security.  A cyclic redundancy check (CRC) can be carried out on each file as it is embraced by the system.  This creates a unique code based on the content of the file.  Should anyone change that file outside of the system, for example by using a hex editor at the hard drive storage level, the EIM system will know that this has happened, as the CRC check will identify that something has changed.
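
A minimal sketch of the idea, using Python's standard library (the file name and content are invented):

```python
import zlib

def file_crc(path):
    """Compute a CRC-32 checksum over a file's content, chunk by chunk."""
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            crc = zlib.crc32(chunk, crc)
    return crc

# Register the checksum when the file enters the system...
with open("contract.txt", "wb") as f:
    f.write(b"Payment due: 100 GBP")
registered = file_crc("contract.txt")

# ...then an out-of-band edit (e.g. via a hex editor) changes the content.
with open("contract.txt", "wb") as f:
    f.write(b"Payment due: 900 GBP")
# A later file_crc("contract.txt") no longer matches the registered value.
```

One caveat worth noting: a CRC catches accidental or casual modification, but it is easy to forge deliberately, so a production system would more likely register a cryptographic hash such as SHA-256 for the same purpose.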

All told, in the new world of highly open information sharing chains, immutable metadata is a need, not a nice-to-have.

Quocirca has authored a report on the subject, commissioned by M-Files, which can be downloaded here.


March 14, 2017  12:17 PM

Real-world views on the role of IoT in the farm-to-fork food chain

Clive Longbottom

Late on in 2016, Quocirca carried out primary research for Rentokil Initial, looking at perceptions about the current and future impact the internet of things (IoT) will have on organisations.  The respondents were from large companies in the farm, logistics/warehousing, food processing and retail industries in Australia, China, the UK and the USA.  None of the respondents was in a technical position – they were all chosen because they had responsibility for food hygiene within their organisation.

And herein lay a wake-up call for all those technology companies that believe the IoT is fully understood within their target organisations – for example, when it comes to the likely number of devices involved.  In research carried out by Quocirca for ForeScout earlier in 2016, where the respondent profile was senior IT decision makers in German-speaking countries and the UK, the average number of IoT devices expected to be in use within an organisation within 12 months was 7,000.


Figure 1: What quantity of IoT devices do you expect to deploy in the coming 24 months? (From Rentokil Initial research)

Compare this to the Rentokil Initial research, where only 10 respondents out of the 400 expected to have more than 1,000 devices – with nearly half expecting “very few” (less than 10) (see Figure 1).

Why the discrepancy? A more granular drill down into the data hints at the reasons.  Within the farm-to-fork food chain, the logistics function is already a big user of IoT devices.  Chilled transport uses temperature detectors and cab-based GPS, generally linked to central control systems; some advanced logistics companies are using multi-function systems that not only monitor temperatures, but also things like when and where the lorry or container’s doors were opened; the G-forces on the food packed in transit; CO2, nitrogen and other gas levels; and so on.

It would have been expected that, amongst the 100 logistics and warehousing companies interviewed, more would have had such capabilities – and that the number of IoT devices already in use would therefore already have exceeded 1,000.

Food processing lines also tend to be full of IoT devices – for example, devices that monitor the quality of food and the temperature of blanching or cleansing water, or that look for any problems along the line.

Rentokil Initial carried out some roundtables with some of their customers to drill further into perceptions around IoT.  One respondent stated that they had never even heard of the term IoT.  Others stated that they had specific needs – but did not see things such as the monitoring of how employees dealt with personal hygiene as an IoT issue.

It becomes apparent that whereas technical staff are seeing all of these as areas where IoT is of use, less technical staff see them as general tools of the trade – something that is part and parcel of what is needed, but not part of a more coherent, joined up environment.


Figure 2: In the context of managing food safety within your environment, how important are the following pieces of information?

“Having enough data to rapidly and effectively deal with an infestation/hygiene incident” was the number one concern.  However, other parts of the research showed that respondents were not thinking about the more standardised IoT platform that would be needed to pull such data together so that this can be done.

It is apparent that there is a deep chasm between those in the technology space who are building up a knowledge of the IoT and those in the line of business who are actually trying to deal with the day-to-day problems.

Vendors with an interest in IoT approaching IT departments may well find that they are shouting into an echo chamber – the people that they are targeting will agree with what they say, but will not be able to raise the funds necessary for meaningful IoT projects.

Instead, these vendors must construct solid business messages around why the IoT matters to the business; they must have solid use case scenarios that use the right language to empathise with the line of business’ needs.

Otherwise, IoT projects will be carried out in silos of usage, leading to the age-old IT problems of islands of data that cannot be pulled together easily and analysed.  This then minimises the value of the IoT and fails to provide the distinct value that benefits the business.


March 13, 2017  12:20 PM

Collaboration innovation – re-thinking the workplace

Rob Bamforth

Most organisations are looking for ways to foster collaboration and grow team productivity. How this is achieved is less obvious. For a while it has been assumed that if you throw sufficient communications media (ideally unified into a single tool) at people then they will spontaneously collaborate. This is rarely the case. What happens is either over-communication and information overload if there is a sharing culture, or siloed, secretive, business as usual, if there is not.


More radical approaches employ smart use of facilities or create collaboration spaces within the working environment. These might simply be comfortable seating in a relaxed and accessible part of the workplace for a few people to ‘huddle’ (such as this novel idea from Nook), or some forced Californian cool of beanbags, table football, bright décor and a limited edition coffee served by an on-site barista.

Walking or standing

While a comfortable working environment plays a part, there is something about the posture of participants that affects how they collaborate too. Are they walking, standing or sitting?

For those walking, the chances of meaningful collaboration are low. Already multi-tasking, their communication tends to be focused; issuing commands, some information sharing, but complex interaction between multiple participants? Unlikely. All useful, responsive and timely, but it is not collaboration – it tends to be more command and control.

Standing keeps people (literally) on their toes, and has been suggested as a way of holding shorter meetings. Attendees are less able to relax, so more likely to participate and reach decisions quickly. But does it lead to more or better collaboration?

One area where meeting space technology has advanced and become more widely available does support the notion of collaboration while standing around. The success of tablets has led to wider availability of touch screen displays. What started as recording and copying whiteboards has evolved into large touch-enabled interactive screens. These are often smart and connected to the network, enabling remote as well as local interaction and access.

Is this the solution to collaboration?

It depends.

While this will work well for sharing information – presentations, classrooms – it is not necessarily collaboration. One person presents or shares at a time. They might have their back to their audience while they interact with the screen. It works very well in a one to many scenario, and of course presenters can take turns. But this is not really multiple people working together and at the same time in free-flowing collaboration. Ideas may occur to individuals, but by the time they get their ‘turn’ the momentum has been lost or the discussion has moved elsewhere.

Sitting around

Most meetings involve attendees sitting. Keeping people engaged, especially when their email and favourite social media site are only a glance away, is a challenge. Sit them in remote places with only an audio connection on a conference call and the temptation to be distracted in boring moments might be too great. Being there in person or holding shorter meetings might be better, but that is not always possible. Adding video to the connection might help, but in a group setting, with everyone in the room looking at a distant screen, the situation is similar to a standing presenter.

Two recent product developments put their own distinct twists on how to do it differently and improve interaction.

One is Polycom’s portable video unit, the RealPresence Centro. Four screens with integral cameras and microphones make this connected Dalek the centre of attention in a meeting. Those involved sit, or stand, and talk to each other facing the device, which can be connected to remote participants on any other video device. It might seem quirky, but rapidly feels natural and engaging for everyone, who can participate locally and remotely, facing everyone else across the unit or across the network. With the concept of ‘huddle’ spaces proving popular, the RealPresence Centro might have found an interesting niche.

The other is more unusual, but familiar to anyone who has seen the film, Minority Report. Oblong’s Mezzanine employs a series of large screens and a wand pointing device (not yet holograms and hand gestures, Tom Cruise fans) to share and interact. Participants are surrounded and therefore immersed by information presented on the screens. These are replicated remotely for those beyond the room.

Content on screen can be interacted with, inserted, moved, parked and, crucially, visually presented using a third dimension of depth or distance away. Moving it closer makes it larger, moving in front of other content in a satisfying application of perspective. Everything is coordinated via the wand, but participants can bring and use their own devices to share and integrate into the experience – locally or remotely. Inserting new content, comments and flags is simple and seamless.

Mezzanine definitely has a different feel compared to other systems, and does need the room to be suitably equipped. The approach allows for much more free flowing interaction, avoiding stalling or interrupting thought patterns.

Getting everyone engaged

Technology companies and products have made it much easier to communicate. But this does not always make the process collaborative, engaging or ultimately effective at reaching a desired conclusion. All too often, new communications media are dominated by those who ‘shout loudest’ or are restricted to those in senior or special positions. Thinking differently about the process and environment from a human perspective might provide the impetus to make collaboration something that everyone wants to take part in, can take part in, and through which their contributions are recognised and valued.


March 8, 2017  1:50 PM

From ECM to EIM: the need for control

Clive Longbottom

Enterprise content management (ECM) has long been necessary for organisations in highly regulated industries.  From SoftSolutions through to Documentum and OpenText, companies have implemented systems that control the flow and access of information within their business.

However, the problem is that these products tend to be used purely to manage a small subset of an organisation’s content.  Most organisations wait until an information asset has gone through some of the early stages of its lifecycle before it is entered into the system.  This may be for reasons of cost (per-seat licencing for ECM systems tends to be high) or of process: for example, where an ECM system has been put in place to manage a single set of processes, such as those required by the Food and Drug Administration (FDA) in the pharmaceutical industry or the Civil Aviation Authority (CAA) in aviation.

Whatever the reason, putting only a subset of information into a system is dangerous.  When an individual carries out a search across the system, they will (unsurprisingly) only get returned what is in that system. If they are then going to make a decision on what is returned, they could be missing out on pertinent information that is still outside the system: documents that are still in the early stages of their lifecycle.

These early-stage documents will be the ones that contain information that is most up-to-date, as they are the ones that are still being worked on.  These could therefore carry the information that can make or break the quality of the decision. Increasingly, such documents may not even be stored within the direct control of the organisation – they may be held in the cloud using services such as Dropbox or Box; they may be elsewhere in the chain of suppliers and customers the organisation is dealing with.  As these assets are not in the ECM system, they are less controlled – access rights are not managed; information flows are not monitored and controlled.

Rather than converging on a system that fully manages information, organisations seem to be struggling to control the divergence of information types and locations – and this can be damaging.

A rethink of ECM that moves through to an enterprise information management (EIM) system is required.  EIM is an approach in which information is captured as close to the point of creation as possible and managed all the way through its complete lifecycle to secure archiving or disposal.

Based around an underpinning of metadata, large amounts of information can be controlled.  Rather than pulling all the documents together into a massive binary large object (BLOb) database, these files can be left where they are and only the metadata needs to be managed.  Such a metadata system will be a fraction of the size of the overall information, and it can be mirrored and replicated across the overall technology platform, providing high availability for searching and retrieving single items from the information asset base.

Through these means, all information sources can be included, so enhancing an organisation’s governance, risk and compliance capabilities.  It makes decision-making more complete and accurate, enabling an organisation to be more competitive.  It also provides better capabilities for collaboration around content, as single sources of original information combined with versioning and change management can be managed through the metadata.

In the first of a series of short reports on the subject, Quocirca looks in more depth at how an organisation needs to readdress its needs around information management in the light of increasingly diverse information assets and growing GRC constraints.  The report is available for download here.


March 6, 2017  11:10 AM

Colocation HPC? Why not.

Clive Longbottom

High performance computing (HPC) used to only be within the reach of those with extremely deep pockets.  The need for proprietary architectures and dedicated resources meant that everything from the ground up needed to be specially built.

This included the facility the HPC platform ran in – the need for specialised cooling and massive power densities meant that general purpose datacentres were not up to the job. Even where the costs of the HPC platform were just within reach, the extra costs of building specialised facilities counted against HPC being something for anyone who needed that extra bit of ‘oomph’ from their technology platform.

Latterly, however, HPC has moved from highly specialised hardware to more of a commoditised approach.  Sure, the platform is not just a basic collection of servers, storage and network equipment, but the underlying components are no longer highly specific to the job.

This more standardised HPC platform, built on commodity CPUs, storage and network components, is within financial reach. But this still leaves the small issue of how an organisation can countenance building a dedicated facility for a platform that may be out of date in just a couple of years.

For those with a more generic IT platform, colocation has become a major option for many.  Offloading the building and its maintenance has obvious merit, especially for an organisation that is struggling to understand whether its own facility will grow or shrink in the future as equipment densities improve and more workloads move to cloud platforms.

However, the use of colocation for HPC is not so easy.  The power, emergency power and cooling requirements needed for HPC will be beyond all but certain specialist colocation providers.

Power

Hyper-dense HPC equipment needs high power densities – far more than your average colocation facility provides. For example, the average power per rack for a ‘standard’ platform rarely exceeds 8kW per rack – indeed, the average in colocation facilities is more like 5kW.

Now consider a dense HPC platform with energy needs of, say, 12kW per rack. Can the colocation facility provide that extra power?  Will it charge a premium price for routing more power to your system – even before you start using it?  Will the multi-cabled power aggregation systems required provide power redundancy, or just more weak links in an important chain?

Also consider the future for HPC.  What happens as density increases further?  How about 20kW per rack? 30kW? 40kW?  Can the colocation facility provider give guarantees that not only will it be able to route enough power to your equipment – but also that it has access to enough grid power to meet requirements?
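
The arithmetic is worth sketching with some hypothetical numbers (the feed size and rack counts below are invented for illustration, not drawn from any real facility):

```python
# Hypothetical hall: a 2 MW feed, 300 standard racks at 5 kW each,
# and an HPC pod whose racks draw 12 kW apiece.
hall_feed_kw = 2000
committed_kw = 300 * 5            # power already committed to standard racks
hpc_rack_kw = 12

headroom_kw = hall_feed_kw - committed_kw          # 500 kW left in the feed
max_hpc_racks = headroom_kw // hpc_rack_kw         # whole racks that fit
print(max_hpc_racks)  # 41 racks before the feed is exhausted
```

At 40kW per rack, the same 500kW of headroom supports just 12 racks – which is why guarantees about access to further grid power matter as much as the wiring inside the hall.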

Emergency power

What happens if there is a problem with grid power?  With a general colocation facility, there will be some form of immediate failover power supply (generally battery, but sometimes spinning wheel or possibly – but very rarely – supercapacitors), which is then replaced by auxiliary power from diesel generators.  However, such immediate power provision is expensive, particularly when there is a continuous high draw, as is required by HPC.  Make sure that the provider not only has an uninterruptible power supply (UPS) and auxiliary power system in place, but that it is also big enough to provide power to all workloads running in the facility at the same time, along with overhead and enough redundancy to deal with any failure within the emergency power supply system itself.  Also, make sure that it is not ‘just a bunch of batteries’: look for in-line power systems that smooth out any issues with the mains power, such as spikes, brown-outs and so on.

Cooling

Remember that a lot of power also gets turned into heat.  Hyper-dense HPC platforms, even where they are using solid state drives instead of spinning disks, will still produce a lot of heat.  The facility must be able to remove that heat effectively.

Taking an old-style approach of volume cooling, where the air filling the facility is kept at a low temperature and sweeps through equipment to remove the heat, which is then extracted outside the facility, is unlikely to be good enough for HPC.  Even hot and cold aisles may struggle if the cooling is not engineered well enough.

A colocation facility provider that supports HPC will understand this and will have highly targeted means of applying cooling to equipment where it is most needed.

HPC is moving to a price point where many more organisations can now consider it for their big data, IoT, analysis and other workloads.  There are colocation providers out there who specialise in providing facilities that can support the highly-specialised needs of an ultra-dense HPC platform.  It makes sense to search these providers out.

Quocirca has written a report on the subject, commissioned by NGD and Schneider.  The report is available for free download here: http://www.nextgenerationdata.co.uk/white-papers/new-report-increasing-needs-hpc-colocation-facilities/



March 6, 2017  9:01 AM

Growth of mobile-owning individuals drives need for innovation

Clive Longbottom

According to the GSMA, there are nearly 5 billion active individual mobile phone contracts on the planet at the moment.  Sure – many of these will still be for individuals who have more than one device, but it is still felt that by 2020, around 75% of the world’s population will have some form of mobile device.

With global and local handset manufacturers moving from the provision of low-end, voice-only handsets for emerging markets to making cheap smartphones available, this can lead to a whole new approach to how such markets operate at a social and economic level.

As mobile connectivity increases in these countries and the use of 4G and 5G overtakes the old 2G and 3G connections originally put in place for the major conurbations, relatively high-speed, universal wireless connectivity becomes the norm.  The smartphone can become a personal hotspot through which an individual’s other devices can connect to the wider world as needed.  But what sort of things could this bring in?

Firstly, consider health.  Low-cost wearable sensors could be provided to monitor such things as blood pressure, blood sugar levels and so on.  For patients who have been seen by a travelling doctor and have been diagnosed with, say, a fever, cheap, disposable digital thermometers can measure and send back data via the mobile device on a regular basis, so that the doctor can respond on a more ‘as needed’ basis.

The same goes for pregnancy – rather than hoping that nothing untoward will happen between visits when the doctor/midwife just happens to be in the area, wearables can send back data as needed so that the health of the mother can be monitored centrally on a regular basis.  Many issues can then be dealt with directly over the mobile device, via voice or video call; other areas through the sending of links to the phone; others by scheduling a visit from a lower-skilled local healthcare professional.  Only where a real emergency is obvious does the doctor have to go to the patient directly.

Now consider the economic basis.

As these smartphones all have browser capabilities, individuals can now cooperate and trade with each other far more easily.  A farmer in one area of the country can use cloud-based systems to find customers in other areas – or can input details of crop availability to food processing companies that may wish to buy the crops.  Issues such as the occurrence of a pest like locusts, or an impending drought, can be quickly logged so that the problem can be tracked and dealt with far more effectively.  The farmer can also keep a closer eye on what is happening across their farm through the use of internet of things (IoT) devices connected to the mobile device.

Small, local farmers can let villagers know when they will be in the area with specific crops, and what price they would like for them.  They can then take orders and adjust prices as necessary to ensure that the entire crop is sold at a good margin in the minimum number of journeys required.

Farmers can also become cooperatives.  They can come together to provide a more complete offer – one lorry can pick up supplies from multiple farms and deliver packages of, say, maize, milk, meat, vegetables and fruit to markets, or even directly to customers.  Smartphones can provide mapping and geo-analytical systems to ensure that the lorries take the optimum route, minimising the costs of fuel and stress on the vehicle itself.

By coming together as a cooperative, farmers also gain greater collective bargaining power when dealing with downstream food processing and wholesale companies. Offers of crops can be sent to multiple prospective customers at the same time, getting them to compete with each other for the crops.

Individuals can create their own businesses.  Goods that sell well to richer foreigners, such as ethnic art and jewellery, can be advertised directly via the web, using the mobile device as a means of inputting the goods into cloud-based retail systems.  On the sale of an item, the monies paid by the customer can be cleared via, for example, PayPal into an easily accessible account; the individual can arrange for the items to be picked up by a courier or to be sent for first-stage delivery to a more central place via train, boat or plane as required.

For the countries involved, the rise in personal mobile device ownership must be seen as a major chance for individual, local and central innovation.  However, contract prices need to be managed to ensure that the cost equation to the individual is obvious.

Governments may need to provide community systems, where a few mobile devices are made available to a community on the understanding that the devices will be made available to individuals on an as needed basis.  However, this is a minor issue, as the figures show such major growth in device ownership.  Where real help will be required is in creating and providing low-cost access to the cloud-based services involved. It may be that data contracts are subsidised under a country’s health budgets, as the returns can be so major in this area.  Healthcare based cloud services can also be funded the same way – or via foreign aid or non-governmental organisation (NGO) funding projects.  If the device and data contracts are so covered, the individual and their community can then work on building the additional services themselves.

In the early stages, governments may find that offering grants or prizes to individuals and groups that create innovative cloud-based services – ones that help a specific group of people or address a specific general need – will drive innovation in how mobile devices can be used.

A mobile device-first approach to social and economic success will differ from what has already happened in more mature markets.  It is far more of an opportunity, as there is little existing technology that must be accommodated.  Such an environment offers massive opportunities to those involved.


March 1, 2017  10:46 AM

Collaboration – where AV and IT meet?

Rob Bamforth Profile: Rob Bamforth

There were plenty of amazing products launched and on display at ISE2017 in Amsterdam in early February. But in the background buzz there was a common theme of an industry in transition. While many talked about convergence between AV and IT, some fear the risk that it will actually be more of a ‘collision’. This will have a consequential impact on jobs and revenues.


None of this restrained the exuberance of showcasing the best of the audio visual (AV) sector. The event brought in a record number of over seventy-three thousand attendees. In many quarters, there was also a more upbeat assessment of the new opportunities that might be created as the AV and IT sectors move closer together. There was also an acknowledgement that this would require some work.

Now the dust has settled and the exhibition paraphernalia is dismantled for another year, it is possible to take a pragmatic view of where the opportunities may lie.

The AV industry is undoubtedly undergoing change, but the IT sector is by no means static or settled. There has been a significant and ongoing shift towards the utility or ‘as-a-service’ model, which some find unsettling for job security as well as data security. There has also been the liberation of IT into the hands of consumers. Mobile, wearables and the internet of things (IoT) have seen IT shift from the easily managed desktop into a voracious hydra of access options. Great for users and customers, but adding to the already challenging IT operational burden.

Is now a good time then for IT to work more closely with AV?

Historically, the focus of AV could be characterised as the experience within the room and an increasingly spectacular ability to convey information. For many, that meant presentations and over the years, the technology that this encompasses has grown in capability and usability. It has also become more connected.

This is where the overlap with IT, with its focus ‘beyond’ the room and across the network, becomes more apparent.

AV is all about the user experience and supporting media-rich communication. With recent advances in large touch-screen and interactive display systems – mirroring the advances in mobile IT with tablets and smartphones – this user experience has expanded into the important, but often elusive, area of collaboration.

This is high on the agenda for IT. The word ‘collaboration’ has been added onto the end of the term Unified Communications, and peppered liberally across many PowerPoint presentations. Making it a reality that delivers its anticipated value has proved difficult.

Making collaboration a reality

IT is very used to tackling the challenges of integration, security and resilience. It has also been unifying the communications plumbing with the help of major IT vendors. But turning this into seamless simple experiences that people delight in using every day is rarely a core competency. Here is where a closer relationship with AV would be beneficial to both sectors – collaboration rather than collision.

Tools for enhancing communications by unifying or incorporating different strands of media, such as video, are only one area where the AV world is moving away from point products toward solutions, building broader relationships in open ecosystems of partners. The industry is now showcasing integrated systems for specific business problems. This means not just collaboration, but also omni-channel commerce solutions for retail, tools for education and smart buildings, as well as the more obvious sectors focused around entertainment.

This was evident at ISE2017, not only in the way that halls were oriented around these business topics as themes, but also in that the discussions and presentations on stands and in the conference had moved on from form and features to addressing business needs and challenges. With this positive attitude, the AV industry does not need to fear convergence with IT, but can embrace it, as this will be good for both sectors.


February 28, 2017  12:52 PM

Seclore – DRM 2.0 revisited

Bob Tarzey Profile: Bob Tarzey

In October 2016 Quocirca reported on a new breed of digital rights management (DRM) tools which have emerged in recent years. These tools have security built in to their core and are designed to support the growing use of cloud stores and mobile computing (DRM 2.0). The post looked in detail at three vendors: Vera, FinalCode and Fasoo. Some others were mentioned in passing, including Seclore, a California-based vendor with origins in India and some major European customers.

Perhaps the most striking thing about Seclore is its claimed DRM market share for its Rights Management product, which it says is second only to Microsoft’s (the latter embeds DRM in certain of its other offerings). Seclore says its own directly managed base of 470 enterprise customers accounts for 4.5 million end users. Beyond that, via OEM partners it claims another ten thousand customers with 8-9 million users.

In many cases partners use Seclore Rights Management to extend the scope of existing content management or productivity products, ensuring protection continues beyond the scope of the base product. For example, Citrix ShareFile enables security to “follow the file” by integrating Seclore; this was required to extend DRM to cloud and mobile use. Seclore has been integrated with IBM’s FileNet content management for the same reasons.

It is not just content management systems. Data loss prevention (DLP) systems were originally designed to deal with content moving around within an organisation’s network and to police what left it. This has become too limited an approach with the growing use of cloud stores, and Seclore claims both Symantec’s and McAfee’s DLP products are being extended with its technology so that protection persists once content leaves the network.

As well as being designed to address the need for external sharing, Seclore ensures it remains independent of device types and document formats to support as wide a range of use cases as possible.

Another intriguing initiative is that, wherever possible, Seclore aims to inherit rights and policies from the original systems, for example SAP and Microsoft SharePoint, rather than requiring them to be re-written. Policy can still be modified, and Seclore Rights Management also enables policy to change as documents progress through a workflow, for example as financial results move from confidential to public domain. These capabilities are key to making Seclore’s OEM partnerships work.
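The idea of rights that track workflow stage can be sketched in a few lines. This is a hypothetical illustration only – the names below (`WORKFLOW_POLICIES`, `ProtectedDocument`) are not Seclore API constructs, they just show inherited policy plus manual overrides changing as a document advances:

```python
from dataclasses import dataclass, field

# Hypothetical stage-to-policy mapping, e.g. for financial results
WORKFLOW_POLICIES = {
    "draft":        {"view": {"finance"}, "edit": {"finance"}},
    "under_review": {"view": {"finance", "audit"}, "edit": set()},
    "published":    {"view": {"everyone"}, "edit": set()},
}

@dataclass
class ProtectedDocument:
    name: str
    stage: str = "draft"
    overrides: dict = field(default_factory=dict)  # manual policy edits

    def policy(self):
        base = dict(WORKFLOW_POLICIES[self.stage])  # inherited from stage
        base.update(self.overrides)                 # modifications win
        return base

    def advance(self, stage):
        self.stage = stage  # rights follow the workflow stage

doc = ProtectedDocument("Q4-results.xlsx")
doc.advance("published")
print(doc.policy()["view"])  # → {'everyone'}
```

The design point is that the document carries only its stage and any overrides; the effective rights are always derived, so a workflow transition never requires re-writing policy by hand.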

If, like many, your organisation has reached the point where the management of rights needs to be extended to cloud stores and mobile users, then Seclore should be added to the list of products for consideration. Better still, it may be possible simply to upgrade some of your existing technology if an existing Seclore integration allows you to do so.


February 26, 2017  5:22 PM

Car transport? There’s an app for that

Bernt Ostergaard Profile: Bernt Ostergaard

The impact of self-driving technology, whether it be Uber-style driverless ride sharing vehicles, automated long-haul lorry driving or drone transport, will be felt across all transport sectors.

The next 20 years will see the steady uptake of driving automation, which will increase real-time communications in order to minimise travel time and cost. Legislators must grapple with standardisation, liability and security issues, while the industry adds more and more driver-assistance services under the hood without significantly increasing the price point. But what about connectivity requirements, and will driverless cars actually reduce travel time?

Automation in the works

High-end cars today are more than semi-autonomous. Many hundreds of metres of wiring connect sensors to computers that directly interface with the engine and steering systems. The development of car automation technology is a multi-billion-dollar race – a mix of competition and co-operation between the IT and automotive industries.


Major players on the IT side include Google with its Waymo driverless car technology, and Amazon, Microsoft and Apple with their navigation technologies. On the car manufacturing side, GM has acquired Cruise to create a range of driverless cars, and Mercedes is developing its Car-to-X technology, which lets the car exchange information with the surrounding infrastructure, such as traffic lights, and with other connected vehicles. Ford is partnering with Amazon to provide its driverless cars with Alexa, Amazon’s smart voice assistant technology, allowing drivers to communicate with the car systems by voice.

Automated traffic infrastructures

In a fully automated road traffic scenario (something the airlines are pretty close to in the sky), the speed and course of driverless vehicles are optimised by a city-wide computing system. That requires fast and secure active-to-active WAN connectivity between cars and traffic management systems. Automated – and ultimately driverless – cars will need network connection capabilities to handle in-car IoT communication between sensors and computers, as well as external wireless 4G LTE and WiFi connectivity. The cars may also need satellite connectivity in rural environments.

Advanced navigation systems already have network connectivity to check weather and traffic conditions ahead. Intelligent mapping systems like HERE supply information to control self-driving cars equipped with street-scanning sensors that measure traffic and road conditions. This location data can in turn be shared with other map users.

Ultimately, driving will be left entirely to computers – in cars without steering wheels. We will all be passengers or freight. Mobile connectivity must be maintained using dedicated roadside Wi-Fi networks as well as the existing mobile data services. The ability to switch between, select and bond constantly changing wireless base stations will be crucial for success. This is where SD-WAN routers from vendors like Peplink, which can handle multiple connections as a single virtual connection, are needed across a wide range of mobile environments.
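The selection-and-bonding logic can be illustrated with a toy sketch. This is a hypothetical simplification, not how any SD-WAN product is actually configured: rank the available wireless links by signal quality and bond just enough of them to meet the required bandwidth.

```python
def pick_links(links, needed_mbps):
    """Rank available wireless links by signal quality and bond just
    enough of them to meet the required bandwidth. Real SD-WAN products
    use far richer policies (latency, cost, per-application rules)."""
    ranked = sorted(links, key=lambda l: l["quality"], reverse=True)
    chosen, capacity = [], 0.0
    for link in ranked:
        if capacity >= needed_mbps:
            break
        chosen.append(link["name"])
        capacity += link["mbps"]
    return chosen, capacity

available = [
    {"name": "lte-a", "quality": 0.9, "mbps": 20.0},
    {"name": "roadside-wifi", "quality": 0.6, "mbps": 50.0},
    {"name": "sat", "quality": 0.3, "mbps": 5.0},
]
print(pick_links(available, 60.0))  # → (['lte-a', 'roadside-wifi'], 70.0)
```

The hard part in a moving vehicle is that `available` changes every few seconds, so this selection has to re-run continuously while keeping the bonded virtual connection stable.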

With the driver gone, the next to go may be the privately owned car. The Singapore government estimates that replacing today’s 700,000 private vehicles with network-connected, driverless vehicles would reduce the Singapore car pool to 300,000. It would simultaneously reduce transport times and the need for parking spaces, and would generally lower pollution levels and improve road safety.

Reduced travel time?

The Singapore scenario, and similar assessments of driverless traffic, factor in the advantages of much higher traffic density and the reduced need for parking spaces. With central management of in-city transport, users will buy transportation services – not vehicles. What these scenarios do not factor in is the traffic increase if transport becomes as easy as using your smartphone. When every child, disabled, elderly or drunk person can order driverless transport, we risk an explosion in physical traffic volume. Just look at the traffic increases the smartphone caused. So maybe queuing is not going away just because we automate it.

