So the Digital Economy Bill, setting new standards for broadband and mobile provision, data-sharing and more, is law, waved through along with a whole bunch of other stuff the government would rather you forgot about and just in time for the dissolution of Parliament ahead of the General Election.
Among the provisions in the Bill is a longed-for and long-debated broadband universal service obligation (USO) that enables anybody to ask for and receive a broadband connection with a minimum speed of 10Mbps. The government thinks this is a great moment that will guarantee fit-for-purpose broadband connectivity for consumers the length and breadth of the land.
I think that is a steaming load of rubbish.
Earlier in 2017, the House of Lords proposed an amendment that would have more than doubled this USO from a rather lacklustre but serviceable 10Mbps to a fairly nippy 30Mbps.
The Lords said the proposed 10Mbps USO was so slow it would probably have to be reviewed almost immediately, while the economies of scale associated with an enhanced 30Mbps USO meant that the extra public money needed to fund it could easily be accounted for.
But this clause never made it into the final cut of the Bill. Why is that?
Those excuses in full
Speaking in the House of Commons last week, digital and culture secretary Matt Hancock set out his justification for flinging the amendment out.
At first, he said he feared that a 30Mbps USO would be undeliverable – although on what basis remains unclear – and that the risk of a legal challenge to a 30Mbps USO from the industry was too much to bear. That suggests to me that someone who works at a major telecoms operator with a two-letter name might have popped by the office for a quiet word – although, of course, that is spurious tittle-tattle on my part, and you should make up your own mind as to whether or not it happened.
Hancock proceeded to perform an Olympic-standard mental gymnastics routine. Because the USO is being legislated for under the European Union (EU) telecoms framework, which requires a USO to ensure a baseline of services where a “substantial minority has taken up the service but the market has not delivered”, he argued that the fact that very few people were taking a service of over 300Mbps meant that providing a service of just a tenth of that speed – and remember that 30Mbps is certainly good enough for a high-def Netflix binge – was therefore quite out of the question.
In Hancock’s favour, he did say that a future government would review the 10Mbps USO once take-up of superfast broadband hit 75%. However, one might quite reasonably deduce that if Openreach is only going to be held to a 10Mbps USO, which is not superfast and therefore cannot count towards the 75% figure, that point will take a while to reach.
Frankly, this ridiculous kind of chicken and egg politics is holding back the UK’s digital economy. And the thing is, it’s not even a chicken or egg situation: it is quite evident, although apparently not to Mr Hancock, that there cannot be a superfast broadband subscriber without a superfast broadband connection!
However you spin it, and Lord knows we try to be positive, it seems clear to me that under both David Cameron and Theresa May, the current government has been totally shambolic in its commitment to the UK’s broadband infrastructure.
Once again for the slower MPs: you. cannot. build. a. digital. economy. without. good. connectivity.
This is even more important in these Brexit-means-Brexit days, for as Computer Weekly has made clear loudly and on several occasions ever since the EU referendum, we are shortly going to be doing business with the world without the benefits of being part of a massive trading bloc with our closest and most valued neighbours. We’re going to need every advantage we can get!
At the end of the day, the Digital Economy Bill is a bad law and Matt Hancock’s failure to stand up to the commercial interests of Big Telco and commit once and for all to take bold action over broadband provision has done the UK a huge disservice.
I wish I could write that we hope for positive change after the General Election, but I fear that would be in vain.
Did you hear the one about the time Openreach CEO Clive Selley visited the countryside and was shocked, shocked I say, to discover that people in rural areas have difficulty accessing fit-for-purpose broadband?
No, it’s not fake news. This happened.
According to the Shropshire Star, Selley was introduced to a number of rural business owners who are struggling with slow broadband, after being invited to take a look around by local MP Owen Paterson.
Among others, Selley met the owner of a health and safety practice who is struggling to communicate with his clients, and the owners of a holiday home who have to keep apologising to guests who can’t get online.
According to Paterson: “It was interesting to see how shocked the head of Openreach was to hear of so many problems.”
On the assumption that Owen Paterson wasn’t telling porkies to get his name in the newspaper, this strikes me as a troubling trend. Here’s why.
A long-time Openreach insider, Selley, who replaced Joe Garner as CEO in early 2016, has established a reputation as something of a technological whizz, and is particularly hot on emerging delivery technologies, such as G.fast, that are helping Openreach deliver faster services to its internet service provider (ISP) customers.
Nobody disputes the commercial realities faced by Openreach: super- and ultrafast fibre-based connections – by which I mean either fibre-to-the-cabinet (FTTC) or fibre-to-the-premises (FTTP) – mean that the organisation naturally lines up the areas where it can make the most money from its network for an earlier upgrade. That means towns and cities, where the majority of the population now live.
But Openreach is still charged with delivering fit-for-purpose broadband across the whole of the UK, not just those areas where sound commercial sense dictates it will see a better return on investment from ISPs reselling access to its infrastructure.
So the fact that its leader does not appear to have a full grasp of the situation on the ground in some of the UK’s more out-of-the-way spots is deeply worrying.
People who live in the countryside are the ones who have shouted loudest about their often dismal broadband services, and they will shout louder still as they are inevitably once again bypassed by G.fast and FTTP. It’s why the altnets get such strong traction and have so much goodwill in the areas they serve: they understand the concerns of rural folk.
But judging by Clive Selley’s visit to Shropshire, it would appear that Openreach does not.
Thanks to broadband comparison site uSwitch, which first brought the story to our attention.
Almost three out of five CIOs we speak with today tell us that technology providers often seem to be pushing software defined networks (SDN) simply to sell hardware, writes Verizon’s Peter Konings, and they just don’t need more hardware.
Let’s be clear: SDN isn’t a box. It is about enabling better performance and efficiency in the software layer. What all CIOs today really need to understand is how to leverage SDN to improve performance and reinvent their business processes in order to be able to compete more effectively.
While cost reduction is usually the most immediate benefit of any technology adoption, the more persuasive argument for SDN adoption is that the technology can drive enterprise-wide change. Successful proof-of-concepts are frequently moving to production quicker than planned. Greater network agility and reduced cost allow a CIO to package services differently, which can reduce time to market and reduce opportunity costs. This gives the CIO greater business agility, which in turn offers more freedom to innovate, catalyzing an upward innovation spiral.
SDN for optimising cloud and virtualisation
If you look at the network models used across most organizations, they haven’t really evolved much since the 90s. But how much has technology changed? We’ve applied Moore’s Law to networking in moving from 10Mbps to 10Gbps and beyond, but we have only just started seeing changes in network architecture. As our perimeter dissolves, more applications are being hosted in the cloud, and with application hosting environments sitting outside the traditional internal network, a different, more optimal model is required.
Now, imagine an application that can detect demand and move compute instances and network loads to different server farms based on where the user is located. Bear with me here: SDN helps to fulfil this by decoupling the control plane from the underlying hardware. Rather than requiring physical equipment or significant human intervention to provision for expansion or contraction based on usage needs, SDN enables a CIO to scale up and down as needed via software controls. As a result, SDN is an enabling technology that allows an organisation to drive far greater efficiency and agility from its network and virtualisation environments. It also allows for significantly improved management, increased visibility and better automation. No more over-provisioning!
The same application could change network routes based on revenue projections or data sensitivity within the application.
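As a rough sketch of what that software-level control looks like, consider a toy scaling policy of the kind a controller could evaluate continuously. The function name and the utilisation thresholds here are hypothetical, not any vendor’s API – the point is simply that the expand/contract decision lives in software, not in racking new boxes.

```python
# Illustrative sketch only: a toy control-plane policy that scales virtual
# capacity up and down from observed utilisation. Names and thresholds
# are hypothetical, not any real SDN controller's API.

def desired_instances(current: int, utilisation: float,
                      scale_up_at: float = 0.8,
                      scale_down_at: float = 0.3) -> int:
    """Return how many network-function instances the controller should run."""
    if utilisation > scale_up_at:
        return current + 1               # expand in software, no new hardware
    if utilisation < scale_down_at and current > 1:
        return current - 1               # contract: no over-provisioning
    return current

if __name__ == "__main__":
    print(desired_instances(2, 0.9))     # busy: scale up
    print(desired_instances(3, 0.1))     # quiet: scale down
```

A real deployment would of course drive an orchestration API rather than print a number, but the shape of the decision is the same.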
Protecting against attacks with embedded security
Embedded security isn’t a new concept. A few years ago, the Jericho Forum was started with a view to developing a way of stopping network attacks against application infrastructure. The drive for setting up the forum was the rise in cyber-attacks such as phishing, SQL injection and distributed denial of service (DDoS) attacks that give attackers access to internal systems.
One technology to emerge from that thinking is the software defined perimeter (SDP). This technology re-architects the perimeter to provide advanced identity and application-specific access control. It is a far superior security model, and is particularly valuable for companies active in cloud-based environments.
Here’s another benefit: having to manage and secure increasing amounts of data means that full network visibility and transparency are essential. The network automation and orchestration gained via SDN and SDP delivers more data that can itself deliver valuable, timely alerts, enabling IT executives to perform security analytics. When you consider that 25% of all data breaches remain undiscovered by the victim for weeks (or even months), the importance of this becomes obvious.
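The kind of timely alert described above need not be exotic: with SDN orchestration exposing per-host flow telemetry, even a crude comparison against a traffic baseline can surface candidates for investigation. The hosts, byte counts and threshold factor below are all hypothetical, purely for illustration.

```python
# Illustrative sketch only: flag hosts whose observed traffic volume far
# exceeds their historical baseline, the kind of simple analytics that
# network telemetry from SDN/SDP platforms makes possible. All names,
# numbers and the 3x threshold are hypothetical.

def flag_anomalies(baseline_bytes: dict, observed_bytes: dict,
                   factor: float = 3.0) -> list:
    """Return hosts whose observed traffic exceeds `factor` x their baseline."""
    return sorted(
        host for host, seen in observed_bytes.items()
        if seen > factor * baseline_bytes.get(host, 0)   # unknown host: any traffic flags
    )

if __name__ == "__main__":
    baseline = {"10.0.0.5": 1_000_000, "10.0.0.9": 2_000_000}
    observed = {"10.0.0.5": 1_200_000, "10.0.0.9": 9_000_000}
    print(flag_anomalies(baseline, observed))
```

Production systems use far richer statistics, but the principle – more visibility means breaches surface in hours rather than weeks – is the same.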
How do you even start to think about transforming your network?
First and most obvious, you need to clearly define your objectives.
Understand and document what you want to achieve through the implementation of SDN, so that you can measure its success. Remember that while reporting the financial success of any implementation is important, IT teams may lack the skills to effectively describe business benefits. Don’t let the hardware/software vendors lead your discussions, as they may have vested interests. Look at open systems and tools where available and understand how these can be supported and used across the organization.
You also need to consider SDN’s impact on your support structure. Explore how process and workflow can be improved, as this can often lead to a change in the support structure for operational teams. Instead of having compute, network and application teams, it is now quite common for organisations to move to an application-centric support model that includes staff with skills in server and network technologies. Tooling may need aligning to this support structure, and it’s important to identify these systems up front. A good configuration management database (CMDB) can really help you understand your enterprise applications, their uses and value, and the critical components in their delivery.
In conclusion, SDN really is here to stay. CIO evangelists tell us that SDN enables them to design their network to flex on demand to meet the needs of their business, rather than design to peak – with the added layer of security a bonus. Perhaps most compelling is the fact that, with these new technologies, the time of deployment can in some cases be reduced from 500 days to as few as 65. And this is why very early adopters have tended to include companies undergoing mergers and acquisitions, as SDN allows them to onboard acquisitions faster.
Peter Konings is director of Enterprise Networks and Managed Services at Verizon
Connecting to the internet in public spaces is second nature to most of us. Whether it’s in stores, restaurants, shopping centres, or even railway stations, we take to our mobile devices to compare prices, share our latest purchases via social media, find promotions, or just to pass the time.
The challenge that tends to confront us, of course, is which Wi-Fi network we’ll be able to access with greatest ease. Our phones instantly show us the choice of available networks, and research shows that people prioritise speed over safety, with more than 70% ranking connection speed above the security of the connection.
Security experts recently reported that two thirds of people don’t know if the public Wi-Fi they are using is secure or not, despite 80% understanding the dangers of sharing private data over unsecured networks. But in the event that data from a phone is compromised, how many organisations would genuinely accept responsibility for the breach?
The reality is that the public Wi-Fi provider – be it retailer, hotelier, restaurateur, etc. – is the de facto internet security guard and will inevitably assume the role of villain if anything goes wrong while people are using their network to get connected.
Unfortunately, many of these organisations simply do not have sufficient security measures in place, leaving the public exposed to the possibility of device infection, data theft, unauthorised data sharing, or even financial loss.
Wi-Fi education is vital
Furthermore, those that do have adequate security and compliance measures in place don’t always do enough to promote safe public Wi-Fi usage to their customers. Even if a customer unwittingly compromises their own data, the owner of the Wi-Fi network may still be blamed for the incident.
High street brands offering Wi-Fi need to better promote secure usage to their customers. Having the capacity to provide guests with Wi-Fi is important, but without appropriate security measures in place, businesses may instead harm their reputation, brand recognition and, ultimately, revenue.
End users need clear instructions on how to log on to the secured Wi-Fi network, and once online, they need to be notified about how aggregated and personal data will be handled and stored. In the case of personal data, customers should be informed about the type of data collected, how it will be used, with whom it will be shared, and where and for how long this data will be stored.
With regards to device tracking, Wi-Fi providers that have agreed to the Future of Privacy Forum’s Mobile Location Analytics Code of Conduct will honour requests from consumers wanting to opt-out of having their location linked to their mobile device.
Equally, public Wi-Fi providers must ensure they are doing all they can to keep their public Wi-Fi secure from being compromised. Encrypting passwords and verifying forgotten log-ins ensures only legitimate users can access the network.
Public Wi-Fi providers can also include an idle timeout, logging the user out if their device has been left unused for a certain amount of time. And since the majority of cyber-attacks are the result of uneducated end-user choices, companies should take advantage of apps or the login portal to highlight security dos and don’ts to customers.
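The idle-timeout rule just described is straightforward to express as a guest-portal session check. A minimal sketch, assuming a 15-minute timeout (the timeout value and function name here are hypothetical choices, not any product’s defaults):

```python
# Illustrative sketch only: decide whether a guest Wi-Fi session should be
# logged out after a period of inactivity. The 15-minute timeout is an
# assumed value for illustration.

IDLE_TIMEOUT_S = 15 * 60   # assumed: 15 minutes of inactivity

def should_log_out(last_activity_s: float, now_s: float,
                   timeout_s: int = IDLE_TIMEOUT_S) -> bool:
    """True when a guest session has been idle long enough to be ended."""
    return (now_s - last_activity_s) >= timeout_s

if __name__ == "__main__":
    print(should_log_out(0, 10 * 60))   # 10 minutes idle: keep session
    print(should_log_out(0, 20 * 60))   # 20 minutes idle: log out
```

In practice the captive portal would run this check on each request or on a timer, and redirect expired sessions back to the login page.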
Ultimately, businesses need to maintain their credibility by disclosing as much information to their customers as possible. Providing secure guest Wi-Fi is only one aspect of the solution, and brands offering it have a responsibility to help educate their customers on how to use public Wi-Fi sensibly and securely.
Jeff Abramowitz is president of Cloud4Wi and previously founded cloud network management firm PowerCloud.
With BT Wholesale having announced that from 2020 you will no longer be able to purchase integrated services digital network (ISDN) and public switched telephone network (PSTN) circuits as it targets a 2025 switch-off date, questions are naturally being asked. Will BT really flick the switch in 2025? What needs to be in place before that can happen, and what are the options for those currently on ISDN/PSTN circuits? In this guest blog post, Bamboo’s Lorrin White explores some of the next steps for customers.
Last year, BT boldly announced its intention to switch off its PSTN and ISDN networks by 2025. This was a smart move. In a world that is fast embracing IP as the standard protocol for all communications services, it was important for BT to declare its intentions to remove the legacy from its network, while giving customers a whole decade to make the switch (if they haven’t done so already).
What does this announcement really mean?
Let’s start by looking at what PSTN and ISDN really are. PSTN is the same phone line most people have at home, whereby analogue voice data flows over circuit-switched copper phone lines. While it may have evolved over the years, PSTN is a very, very old technology, operating on the same fundamental principles as the very first public phone networks of the late 19th century. It is worth noting that PSTN does not just power voice: asymmetric digital subscriber line (ADSL) and fibre-to-the-cabinet (FTTC) services both operate over it. As yet, BT has not suggested any replacement technology for these, so one can assume that BT’s planned obsolescence of PSTN applies to voice only in this instance.
ISDN, by contrast, is a sprightly young thing from the late 1980s. ISDN allows both voice and data services to be delivered over digital lines simultaneously. When it launched, ISDN was well-suited to businesses, as it could support early video-conferencing systems at the same time as an analogue phone line. For a time, it could also offer the fastest internet access available (128 kbps). Naturally, since ISDN is no longer the place to go for video-conferencing or a fast internet connection, its USP has quickly been eroded.
So, BT is killing old tech. What is the ‘new’ tech that is replacing it?
In a nutshell, BT is moving its entire voice network to voice over IP (VoIP). VoIP is hardly ‘new’. But this is a good thing. VoIP has been a proven platform for voice for some time now. It works. If your business has renewed its telephony sometime in the last few years, you may have been told about it (but don’t be surprised if you haven’t, since IP is a whole new game that has been growing steadily in the background, with more and more businesses realising the benefits demonstrated by the early adopters).
VoIP has many advantages over PSTN and ISDN too: it is much quicker to provision new lines; you can reduce your line rental because you need fewer physical lines; and it is vastly scalable and flexible – for example, you can redirect calls to different parts of the country at the flick of a switch, or have a single phone number follow you around the world irrespective of where you’re working.
Why is BT flicking the switch?
Why do you no longer use your Nokia 3210? Same reason, but on a much bigger scale. Also, maintaining multiple legacy networks is very expensive for BT. By converging all services – voice, data, video, and even broadcasting – to the IP protocol, BT only has to maintain one network, not several.
It is also worth bearing in mind that 2025 might not be Doomsday for ISDN. The date is not set in stone. It is BT’s intention to stop selling PSTN and ISDN by 2020 and shut them down completely by 2025 – but this assumes it has managed to switch all customers over to IP services before then. This means that a viable alternative must be available to everyone well before 2025. For many businesses today, ISDN is still the best they can get. According to Ofcom, there are 33.2 million fixed landlines in the UK (including ISDN), and approximately 7.6 million of these belong to businesses. BT will not turn them off before it has an alternative firmly in place.
What should you do?
Businesses will no longer be able to buy any systems that use PSTN or ISDN by 2020. While 2025 may seem a long way off, 2020 is only just over three years away. If your current traditional telephony contract is up for renewal within the next few years, now is the time to start exploring the benefits of VoIP and SIP technologies.
Assuming you are in an area that can purchase a VoIP system, there are two things you need to consider:
- Is your internet connection good enough to deliver VoIP?
While VoIP does not use very much data compared with other services like video, you must ensure you have enough bandwidth to deliver voice on top of everything else your office does. Some say you need 5Mbps down and 2Mbps up as a bare minimum for a small office, but really the bandwidth you need depends on your individual needs and Quality of Service (QoS) priorities. Bottom line: if you don’t have enough bandwidth or a QoS commitment, you could experience poor audio quality or intermittent service and miss out on the full benefits.
- Does your office phone system support VoIP?
Most new office phone systems already support VoIP, but if yours doesn’t, you can either replace your entire phone system with an IP one (worthwhile if your handsets are looking tired), or just invest in an IP-enabled on-premise PBX (the box that connects your internal phone system to the external phone network). A hosted telephony system is also a good way to make the switch.
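The bandwidth question above lends itself to a quick back-of-envelope calculation. The sketch below uses the widely quoted figures for the common G.711 codec (64kbps of voice payload, 20ms packets, and roughly 58 bytes of RTP/UDP/IP/Ethernet headers per packet, giving about 87kbps per call); the 50% voice share of the uplink is an assumed QoS reservation, not a rule.

```python
# Illustrative sketch only: estimate per-call VoIP bandwidth and how many
# simultaneous calls an uplink can carry. Uses standard G.711 figures;
# the 50% voice reservation is an assumption for illustration.

def voip_call_bps(codec_bps: int = 64_000, ptime_s: float = 0.020,
                  overhead_bytes: int = 58) -> float:
    """Per-call bandwidth in bits per second, headers included."""
    payload_bytes = codec_bps * ptime_s / 8    # 160 bytes of voice per packet
    packets_per_s = 1 / ptime_s                # 50 packets per second
    return (payload_bytes + overhead_bytes) * 8 * packets_per_s

def concurrent_calls(uplink_bps: int, voice_share: float = 0.5) -> int:
    """Calls that fit in the share of uplink reserved for voice."""
    return int(uplink_bps * voice_share // voip_call_bps())

if __name__ == "__main__":
    print(round(voip_call_bps()))        # roughly 87,200 bps per G.711 call
    print(concurrent_calls(2_000_000))   # calls fitting in half of a 2Mbps uplink
```

Lower-bitrate codecs such as G.729 shrink the payload considerably, though header overhead then dominates, which is why per-call figures never fall as far as the codec rate alone suggests.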
Is it ever worth buying ISDN today?
Given all of the advantages of VoIP over ISDN, in most cases we would recommend investigating whether VoIP is right for your business, and if not now, considering how you will make the move in a few years’ time. And there are still some circumstances where ISDN is a good solution, for example as a disaster recovery or failover option.
Whether or not the 2025 date will stick, we’ll have to wait and see. The final date is dependent on how successful UK-wide fibre rollouts are, as without the connectivity to run it over there is no real alternative to ISDN. Connectivity in the UK is getting faster and faster, so, who knows, it may happen even sooner than 2025. But while the date may move by a few years here or there, the one certainty is this: ISDN and PSTN are outdated technologies that are simply not as good as modern VoIP. So don’t stay in the past.
Lorrin White is managing director of Bamboo Technology Group
Last month, European Commission (EC) president Jean-Claude Juncker laid out three key objectives for a new telecoms framework, to be met by 2025.
These are: to give schools, universities, research centres, transport hubs, public services and digital enterprises access to ultrafast broadband capable of delivering speeds of at least 1Gbps; to give every household in the European Union (EU) access to broadband capable of delivering speeds of at least 100Mbps, that can be upgraded to gigabit connectivity later; and to give all urban areas, major roads and railways 5G coverage, with a 5G network to be made available in at least one major city in each EU state by 2020.
This is one of the largest reforms of the European telecoms framework in years. It sets ambitious targets. It is a key signal that the EC sees access to ultrafast, fibre-to-the-premises (FTTP) broadband as a necessity as we move towards the connected future. In a way, the reforms mirror signals sent out by Ofcom in its market review earlier this year.
Speaking at an Adtran event earlier this week, consultant Tony Shortall, an expert on telecoms policy, said that the EC was clearly pushing regulators towards very high capacity broadband networks. At the same time, he said, it was narrowing the range of potential technology solutions. In Shortall’s words, “they don’t say it’s fibre, but they do say it’s not VDSL”.
So what is the process from here on out? The EC’s proposals will go before the European Parliament this month, which will then review and develop a position. The Council of the European Union will also adopt a common position, and negotiations will go on from there. With a favourable wind, the adoption of the proposals into European law could take place in early 2018.
The Brexit problem
But there is a problem: we might finally have a set date for Brexit. Over the weekend prime minister Theresa May laid out plans to trigger Article 50 in March 2017, which means Brexit will become reality in March 2019.
At the same time, May announced key legislation, dubbed the Great Repeal Bill, that will see all EU legislation transposed into UK law. This means that future governments will be able to keep the good laws, and get rid of the bad laws.
Make no mistake, the EC’s telecoms proposals are good laws. They are by no means examples of the sort of Brussels bureaucracy that 52% of us voted to reject. Far from it. They are an excellent example of the sort of proactive legislation that the EC was designed for, and could bring benefits to millions of EU citizens.
Many business leaders expect that the UK economy will take one hell of a beating once Brexit actually takes place. So, as Computer Weekly has argued more than once, it is absolutely vital the government takes action to bolster Britain’s connectivity. We must act now to give our hard-working businesses, the lifeblood of the economy, as competitive an advantage as possible.
Pushing to adopt the EC’s proposals into European law in time for them to be transposed into British law by March 2019 must now be a key objective for this government.
For many consumers, a Wi-Fi router is no more than an ugly but necessary evil to ensure they have untethered access to the internet on all of their devices, says XCellAir co-founder Todd Mersch.
For that reason, the majority – at least 60% according to an IHS Report – are more than happy to let service providers bundle a router with their service. It means one less trip to an overcrowded electronics store to buy an alien-looking, antennae-laden device, boasting baffling features such as “Turbo QAM”.
Have you been to a Currys lately? There are huge displays right as you walk in the door of – drum roll – Wi-Fi routers. These used to be tucked away on some back corner shelf, next to the cables and PC components. So what’s going on?
Consumers are getting smarter. They have realised Wi-Fi is the performance driver for their home network – a network that increasingly must cope with not only multiple devices per resident, but also video streaming and Internet of Things (IoT) applications.
This, combined with a largely unmanaged, inconsistent and therefore frustrating Wi-Fi service from their operator, has them looking elsewhere for a better experience – and potentially switching internet service providers as evidenced in a recent Consumer Reports survey.
In turn, this burgeoning demand means there are growing numbers of new entrants providing direct-to-consumer smart Wi-Fi equipment. Companies like Eero and Luma have been launched with that very aim of fixing Wi-Fi in the home. Most of them focus on covering the whole home (in Eero’s case this comes with a $500 price tag) and, combined with cloud-based management tools, they empower consumers to manage their own networks.
Good for consumers?
Sounds cool – right? As a consumer you look like a tech savvy wireless guru and as a service provider your customers stop calling when their Wi-Fi does not work.
But this is a short-sighted view. For the individual it’s great if you know anything about Wi-Fi, want to own service problems and are happy to shell out a lot of money for the right to do it.
However, the service provider quietly forfeits the most critical component of the customer relationship and guarantees its relegation to bit-pipe status.
On the flip side, operators have an opportunity to not only save this relationship but deepen it by offering a managed, consistent, and high performance home Wi-Fi service.
To understand how an operator can move from zero to hero, you first have to have a picture of what drives the erratic and frustrating performance of today’s status quo. Wi-Fi issues can be put into a few categories.
Wi-Fi interference, congestion and coverage
First off is interference and congestion. This is where too many devices and routers are trying to use the same Wi-Fi spectrum at the same time. That challenge is exacerbated by consumers buying a more powerful router or adding unmanaged extenders.
It becomes a bit like a shouting-above-the-noise scenario: all it encourages is for everyone to raise their voices, and once everyone is shouting, no one is any better off. A whole bunch of unmanaged APs near each other means that channels are used inefficiently. Research we carried out last year found that in an average area, the capacity lost to these inefficiencies was enough to stream another 25 high-definition videos.
Through intelligent and automated use of unlicensed spectrum, the operator can tap into this latent capacity and deliver better, more reliable performance. But this won’t be possible if consumers are using their own routers.
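To make the "intelligent and automated use of unlicensed spectrum" idea concrete, here is a minimal sketch of a greedy channel planner: given which APs can hear each other, it assigns the three non-overlapping 2.4GHz channels (1, 6 and 11) so that neighbours avoid each other where possible. The conflict graph and AP names are hypothetical, and real operator platforms use far more sophisticated radio resource management.

```python
# Illustrative sketch only: greedily assign non-overlapping 2.4GHz channels
# so neighbouring APs avoid interfering. Conflict data is hypothetical.

def assign_channels(conflicts: dict, channels=(1, 6, 11)) -> dict:
    """Map each AP to a channel not used by an already-assigned neighbour."""
    assignment = {}
    for ap in sorted(conflicts):
        used = {assignment[n] for n in conflicts[ap] if n in assignment}
        free = [ch for ch in channels if ch not in used]
        # Pick the first free channel; if none, fall back to the least-used one.
        assignment[ap] = free[0] if free else min(
            channels,
            key=lambda ch: sum(1 for v in assignment.values() if v == ch))
    return assignment

if __name__ == "__main__":
    # Three APs in a row: A hears B, B hears both A and C.
    print(assign_channels({"A": ["B"], "B": ["A", "C"], "C": ["B"]}))
```

The point of the sketch is the contrast with the unmanaged case: left to their own defaults, neighbouring routers frequently pile onto the same channel, which is exactly the shouting-match the text describes.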
The second big problem is coverage. In many larger homes, it is increasingly difficult to deliver whole home coverage with one router. Additionally, the actual placement of the Wi-Fi access point (AP) is often not ideal.
This does not mean every home needs multiple access points. The key is for the service provider to be able to identify what is driving the coverage issue – placement or the size of the area the AP is trying to cover – and proactively solve the problem. This can happen at installation as well as during operation.
Finally, there is the inherent fragility of the Wi-Fi hardware itself. In most cases, these products are mass-produced at low cost and are not designed to take the punishment we dole out. An operator has a unique skill set developed over decades of delivering highly reliable service. By automating basic fault-avoidance techniques – like resetting the router before your service is impacted – it can provide reliability not currently available.
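That "reset before the customer notices" idea can be sketched as a per-router watchdog: the management platform feeds in periodic health-check results and issues a reset once too many fail in a row. The class name and the three-strikes threshold are illustrative assumptions, not any operator's actual policy.

```python
# Illustrative sketch only: a health-check watchdog that triggers a
# proactive router reset after repeated failures. The three-failure
# threshold is an assumed value.

class RouterWatchdog:
    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0

    def record(self, healthy: bool) -> str:
        """Feed in one health-check result; returns the action to take."""
        if healthy:
            self.failures = 0
            return "ok"
        self.failures += 1
        if self.failures >= self.max_failures:
            self.failures = 0            # reset issued, start counting afresh
            return "reset"
        return "degraded"

if __name__ == "__main__":
    w = RouterWatchdog()
    for result in (True, False, False, False):
        print(w.record(result))
```

Run against a fleet, even logic this simple lets an operator act on degrading routers before the support call ever happens.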
By delivering a reliable, high-performance, whole-home and, importantly, managed service, service providers can actively monetise Wi-Fi with services such as Wi-Fi calling, wireless video distribution and in-home IoT.
But if they do not deliver, this new breed of provider will take over the customer relationship, and rob the service provider of this opportunity.
Todd Mersch is co-founder and EVP of sales and marketing at California-based XCellAir
G-PON as we know it today is rapidly approaching the end of the growth phase in its technology lifecycle, writes Adtran’s Ronan Kelly. There is a surge in gigabit broadband service offerings that began in the US and is becoming increasingly prevalent across Europe. From the consumer perspective, there is a wave of new technologies such as 4K and virtual reality, which while still in their early stages, are expected to gain significant traction in the coming years.
Whilst today’s gigabit-enabled consumer is not yet utilising the service to its full capacity, this will change in the next three to five years as applications begin to emerge that take advantage of the capacity on offer. When that happens, we will reach the tipping point where G-PON deployments are no longer prudent, and next generation PON technologies such as NG-PON2 and XGS-PON become the de facto standard.
If the average consumer’s available broadband bandwidth were quadrupled today, their usage patterns would not change for three to four months. This is because their experience of using online services is largely shaped by the service capabilities they are used to, and once a better service is available it takes time to find out what they can do with it.
As upgrades are rolled out, consumers tend to continue using their bandwidth as before, albeit with a better experience, whilst slowly exploring services that were not previously usable. With that in mind, everything we are using today is largely created for the bandwidth that has been available for the last four or five years.
What’s behind the change?
Historically, bandwidth always comes before an application; nobody develops an app that needs more bandwidth than is available to the mass market, otherwise it would be useless. This is no great secret.
Look at the technologies we take for granted today, such as FaceTime, video streaming and massive attachments like photographs. We would have a very different experience trying to use them on the 1Mbps connections we had in 2011. But with exponentially faster bandwidth on the horizon, we are only just beginning to see what developers can create. With the emergence of cloud-first services, entertainment such as virtual reality, and high-quality video services like Netflix and Amazon, 4K and higher resolutions will become the norm on all screens.
If you walked into Currys two years ago you would have struggled to find more than a handful of 4K televisions available in a small premium suite. Today, however, you’re spoilt for choice on what 4K screens you can purchase, and HD TVs are tucked away in a corner. With PC displays now matching 4K resolutions, and tablets set to follow, it’s obvious that the consumer electronics manufacturers have historically driven uptake in the consumer space.
It’s true to say that it’s always a race in the consumer electronics space. The moment one manufacturer releases a tablet with 4K capabilities, the others quickly follow suit. Before you know it, the market has shifted, leaving the vast majority of consumers with 4K-ready technology. Similarly, when a consumer upgrades to the iPhone 6S and starts recording video, that video is now recorded in 4K. Other consumers then move to the next phone, start recording and, without any conscious thought, drive more change.
With screen resolution constantly increasing, and 4K now the benchmark for those screens, consumers who stream content via services like Netflix or Amazon already have a 4K offering available to them. They start using these services on these devices and, almost by default, begin to change and influence their broadband usage behaviour.
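The effect of this shift on a household connection is easy to quantify. Using commonly cited streaming bitrates of roughly 5Mbps for HD and 25Mbps for 4K – indicative assumptions, not figures from this article – a quick sketch shows how much headroom a given line really has:

```python
# Rough streaming headroom on a given connection, using commonly
# cited bitrates (~5 Mbps HD, ~25 Mbps 4K). These figures are
# indicative assumptions, not taken from the article.

STREAM_MBPS = {"HD": 5, "4K": 25}

def max_streams(connection_mbps, quality):
    """How many simultaneous streams of a given quality fit
    within the connection's downstream capacity."""
    return connection_mbps // STREAM_MBPS[quality]

# A 76 Mbps fibre-to-the-cabinet line: ample HD, but only three
# concurrent 4K streams before the link saturates.
print(max_streams(76, "HD"))  # 15
print(max_streams(76, "4K"))  # 3
```

A household that could comfortably run a dozen HD streams hits the ceiling of the same connection with just a few 4K ones, which is precisely why 4K adoption drags bandwidth demand up with it.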
It took HD 10 years from when it first appeared to get to the point when there was a decent amount of content available for consumers, because satellite and cable TV companies dictated its proliferation. Until they upgraded their background infrastructure, users couldn’t watch HD content on anything but a Blu-ray player. 4K is at this early stage; there is next to no live broadcast 4K content available and all 4K offerings come via streaming.
Back when HD launched, streaming services didn’t exist. Coupled with the flat screen TV revolution, the HD capability became a common feature in lounges across the country faster than HD broadcast content was made available. Due to the absence of competing streaming services, there was no major pressure for content providers to start broadcasting in HD. It’s different now – major pressure is coming from the over-the-top (OTT) providers, which is why we’re starting to see a more rapid push from some of the dominant players, who are gearing up to have 4K content ready even though the consumers are not adopting as quickly as they did when flatscreens first became available.
The agent of change is now different. Getting rid of the huge TV taking up space in my living room was a compelling argument to move to a flatscreen, and having HD as part of that transaction was a nice bonus, but it wasn’t the primary driver for change. Now consumers already have the flatscreen, and a 4K flatscreen coming along isn’t quite as compelling, so the rate of change, unless pushed by the electronics manufacturers, will be slower. Typically, TVs have a 10-15 year lifespan before they’re replaced. If we reflect on the timeline of the flatscreen revolution, the early adopters are now well within that window. Will this serve as the catalyst that accelerates 4K adoption?
While some of these factors have long been the cause of change in bandwidth speed, the surge in adoption of OTT services is driving demand for even higher speeds. As consumers move to 4K, they will be met with a very different experience; every TV is now smart, so more streaming content will be accessible than ever. Similarly, the gaming industry now pushes consumers to download or stream games, so its distribution model has shifted entirely to a cloud-based one, much like the music industry’s.
Looking ahead there are technologies coming that will put a huge strain on the network, the connected car being at the forefront of that charge. Autonomous vehicles offer countless benefits, the most obvious being the socioeconomic impacts: with cars essentially driving themselves, the ‘driver’ takes back the productive or leisure time which had previously been lost to travel.
However, we still have a long way to go to achieve this. The capacity requirement to implement this is typically up to 27Mbps per car. Take any busy 100-metre stretch of London road on any given day and imagine the number of cars using broadband, and you’ll quickly get a sense of the capacity that we’ll need.
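That capacity figure scales up alarmingly fast, as a back-of-the-envelope calculation shows. The 27Mbps per car comes from the text above; the car counts are illustrative assumptions:

```python
# Back-of-the-envelope aggregate capacity for connected cars.
# 27 Mbps per car is the figure quoted above; the numbers of
# concurrently connected cars are illustrative assumptions.

MBPS_PER_CAR = 27

def aggregate_gbps(num_cars, mbps_per_car=MBPS_PER_CAR):
    """Total capacity in Gbps needed for num_cars concurrent cars."""
    return num_cars * mbps_per_car / 1000

# A jammed arterial road with 100 cars already needs ~2.7 Gbps;
# 1,000 cars across a busy district pushes that to 27 Gbps.
print(aggregate_gbps(100))   # 2.7
print(aggregate_gbps(1000))  # 27.0
```

Multiply that across every road in a city at rush hour and the strain on access networks becomes obvious.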
With these factors in mind, it’s vital to continue to develop next-generation broadband technologies, such as NG-PON2, XGS-PON and G.fast, that use existing infrastructure and enable gigabit speeds in a timeframe that matches demand. As technology advances, so too must our available broadband offering. If the two cannot progress at similar speeds, or if broadband cannot keep up with the digital consumer, then society will suffer.
In the wake of last week’s vote to leave the European Union (EU), if we are indeed going to invoke Article 50 of the Lisbon Treaty later in the year – and despite the lingering hope that there might be some sort of loophole to wiggle through, Brexit now seems very likely – there must be a concerted effort to make the best of it.
As our economy crashes and burns, our banks look to relocate and the last of our manufacturing base flees the country, we are going to need to think strategically in order to remain competitive against a 27 member trading bloc that our country has gravely offended, and that will want to see us fail.
And make no mistake, the EU will take us to the cleaners in the negotiations to come… who can really blame them?
One way we can compete on a level footing is to ensure we are light years – and light speeds – ahead of them when it comes to connectivity. We are going to have to have, hands down, the very best connectivity possible. South Korean-style, if possible.
Yes, ultimately this means universal fibre-to-the-premises (FTTP) to every home, office and factory in the country, regardless of location. But since we now have even less money than we thought we did, this possibility is now even more remote than it was before the referendum. For now, it’s not realistic to push for it.
Yet the technology already exists to go ultrafast at low cost. G.fast technology is not true fibre, but it can deliver speeds of over 100Mbps, in the field, right now, with just a little upgrade in the cabinet. We don’t need to dig the roads up, we don’t need to wait for wayleaves. As the post-Brexit squeeze tightens its grip, G.fast suddenly begins to look much more compelling. I believe it is time for us to get behind it.
We face obstacles, for sure. Just a week before the decisive vote, Computer Weekly met with Openreach’s CEO Clive Selley, who said Brexit would materially damage his ability to invest in new engineers and commercial roll-out of broadband. This is a significant concern, and something that must not be allowed to happen. The onus is on Selley to stick up for Openreach and protect it at all costs.
Equally, whoever inherits the government in the next few months will have to take charge of broadband policy. Ed Vaizey, the current minister in charge, is a true supporter of the digital economy and connectivity, but we have to acknowledge that he may be replaced under new leadership. If a new minister is put in charge at DCMS, they must be a powerful and credible voice for our sector.
Britain faces an uphill battle to stay competitive outside the EU bloc. Ultrafast broadband networks will help to keep the British economy competitive in the post Brexit world. We cannot lose our focus now.
The political momentum behind the connected future enabled by the internet of things (IoT) is clear, writes Jonathan Hewett of Octo Telematics. Increasingly we see insurance positioned at the frontline of this technology shift as players across the auto industry recognise the key role it will play as we transition to the world of the driverless car.
In the Queen’s Speech, the UK became the first country in the world to announce its intention to legislate on insurance requirements for driverless cars. The Modern Transport Bill will “ensure the UK is at the forefront of technology for new forms of transport, including autonomous and electric vehicles”.
Some commentators contend that driverless cars will make motor insurance unnecessary but, as roads minister Andrew Jones put it, that suggestion “is a lot of pie in the sky”. Insurance will remain fundamental, but policies of the future will look very different to those we buy today.
Currently, insurers have to price their policies based on proxies and assumptions. The risk premium for car insurance is determined by factors including the driver’s age, postcode or the model of car they own, regardless of their actual driving behaviour.
However, by harnessing driver data and applying sophisticated data analytics, insurers can benefit from the crucial insight they need to forecast individual risk. This is an increasingly essential capability in the shift towards connected and driverless cars.
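The contrast between proxy-based and behaviour-based pricing can be made concrete with a toy model. Every rate and weight below is invented purely for illustration – real actuarial models are vastly richer than this sketch:

```python
# Toy contrast between proxy-based and telematics-based pricing.
# All base rates and weightings are invented for illustration only.

def proxy_premium(age, base=500.0):
    """Price on a crude proxy: younger drivers pay a flat surcharge,
    regardless of how they actually drive."""
    return base + (200.0 if age < 25 else 0.0)

def telematics_premium(harsh_brakes_per_100mi, night_fraction, base=500.0):
    """Price on observed behaviour: harsh-braking frequency and the
    share of driving done at night each add a risk loading."""
    return base + 30.0 * harsh_brakes_per_100mi + 100.0 * night_fraction

# A careful 22-year-old: penalised by the proxy model, rewarded
# by the data-driven one.
print(proxy_premium(22))             # 700.0
print(telematics_premium(0.5, 0.1))  # 525.0
```

The point of the sketch is the gap between the two numbers: under proxy pricing a careful young driver subsidises reckless peers, while behaviour-based pricing lets the data speak for itself.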
Insight into government thinking around driverless car insurance suggests that future policies would go even further than this, essentially holding vehicles, and by extension their manufacturers, liable for accidents. This is a very different proposition for insurers to consider. The focus is taken away from the driver and their involvement. Instead it falls on the robustness of the algorithms and technology provided by manufacturers that enable vehicles to make decisions and drive.
Accordingly, as autonomous vehicles become more widely adopted, the driverless car industry will undoubtedly require third parties to verify potential insurance claims and determine liability. In these situations, on-board telematics technology will be crucial in enabling an accurate, data-informed reconstruction of any incident. For instance, should a Google car and a Tesla crash into one another as they simultaneously avoid a pedestrian, the data gathered will be essential to determine which car is at fault and which manufacturer is liable. However, for reasons of impartiality, a third party will be necessary to hold that data and provide the analytics.
The momentum behind autonomous vehicles has the power to revolutionise the insurance industry, but it could be decades before regulators allow vehicles to be built without manual controls and the fact remains that we are still far from a driverless reality. Nonetheless, some of the technologies integral to the autonomous future are already enabling more equitable insurance and helping to shape better, more self-aware drivers.
Much of the technology that will take the driverless car from concept to reality already exists and is being utilised to assist insurers in processing claims faster and more accurately. And the market is growing fast: consultancy Ptolemus estimates that nearly 100 million vehicles will be insured with telematics policies by 2020, growing to nearly 50% of the world’s vehicles by 2030.
Telematics is providing both manufacturers and insurers with the opportunity to innovate and to develop new products, partnerships and approaches that are grounded in actual data. Not only will data-driven insurance be a necessary norm for determining crash liability in the driverless future, but it will also be an essential step in making the transition to that future a safe reality.
Jonathan Hewett is global chief marketing officer at Octo Telematics