Cliff Saran’s Enterprise blog


May 20, 2019  2:08 PM

How racist bias is embedded in software systems

Cliff Saran Profile: Cliff Saran
Uncategorized

The issue of racist bias encoded in software made mainstream news last week, with a report on Channel Four News highlighting how software used to profile criminal suspects tends to have racial biases.

Such software relies on datasets that are weighted against non-white individuals. Arrest data collected by US law enforcement tends to show that, statistically, there is a strong correlation between skin colour and criminal activity. Law enforcement agencies use this data to stop, search and arrest members of the public, which leads to more arrests of non-white suspects; the dataset of non-white arrests grows and so the data bias becomes self-fulfilling.
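
To illustrate how such a feedback loop compounds over time, here is a minimal toy simulation. It is not drawn from the Channel Four report, and every group, rate and number in it is an invented assumption: both groups offend at exactly the same rate, yet the group that starts off being stopped slightly more often ends up dominating the arrest dataset.

# Toy illustration of a self-fulfilling data bias loop; all figures are made up
import random

random.seed(42)

TRUE_OFFENCE_RATE = 0.05                     # identical for both groups
stops = {"group_a": 120, "group_b": 80}      # initial policing attention differs slightly
arrests = {"group_a": 0, "group_b": 0}

for year in range(10):
    for group, n_stops in stops.items():
        # Arrests depend on how often a group is stopped, not on a higher offence rate
        arrests[group] += sum(random.random() < TRUE_OFFENCE_RATE for _ in range(n_stops))

    # Next year's stops are allocated in proportion to the arrest record so far,
    # which is how historical data feeds back into future policing decisions
    total_arrests = sum(arrests.values()) or 1
    for group in stops:
        stops[group] = 50 + int(150 * arrests[group] / total_arrests)

print(arrests)  # a small early imbalance snowballs into a skewed arrest dataset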

During a broadcast on Thursday 16 May, Channel Four covered the issue of racist software on its flagship news programme. Peter Eckersley, director of research at the Partnership on AI, was interviewed about the challenge of biased data. He said: “Many countries and states in the US have started using machine learning systems to make predictions that determine if people will reoffend, largely based on arrest data.” Eckersley pointed out that arrest and conviction data is racially biased: the chance of being stopped, charged or convicted varies depending on your race and where you are located.

The datasets used are discriminatory. Joy Buolamwini, a computer scientist at the MIT Media Lab, who is exhibiting at the What Makes Us Human exhibition at London’s Barbican Centre, told Channel Four News presenter Jon Snow that some of the larger datasets are mainly based on samples of white men. “They are failing the rest of the world – the undersampled majority are not included.”

Discriminatory design

Computer Weekly recently spoke to Ruha Benjamin, an associate professor of African American studies at Princeton University, about discrimination in algorithms. Her new book, Race After Technology, which is out in June, explores the biases in algorithms. Algorithms with discriminatory design are being used to automate decisions in the IT systems used in healthcare, law enforcement, financial services and education. Officials use them to make decisions that affect people’s lives, health and freedom; their ability to get a loan, insurance or even a job. Such algorithmic bias can therefore have a detrimental effect on racial minorities. She said: “I want people to think about how automation allows the propagation of traditional biases – even if the machine seems neutral.”

Diversity in the workforce

The answer is not simply to hire more people from diverse racial backgrounds. Benjamin’s research found that people’s backgrounds tend to take a back seat in the race for tech innovation. The values of the tech sector appear incompatible with diversity: software tends to be released as fast as possible, with little thought given to its broader social impact.

While people generally recognise their own human bias, for Benjamin, outsourcing decisions to supposedly objective systems built on biased algorithms simply shifts the bias to the machine.

Roger Taylor, from the Centre for Data Ethics and Innovation told Channel Four News: “The problem is that AI is like holding a mirror up to the biases in human beings. It is hard to teach [AI algorithms] that the flaws they see are not the future we want to create.”

May 9, 2019  5:32 PM

VDI reimagined

Cliff Saran Profile: Cliff Saran
Uncategorized

A demo at this week’s .Next conference in Anaheim gave a snapshot of how far modern VDI has come. Virtual desktops should no longer be considered just an option for users with modest computing needs. During the demo, Nikola Bozinovic, vice-president and general manager at Nutanix, showed how it was possible to mask out the infamous Starbucks cup scene in Game of Thrones, editing a 4K, 60 frames per second video clip in Adobe Premiere Pro streamed to his Chrome browser via Nutanix’s new Frame product. Frame enables the user interface of a Windows desktop to appear in an HTML5-capable browser.

The year of VDI

Bozinovic believes 2019 is the year VDI finally breaks through into the mainstream. Since Citrix pioneered the virtual desktop, it has been possible to stream the Windows user interface to thin client devices. Enterprise IT could create a gold image of the user’s desktop environment and provide secure access to it. But while VDI was certainly secure, and ensured a stable and robust desktop image could be streamed to users, it tended to be deployed to provide access to relatively simple applications, such as Excel spreadsheets. Because VDI tended to be used for lightweight access to PC resources, IT generally did not consider virtual desktops viable for mainstream desktop computing.

What has now changed? For Bozinovic, the light bulb moment happened on March 19, when Google unveiled Stadia, a 4K game-streaming platform. Bozinovic says Stadia shows how VDI can now make use of graphics processing unit (GPU) acceleration to stream computationally and graphically intensive games to players direct in their web browser or via YouTube. Stadia is set to become the platform for next-generation VDI, using its open source Vulkan GPU drivers to stream a new generation of graphics-intensive games running on AMD graphics chips. Dov Zimring, Google Stadia developer platform product lead, said: “Google and AMD share a commitment to open-source with expertise in Vulkan, open-source Vulkan GPU drivers, and open-source graphics optimisation tools. We’re humbled by the spirit of innovation and collaboration that exists throughout the gaming industry and look forward to pioneering the future of graphics technology with game developers, in open-source.”

What Stadia means for enterprise IT

For enterprise IT, this announcement represents the consumerisation of virtual desktops. No longer will VDI be regarded as suitable only for low-graphics applications: people will be streaming high-performance games at home and wondering why high-performance graphics applications can’t be streamed at work.

In a work environment, end users won’t require ultra-high-end engineering workstations to run computer-aided design, video editing or other GPU-intensive applications. They will, however, still require ultra-high-definition, colour-accurate displays with fast, flicker-free refresh rates to use graphics applications comfortably. But the raw processing power is available on the back end, and the user interface can be streamed.

If the gaming industry gets behind the idea of streaming GPU-intensive PC games, consumers will no longer need high-end gaming machines. Anyone with a fast connection will be able to access gaming streams. For gamers, the experience will be determined by the quality of their display, network latency, audio and input devices.

In the enterprise, the IT industry looks as if it is warming to the idea of using a PC as a thin client for desktop as a service (DaaS). The basic hardware just needs to run a browser like Chrome really well. Any resources required to run resource-heavy PC applications, whether CPU, GPU, memory, storage or network bandwidth, can simply be dialled up and down as needed in the cloud, or via the admin console on the back-end VDI. Applications are no longer limited by the restrictions imposed by the physical PC hardware.

None of this is new. Thin clients have existed almost since the dawn of computing. One could argue that a VT100 terminal provided a means of streaming the text-based user interface of mainframe applications. Today, the reality is that Windows desktop applications are not going away. If Windows application developers start reworking their software to optimise it for VDI, desktop IT could be reimagined as a streamed service.


April 18, 2019  10:07 AM

Banking and finance bigwigs yank climate change chord

Cliff Saran Profile: Cliff Saran

It has been a week of activism and calls for action, with Extinction Rebellion causing major disruption in London.

But alongside grassroots demonstrations, there appears to be a greater awareness in the corporate world that every business has a role to play in tackling the catastrophic environmental risk the world now faces.

On Monday, Legal and General Investment Management (LGIM), the UK’s largest institutional investment firm, announced that, as part of its Climate Impact Pledge, it would not hold eight large global companies in its Future World funds. LGIM stated: “Where such companies are seen to take insufficient actions on climate risks, LGIM will also vote against the chairs of their boards, across the entire equity holdings.”

Data sharing

On Thursday, Bank of England governor Mark Carney published an open letter describing the findings of a new report in which global banks, central banks, supervisors and the wider financial community have signed up to a set of deliverable goals to help ensure a smooth transition to a low-carbon economy.

From a banking perspective, the focus is on building knowledge and sharing data so that the monitoring of climate-related financial risks is integrated into day-to-day supervisory work.

The promise of a “tech solution”

As Computer Weekly recently reported, PwC and Microsoft believe technology has an important role to play in helping businesses and governments tackle climate change and environmental issues. PwC’s new How AI can enable a Sustainable Future report looks at the use of AI-enabled smart technology across agriculture, water, energy and transport.

Examples include AI-infused clean distributed energy grids, precision agriculture, sustainable supply chains, environmental monitoring and enforcement, and enhanced weather and disaster prediction and response.

In the sectors it focused on, PwC estimated that AI applications could reduce global greenhouse gases by 4% – more than the current annual emissions of Australia, Canada and Japan combined.

The experts agree that climate change will not only ruin the planet, kill off polar bears along with millions of other species and result in unimaginable human suffering across the globe; it will also have a major impact on global business. It may not have the immediate impact of Extinction Rebellion, but the words of Mark Carney, the LGIM statement and PwC’s findings may actually resonate more with business leaders. And if PwC’s forecasts are accurate, AI has the potential to enable every country to meet the target of becoming carbon neutral by 2050, as set out in the Paris Agreement.


April 15, 2019  11:28 AM

Single source ERP fails to match business needs

Cliff Saran Profile: Cliff Saran
Uncategorized

There was a time when the major ERP providers were considered allies of the CIO. They were trusted advisors. From an IT decision-maker perspective, ERP software aimed to encapsulate best-in-class business processes in a single package.

In theory, different enterprise packages from the same ERP company would be able to share data; the packages would be pre-integrated and, as far as master data management is concerned, the customer would have a single version of the truth.

The technique of bundling “free” or discounted enterprise software as part of a sales pitch meant that organisations were often enticed to buy more products from the same ERP company. On paper, at least, it made sense to invest in a single supplier for core enterprise systems.

Drawbacks of having a single source for ERP

But there are many drawbacks. Organisations standardising on a single ERP provider’s software stack are often relentlessly pursued by account teams at these companies to buy more and more stuff.

These days, that stuff tends to be cloud offerings.

As Computer Weekly has previously reported, Oracle executives referred to “cloud services” six times in the transcript of the 45-minute, third-quarter 2019 earnings call in March, posted on the Seeking Alpha financial blogging site. Similarly, SAP executives made three long statements regarding “as-a-service” in their fourth-quarter 2018 earnings call in January 2019, according to the transcript on Seeking Alpha.

SaaS increases ERP choice

KBV Research’s Global Software as a Service (SaaS) Market report forecasts that the SaaS market will reach $185.8bn by 2024, growing at over 21% a year. The ERP providers have spent the last few years fleshing out their SaaS strategies through strategic acquisitions and building out cloud-based ERP software. However, they are no longer the only options available to the CIO. While ERP tends to be able to run a large chunk of a company’s business processes, there may be gaps and missing functionality. In the past, so-called “point solutions” filled these gaps. In the era of SaaS, the cloud has enabled companies like Salesforce and Workday to establish themselves as dominant players. Often, these SaaS products are best in class.

Multi-sourcing is the future of ERP

It used to be the case that the bulk of enterprise software spending went to a single ERP provider. Today, savvy IT decision-makers are building out enterprise SaaS portfolios, with products and services from multiple SaaS providers.

Such a strategy breaks the grip the traditional ERP providers have had on their customers. The traditional ERP companies are set up to offer the CIO a one-stop shop for enterprise software. But businesses no longer standardise solely on an SAP or Oracle suite of products to meet their enterprise software requirements. As a consequence, the traditional ERP providers have gone shopping for smaller SaaS firms, in an attempt to sell what they would consider “a complete solution” that they can then claim meets everything a customer needs from enterprise software.

Such is the nature of innovation in the software industry that someone is bound to invent something new and original, which subsequently gains traction. Clearly, it is not going to be particularly realistic for the major ERP providers to buy every SaaS business that has an offering which fills a gap in their product portfolios.

Make integration a key requirement

When shopping for enterprise software, Forrester principal analyst Duncan Jones believes IT buyers need to put integration via open APIs high on their list of priorities. Businesses are told to reduce their reliance on custom code in ERP implementations. This should also apply to the customisations required to integrate the ERP with a third-party product.

If open APIs are made available, third-party SaaS companies can create pre-integrated products that fill the gaps in functionality that exist in the product portfolios of the traditional ERP providers. Assuming the enterprise SaaS landscape becomes ever more fragmented, IT buyers should expect enterprise software companies to provide ever greater support for integration with third-party SaaS products. For the traditional ERP providers, this is likely to be both more cost-effective and more sustainable strategically in the long term than attempting to acquire every SaaS startup that has an interesting product. A simple sketch of what such open-API integration might look like follows below.
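
The sketch reads approved purchase orders from an ERP system’s REST API and pushes them into a third-party SaaS procurement tool, with no custom code inside either product. Every URL, path, field name and token below is hypothetical, invented purely for illustration; real products each have their own documented APIs.

# Hypothetical open-API integration between an ERP system and a SaaS tool
import requests

ERP_API = "https://erp.example.com/api/v1"           # hypothetical ERP endpoint
SAAS_API = "https://procurement.example.com/api/v2"  # hypothetical SaaS endpoint
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

def sync_purchase_orders():
    # 1. Read approved purchase orders from the ERP via its open API
    resp = requests.get(f"{ERP_API}/purchase-orders", params={"status": "approved"}, headers=HEADERS)
    resp.raise_for_status()

    for po in resp.json()["items"]:
        # 2. Map the ERP's fields onto the SaaS product's schema
        payload = {
            "external_id": po["id"],
            "supplier": po["vendor_name"],
            "total": po["net_amount"],
            "currency": po["currency"],
        }
        # 3. Create the matching record in the SaaS tool through its open API
        requests.post(f"{SAAS_API}/orders", json=payload, headers=HEADERS).raise_for_status()

if __name__ == "__main__":
    sync_purchase_orders()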


April 5, 2019  1:12 PM

Why are we serving up chips?

Cliff Saran Profile: Cliff Saran
Uncategorized

Custom hardware is usually the only option available to organisations that need to achieve the ultimate level of performance for AI applications. But Nvidia has taken massive strides in flipping the unique selling point of its graphics processing units from the ultimate 2D and 3D rendering demanded by hardcore gamers to the world of accelerated machine learning.

While it has been late to the game, Intel has quickly built out a set of technologies, from field programmable gate arrays (FPGAs) to processor cores optimised for machine learning.

For the ultimate level of performance, creating a custom application-specific integrated circuit (Asic) means the microelectronics can be engineered to perform a given task with the least amount of latency.

Custom approach: Tensor processing unit

Google has been pioneering this approach for a number of years, using a custom chip called a tensor processing unit (TPU) as the basis for accelerating its TensorFlow open source machine learning platform.

Its TPU hardware tops the MLPerf v0.5 machine learning benchmarks of December 2018.
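
The kind of workload these accelerators target can be sketched in a few lines of TensorFlow. The model and data below are toy placeholders invented for illustration; on a TPU, the same model would typically be built inside a tf.distribute strategy scope, but the training code itself is otherwise unchanged.

# Toy TensorFlow/Keras training job of the sort TPUs and GPUs accelerate
import numpy as np
import tensorflow as tf

# Invented training data: 1,000 samples with 32 features and binary labels
x = np.random.rand(1000, 32).astype("float32")
y = np.random.randint(0, 2, size=(1000,))

# A small feed-forward network; real accelerated workloads are far larger
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training time is dominated by matrix maths, which is what the hardware speeds up
model.fit(x, y, epochs=3, batch_size=128)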

Beyond Asics, IBM is now investigating how, in certain very specific application areas, quantum computing could be applied to accelerate supervised machine learning. It is actively looking to crowdsource research that can identify which datasets are well suited to quantum-accelerated machine learning.

Another option is the FPGA. Since it can be reprogrammed, an FPGA offers a cheaper alternative to an Asic. This is why Microsoft is looking at using FPGAs in its Brainwave initiative for accelerating machine learning in the cloud.

GPUs rule mainstream ML

Nvidia has carved a niche for more mainstream AI acceleration using its GPU chips. According to a transcript of its Q4 2019 earnings call posted on the Seeking Alpha financial blogging site, the company believes deep learning offers a massive growth opportunity.

Nvidia CFO Colette Kress said that while deep learning and inference currently drive less than 10% of the company’s datacentre business, they represent a significant expansion of its addressable market opportunity going forward.

In a recent whitepaper describing the benefits of GPUs, Nvidia stated that neural networks rely heavily on matrix math operations, and that complex multi-layered networks require tremendous amounts of floating-point performance and bandwidth for both efficiency and speed. “GPUs have thousands of processing cores optimized for matrix math operations, providing tens to hundreds of TFLOPS of performance. GPUs are the obvious computing platform for deep neural network-based artificial intelligence and machine learning applications,” it claimed.
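
As a rough illustration of that claim (this sketch is not from the Nvidia whitepaper), the forward pass of a single dense neural-network layer boils down to one large matrix multiplication, which is exactly the operation GPU cores are optimised for. The sizes below are arbitrary, and the code falls back to the CPU if no GPU is present.

# One neural-network layer is essentially a matrix multiply plus a bias and activation
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"  # use a GPU when available

batch = torch.randn(256, 1024, device=device)     # 256 inputs with 1,024 features each
weights = torch.randn(1024, 4096, device=device)  # weights of one dense layer
bias = torch.randn(4096, device=device)

# Forward pass: a (256 x 1024) by (1024 x 4096) matrix multiplication plus bias
activations = torch.relu(batch @ weights + bias)
print(activations.shape, activations.device)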

Optimising x86 CPUs

Intel’s chips are CPUs, optimised for general-purpose computing. However, the company has begun to expand its Xeon processors with DL Boost (deep learning) capabilities. Intel claims these have been designed to optimise frameworks like TensorFlow, PyTorch, Caffe, MXNet and PaddlePaddle.

It hopes organisations will choose its CPUs over GPUs because they generally fit in with what businesses already have. For instance, Siemens Healthineers, a pioneer in the use of AI for medical applications, decided to build its AI system around Intel technology rather than GPUs. The healthcare technology provider stated: “Accelerators such as GPUs are often considered for AI workloads, but may add system and operational costs and complexity and prevent backward compatibility. Most systems deployed by Siemens Healthineers are already powered by Intel CPUs.” The company aims to use its existing Intel CPU-based infrastructure to run AI inference workloads.

So it seems developments in hardware are becoming increasingly important. Web giants and the leading tech firms are investing heavily in AI acceleration hardware. At the recent T3CH conference in Madrid, Gustavo Alonso, of the Systems Group in the Department of Computer Science at ETH Zürich, noted that AI and machine learning are expensive. “Training large models can cost hundreds of thousands of dollars per model. Access to specialised hardware and the ability to use it will be a competitive advantage,” he said in his presentation.


March 29, 2019  11:33 AM

Complexities of safety critical augmented systems

Cliff Saran Profile: Cliff Saran
Uncategorized

There are no easy lessons to be gleaned from the tragic loss of life following the Ethiopian Airlines Flight 302 crash on March 10, 2019. As has been reported across the web, the crash bears remarkable similarities to Indonesia’s Lion Air crash of October 29, 2018. Both involved Boeing 737 MAX aircraft. To quote from a statement made by Ethiopian Airlines’ group CEO, Tewolde GebreMariam: “Until we have answers, putting one more life at risk is too much.”

What is known today is that the crash appears to be a side-effect of a software system known as the Maneuvering Characteristics Augmentation System (MCAS). Boeing says MCAS was designed and certified for the 737 MAX to enhance the pitch stability of the airplane. Across the web there have been reports of how the system became confused during take-off, forcing the nose down to prevent the aircraft from stalling. The plane continued to dive, despite the pilots’ efforts to regain control. Reporting the preliminary findings of the investigation into the Ethiopian Airlines Flight 302 crash, the Wall Street Journal noted that a suspect flight-control feature automatically activated before the plane nose-dived into the ground.

Software update

Technically speaking, MCAS is a stall-prevention system. According to CNBC, since the crashes of the two 737 MAX planes, Boeing has faced fierce criticism for not doing more to tell flight crews about the system or to alert them when the technology kicks in. CNBC reported that only one angle of attack (AOA) sensor for MCAS was fitted as standard; airlines were asked to pay extra to have a second AOA sensor installed.

Earlier this week, Boeing issued a software update. According to Boeing, the update has been put through hundreds of hours of analysis, laboratory testing, verification in a simulator and two test flights, including an in-flight certification test with Federal Aviation Administration (FAA) representatives on board as observers.

It said the flight control system will now compare inputs from both AOA sensors. “If the sensors disagree by 5.5 degrees or more with the flaps retracted, MCAS will not activate. An indicator on the flight deck display will alert the pilots.”
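
Purely as an illustration of the cross-check Boeing describes, and emphatically not the actual avionics implementation, the stated rule reduces to a comparison along these lines (the only real figure here is the 5.5-degree threshold quoted above):

# Toy sketch of the described AOA cross-check; illustrative only, not flight software
AOA_DISAGREE_THRESHOLD_DEG = 5.5

def mcas_may_activate(aoa_left_deg, aoa_right_deg, flaps_retracted):
    """Return False when the two AOA sensors disagree too much with flaps retracted."""
    disagreement = abs(aoa_left_deg - aoa_right_deg)
    if flaps_retracted and disagreement >= AOA_DISAGREE_THRESHOLD_DEG:
        return False  # MCAS stays inactive and the flight deck indicator alerts the pilots
    return True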

Balancing safety-critical automation with human operators

What is clear from these reports is the complexity of the technical and ethical issues that must be addressed in developing safety-critical augmented systems that need to coexist with highly trained individuals. Neither entrusting everything to the computer system nor deferring every decision to a human is the right approach. While the FAA investigation is likely to conclude that the Ethiopian Airlines Flight 302 crash was down to software, could a tragedy like the Germanwings Flight 9525 crash on 24 March 2015 have been avoided if the flight control software had actively prevented the co-pilot from flying the aircraft into the Alps?


March 22, 2019  1:23 PM

Welcome to the new legacy

Cliff Saran Profile: Cliff Saran
Uncategorized

IT budgets in general do not grow at a rate that can sustain stellar financial performance across the IT industry.

However, Gartner’s latest spending forecast projects that worldwide software spending will grow 8.5% in 2019, and another 8.2% in 2020 to total $466bn. According to Gartner, organisations are expected to increase spending on enterprise application software in 2019, with more of the budget shifting to software as a service (SaaS).

Among the IT firms hoping to capitalise on this shift is SAP, with S/4Hana, the company’s updated ERP system that runs off an in-memory database.

Those organisations still running SAP’s older ERP Central Component (ECC) system are only guaranteed support until 2025; after this date, SAP has not made a firm commitment to continue support.

Does SAP see Hana as a cash cow?

Looking at the transcript of SAP’s Q4 2018 earnings call, posted on the Seeking Alpha financial blogging site at the end of January 2019, “Hana” is referenced 29 times. “It’s a Hana world,” CEO Bill McDermott proclaimed. When asked about the company’s plans for growth, McDermott claimed Hana is the ultimate platform for a modern enterprise. “You think about what we can do with Hana as database-as-a-service, you think about Leonardo with predictive AI and deep machine learning and IoT, we’re going to double down on these things.”

Putting it more bluntly, the SAP CFO, Luka Mucic, said: “The S/4Hana upgrade cycle drives potential for substantial further renovation of a company’s IT architecture and gives us multiple cross-selling opportunities.”

However, as Computer Weekly recently reported, a new study from Resulting IT questioned whether IT decision-makers can build a compelling business case for upgrading from ECC. Computer Weekly spoke to former Gartner analyst Derek Prior, who co-authored the report.

Prior argued that while SAP S/4Hana has lots of nice stuff, it represents a major change: “It is different to ECC. You spent decades bedding in ECC, then it all changes with S/4Hana, which is quite different.”

Complexity and risk of an ERP migration

S/4Hana is not simply an upgrade to the latest version of SAP. It is entirely different, and has new functionality that may not map easily onto how the business currently operates with ECC. The majority of people who took part in the Resulting IT study are more likely to choose a brownfield deployment than to redevelop everything they have built into ECC from scratch. Resulting IT believes that, with an estimated 42,000 SAP customers on ECC, upgrading to S/4Hana is going to cost £100bn globally in IT consulting fees.

This translates into a huge expenditure for organisations. In the case of a brownfield deployment, the potential benefits of upgrading to S/4Hana are minimal – organisations will essentially spend a lot of money and may well experience major disruptions by embarking on a new ERP project that effectively delivers more-or-less what they already run on ECC.

Support beyond 2025

Clearly SAP wants people to buy into S/4Hana, and may well incentivise organisations to purchase it. However, organisations should not be held to ransom by their IT supplier. There is no good reason to spend money unless there is a compelling business case. A vague reference to digital transformation and AI-empowered business processes sounds good on paper, but organisations will need to construct a watertight business case for upgrading from ECC to S/4Hana.

Just as with legacy banking systems, by using a third party for support, ECC customers can still take advantage of emerging technology and undergo their digital transformations. If a stable ECC system can be maintained and supported, organisations can focus on adding functionality around it, without the need to migrate to S/4Hana. New products and services can be developed entirely separately from the existing ERP, while still keeping ECC, supported by a third party, as the system of record. The core ERP metamorphoses into a legacy system.

As SAP’s recent Q4 2018 results show, the IT industry relies almost entirely on organisations upgrading. Often these upgrades add little value or, as in the case of S/4Hana, customers end up not using the new functionality. So, given that IT budgets are tight, upgrade if there is a compelling business case. If there isn’t, now is the time to add SAP ECC to the legacy IT estate.


March 11, 2019  9:09 AM

Web @ 30: Thank you for global connectivity

Cliff Saran Profile: Cliff Saran
Uncategorized

In the three decades since its introduction, the web has informed, educated and entertained society. It has given musicians, artists and businesses of any size connectivity to a global audience. Anyone with something to say can express themselves and publish on the web.

Sir Tim Berners-Lee could not have anticipated that his invention would have such a profound impact on society. Who would have thought in 1989 that, with just a few taps on a touchscreen-enabled device or the click of a mouse, a globally connected web would make it possible to stream music and movies; transfer money and pay for goods instantly; order a pizza; book a foreign holiday; and arrange a taxi pickup?

Online replaces high street shopping

Blockbuster, Maplin and many high street retailers have failed to capitalise on the opportunities the web offers. Instead, the likes of Spotify, Netflix and the behemoth that is Amazon are taking a bigger and bigger share of people’s wallets. Perhaps Maplin’s next chapter, as an eclectic online bazaar for all things tech and electronic, may turn the business around.

In the UK, department stores like Debenhams and House of Fraser have failed to stem the decline in sales. People can buy things far more easily online than by trying to track down something they really want to purchase on the high street. John Lewis, a company renowned for its peerless customer service, is another department store coming under the spotlight. In its financial statement, the retailer attributed its poorer than expected results partly to increased IT costs. “Over the last few years we have steadily increased IT investment to set ourselves up for the future. A number of those significant new systems are now operational resulting in incremental maintenance, support and depreciation costs,” the company stated.

This shows that the John Lewis Partnership is investing in the future. The only way it can address the online threat is to invest heavily in IT. Similarly, the princely sum of £750m that Marks & Spencer has recently paid for half of Ocado’s retail business shows that investing in technology is the only sure way to keep up with the likes of Amazon, especially since the e-commerce giant acquired Whole Foods in 2017 for a whopping $13.7bn – 18 times what Ocado is receiving from M&S. The acquisition of Whole Foods has put Amazon in direct competition with the likes of Waitrose (part of the John Lewis Partnership) and M&S, which may be the reason behind M&S’ Ocado tie-up.

In 1994, when it was set up, Amazon was just an online bookstore. It quickly took a heavy toll on booksellers such as Waterstones in the UK, and later music stores began to see sales plummet. Remember Tower Records and Virgin Megastores? HMV is struggling to remain relevant.

Connectivity creates business opportunities

Thanks to its global reach, the web has enabled companies to connect to one another, creating complex business ecosystems where organisations can find a niche to add value. Ocado, in fact, could be regarded as a warehouse-as-a-service business, providing distribution and online deliveries for Waitrose, Asda and, through its new business venture, M&S.

Even Royal Mail is not immune. It has finally come round to the idea that there is a business in delivering people’s Amazon purchases. It even handles Amazon returns, without the need for the customer to print out a return label. Numerous newsagents and dry cleaners are official drop-off and click-and-collect partners for online stores. Argos’ click-and-collect and drop-off service for eBay buyers and sellers shows that the high street can adapt.

Changing trade connectivity

The winners on the web will be the organisations with agile business models that can adapt quickly to new opportunities. One can imagine that a hotel group like Hilton would never have contemplated that its business could be disrupted by a web service that owns no hotels – but that is exactly what Airbnb has done. Now, thanks to the web, Alibaba can offer a global trading hub, connecting Chinese manufacturing directly to anyone who needs something made. Thanks to the global reach of the web, anyone who feels they can spot a product with potential, and is prepared to take a punt, can connect with suppliers based anywhere in the world and become a distributor.

While bricks-and-mortar businesses have to comply with local laws, pay business rates, invest in buildings and hire tax-paying staff, online businesses have used global web connectivity to flout local regulations, get around employment law by not having permanent staff, and relocate their head offices to tax havens.

This has put traditional firms at a disadvantage, and today’s web appears to be owned by a few mega-businesses.

As the web turns 30, perhaps now is the time to sit back and evaluate how best to curb some of its excesses.

The House of Lords Select Committee on Communications’ Regulating in a Digital World report, published on March 9, warns: “The digital world has become dominated by a small number of very large companies. These companies enjoy a substantial advantage, operating with an unprecedented knowledge of users and other businesses. Without intervention the largest tech companies are likely to gain more control”.


March 6, 2019  2:29 PM

WWW @ 30: Why the Web needs to remain open

Cliff Saran Profile: Cliff Saran
Uncategorized

While the internet existed well before the World Wide Web (WWW), the web changed everything.

Its success has as much to do with the simplicity of using a web browser over HTTP as with the fact that the technology was put into the public domain, and the timing of its invention.

A lesson from the past

During the 1980s the TCP/IP protocol evolved to the point where basic command line tools could be used by Unix users and admins to share documents between networked computers.

The internet was predominantly used in academia. In the commercial space, proprietary email services offered walled gardens, available only to subscribers. But in 1988, thanks to Vint Cerf, the internet was opened to commercial email services. This opened up the internet to everyone and laid the foundations for a global communications network connecting web servers to users’ web browsers.

In his March 1989 Information Management: A Proposal paper, Sir Tim Berners-Lee describes the original premise for the WWW as an approach to enable the people at Cern to share documents easily.

While it started at Cern, within three years the WWW and the HTTP protocol were in the public domain. Then it took off.

No one owns the web

There have been plenty of attempts to make the web proprietary but, in its purest form, the WWW has remained free and open. However, the web represents many more things today than it did 30 years ago. It is the basis of social media platforms, music and video subscription services and global online shopping centres. Every business wants to own its customers’ web experience. But that is not why the web has been so successful.

Last year, Berners-Lee published an open letter in which he explained why the web needs to be more open, rather than having users’ experiences defined by the web giants. He argued that, just like a software product, the web itself can be refined and its “bugs” ironed out.

Just as CompuServe and AOL had walled gardens before the web, now the likes of Amazon, Facebook and Netflix often represent people’s primary experience of the web. They are not in the public domain. And while they may be built on open source, they are commercial services, effectively closed off from a WWW that offers free access to all. Three decades on from its invention, now is the time for society to consider how the web should evolve and what role commercial exploitation of the web should play.


March 1, 2019  3:41 PM

GDPR: Irish Data Protection Commission may show where the WWW is heading

Cliff Saran Profile: Cliff Saran
Uncategorized

The Irish Data Protection Commission’s (DPC) annual report makes interesting reading, given that the World Wide Web is celebrating its 30th birthday this month.

People regularly give away vast amounts of personal data through social media and instant messaging platforms like Facebook, Instagram, WhatsApp and Twitter.

These web giants need to comply with GDPR. But in its annual report, the DPC said it has 15 statutory inquiries open in relation to multinational technology companies’ compliance with GDPR.

The firms investigated are: Apple, Facebook, Instagram, LinkedIn, Twitter and WhatsApp.

For Apple, the DPC said it is examining whether the company has discharged its GDPR transparency obligations in respect of the information contained in its privacy policy and online documents regarding the processing of personal data of users of its services.

As for Facebook, the DPC said it is conducting several investigations into the social media platform’s compliance with GDPR. In relation to a token breach that occurred in September 2018, the DPC said it was looking at whether Facebook Ireland had discharged its GDPR obligations to implement organisational and technical measures to secure and safeguard the personal data of its users. The DPC said it was also looking into Facebook’s GDPR breach notification obligations.

Facebook, LinkedIn, WhatsApp and Twitter are all being investigated in relation to how they process personal data.

GDPR and advanced analytics

The wording in the annual report concerning the LinkedIn inquiry is particularly intriguing. In the report, the DPC states it is: “Examining whether LinkedIn has discharged its GDPR obligations in respect of the lawful basis on which it relies to process personal data in the context of behavioural analysis and targeted advertising on its platform.”

The fact that the DPC is looking at LinkedIn’s use of behavioural analysis is certainly very interesting. The web giants rely on understanding their users better than they know themselves. This level of AI-enabled advanced analytics and machine learning is now available to more and more organisations, not just the multinational tech companies the DPC is investigating.

The outcomes of the DPC’s investigations are very likely to heavily influence the way organisations use advanced analytics on web data that can identify individuals.

Ultimately, it may even influence how the WWW evolves, and whether today’s web giants, as well as those in the making, will be able to sustain business models that see them through the next 30 years.

