E-commerce in Asia-Pacific is booming, with 71% of consumers in the region already shopping online. But so is the risk of fraud, with one in five consumers falling victim, according to a new report from Experian.
The report, co-authored with IDC, also found that over half of consumers would switch service providers in the event of fraud, suggesting that consumers are willing to trade convenience and a better customer experience for better fraud protection.
There were also distinct differences in consumer attitudes between countries.
In mobile-led, emerging markets, people were more convenience-driven and less risk-averse, with the more security-conscious individuals tending to come from mature economies.
“We notice that in more mature economies like Hong Kong and Singapore, consumers are largely more aware of fraud risks and act in a more conservative manner,” said Ben Elliott, CEO of Experian APAC.
“This means they may sometimes avoid transacting online should they perceive a potential fraud risk. This is in contrast to emerging economies like Vietnam, where consumers are less fraud aware and more convenience driven.”
While the unfortunate reality is that greater digital convenience is linked to higher fraud exposure – presenting a problem for both consumers and businesses – the report also revealed that there was a silver lining.
It found that as consumers became aware of the risk of fraud, they were more likely to adopt security measures like biometrics, including fingerprint, facial and voice recognition.
In APAC, 13% of consumers are now willing to adopt biometrics, with India (21%), Vietnam and China (both at 18%) leading the charge as early adopters. Australia (9%), Japan and New Zealand (both at 8%) are the least willing to do so.
Interestingly, however, a significant 57% of consumers are already comfortable with biometrics in government/non-commercial applications.
As acceptance extends into the commercial sphere, this new technology will increasingly be able to provide a more efficient customer experience and enhanced fraud protection.
At present, one of the best ways companies can protect their customers is by leveraging high-quality customer data to effectively verify transactions.
However, the report found that this was easier said than done, with consumers often selective in the type of information they were willing to share with companies. They were also clear on how they wanted their personal data to be used.
For example, when asked, 43% of consumers said they were willing to share their personal data with businesses specifically for better fraud detection, rather than for convenience or a better customer experience.
Furthermore, 5% of APAC consumers said they had intentionally submitted inaccurate information to avoid disclosing personal data, while 20% had made mistakes in the details they provided to businesses.
Data input errors were highest in Thailand, followed by Vietnam, Indonesia and India, while Japan had the lowest erroneous submissions followed distantly by Singapore and Hong Kong.
Elliott said this indicates a gap in trust between businesses and their consumers, but also provides a significant opportunity for them to make improvements.
“Intentional non-disclosure of information heightens the challenges businesses already face in combatting fraudsters and ascertaining the identity of genuine customers,” added Elliott.
“We believe that this is fundamentally an issue of trust – and companies must do more to communicate the value to consumers about the use of data for fraud protection and that they can be trusted as custodians of personal data.”
A growing number of Asian airlines are looking to the hybrid cloud to reduce IT costs.
In October 2017, Malaysia Airlines (MAS) embarked on a massive project with TCS to migrate its core datacentre infrastructure and applications to a hybrid-cloud model, with 80% of its assets hosted on Microsoft Azure and 20% on a private cloud.
Over the next five years, MAS expects to improve its cost efficiency by 50%, improve business performance by 80% and reduce customer response time from days to hours.
Joining MAS in the move to the hybrid cloud is Cathay Pacific. The Hong Kong-based carrier recently moved from its legacy infrastructure to a hybrid cloud comprising a Red Hat OpenStack-based private cloud and public cloud instances.
With its new hybrid cloud set-up, Cathay Pacific expects to support and move over 50 consumer-facing applications across its hybrid infrastructure, scaling resources up and down when necessary.
Perhaps more importantly, the airline will be able to bring new services to market faster in a highly competitive industry. Already, it has been able to roll out 200 application changes per day, up from just 20 previously. This improved efficiency has translated to a lower total cost of ownership for its production systems.
That the hybrid cloud is fast becoming the infrastructure of choice is not surprising.
Many enterprises, including airlines, see hybrid IT as offering the best of both worlds, giving them the ability to tap public cloud resources to meet new or fluctuating business demands, while continuing to run legacy and certain mission-critical applications in their own private cloud datacentres.
In fact, 24% of ASEAN respondents in the TechTarget IT Priorities 2018 survey cited hybrid IT as a priority this year, representing 23% growth year-on-year.
With more ASEAN organisations looking to implement hybrid IT strategies, about a third of IT professionals who participated in the survey expect to roll out public cloud infrastructure services this year – higher than the APAC average of 25%.
The growth of cloud infrastructure services and the decline of traditional datacentre outsourcing are driving a massive shift towards hybrid infrastructure services. Gartner, a technology research firm, has predicted that 90% of organisations will adopt hybrid infrastructure management capabilities by 2020.
A new study by Seagate has revealed that APAC organisations are warming up to artificial intelligence (AI), but a significant number have not invested in the data and technical solutions required to support the technology fully.
According to the study, 96% of senior IT professionals across the region believe AI applications will drive productivity and business performance. However, an almost equal number of respondents (95%) believe further investments in their IT infrastructure are required to enable them to support their use of AI.
Over the next year, nearly nine in 10 organisations plan to implement AI in areas including IT, supply chain logistics, product innovation and R&D, as well as finance and customer support.
However, two-thirds admitted that they struggled to know where to start. Plus, it doesn’t help that 31% of them don’t think enough is being invested in the necessary infrastructure to support AI initiatives.
In fact, one in five organisations said they weren’t ready or able to handle the increasing data streams from AI applications. Further, while almost all believe there is an increasing need for robust data storage solutions with growing AI applications, 15% said they have not invested sufficiently in data storage to be ready for AI now or in the future.
While it is in the interest of Seagate to highlight attitudes towards storage and infrastructure in the context of AI, the biggest barrier to the success of any AI initiative is the shortage of skills and expertise.
Today, most enterprises are faced with fragmented data and lack a holistic view of the customer. They have data silos with no efficient way to bring the data together, or don’t know how to find the data they need.
With software and data underpinning the success of AI initiatives, it is far more important for organisations to build up their skills to manage the data science workflow, from data preparation and processing to data modelling.
Most of the technology tools and platforms to support AI applications are already here. The question is whether organisations have the right skills to use them to their full potential – even with all the infrastructure in place.
One of machine learning’s most well-known use cases is fraud detection, an area that has drawn the attention of a growing number of technology suppliers looking to develop the best algorithms and techniques to solve a problem that costs businesses millions of dollars each year.
According to a study by Vesta, a global payment service provider, fraud cost businesses an average of 8% of annual revenues in 2017. The biggest impact, however, has been on digital goods suppliers, which lost 9.7% of revenue on average to fraud – an increase of 13% from 2016.
The majority of fraud expenditures are for fraud management, which makes up 75% of fraud costs, triple the actual fraud losses themselves.
San Francisco-based Stripe, a payment technology company, believes it has what it takes to detect online fraud from the outset with its technology.
Consider a fraudster who uses credit card information bought off the dark web to buy a laptop from an online merchant. Upon realising that a fraudulent transaction has been made, the rightful cardholder files a dispute with his or her bank, which in turn levies the total cost of the fraudulent transaction on the merchant.
Michael Manapat, Stripe’s head of data and machine learning products, says that instead of having merchants review each transaction and write rules, as in traditional fraud detection, the company uses machine learning to do the heavy lifting.
There are several tell-tale signs of fraud that Stripe’s machine learning model looks out for. These include buying an item in multiple sizes, pasting credit card details into order forms rather than typing them out, and the number of distinct cards used by a single person over a period of time.
With historical data on transactions and purchases made across its network of merchants, Stripe is then able to flag up potentially fraudulent transactions with higher confidence levels.
To reduce the number of false positives, Stripe uses human risk analysts to fine-tune and identify the fraud signals used in its machine learning model. Machine learning engineers will also examine false positives by hand to understand why the classifier got something wrong. “All our systems are retrained every day automatically as more data arrives,” Manapat says.
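Stripe does not publish its models, but the signal-based approach described above can be sketched as a simple probabilistic classifier. The feature names, weights and bias below are illustrative assumptions for the sketch, not Stripe’s actual signals or parameters; a production system would learn the weights from historical data and, as Manapat notes, retrain daily.

```python
import math

# Summarise a transaction as a feature vector using the kinds of
# signals described above (illustrative, not Stripe's real features).
def extract_features(txn):
    return [
        1.0 if txn["same_item_multiple_sizes"] else 0.0,
        1.0 if txn["card_details_pasted"] else 0.0,
        float(txn["distinct_cards_last_24h"]),
    ]

# Hand-set weights stand in for parameters a trained model would learn.
WEIGHTS = [1.2, 0.8, 0.9]
BIAS = -3.0

def fraud_probability(txn):
    # Linear score squashed through the logistic function
    score = BIAS + sum(w * x for w, x in zip(WEIGHTS, extract_features(txn)))
    return 1.0 / (1.0 + math.exp(-score))

risky = {"same_item_multiple_sizes": True,
         "card_details_pasted": True,
         "distinct_cards_last_24h": 4}
normal = {"same_item_multiple_sizes": False,
          "card_details_pasted": False,
          "distinct_cards_last_24h": 1}

print(round(fraud_probability(risky), 2))   # → 0.93, flagged for review
print(round(fraud_probability(normal), 2))  # → 0.11, allowed through
```

In practice the threshold at which a transaction is blocked or sent for manual review is a business decision, which is why false positives reviewed by human analysts feed back into the next day’s retraining.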
Besides creating better machine learning models, it is just as important to ensure that the insights generated from those models are trusted and understood by users. Manapat claims that Stripe’s model delivers insights with a high degree of human interpretability.
“Users want to know why we think a transaction is fraudulent, so we’ve been providing explanations on all model decisions,” Manapat says. “When we say a transaction is at high risk of fraud, we’ll tell you that we saw a high volume of similar transactions over the past day. Or that it’s medium risk because the card was issued in the US but the user’s IP address was in Singapore.”
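One common way to produce explanations like these is to report the features that contributed most to a risk score as human-readable “reason codes”. This is a generic illustration of the idea, not necessarily how Stripe implements it; the feature names and weights are assumed for the example:

```python
# Per-feature contributions to a linear risk score double as
# reason codes for a fraud decision (illustrative weights).
FEATURES = {
    "high volume of similar transactions in past day": 1.5,
    "card country differs from IP address country": 1.1,
    "card details pasted rather than typed": 0.6,
}

def explain(txn_signals):
    # txn_signals maps feature name -> observed value (0 or 1 here)
    contributions = {name: FEATURES[name] * value
                     for name, value in txn_signals.items()}
    # Rank reasons by how much each pushed the score up
    return sorted(contributions, key=contributions.get, reverse=True)

signals = {
    "high volume of similar transactions in past day": 1,
    "card country differs from IP address country": 1,
    "card details pasted rather than typed": 0,
}
for reason in explain(signals)[:2]:
    print(reason)
```

For non-linear models the same effect is usually achieved with attribution techniques such as SHAP values, but the end result presented to the merchant is the same: a ranked list of reasons behind the risk rating.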
So far, Stripe appears to be gaining traction in the market, having blocked $4bn worth of fraudulent transactions in 2017. It recently upgraded its feature set, which now lets merchants block potential fraudsters based on a list of attributes, as well as set custom thresholds at which to block payments, among other enhancements.
At Huawei’s annual analyst summit in Shenzhen this week, the Chinese technology giant unveiled its predictions about what the future holds from a technology perspective in its Global Industry Vision (GIV) 2025 report.
The crystal-ball-gazing exercise involved the use of its own “unique research methodology” that combines a mix of data and trend analysis, honed through years of supplying IT products and services to global markets.
The predictions are not too far off from other reports that we’ve written about at Computer Weekly:
More data, sensors and robots
According to the GIV report, there will be 40 billion personal smart devices and 100 billion connections by 2025, largely driven by the growing footprint of the industrial internet of things (IoT). And with more and better connections, data traffic will grow exponentially, and most of it will be from video. The cloud virtual reality market will reach $292bn by 2025.
Not surprisingly, Huawei expects the penetration rate of smart assistants to rise to 90% by 2025, with 12% of homes having smart service robots. And with the help of robots, some 246 million people with impaired vision will live normal lives.
Rise of “intelligent” technology
Essentially about the impact and role of AI in society, this prediction builds on current expectations that AI will speed up industry development through better decision-making and analysis. When applied to autonomous vehicles, AI will enable safer roads and improve last-mile connectivity. By 2025, Huawei expects 60 million vehicles to be connected to 5G networks, and that all new vehicles will be hooked up to the internet.
In manufacturing, the convergence of IT and operational technology (OT) will accelerate, generating returns for innovation and industries. Smart city technologies will also enable urban planners to create sustainable living environments and improve the lives of residents.
A booming digital economy
All of these developments are expected to drive the world’s digital economy to new highs. By 2025, Huawei predicts the global digital economy will be worth $23 trillion. AI will be widely accessible, giving birth to new industries and boosting existing ones.
Now, anyone who has been tracking the industry closely would not be surprised at Huawei’s predictions. What’s interesting is that the predictions are aligned with what Huawei is doing to serve three main groups of customers: carriers, enterprises and consumers.
Think embedding AI capabilities in carrier networks, providing infrastructure and IoT platforms for businesses and developing mobile devices for consumers. It has also been partnering with governments to roll out smart city projects around the world.
While I was expecting Huawei to be bolder (perhaps something on how software is going to change the world or affective computing), the report is a good barometer of Huawei’s priorities in the years leading up to 2025 – to continue building on its strengths in the carrier and consumer businesses, and develop new capabilities in the enterprise space.
The farming industry is arguably one of the last frontiers in digital transformation efforts, but not so in Australia, which has one of the biggest agriculture sectors in the world.
Since 2013, more farmers in the country have been turning to Agriwebb, a cloud-based livestock management application that helps farmers keep records of farming activities, including feed consumption and fertilisation, map out their farms, and make data-driven decisions about farms based on cost and performance.
Today, the application, which runs on Amazon Web Services (AWS), is being used to manage 2,000 farms across Australia, representing some 10% of the country’s livestock, according to Justin Webb, Agriwebb’s co-founder and chairman.
Much of Agriwebb’s success today would not have been possible without the use of cloud computing, which Webb said had enabled the start-up company to build an application that can scale with the growth of the business. Running the application on AWS’s infrastructure also provides data security that Webb said rivals that of enterprise-class datacentres.
Webb acknowledges that Agriwebb has the potential to expand its reach, beyond the farms it manages, to the entire supply chain. For instance, through the use of blockchain, logistics companies, distributors and even consumers will be able to identify the provenance of farm produce.
“We’re building a critical mass of data before we go into blockchain,” he told Computer Weekly on the sidelines of the AWS Summit in Sydney this week.
In addition, Agriwebb is also looking at enabling farmers to ingest data from internet of things (IoT) devices into the platform, giving farmers a way to combine agronomic data from sensors in the field with livestock data to improve operations and yield.
Agriwebb has already won accolades for its work – it nabbed at least four awards at the National Australian Information Industry Awards in 2016, making it a startup to watch in the years to come, not only in Australia, but also on the global agri-tech stage.
Asset tracking is arguably the lowest-hanging use case that enterprises can consider in testing the potential of the internet of things (IoT).
From airport operators to mining companies, the use of IoT to track vehicles and equipment has been instrumental in driving more pervasive use of IoT and securing executive buy-in for the technology.
So, it’s not surprising that Sigfox, a supplier of LPWAN (low-power wide area network) connectivity with a global footprint, has developed the Monarch service that lets enterprises track devices that run seamlessly in all parts of the world, by automatically recognising and adapting to local radio frequencies.
Such services are handy to multinational companies, such as global logistics players, that may want to track their fleet and assets across the world through a single platform.
Earlier this week, Sigfox said it has worked with consumer luxury group LVMH to develop a luggage tracker powered by the Monarch service, enabling travellers to track their Louis Vuitton bags in major airports, even while travelling between different countries, using the LV Pass mobile app.
For now, the service is available in the world’s major airports such as London’s Heathrow, New York’s JFK, Tokyo’s Narita and Hong Kong’s Chek Lap Kok.
Sigfox claims this project is the first commercialised application of its kind that will demonstrate its ability to offer a global network, combined with a simple technology based on cost efficiency (cheap sensors and low subscription charges) and low-energy consumption (the tracker has a battery life of six months).
As a showcase project, the luggage tracker will need to demonstrate its ability to withstand travel conditions, including potential extreme weather, and establish a connection to the Monarch service with minimal delay.
Sigfox will also need to fulfil its promise of ensuring high quality of service (QoS) standards, despite the use of unlicensed spectrum which is prone to interference from nearby devices, including equipment used at airports.
After all, asset tracking can be mission-critical in some cases and enterprises will be watching how the Monarch service pans out. Think of the impact on airport operations if a unit load device used to load luggage or cargo is lost. Or, an autonomous vehicle at a sprawling mine that slips out of control due to unforeseen circumstances.
It would take someone over five million years to watch the amount of video carried across IP networks each month in 2021 – that’s how much video content we’ll be consuming over the next few years.
Throw in 4K – or even 8K – quality streaming videos in the mix and you’ll get a staggering amount of video data that broadcasters would have to process while meeting high consumer expectations of streaming and image quality.
Building their own infrastructure to support those needs is not only expensive for broadcasters, especially for one-time events like the Winter Olympics and the World Cup, it is also time-consuming – given that broadcast rights may not be secured until weeks away from the opening ceremony in some cases.
The most obvious path forward would be to turn to the cloud, which is what Australia’s Seven Networks did. To provide full coverage of the PyeongChang Winter Olympics over 18 days, it used Amazon Web Services (AWS) to encode and stream 140 million minutes of live video to viewers who were using the OlympicsOn7 and 7plus apps.
With AWS Elemental Live L505AE multi-channel H.264 encoders, Seven Networks provided eight live streams simultaneously, letting users follow their chosen sport including snowboarding, alpine skiing and ice hockey. Premium users could also access the live streams in HD 1080p quality.
To ensure redundancy, all live streams were packaged into Apple HTTP Live Streaming (HLS) encrypted format, with the content then being pushed out over AWS Direct Connect dedicated links into multiple AWS availability zones within the Sydney region.
Live content was then cached in AWS Origin Servers and, from there, served out via Amazon CloudFront and Akamai’s content delivery networks to meet the demand from viewers throughout Australia.
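At the heart of a pipeline like this, HLS splits each live stream into short segments and advertises the available renditions through a master playlist, from which the player picks a stream to match the viewer’s bandwidth. The playlist below is a generic illustration of the format, not Seven’s actual configuration – the filenames, bitrates and resolutions are assumed for the example:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
live_360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
live_720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
live_1080p.m3u8
```

Because these playlists and segments are ordinary HTTP objects, they cache naturally in content delivery networks such as CloudFront and Akamai, which is what makes the fan-out to a national audience feasible.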
With the use of APIs, the broadcaster was also able to insert advertising triggers into live video streams, as well as overlay graphics into live video. Seven Networks is also expected to provide similar live streams for the Tokyo Olympics in 2020.
Apart from Seven Networks, AWS has amassed a growing list of broadcasters and over-the-top video content providers, such as PBS, Netflix, ITV in the UK and NDTV in India. Amazon itself is reportedly eyeing the broadcast rights to the English Premier League in the UK.
The growing use of artificial intelligence and big data has put a strain on hyperscale datacentres, particularly traditional, standardised storage infrastructure that has been unable to adapt to different I/O requirements.
Standardised storage, while offering backward compatibility and portability, uses a generic block I/O interface that host software, such as an operating system, has no control over. That means the host cannot manage the physical storage according to varying performance needs.
To solve this problem, open-channel SSDs (solid-state drives) were developed to expose the internal parallelism of SSDs to a host, enabling these devices to support I/O isolation and more predictable latencies.
Chinese cloud service provider Alibaba Cloud took things further when it announced that it has developed AliFlash V3, a dual-mode SSD that supports both open-channel mode and native NVMe mode (mainly for compatibility purposes), as part of a new storage platform that closely integrates hardware, firmware, drivers, operating system and applications.
The integration is done via the platform’s user-space open channel I/O stack called Fusion Engine that was released in January 2018.
The platform, which Alibaba claims will reduce read latency by 75%, and improve overall storage performance by as much as five times, also supports all levels of customisation – from generic block devices that require no modification to applications, to highly customised software/hardware integrated systems.
The impetus to develop the new storage platform stemmed from Alibaba’s own experience with running applications on standardised hardware like NVMe SSDs. It found that its e-commerce, financial and logistics applications, for example, required features that were not available in standard SSDs.
Moreover, because the company’s application requirements change frequently, its storage infrastructure must be agile and adapt quickly to changing demands. However, due to the long production cycles of standard SSDs, it could take several quarters to obtain new product releases from SSD suppliers.
Alibaba has written a specification for its dual-mode SSD platform, and is working with different SSD and firmware suppliers in an effort to build an ecosystem around its platform.
Shu Li, senior staff engineer at Alibaba’s infrastructure services team, says the platform is expected to be widely deployed in Alibaba’s datacentres, serving both internal and external customers in future.
In a world where data is more readily available than ever, having analytical skills that will help you to make sense of data in day-to-day tasks is instrumental in career progression.
But going by a recent survey conducted by Qlik, a data analytics software provider, only 20% of employees in the Asia-Pacific (APAC) region are confident in their data literacy skills, that is, the ability to read, work with and analyse data.
Diving deeper into the study, some 49% admitted to feeling overwhelmed when reading, working with, analysing and challenging data, while 81% of workers don’t think they have adequate training to be data literate.
Not surprisingly, most full-time workers (72%) said they would be willing to invest more time and energy in improving their data literacy skills, if given the chance.
While the overall numbers are worrying, workers in some countries fared better than others. India appears to be the most data-literate nation, where 45% of respondents said they were confident with data.
Business leaders including C-level executives and directors in India (64%), Australia (39%) and Singapore (31%) were also most confident about their data literacy levels.
At the other end of the spectrum was Japan, where just 6% of workers classified themselves as data literate.
One of the reasons for this disparity lies in access to data, according to Paul Mclean, Qlik’s data literacy evangelist in APAC.
For example, in APAC, an average of 59% of junior-level employees said they have sufficient access to data, compared with 82% of senior managers and 85% of directors.
Looking at it from a country perspective, 88% of Indian workers believed they have all the data sets they need to perform their jobs to the highest possible standard – higher than in other APAC countries.
The numbers were lower in Australia (60%) and Japan (28%). This inequality is holding people and businesses back.
Employees can only become data literate if they can access data and integrate it into their everyday work lives – basically learning by doing, Mclean said, calling for organisations to level the playing field and empower every employee, across every level of the organisation, the right to use and access data for their respective roles.
But why aren’t employees getting access to all the information they need to do their jobs well? Part of this could be – rightly or wrongly – concerns over employees misusing sensitive information, as well as knowledge hoarding practices that give managers a false sense of superiority over their colleagues.
These managers may think that such practices offer job security, but the opposite is true: they can easily be replaced or moved to another position if an enlightened management that sees the benefits of information access cannot compel them to release the data they are hoarding.
For a deeper discussion on why information – and data for that matter – wants to be free, check out this seminal work by Professor Polk Wagner where he talks about intellectual property and the mythologies of control.