Eyes on APAC

April 27, 2018  7:37 AM

How machine learning is used to detect fraud

Aaron Tan

One of machine learning’s most well-known use cases is fraud detection, an area that has drawn the attention of a growing number of technology suppliers looking to develop the best algorithms and techniques to solve a problem that costs businesses millions of dollars each year.

According to a study by Vesta, a global payment service provider, fraud cost businesses an average of 8% of annual revenues in 2017. The biggest impact, however, has been on digital goods suppliers, which lost 9.7% of revenue on average to fraud – an increase of 13% from 2016.

Most of that spending goes on fraud management, which accounts for 75% of fraud costs – triple the actual fraud losses themselves.

San Francisco-based Stripe, a payment technology company, believes it has what it takes to detect online fraud from the outset with its technology.

Consider a fraudster who uses credit card information bought off the dark web to buy a laptop from an online merchant. Upon realising that a fraudulent transaction has been made, the rightful cardholder files a dispute with his or her bank, which in turn levies the total cost of the fraudulent transaction on the merchant.

Michael Manapat, Stripe’s head of data and machine learning products, says that instead of having merchants review each transaction and write rules, as in traditional fraud detection, the company uses machine learning to do the heavy lifting.

There are several tell-tale signs of fraud that Stripe’s machine learning model looks out for. These include buying an item in multiple sizes, pasting credit card details into order forms rather than typing them out, and the number of distinct cards used by a single person over a period of time.

With historical data on transactions and purchases made across its network of merchants, Stripe is able to flag potentially fraudulent transactions with greater confidence.

To reduce the number of false positives, Stripe uses human risk analysts to fine-tune and identify the fraud signals used in its machine learning model. Machine learning engineers will also examine false positives by hand to understand why the classifier got something wrong. “All our systems are retrained every day automatically as more data arrives,” Manapat says.

But creating better machine learning models is only half the battle – the insights generated from those models must also be trusted and understood by users. Manapat claims that Stripe’s model delivers insights with a high degree of human interpretability.

“Users want to know why we think a transaction is fraudulent, so we’ve been providing explanations on all model decisions,” Manapat says. “When we say a transaction is at high risk of fraud, we’ll tell you that we saw a high volume of similar transactions over the past day. Or that it’s medium risk because the card was issued in the US but the user’s IP address was in Singapore.”
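Stripe does not publish its model internals, but the approach described above – scoring tell-tale signals and surfacing a human-readable reason for each one – can be sketched in a few lines of Python. The signal names, weights and threshold below are illustrative assumptions, not Stripe’s actual features:

```python
# Illustrative fraud scorer: weighted signals plus human-readable reasons.
# Signal names and weights are hypothetical, not Stripe's real model.

SIGNALS = {
    # name: (predicate over a transaction dict, weight, explanation template)
    "pasted_card": (lambda t: t["card_entry"] == "paste", 0.3,
                    "card details were pasted rather than typed"),
    "many_cards": (lambda t: t["cards_from_ip_today"] >= 3, 0.4,
                   "multiple distinct cards used from this IP today"),
    "geo_mismatch": (lambda t: t["card_country"] != t["ip_country"], 0.3,
                     "card issued in {card_country} but IP is in {ip_country}"),
}

def score(txn):
    """Return (risk_score, reasons) for a transaction dict."""
    total, reasons = 0.0, []
    for name, (pred, weight, reason) in SIGNALS.items():
        if pred(txn):
            total += weight
            reasons.append(reason.format(**txn))
    return round(total, 2), reasons

txn = {"card_entry": "paste", "cards_from_ip_today": 4,
       "card_country": "US", "ip_country": "SG"}
risk, why = score(txn)
print(risk)  # 1.0 – every signal fired
print(why)
```

In a real system the weights would be learned rather than hand-set, and retrained as new dispute data arrives – which is what daily automatic retraining amounts to.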

So far, Stripe appears to be gaining traction in the market, having blocked $4bn worth of fraudulent transactions in 2017. It recently upgraded its feature set, which now lets merchants block potential fraudsters based on a list of attributes, as well as set custom thresholds at which to block payments, among other enhancements.

April 20, 2018  1:51 AM

Huawei’s industry vision offers glimpse into its priorities

Aaron Tan

At Huawei’s annual analyst summit in Shenzhen this week, the Chinese technology giant unveiled its predictions about what the future holds from a technology perspective in its Global Industry Vision (GIV) 2025 report.

The crystal-ball-gazing exercise involved the use of its own “unique research methodology” that combines a mix of data and trend analysis, honed through years of supplying IT products and services to global markets.

The predictions are not too far off from other reports that we’ve written about at Computer Weekly:

More data, sensors and robots

According to the GIV report, there will be 40 billion personal smart devices and 100 billion connections by 2025, largely driven by the growing footprint of the industrial internet of things (IoT). And with more and better connections, data traffic will grow exponentially, and most of it will be from video. The cloud virtual reality market will reach $292bn by 2025.

Not surprisingly, Huawei expects the penetration rate of smart assistants to rise to 90% by 2025, with 12% of homes having smart service robots. And with the help of robots, some 246 million people with impaired vision will live normal lives.

Rise of “intelligent” technology

Essentially about the impact and role of AI in society, this prediction builds on current expectations that AI will speed up industry development through better decision-making and analysis. When applied to autonomous vehicles, AI will enable safer roads and improve last-mile connectivity. By 2025, Huawei expects 60 million vehicles to be connected to 5G networks, and that all new vehicles will be hooked up to the internet.

In manufacturing, the convergence of IT and operational technology (OT) will accelerate, generating returns for innovation and industries. Smart city technologies will also enable urban planners to create sustainable living environments and improve the lives of residents.

A booming digital economy

All of these developments are expected to drive the world’s digital economy to new highs. By 2025, Huawei predicts the global digital economy will be worth $23 trillion. AI will be widely accessible, giving birth to new industries and boosting existing ones.

Now, anyone who has been tracking the industry closely would not be surprised at Huawei’s predictions. What’s interesting is that the predictions are aligned with what Huawei is doing to serve three main groups of customers: carriers, enterprises and consumers.

Think embedding AI capabilities in carrier networks, providing infrastructure and IoT platforms for businesses and developing mobile devices for consumers. It has also been partnering with governments to roll out smart city projects around the world.

While I was expecting Huawei to be bolder (perhaps something on how software is going to change the world or affective computing), the report is a good barometer of Huawei’s priorities in the years leading up to 2025 – to continue building on its strengths in the carrier and consumer businesses, and develop new capabilities in the enterprise space.

April 12, 2018  11:38 AM

Farming goes high tech in Australia

Aaron Tan

The farming industry is arguably one of the last frontiers in digital transformation efforts, but not so in Australia, which has one of the biggest agriculture sectors in the world.

Since 2013, more farmers in the country have been turning to Agriwebb, a cloud-based livestock management application that helps farmers keep records of farming activities, including feed consumption and fertilisation, map out their farms, and make data-driven decisions about farms based on cost and performance.

Today, the application, which runs on Amazon Web Services (AWS), is being used to manage 2,000 farms across Australia, representing some 10% of the country’s livestock, according to Justin Webb, Agriwebb’s co-founder and chairman.

Much of Agriwebb’s success today would not have been possible without the use of cloud computing, which Webb said had enabled the start-up company to build an application that can scale with the growth of the business. Running the application on AWS’s infrastructure also ensures data security, which rivals that of enterprise-class datacentres.

Webb acknowledges that Agriwebb has the potential to expand its reach, beyond the farms it manages, to the entire supply chain. For instance, through the use of blockchain, logistics companies, distributors and even consumers will be able to identify the provenance of farm produce.

“We’re building a critical mass of data before we go into blockchain,” he told Computer Weekly on the sidelines of the AWS Summit in Sydney this week.

In addition, Agriwebb is also looking at enabling farmers to ingest data from internet of things (IoT) devices into the platform, giving farmers a way to combine agronomic data from sensors in the field with livestock data to improve operations and yield.

Agriwebb has already won accolades for its work – it picked up four awards at the National Australian Information Industry Awards in 2016 – making it a startup to watch in the years to come, not only in Australia, but also on the global agri-tech stage.

April 6, 2018  6:36 AM

Enterprises will be watching how this luggage tracker works

Aaron Tan

Asset tracking is arguably the lowest-hanging fruit for enterprises looking to test the potential of the internet of things (IoT).

From airport operators to mining companies, the use of IoT to track vehicles and equipment has been instrumental in driving more pervasive use of IoT and securing executive buy-in for the technology.

So, it’s not surprising that Sigfox, a supplier of LPWAN (low-power wide area network) connectivity with a global footprint, has developed the Monarch service, which lets enterprises track devices seamlessly in all parts of the world by automatically recognising and adapting to local radio frequencies.

Such services are handy for multinational companies, such as global logistics players, that may want to track their fleet and assets across the world through a single platform.

Earlier this week, Sigfox said it has worked with consumer luxury group LVMH to develop a luggage tracker powered by the Monarch service, enabling travellers to track their Louis Vuitton bags in major airports, even while travelling between different countries, using the LV Pass mobile app.

For now, the service is available in the world’s major airports such as London’s Heathrow, New York’s JFK, Tokyo’s Narita and Hong Kong’s Chek Lap Kok.

Sigfox claims this project is the first commercialised application of its kind that will demonstrate its ability to offer a global network, combined with a simple technology based on cost efficiency (cheap sensors and low subscription charges) and low-energy consumption (the tracker has a battery life of six months).

As a showcase project, the luggage tracker will need to demonstrate its ability to withstand travel conditions, including potential extreme weather, and establish a connection to the Monarch service with minimal delay.

Sigfox will also need to fulfil its promise of ensuring high quality of service (QoS) standards, despite the use of unlicensed spectrum which is prone to interference from nearby devices, including equipment used at airports.

After all, asset tracking can be mission-critical in some cases and enterprises will be watching how the Monarch service pans out. Think of the impact on airport operations if a unit load device used to load luggage or cargo is lost. Or, an autonomous vehicle at a sprawling mine that slips out of control due to unforeseen circumstances.

March 28, 2018  7:01 AM

Australian broadcaster live-streamed Winter Olympics with AWS

Aaron Tan

It would take someone over five million years to watch the amount of video carried across IP networks each month in 2021 – that’s how much video content we’ll be consuming over the next few years.

Throw in 4K – or even 8K – quality streaming videos in the mix and you’ll get a staggering amount of video data that broadcasters would have to process while meeting high consumer expectations of streaming and image quality.

Building their own infrastructure to support those needs is not only expensive for broadcasters, especially for one-time events like the Winter Olympics and the World Cup, it is also time-consuming – given that broadcast rights may not be secured until weeks before the opening ceremony in some cases.

The most obvious path forward is to turn to the cloud, which is what Australia’s Seven Network did. To provide full coverage of the PyeongChang Winter Olympics over 18 days, it used Amazon Web Services (AWS) to encode and stream 140 million minutes of live video to viewers who were using the OlympicsOn7 and 7plus apps.

With AWS Elemental Live L505AE multi-channel H.264 encoders, Seven Network provided eight live streams simultaneously, letting users follow their chosen sports, including snowboarding, alpine skiing and ice hockey. Premium users could also access the live streams in HD 1080p quality.

To ensure redundancy, all live streams were packaged into Apple HTTP Live Streaming (HLS) encrypted format, with the content then being pushed out over AWS Direct Connect dedicated links into multiple AWS availability zones within the Sydney region.

Live content was then cached in origin servers on AWS and, from there, served out via Amazon CloudFront and Akamai’s content delivery networks to meet the demand from viewers throughout Australia.
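At the heart of a multi-bitrate HLS setup like this is a master playlist that points players at the variant streams, letting each viewer’s device pick a quality level. A minimal sketch of generating such a playlist in Python – the bitrates, resolutions and URIs here are made up for illustration, not Seven’s actual configuration:

```python
# Generate a minimal HLS master playlist for a multi-bitrate live stream.
# Bitrates, resolutions and URIs below are illustrative only.

VARIANTS = [
    # (bandwidth in bits/s, resolution, variant playlist URI)
    (800_000, "640x360", "stream_360p.m3u8"),
    (2_500_000, "1280x720", "stream_720p.m3u8"),
    (5_000_000, "1920x1080", "stream_1080p.m3u8"),  # premium HD tier
]

def master_playlist(variants):
    """Build the master .m3u8 text that players fetch first."""
    lines = ["#EXTM3U", "#EXT-X-VERSION:3"]
    for bandwidth, resolution, uri in variants:
        lines.append(
            f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}"
        )
        lines.append(uri)
    return "\n".join(lines) + "\n"

print(master_playlist(VARIANTS))
```

Serving the same playlists from two CDNs, as described above, is what gives the setup its redundancy: if one CDN degrades, traffic can be steered to the other without re-encoding anything.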

With the use of APIs, the broadcaster was also able to insert advertising triggers into live video streams, as well as overlay graphics on live video. Seven Network is also expected to provide similar live streams for the Tokyo Olympics in 2020.

Apart from Seven Network, AWS has amassed a growing list of broadcasters and over-the-top video content providers, such as PBS, Netflix, ITV in the UK and NDTV in India. Amazon itself is reportedly eyeing the broadcast rights to the English Premier League in the UK.

March 22, 2018  8:36 AM

Alibaba’s dual-mode SSD platform raises bar for storage performance

Aaron Tan

The growing use of artificial intelligence and big data has put a strain on hyperscale datacentres, particularly traditional, standardised storage infrastructure that has been unable to adapt to different I/O requirements.

Standardised storage, while offering backward compatibility and portability, uses a generic block I/O interface that host software, such as an operating system, has no control over. That means the host is unable to manage the physical storage according to varying performance needs.

To solve this problem, open-channel SSDs (solid-state drives) were developed to expose the internal parallelism of SSDs to a host, enabling these devices to support I/O isolation and more predictable latencies.
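The core idea – the host, not the drive’s firmware, decides which of the SSD’s parallel units each write lands on – can be illustrated with a toy placement policy. This is a simulation of the concept for illustration, not Alibaba’s Fusion Engine or any real driver:

```python
# Toy model of host-managed flash placement for I/O isolation.
# An open-channel SSD exposes its parallel units (channels) to the
# host; here each tenant is pinned to its own channel, so one
# tenant's traffic cannot queue behind another's and latency stays
# predictable.

class OpenChannelSSD:
    def __init__(self, num_channels):
        self.channels = {ch: [] for ch in range(num_channels)}
        self.tenant_channel = {}

    def assign(self, tenant, channel):
        """Pin a tenant's writes to one physical channel."""
        self.tenant_channel[tenant] = channel

    def write(self, tenant, block):
        """Place the block on the tenant's channel; return the channel."""
        ch = self.tenant_channel[tenant]
        self.channels[ch].append(block)
        return ch

ssd = OpenChannelSSD(num_channels=4)
ssd.assign("latency-sensitive-db", 0)
ssd.assign("batch-analytics", 1)

ssd.write("latency-sensitive-db", "page-A")
ssd.write("batch-analytics", "page-B")
print(ssd.channels[0])  # ['page-A'] – isolated from analytics traffic
```

With a conventional block interface, both tenants’ writes would be striped across channels by the drive’s firmware, and the database’s reads could stall behind the analytics workload.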

Chinese cloud service provider Alibaba Cloud has taken things further, announcing that it has developed AliFlash V3, a dual-mode SSD that supports both open-channel mode and native NVMe mode (mainly for compatibility purposes), as part of a new storage platform that closely integrates hardware, firmware, drivers, operating system and applications.

The integration is done via the platform’s user-space open-channel I/O stack, called Fusion Engine, which was released in January 2018.

The platform, which Alibaba claims will reduce read latency by 75%, and improve overall storage performance by as much as five times, also supports all levels of customisation – from generic block devices that require no modification to applications, to highly customised software/hardware integrated systems.

The impetus to develop the new storage platform stemmed from Alibaba’s own experience with running applications on standardised hardware like NVMe SSDs. It found that its e-commerce, financial and logistics applications, for example, required features that were not available in standard SSDs.

Moreover, because the company’s application requirements change frequently, its storage infrastructure must be agile and adapt quickly to changing demands. However, due to the long production cycles of standard SSDs, it could take several quarters to obtain new product releases from SSD suppliers.

Alibaba has written a specification for its dual-mode SSD platform, and is working with different SSD and firmware suppliers in an effort to build an ecosystem around its platform.

Shu Li, senior staff engineer at Alibaba’s infrastructure services team, says the platform is expected to be widely deployed in Alibaba’s datacentres, serving both internal and external customers in future.

March 16, 2018  8:50 AM

Data wants to be free

Aaron Tan

In a world where data is more readily available than ever, having analytical skills that will help you to make sense of data in day-to-day tasks is instrumental in career progression.

But going by a recent survey conducted by Qlik, a data analytics software provider, only 20% of employees in the Asia-Pacific (APAC) region are confident in their data literacy skills, that is, the ability to read, work with and analyse data.

Diving deeper into the study, some 49% admitted to feeling overwhelmed when reading, working with, analysing and challenging data, while 81% of workers don’t think they have adequate training to be data literate.

Not surprisingly, most full-time workers (72%) said they would be willing to invest more time and energy in improving their data literacy skills, if given the chance.

While the overall numbers are worrying, workers in some countries fared better than others. India appears to be the most data-literate nation, where 45% of respondents said they were confident with data.

Business leaders including C-level executives and directors in India (64%), Australia (39%) and Singapore (31%) were also most confident about their data literacy levels.

At the other end of the spectrum was Japan, where just 6% of workers classified themselves as data literate.

One of the reasons for this disparity lies in access to data, according to Paul Mclean, Qlik’s data literacy evangelist in APAC.

For example, an average of 59% of junior-level employees in APAC said they have sufficient access to data, compared with 82% of senior managers and 85% of directors.

From a country perspective, 88% of Indian workers believed they have all the data sets they need to perform their jobs to the highest possible standard – higher than any other country in APAC.

The numbers were lower in Australia (60%) and Japan (28%). This inequality is holding people and businesses back.

Employees can only become data literate if they can access data and integrate it into their everyday work lives – basically learning by doing, Mclean said, calling for organisations to level the playing field and empower every employee, across every level of the organisation, the right to use and access data for their respective roles.

But why aren’t employees getting access to all the information they need to do their jobs well? Part of this could be – rightly or wrongly – concerns over employees misusing sensitive information, as well as knowledge hoarding practices that give managers a false sense of superiority over their colleagues.

These managers may think that such practices offer job security, but the opposite is true: they can easily be replaced or moved to another position if an enlightened management team that sees the benefits of information access cannot get them to release the data they are hoarding.

For a deeper discussion on why information – and data for that matter – wants to be free, check out this seminal work by Professor Polk Wagner, in which he discusses intellectual property and the mythologies of control.

March 1, 2018  9:36 AM

GDPR compliance is about risk management and governance, not technology

Aaron Tan

From 25 May this year, organisations across the ASEAN region will have to comply with the General Data Protection Regulation (GDPR), which will apply to any company that collects the personal data of European Union residents.

In the run-up to the looming deadline, a number of technology suppliers have been touting the importance of identifying, managing and protecting the personal data of EU residents, using various data protection and management technologies.

While there’s no doubt that tech suppliers are helping to raise awareness in the market about the GDPR, taking a technology-centric approach to GDPR compliance will further accentuate the dangerous view that data protection is an IT and security issue, not a business issue.

As we all know by now, data protection and cyber security aren’t merely technology issues. When businesses get fined for data breaches, they are the ones that will draw flak for putting their customers’ personal data at risk, not their legal or IT teams. In some cases, CEOs have even resigned after public backlash over data breaches that took place under their watch.

In a bid to sell their technology tools, some suppliers have over-simplified their messages to suit their offerings, sometimes without having a full understanding of data protection principles and the requirements under the GDPR.

Instead, data protection – and GDPR compliance for that matter – should be approached from a risk management and governance perspective, with technology tools as enablers, not solutions.

Data protection laws such as the GDPR are complex, and can impact a broad range of business roles, including legal, audit, HR and finance, not just IT. In achieving GDPR compliance, organisations should focus on getting these roles to work together in ongoing efforts to ensure governance, risk and compliance (GRC) across an organisation, and not be distracted by the noise in the marketplace.

February 23, 2018  8:25 AM

Computer Weekly welcomes APAC CIO Advisory Panel

Aaron Tan

At Computer Weekly, we strive to provide in-depth coverage of issues, challenges and trends facing today’s IT leaders through original, independent and targeted content.

To ensure that our stories meet the needs of our readers in the APAC region, we’ve formed our inaugural APAC CIO Advisory Panel, an independent body tasked with providing strategic advice to our editorial team.

Please join me in welcoming the founding members of the panel, comprising senior executives from leading organisations across the region.

Eugene Yeo, Group CIO, MyRepublic

Eugene is group chief information officer at MyRepublic. His primary focus is on driving customer centricity and operational efficiencies across regional operations of the company, through the use of innovative technology and efficient business processes.

Combining his experience in enterprise software development with a deep understanding of ISP operations, he leads the development of customer-centric, agile OSS/BSS platforms and operational processes that have enabled the company’s rapid growth across the Asia-Pacific region.

He is a regular keynote speaker at TM Forum events globally, and sits on the advisory panel of various startups and educational institutions across the region.

Dr Kwong Yuk Wah, CIO, NTUC

Yuk Wah is the chief information officer of Singapore’s National Trades Union Congress (NTUC). She is also the chief data protection officer of NTUC, its affiliated unions, as well as the Ong Teng Cheong Labour Leadership Institute.

Under her leadership, NTUC was a winner of the National Infocomm Awards (NIA) 2014 for the most innovative use of infocomm technology in the private sector. She was awarded the ASEAN CIO Award 2015.

Yuk Wah also worked in Singapore’s public sector, where she started her career at the National Computer Board and held various management positions at the Infocomm Development Authority. She was also vice-president of planning at Singapore Airlines.

Lee Kee Siang, CIO, National Library Board

Kee Siang is the chief information officer and director for resource discovery and management at Singapore’s National Library Board (NLB).

As the CIO of NLB, he provides leadership in formulating IT strategies and work plans to transform NLB’s service capabilities. He also sets direction for the design and implementation of organisation-wide IT policies and standards to ensure alignment of service outcomes, strategies and resources at all levels.

Kee Siang is also a member of the Technology Advisory Committee of the Casino Regulatory Authority of Singapore, NHB Digital Resource Panel and Honorary Auditor of the IT Management Association.

Manik Narayan Saha, CIO, SAP Asia Pacific and Japan (APJ)

Based in Singapore, Manik leads a global multinational and multicultural IT organisation. As part of the senior leadership team in APJ, he is responsible for SAP’s internal IT services to 28,000+ staff in the region.

With 19 years of experience and expertise in technology, Manik is a prominent keynote speaker at events, and provides thought leadership on topics ranging from IT strategy and artificial intelligence to digitalising operations, process excellence and enterprise innovation.

Manik is a member of the INSEAD Alumni Network and a regional ambassador of the INSEAD Directors Network for Singapore. He was a founding fellow of, and currently serves as a vice-president for, Ideation Edge Asia, a non-profit organisation.

Nigel Lim, Regional IT Manager

Nigel is regional IT manager (Asia & Oceania) at one of Japan’s largest trading companies. His division is responsible for managing the regional portfolio of IT programmes and projects, as well as governance and compliance. He also leads the company’s consulting practice.

In previous roles, he has been accountable for various portfolios of IT including service delivery, application support, infrastructure operations and compliance.

Nigel is a Chartered Fellow of the Chartered Management Institute, UK, and has more than a decade of experience managing IT.  An energetic visionary, he is passionate about organisational excellence and delivering sustainable value.

Gary Adler, Chief Digital Officer, MinterEllison

Gary has had 19 years of IT experience, with 10 years in senior management roles. He has a finance and accounting background but made the move to IT in the late 90s, initially focusing on infrastructure. Gary has worked in the investment banking, insurance, mining and professional services sectors in both Australia and the UK.

In recent years, Gary played a lead role in the technology strategy that successfully brought together the global merger of Australian firm Freehills and UK and Asian firm Herbert Smith, before moving to Australian firm MinterEllison in mid-2015.

Over time, his focus in IT has varied from managing technical portfolios to enterprise-wide strategy and planning roles. As Chief Information Officer, and more recently Chief Digital Officer, Gary’s focus at MinterEllison has been on bringing a new legal operations model mindset to ‘Big Law’ via emerging technologies such as data analytics and AI to streamline delivery of legal services to the firm’s clients and workforce.

February 13, 2018  8:20 AM

Extending the shelf-life of enterprise mobile devices

Aaron Tan

With more businesses expecting enterprise-grade mobile devices to last longer than the average consumer smartphone replacement cycle, keeping those devices secure is a growing challenge.

According to a survey by Zebra Technologies, 51% of businesses want their mobile computers to last more than five years – and some of those devices are still powered by legacy “green screen” Telnet-based systems or Windows mobile operating systems.

Getting support for these older operating systems is next to impossible, given that those systems have reached their “end-of-life” where software and security updates are no longer provided.

Even for a modern mobile operating system (OS) such as Android, security updates usually end after three years – well short of the five or more years that enterprises need. This gap between OS and hardware lifecycles can create an exposure to ever-present security risks, said April Shen, director of enterprise visibility and mobility at Zebra Technologies Asia-Pacific.

While some enterprises may look to replace their mobile devices with newer ones to take advantage of the latest – and more secure – versions of operating systems, some may be reluctant to do so, given that many enterprise-grade mobile devices are built to be rugged and hence can last longer.

So what can enterprises do? Like companies such as Rimini Street that provide third-party support services for enterprise software, Zebra Technologies, through a product called LifeGuard, delivers regular security patches on a monthly or quarterly basis.

“All security updates that we release also come with detailed release notes that share guidance on the specific vulnerabilities being addressed, as well as detailed installation instructions,” Shen said. “All of this has resulted in a unique, industry-leading level of OS security support.”

But that does not mean that all of LifeGuard’s security patches, which address various threat severity levels, need to be applied all the time. Shen said businesses should evaluate the patches in accordance with their IT policies to determine if the patches are required.

“We also understand that software updates may carry a certain level of functional risk. For example, customers may want to assess the individual vulnerabilities addressed in each release, as they may already have taken steps to mitigate some of these vulnerabilities through measures such as application whitelisting and lock task mode.”
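The triage Shen describes – weighing each patch’s severity against internal policy rather than applying everything blindly – amounts to a simple filter. A sketch of that logic in Python, where the patch IDs, severity labels and threshold are all hypothetical, not Zebra’s actual LifeGuard data:

```python
# Hypothetical patch triage: apply only patches whose severity meets
# the organisation's policy threshold. Patch entries are made up.

SEVERITY_RANK = {"low": 0, "moderate": 1, "high": 2, "critical": 3}

patches = [
    {"id": "LG-2018-01", "severity": "critical"},
    {"id": "LG-2018-02", "severity": "moderate"},
    {"id": "LG-2018-03", "severity": "low"},
]

def patches_to_apply(patches, min_severity="high"):
    """Return IDs of patches at or above the policy's severity floor."""
    threshold = SEVERITY_RANK[min_severity]
    return [p["id"] for p in patches
            if SEVERITY_RANK[p["severity"]] >= threshold]

print(patches_to_apply(patches))  # ['LG-2018-01']
```

Lowering the threshold (say, to "moderate") pulls in more patches at the cost of more regression testing – exactly the functional-risk trade-off Shen points to.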

Of course, there will come a time when enterprises will need to replace their devices for good. That will set off a chain of tasks such as porting existing apps to the new devices and operating system, and testing the apps before deploying them.

Shen said because LifeGuard continues to provide legacy OS security support for one year in the form of quarterly updates, enterprises will have enough time to migrate to the newer OS smoothly and securely.

The catch is that LifeGuard is only available for newer Android-based devices from Zebra. Legacy products may have either LifeGuard support or a lesser security support profile.
