More countries are rushing to make their cities smarter by driving the use of technology, though their efforts may not always be fully appreciated by citizens.
Take Singapore, one of the poster boys for smart city developments in Asia, for example. In a project involving the use of sensors to monitor the activities of seniors at home, it was found that the elderly had covered the sensors with towels out of privacy concerns.
In Yinchuan, China, completed residential and commercial buildings remain empty, even as the smart city promises a better quality of life through good planning and the automation of many aspects of urban living.
These are classic examples of what not to do in any smart city development – that is, not engaging citizens well enough to get their buy-in before rolling out a project. This often leads to white elephant projects, wasting resources that could have been put to better use.
Smart city planners could take a leaf from what some European countries have done.
Instead of taking a top-down approach to implementing smart city projects, Copenhagen in Denmark has residents installing sensors on their bikes to gather traffic data. Why do they do this? To help fellow cyclists avoid congested areas, and to help the government optimise road infrastructure.
This was only possible because the residents themselves saw the benefits of those sensors and actively contributed to the project, not because some city planner told them the technology was great.
Ultimately, smart cities, for all their technological prowess, are about improving the lives of people. That should not be forgotten as countries like Malaysia start to test technologies like mobile bus ticketing and cashless payments. Do citizens and businesses see the benefits of those initiatives? Will they be willing to participate in the projects?
Or will the cost of embracing those technologies outweigh any benefits they bring? These are important questions that can only be answered through a more citizen-centric, participatory approach to developing smart cities, and any smart city project needs such an approach to stand a chance of success.
When StarHub’s residential fibre network went down in October 2016, the Singapore telco initially pinned the blame on distributed denial of service (DDoS) attacks brought on by internet of things (IoT) devices of customers that were compromised by malware.
Subsequent investigations by the authorities, however, revealed that the outage was caused by a surge in legitimate Domain Name System (DNS) traffic and did not point towards a DDoS attack. The flood in traffic eventually overloaded part of StarHub’s home broadband infrastructure.
Nevertheless, this high-profile incident underscored the clear and present danger posed by the use of IoT devices to launch DDoS attacks.
According to the findings of the recent Neustar Worldwide DDoS Attacks and Cyber Insights Research Report, more than 80% of surveyed organisations globally have been hit by DDoS attacks in the previous 12 months – an increase of 15% since 2016.
Furthermore, 85% of those attacked were hit more than once. “Worryingly, despite knowing the threats, companies still struggle to detect and respond to DDoS attacks effectively and efficiently,” says Robin Schmitt, Neustar’s general manager in the Asia Pacific (APAC) region.
In APAC, only 17% of organisations were able to detect an attack in less than an hour, compared to 25% in the US and Europe. The results are similar for response times, with APAC lagging behind. Ideally, Schmitt says, companies should be able to identify and mitigate an attack in less than three minutes.
According to Schmitt, the dependence on internal skills and next generation firewalls, as opposed to specialised DDoS services and appliances, is a contributing factor to APAC’s less than stellar record of detecting and mitigating DDoS attacks.
When it comes to mitigating DDoS attacks, the first thing that comes to mind is clean pipe services that “scrub” malicious traffic off an organisation’s internet traffic, while allowing legitimate traffic to pass through.
However, Schmitt contends that clean pipe services delivered by network providers typically have limited scrubbing capacity and are mostly confined to attacks at layers 3 and 4 of the OSI model, adding that it is common for larger attacks to be black-holed.
A better solution is to implement a specialised DDoS mitigation solution that gives organisations the choice of working with an on-site DDoS defence appliance, a cloud service or a hybrid solution.
“Appliances analyse incoming network traffic, allowing only clean, legitimate traffic to pass. Cloud-based solutions reroute traffic to scrubbing centres that are able to handle a high volume of traffic at both the network and application layers,” Schmitt says.
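The scrubbing idea Schmitt describes can be illustrated with a toy sketch. Real scrubbing centres use far richer signals (protocol anomalies, reputation data, behavioural baselines); the per-source request threshold here is purely an illustrative assumption.

```python
from collections import Counter

def scrub(packets, max_per_source=100):
    """Toy 'scrubber': drop traffic from sources whose request volume
    looks abusive, and let everything else pass through.

    `packets` is a list of (source_ip, payload) tuples; the threshold
    is an assumption for illustration, not a real-world setting."""
    counts = Counter(src for src, _ in packets)
    clean, dropped = [], []
    for src, payload in packets:
        # Classify each packet by its source's overall request count.
        (dropped if counts[src] > max_per_source else clean).append((src, payload))
    return clean, dropped

# One chatty bot among a handful of normal clients (hypothetical addresses):
traffic = [("10.0.0.9", "GET /")] * 500 + [("192.0.2.1", "GET /")] * 3
clean, dropped = scrub(traffic)
print(len(clean), len(dropped))  # 3 500
```

The point of the sketch is only the separation step: legitimate traffic keeps flowing while the flood is discarded, which is what a scrubbing centre does at vastly greater scale and at both the network and application layers.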
With DDoS attacks growing in scale and size, Schmitt advises organisations to examine the capacity of their providers’ scrubbing centres and whether they’re capable of handling modern DDoS attacks. Neustar, for one, has expanded its network capacity in APAC with a new 200Gbps node in Singapore, doubling its in-region capacity with additional large nodes soon to follow.
A wide network of large nodes and scrubbing centres is necessary for DDoS mitigation service providers to minimise network latency and, as Schmitt says, “redirect traffic to local scrubbing centres at the edge of the network, closer to the source”.
“By scrubbing a customer’s web traffic and redelivering it locally rather than having to be backhauled to a scrubbing centre that may be halfway around the world, we offset latency and restore network performance more quickly and effectively resulting in faster, more efficient in-region mitigation,” he says.
While Neustar’s service may address the limitations of clean pipe offerings, it is not enough. Besides exercising some common sense and practising basic cyber hygiene, organisations need to develop a deeper understanding of cyber threats to defend themselves better.
As the StarHub episode shows, there’s still a lot more work to be done.
Conceived in 2014, Singapore’s National Health IT Masterplan is coming to fruition, with key projects such as the National Electronic Health Record (NEHR) system already in place.
This was revealed by Singapore’s health minister Gan Kim Yong earlier this week at the opening of the National Health IT Summit, a gathering of top medical and IT practitioners in the city-state.
The progress of the masterplan is laudable, given that it normally takes more than a few years to rally an entire industry together on a single mission to harness IT to improve patient care and medical outcomes.
It also helps that Singapore’s healthcare sector is dominated by public healthcare clusters operated by a handful of government-linked service providers, making it easier to tackle challenges such as ensuring the portability of healthcare records across otherwise disparate IT systems.
Indeed, the NEHR – and more importantly, the healthcare data it holds – is key to Singapore’s efforts to take its affordable, world-class healthcare system to the next level.
Besides giving physicians a “single version of truth” on each patient’s health condition (thus enabling better care), the NEHR’s repository of medical information is a gold mine for uncovering treatments for medical conditions that affect Singaporeans.
The Singapore National Eye Centre, for one, has started looking into the dropout rates for the use of glaucoma medications among Singapore patients, which could in turn lead to further investigations on the efficacy of those medications.
The National University Hospital System has also successfully made use of data to improve clinical practices, leading to a significant reduction in the number of patients requiring blood transfusions after a knee replacement surgery.
Another notable development in Singapore’s healthcare IT landscape is the use of robots to care for patients.
Prototypes are currently being developed and could ease the manpower crunch that has been plaguing Singapore’s healthcare industry for years. While most people don’t expect robots to replace nurses, they could help with tasks like administering medications (thus eliminating human errors) and lifting heavy patients.
The Singapore government’s pragmatic approach to harnessing technology has always stood out for its laser focus on execution and outcomes, even if it means losing political brownie points among those displaced by technology disruptions.
As Gan said in his address: “Disruptions are often painful, but if the disruptions have the potential to bring about meaningful benefits to patients and their families, and to our healthcare system, we must not be afraid to allow them to take place. Better still, we should disrupt ourselves proactively before we are forced to do so”.
The lack of a major cyber security event is seen as one of the reasons for the indifference towards cyber security in many organisations. That event took place recently, with the WannaCry ransomware affecting organisations in over 150 countries. While basic cyber hygiene could prevent similar attacks, many individuals and organisations are still making the same old security mistakes.
Here’s a look at some of these mistakes highlighted by researchers from ESET, a cyber security firm, and what you can do to avoid them.
Too trusting of emails
Social engineering tactics are as old as the hills, yet people keep falling for them. Today, phishing via email has become commonplace. Although criminals are improving the ‘quality’ of these emails, with some targeted emails looking incredibly authentic, most do not.
Keep yourself safe by carefully checking the sender and the request, and by using some common sense. Also, be cautious of attachments, as they may be malware-infected. It’s important to check file extensions and to only open files deemed safe and from legitimate sources.
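The advice on checking file extensions can be sketched in a few lines of code. The extension list below is an illustrative assumption, not an exhaustive blocklist, and a real mail filter would combine this with many other signals.

```python
import os

# Extensions commonly abused in malicious email attachments.
# This set is illustrative, not exhaustive; adjust it to your own policy.
SUSPICIOUS_EXTENSIONS = {".exe", ".scr", ".js", ".vbs", ".jar", ".bat", ".cmd"}

def is_suspicious_attachment(filename: str) -> bool:
    """Flag attachments with risky extensions or double extensions
    (e.g. 'invoice.pdf.exe', a classic phishing trick)."""
    parts = filename.lower().split(".")
    # More than one extension is a red flag in email attachments.
    if len(parts) > 2:
        return True
    _, ext = os.path.splitext(filename.lower())
    return ext in SUSPICIOUS_EXTENSIONS

print(is_suspicious_attachment("report.pdf"))       # False
print(is_suspicious_attachment("invoice.pdf.exe"))  # True
```

Note that this simple double-extension check would also flag benign names like archives with compound suffixes, which is why it belongs in a warning layer rather than a hard block.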
It won’t happen to me
Culture is arguably the biggest issue with security right now, and this has been the case for 20 years. CEOs think they won’t be targeted and citizens think much the same.
This complacency is misguided and often results in poor security habits, with individuals and organisations failing to take password and Wi-Fi security, for example, as seriously as they should.
This is despite the fact that good cyber security can be achieved easily, through good password hygiene, regular software updates, anti-virus and even password managers, VPNs and secure encrypted messaging apps.
Generic, guessable passwords can be easily cracked, opening a can of worms if the same password is used across several accounts. Brute-forcing passwords is increasingly fast and easy for criminals, who today are equipped with either huge computing power or access to such expertise for sale on the dark web.
Weak passwords such as 123456, password, 12345678 and qwerty remain commonplace, with many people failing to see how these ‘low-hanging fruit’ are an entry point for cyber criminals. According to Forrester, 80% of all attacks involve a weak or stolen password.
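Some back-of-envelope keyspace arithmetic shows why those passwords fall so quickly. The guess rate below is an illustrative assumption for an offline cracking rig, not a measured figure.

```python
# Assumed guess rate for an offline attack with modern GPU hardware;
# real rates vary enormously with the hashing scheme used.
GUESSES_PER_SECOND = 10_000_000_000  # 1e10 guesses/second

def seconds_to_exhaust(alphabet_size: int, length: int) -> float:
    """Worst-case time to try every password of the given length
    drawn from an alphabet of the given size."""
    return alphabet_size ** length / GUESSES_PER_SECOND

# Six digits ("123456"-style): a million combinations, gone instantly.
print(seconds_to_exhaust(10, 6))   # ~0.0001 seconds
# Eight lowercase letters: still well under a minute.
print(seconds_to_exhaust(26, 8))   # ~21 seconds
# Twelve characters from ~95 printable symbols: on the order of a
# million years even at this rate.
print(seconds_to_exhaust(95, 12) / (3600 * 24 * 365))
```

The asymmetry is the whole argument for long, varied passwords and password managers: each extra character multiplies the attacker's work by the alphabet size.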
Dismissing software updates
Whether on desktop, laptop or mobile, there’s always another software update for our apps, operating systems or cyber security software. Understandably, the constant pop-ups irritate us, but many people fail to grasp just how important they are.
If we fail to update, we’re effectively leaving our software and devices vulnerable to attack, as cyber criminals look to exploit out-of-date flaws. Had the organisations affected by WannaCry properly configured automatic operating system updates, they might not have been featured on the victim list.
The cyber attack on the computer networks of the National University of Singapore (NUS) and Nanyang Technological University (NTU) last week has once again cast the state of Singapore’s cyber security into the spotlight.
According to the Cyber Security Agency, the attack appeared to be the work of advanced persistent threat actors who were looking to steal information related to government or research. The two universities have close research links with Singapore government agencies through projects such as self-driving buses.
The attack should come as no surprise. With the removal of Internet access from the work computers of civil servants, it was only a matter of time before hackers found creative ways to access government-related information through so-called supply chain vulnerabilities.
What this means is that instead of targeting victim networks directly, cyber attackers simply exploit any software or network loophole of a victim’s suppliers or partners to get to the victim itself.
This has long been a concern in cyber security circles, since it can be difficult for organisations to enforce or prescribe specific cyber security measures for suppliers and partners – beyond broad service level agreements. Prior to the NTU and NUS incident, groups such as APT10 have already launched campaigns to steal data from organisations via their managed service providers (MSPs).
Besides MSPs, SMEs (small and medium sized enterprises) that provide services to large enterprises are also prone to supply chain attacks. Many SMEs do not have dedicated IT departments, let alone security teams to fend off potential attacks.
So what can organisations do? For now, there are few standards that address cyber security issues related to the supply chain. The Payment Card Industry Data Security Standard (PCI DSS) is one of them. It not only offers vendor management guidelines, but also specifies safeguards such as the use of encryption.
More importantly, organisations should put in place a vendor management programme that includes identifying the most important vendors and requiring strict documentation of controls and processes. The programme should also be integrated with an organisation’s compliance practices. You can find out more in this guide by SANS Institute.
As for SMEs, the Singapore government has been working with industry bodies to promote awareness of cyber security among smaller firms in recent years. But it is uncertain if these awareness programmes have the intended effect, going by the data breaches that continue to make headlines.
Beyond awareness programmes, more tangible support is needed to improve the cyber hygiene of SMEs. This could take the form of a shared service where experts conduct annual cyber security audits for SMEs to determine areas that can be improved. This will also address the shortage of cyber security expertise that many SMEs are facing today.
A new survey by Gartner has found that CEOs in the Asia-Pacific region believe that “conventional technologies” such as cloud, ERP, analytics and CRM will help them to improve productivity, rather than “technologies that support digital transformation such as artificial intelligence (AI), internet of things (IoT) and robotics”.
“Asia-Pacific CEOs want to increase profit margins while maintaining sales growth, and they expect IT to play a strong role in this,” says Partha Iyengar, vice president and Gartner fellow.
“The problem is that Asia-Pacific firms aren’t moving fast enough to capitalise on this potential. Their focus on conventional technologies will likely have less of a transformative effect than more innovative technologies.”
This is despite their awareness and understanding of the major impact that these key digital business technologies will have on their industry, Iyengar adds.
I beg to differ. While cloud, analytics, ERP and CRM may be deemed by Gartner as conventional technologies, they form the backbone that powers the so-called innovative technologies like IoT and AI in any modern enterprise.
After all, IoT – at its core – is all about collecting data from sensors and devices to generate insights to solve a problem, improve a business process or predict when a milling machine will break down.
How else can this be done without the use of big data analytics – and even AI in more advanced applications? Plus, most of this data sits on ERP systems (including cloud-based ones) that are increasingly being augmented with machine learning capabilities.
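The milling-machine example above can be made concrete with a small sketch. The vibration readings and the three-sigma threshold are invented for illustration; production predictive-maintenance systems use far more sophisticated models trained on historical failure data.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate from the rolling mean of the previous
    `window` readings by more than `threshold` standard deviations."""
    alerts = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        # A reading far outside the recent band may signal imminent failure.
        if sigma and abs(readings[i] - mu) > threshold * sigma:
            alerts.append(i)
    return alerts

# Hypothetical vibration readings (mm/s) from a milling machine sensor:
vibration = [2.1, 2.0, 2.2, 2.1, 2.0, 2.1, 2.2, 9.8, 2.1, 2.0]
print(flag_anomalies(vibration))  # [7] -- the spike stands out
```

Even this crude statistical check captures the essence of the IoT pipeline described above: sensor data flows into analytics, and the analytics, not the sensor itself, deliver the insight.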
Making the distinction between conventional and innovative technologies will not help organisations in crafting their digital strategy. Instead, the focus should be on identifying business problems and applying a suitable combination of technologies to solve those problems.
After all, who is to say that implementing cloud-based business applications will have less of a transformative effect if those investments can translate to improvements in customer experience, employee productivity and, in some cases, open new markets?
At the Red Hat Summit this week, scores of developers and business executives descended on Boston for one of the biggest gatherings of its kind in the global open source community.
The message to the thousands of participants was clear: the open source development model that brings together creators and users of software to solve business and societal problems is winning.
From Singapore’s myResponder app that activates volunteers within the vicinity of those suffering from heart attacks to the transformation of government services in Mexico, open source software has sparked some of the world’s most inspiring innovations.
While these open source powered initiatives are laudable, will they still accomplish their goals if the underlying technologies they are using aren’t open source?
In fact, most CIOs I’ve spoken with over the years don’t really care if a piece of software is open source or not. What matters more is that the software works as promised and solves their problems at a price that they are willing to pay.
During the summit, I asked Dirk-Peter van Leeuwen, Red Hat’s senior vice president and general manager in the Asia-Pacific region if open source still matters.
His take was that, more than ever, companies can’t ignore the cutting-edge innovations delivered at a rapid rate by open source communities if they are to compete in a fast-changing business landscape.
I agree to some extent, but I’d argue that those innovations aren’t necessarily exclusive to open source projects. With enough market demand, there will be suppliers – open source or not – that will develop what the market needs and at a pace that’s necessary to fulfil that demand.
I think the real strength of open source lies not so much in the final product, but in the values of collaboration and participation espoused by the open source community in software development.
Even if businesses pick a piece of proprietary technology over an open source one, they would do well to embrace those values in tackling the challenges that come their way.
Do you think open source still matters? Tell us more in the comments!
In Singapore, where labour is scarce and expensive, automating work processes to improve our productivity and free up time for higher-value work seems like a no-brainer.
Yet, it is only in recent years that more companies, especially small and medium-sized enterprises, in the city-state are heeding the call to automate, due in part to efforts by the government to encourage organisations to improve the productivity of their workers.
Automation, however, is not just limited to factory floors, restaurant kitchens and airport terminals. There’s room for automation in the office, too.
In Singapore, 60% of managers still spend three hours a day processing work e-mails, according to ServiceNow, a supplier of cloud-based automation tools. Many of these e-mail requests, especially internal ones, simply involve the basic processing of straightforward items, like purchase orders or invoices, says ServiceNow’s senior director Chris Pope.
Globally, employees spend 40% of their time on administration work, going by ServiceNow’s 2016 State of Work Survey. That’s equivalent to two full days per week which could be used for high value work.
But why aren’t we automating these inefficiencies out of our organisations? Corporate culture plays a part – some people may be resistant to change. Others might not know that certain manual processes can be automated.
In fact, Pope says the biggest challenge in automating work processes is in understanding all the manual processes that take place within an organisation. “Since they are so ingrained within organisations, often, people don’t have an awareness of every single function.
“Today, with the advancement of IoT technology, there is the ability to build sensors into so many more areas of working life, allowing real time monitoring of systems and then seamless automation of response processes, without human intervention,” he says.
While there’s no doubt that automation will do us good, it also needs to make sense for companies from a cost and technology perspective.
I remember talking to the CEO of a Singapore-based laundry service that switched from bar codes to RFID (radio frequency identification) chips to automate the tracking of laundry items, only to revert to using bar codes again within six months.
The CEO was seduced by the allure of tracking items in one fell swoop, which made sense from a productivity point of view, but not so much from a cost perspective, as RFID chips are still more expensive than printed bar codes.
When you’re only making a couple of dollars from washing a shirt, tagging each item with an RFID chip that costs about 50 cents clearly does not make sense despite the improvements in productivity.
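The laundry operator’s economics can be put in rough numbers. The figures below are illustrative assumptions drawn from the article’s “couple of dollars” and “about 50 cents” estimates; the bar code cost is an invented comparison point.

```python
# Back-of-envelope unit economics for tagging laundry items.
# All figures are rough assumptions for illustration.
revenue_per_shirt = 2.00  # "a couple of dollars" per wash
rfid_tag_cost = 0.50      # per-item RFID chip, per the article
barcode_cost = 0.02       # assumed cost of printing a bar code label

def margin_share(tag_cost, revenue):
    """Fraction of per-item revenue consumed by the tagging method."""
    return tag_cost / revenue

print(f"RFID eats {margin_share(rfid_tag_cost, revenue_per_shirt):.0%} of revenue")
print(f"Bar codes eat {margin_share(barcode_cost, revenue_per_shirt):.0%} of revenue")
# RFID consumes a quarter of the revenue on the tagged item; the
# productivity gain has to claw that back before the switch pays off.
```

The arithmetic, crude as it is, explains the reversal: a technology that is clearly superior operationally can still be the wrong choice at a given price point.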
So, yes: go ahead and automate, but not for the sake of it. More importantly, know what you’re getting into. The resources wasted on implementing a bad decision could have been put to more productive use.
Unless you have been living under a rock, you would have heard of the term “digital transformation” that has been bandied around over the past few years by technology suppliers big and small.
And why does it make a difference now, given that organisations, including governments, have been embracing technology to improve work processes and meet other business goals since the invention of the computer?
While digital transformation has been happening for decades – Singapore’s national computerisation programme, for one, started in the 1980s – I think what has been driving the resurgence of the term in recent years is the democratisation of technology, including artificial intelligence, and cheaper access to computing power in the form of cloud services.
Together, these factors have enabled a wider pool of organisations, beyond large enterprises with deep pockets, to tap into the so-called SMAC (social, mobile, analytics and cloud) technologies that have the potential to transform industries on a scale orders of magnitude greater than in the past two decades.
Stories of such organisations abound. We recently reported on how Mojo Power is using cloud-based microservices to shake up the energy market in Australia, by delivering power consumption data in real-time to households. Another upstart, Ninja Van, is using automation and algorithms to improve customer service and optimise delivery routes, disrupting the logistics industry in Southeast Asia.
Sure, those companies were born in the digital age and would undoubtedly take to digital transformation like a duck to water. But it isn’t just upstarts that have been making waves in digital transformation. Big boys like DBS Bank in Singapore and oil and gas giant PTT in Thailand are harnessing big data analytics to reduce trade anomalies and better understand customer needs, respectively.
One question that is often overlooked when we read about these success stories is: how did they do it? Perhaps more importantly, how did they overcome any resistance to change? And for companies such as traditional general insurance firms that have built their businesses around agents, how did they walk the tightrope between selling direct to consumers over the internet and ensuring their agents continue to keep their rice bowls?
Oftentimes, it’s never about the technology that determines whether an organisation is successful with any digital transformation effort. Building a culture of change is just as important – if not the most important ingredient – for success. After all, if people are resistant to change, you’re never going to get any buy-in, even if the benefits are obvious.
I remember driving a digital transformation pilot project years ago at an organisation that wanted to reach out to young consumers who were adept at using instant messaging (IM) as a form of communication. The idea was to roll out a live chat widget on the organisation’s website so customers can connect with service reps, some of whom had not used IM tools before.
The project was supported by the CEO, and I had put in place a change management plan that included training service reps on IM lingo and working with affected staff on the operational changes required to support an additional service channel.
While almost everything went according to plan and the pilot service was well-received by young customers, the culture of maintaining the status quo among some service reps and senior decision makers meant that the project did not go beyond its experimental phase.
These anecdotes hardly make it into the stuff we read on digital transformation, which tends to focus on success stories. Yet I’d argue that such ‘failures’ should be celebrated, as they are just as important in helping organisations understand what it takes to succeed in digital transformation.
In the science fiction drama series Humans, Laura feels displaced by Anita, a humanoid robot known as a synth that was bought by her husband to help with the household chores.
Imbued with human-like intelligence, Anita – and the rogue code in her that powers her alter ego Mia – does a great job at being a nanny to the kids and washing dishes. But unlike her fellow synths, she has a sense of consciousness, to the point of being manipulative at times.
While synths may appear to be far from reality, they don’t seem too far-fetched, given the rate at which artificial intelligence (AI) is advancing.
Consider Erica, the well-known human-like autonomous android who, in a recent documentary produced by The Guardian, says she (or it?) wants to explore the outside world.
Erica even cracks robot jokes and desires to move her arms and legs – which her creators at ATR Institute in Kyoto say will happen someday.
The thing is, Erica’s main creator Hiroshi Ishiguro created her not only for the oft-cited purpose of automating mundane tasks for humans, but to understand what makes us human, including what it means to interact with others, be creative and have a personality.
Assuming Ishiguro gets his way in representing humanity in robots, how should we interact with humanoids? What impact would they have on society, besides making us more efficient?
Should we treat them as humans, partnering us on factory floors and trading floors? Will they succumb to failings such as greed that have plagued mankind for thousands of years? Can we trust them to take care of our children?
Over the weekend, I had a conversation with my wife about whether she would trust a humanoid like Anita to take care of our son, like a nanny or domestic helper would. She couldn’t give me a clear answer, although I suspect she is wary of the bond that may form between the humanoid and our son – like how Laura felt in Humans.
And how should we interact with them? Do we bark out orders because they are machines and hence there’s no need for pleasantries? A friend recently recalled an executive who was criticised by his business associates for passing curt instructions to his personal assistant (PA) who was copied in his emails. The thing is, that executive’s PA is an AI programme and not a human being.
Today, discussions on human-machine interactions are mostly centred on feedback loops that enable us to correct the mistakes of machines, like how we would with a child. The social, cultural and psychological impact of AI is often not discussed enough, primarily because most of the rhetoric on AI today is shaped by technologists and business people, not anthropologists.
I believe that impact will be intertwined with how we view a robot, which may not be a person or a machine after all. As Dylan Glas, a robotics researcher at ATR, puts it, a robot may be a new ontological category that we can’t describe at present. The Japanese believe robots could even have souls, like all things do.
Lest you think we still have time to think things through, or that humanoids like Erica will never see the light of day, remember that it was only a few decades ago that the idea of a tablet computer was conceived by the creators of Star Trek. Before you know it, the Ericas and Anitas could well be in our midst. The question is, how will we cope with them?