Data Matters


June 19, 2017  11:52 AM

Forensic financial analysis software used to combat fraud

Brian McKenna

This is a guest blogpost by Ian Watson, CEO of Altia-ABM. It reflects his experience and judgement.

Specialist financial analysis software is now a crucial tool in the investigation and prosecution of crime. It not only increases the speed of criminal investigations but also enables prosecutions in circumstances where they would have been very difficult and time consuming previously.

The following two significant fraud cases show how prosecutions were made possible because specialist software can aggregate huge volumes of paper-based and electronic data into a standard format and demonstrate links between separate items of evidence in complex cases.

Housing fraud in Southwark

The first case involves Ibrahim Bundu, a homelessness case officer employed by Southwark Council, who was found to have committed serial housing fraud over a period of three years. Bundu fraudulently obtained council property for his mother, his ex-wife, his estranged wife and her aunt, and others who paid him in cash. In total, 23 properties were allotted to people who should never have received them, some of whom were not legally entitled to be in the UK.

Southwark Council, HMRC and the UK Borders Agency worked in partnership to establish the circumstances of the fraud.

Bundu had created reams of bogus documents including fake identification papers, references and forged medical certificates claiming that some of the applicants were pregnant and therefore should be considered a priority for accommodation.

The investigating officers used specialist software to bring together all of Bundu’s case notes and supporting documents over a three-year period. All of this documentation was combined with information held by HMRC and the UK Borders Agency and from this vast volume of data, investigators and prosecutors were able to pinpoint key evidence which demonstrated Bundu’s involvement in each suspicious case and ultimately led to his conviction.

The software was also used to establish the immigration status of the individuals he had assisted, enabling the UK Borders Agency to take action against those who were in the country illegally.

Investigators from all three agencies reported that they would not have been able to construct a viable case against Ibrahim Bundu without technology to compile this amount of data into a format in which forensic analysis could begin and his guilt could be demonstrated.

Southwark has one of the longest council house waiting lists in the UK and, it could be said, this fraud meant that people in genuine need were pushed further down the queue.

He was sentenced to four years in prison, later increased to six years after he failed to make reasonable efforts to pay back the £100,000 for which he was personally liable.

NHS fraud

The second case was a brazen £3.5m fraud siphoning money from NHS funds over a seven-year period before the perpetrators were brought to justice.

Neil Wood was a senior manager at Leeds and York Partnership NHS Trust and also worked with Leeds Community Healthcare Trust before moving to NHS England. Wood was responsible for the awarding of training contracts for NHS staff in all three roles. He awarded the vast majority of these contracts to a company called The Learning Grove, which was run by his friend Huw Grove. The Learning Grove gradually transferred a total of £1.8m to LW Learning Ltd, a company registered in the name of Lisa Wood – Neil Wood’s wife. While at NHS England, Wood awarded a training contract worth £231,495 to a company in Canada called Multi-Health Systems, which was run by Terry Dixon. Dixon was a contact of Neil Wood. He kept £18,000 of the money and transferred the rest back to LW Learning Ltd.

The investigation was conducted jointly by Police North East, HMRC and NHS Protect. Seven years of financial data had to be compiled into a format whereby investigators from the three organisations could identify the relevant financial transfers between the many bank accounts used in the UK and Canada and follow the money trail to show, beyond doubt, that these individuals had knowingly and intentionally committed the fraud.

Neil Wood, Huw Grove and Terry Dixon were sentenced to a total of 9 years, 8 months in prison, for fraud, abuse of position and money laundering. Lisa Wood was given a suspended sentence for money laundering.

Investigative teams from NHS Protect, HMRC and police forces nationally and internationally are increasingly relying on investigation software as a key weapon in their fight against crime.

My own company Altia-ABM has, I would say, been at the forefront of developing software to enable investigators to achieve more in a shorter time and to assist in the development of cases against criminals.

Our technology automates much of the data mapping and cross-referencing process, allowing trained and experienced investigative staff to home in on the key transactions that prove wrongdoing. It has been estimated that complex investigations take more than 10 times the man-hours to complete if the data is cross-referenced manually. Our software also generates documentation which is accurate enough to stand up in court and withstand close scrutiny.
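To make the idea of automated cross-referencing concrete, here is a minimal sketch in Python of following a money trail through a set of transfers compiled from several accounts. It is purely illustrative and not Altia-ABM’s product: the account names and amounts are invented, and it assumes the transfer graph contains no cycles.

```python
from collections import defaultdict

# Invented transfers compiled from several statements:
# (date, from_account, to_account, amount)
transfers = [
    ("2015-03-01", "FunderAccount", "ContractorLtd", 120_000),
    ("2015-03-20", "ContractorLtd", "ShellCoLtd", 85_000),
    ("2015-04-02", "ShellCoLtd", "PersonalAcct1", 60_000),
    ("2015-04-15", "ContractorLtd", "PersonalAcct2", 15_000),
]

# Index outgoing payments by source account.
outgoing = defaultdict(list)
for date, src, dst, amount in transfers:
    outgoing[src].append((date, dst, amount))

def trace(account, chain=None):
    """Follow money leaving `account`, yielding every complete chain."""
    chain = chain or [account]
    if not outgoing[account]:
        yield chain
        return
    for date, dst, amount in outgoing[account]:
        yield from trace(dst, chain + [f"{dst} ({date}, £{amount:,})"])

for path in trace("FunderAccount"):
    print(" -> ".join(path))
```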

June 12, 2017  10:55 AM

12 Months to GDPR: the year of metadata

Brian McKenna

This is a guest blogpost by Ciaran Dynes, senior vice president of Product, Talend.

The General Data Protection Regulation (GDPR) is a bit like Brexit for some: you secretly hoped the day was never going to arrive, but GDPR is coming and there will be major penalties if companies don’t have a strategy for how to address it.

Just under a year from now, on 25 May 2018, GDPR will come into force. That means all businesses and organisations that handle EU customer, citizen or employee data must comply with the requirements GDPR imposes. It forces organisations to implement appropriate technical and organisational measures so that data privacy and usage are no longer an afterthought.

GDPR applies to your organisation, regardless of the country in which it’s based, if it does any processing of personal data from European citizens or residents. So, depending on how your organisation manages personal data on behalf of its customers – for example, how it handles “opt-in” clauses – GDPR could become your worst nightmare in the coming year if you aren’t properly prepared.

As an industry, we talk a lot about digital transformation, being data-driven, data being the new oil, and any other turn of phrase you might consider, but for a moment spare a thought for metadata. Metadata is your friend when it comes to addressing the many requirements stipulated by GDPR. Of course, metadata has been in the news for different reasons in the recent past, but I would reiterate that it is critical to solving GDPR.

The regulation applies if the data controller (organisation that collects data from EU residents) or processor (organisation that processes data on behalf of data controller e.g. cloud service providers) or the data subject (person) is based in the EU.

Does GDPR apply to your company?

If the answer is ‘yes’ to any of the following questions, then GDPR should be a high priority for your company:

  • Do you store or process information about EU customers, citizens or employees?
  • Do you provide a service to the EU or persons based there?
  • Do you have an “establishment” in the EU, regardless of whether or not you store data in the EU?

Where to begin when addressing GDPR for your customers

First, you need to understand the rights that your customers have with regard to their personal data. GDPR sets out many requirements around personal data privacy.

For example, your systems will need to support the following GDPR data privacy rights:

  • Customer has the right to be forgotten
  • Customer has the right to data portability across service providers
  • Customer has the right to accountability and redress
  • Customer has the right to request proof that they opted in
  • Customer is entitled to rectification of errors
  • Customer has the right of explanation for automated decision-making that relates to their profile

In a world where customer data is ‘king’, being captured by the terabyte, you need a controlled way to collect, reconcile, and recall data from multiple, disparate sources in order to truly comply with GDPR regulations. It should be stated that GDPR impacts all lines of business, not just marketing, so a holistic approach is fundamentally required in order to be compliant with the regulations. That’s where metadata comes in.
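As a minimal sketch of what that controlled collection and reconciliation might look like, the Python below gathers everything held on one person from a set of disparate sources so it can be reviewed, exported or erased. The source systems and lookup functions are invented for illustration; a real deployment would sit behind proper connectors and a data catalogue.

```python
from typing import Callable, Dict, List

# Invented lookups standing in for real connectors to line-of-business systems.
def crm_lookup(email: str) -> List[dict]:
    return [{"system": "crm", "field": "email", "value": email}]

def marketing_lookup(email: str) -> List[dict]:
    return [{"system": "marketing", "field": "newsletter_opt_in", "value": True}]

def billing_lookup(email: str) -> List[dict]:
    return [{"system": "billing", "field": "last_invoice", "value": "2017-05-30"}]

SOURCES: Dict[str, Callable[[str], List[dict]]] = {
    "crm": crm_lookup,
    "marketing": marketing_lookup,
    "billing": billing_lookup,
}

def subject_records(email: str) -> List[dict]:
    """Collect every record held about one data subject across all sources."""
    records: List[dict] = []
    for lookup in SOURCES.values():
        records.extend(lookup(email))
    return records

# The same consolidated view serves access, portability and erasure requests.
print(subject_records("jane@example.com"))
```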

The value of metadata

In order to have a complete view of all the data you have about a person, you need to have access to the associated metadata.

Metadata sets the foundation for compliance as it brings clarity to your information supply chain, for example:

  • Where does data come from?
  • Who captures or processes it?
  • Who publishes or consumes it?

This critical information is the backbone of a data governance practice capable of addressing GDPR. Your organisation needs to define policies such as anonymisation, ownership and data privacy throughout the organisation, including an audit trail for proof of evidence should an auditor arrive at your door.
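As an illustration of the kind of lineage record those questions point towards, here is a small Python sketch of per-dataset metadata with an audit trail. The field names are assumptions made for this example, not a Talend schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class AuditEntry:
    timestamp: datetime
    actor: str
    action: str                          # e.g. "anonymised", "exported", "deleted"

@dataclass
class DatasetMetadata:
    name: str
    source: str                          # where does the data come from?
    processors: List[str]                # who captures or processes it?
    consumers: List[str]                 # who publishes or consumes it?
    contains_personal_data: bool
    lawful_basis: str                    # e.g. "consent", "contract"
    audit_trail: List[AuditEntry] = field(default_factory=list)

customers = DatasetMetadata(
    name="customer_profiles",
    source="web_signup_form",
    processors=["crm_team"],
    consumers=["marketing", "support"],
    contains_personal_data=True,
    lawful_basis="consent",
)
customers.audit_trail.append(
    AuditEntry(datetime.utcnow(), "dpo", "reviewed retention policy")
)
print(customers.name, "->", [entry.action for entry in customers.audit_trail])
```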

Stephen Cobb of welivesecurity.com has published a great article on GDPR in which he compiles the following list of the key implications of the forthcoming regulation, including financial consequences and costs. I strongly recommend reading the article in full.

11 things GDPR does

  1. Increases an individual’s expectation of data privacy and the organization’s obligation to follow established cybersecurity practices.
  2. Establishes hefty fines for non-compliance. An egregious violation of GDPR, such as poor data security leading to public exposure of sensitive personal information, could result in a fine of millions or even billions of dollars (there are two tiers of violations, and the higher tier is subject to fines of up to 20 million euros or 4% of the company’s worldwide annual turnover, whichever is higher).
  3. Imposes detailed and demanding breach notification requirements. Both the authorities and affected customers need to be notified “without undue delay and, where feasible, not later than 72 hours after having become aware of [the breach]”. Affected companies in America that are accustomed to US state data breach reporting may need to adjust their breach notification policies and procedures to avoid violating GDPR.
  4. Requires many organizations to appoint a data protection officer (DPO). You will need to designate a DPO if your core activities, as either a data controller or data processor, involve “regular and systematic monitoring of data subjects on a large scale.” For firms who already have a chief privacy officer (CPO), making that person the DPO would make sense, but if there is no CPO or similar position in the organization, then a DPO role will need to be created.
  5. Tightens the definition of consent. Data subjects must confirm their consent to your use of their personal data through a freely given, specific, informed, and unambiguous statement or a clear affirmative action. In other words: silence, pre-ticked boxes, or inactivity no longer constitute consent.
  6. Takes a broad view of what constitutes personal data, potentially encompassing cookies, IP addresses, and other tracking data.
  7. Codifies a right to be forgotten so individuals can ask your organization to delete their personal data. Organisations that do not yet have a process for accommodating such requests will need to establish one.
  8. Gives data subjects the right to receive data in a common format and ask that their data be transferred to another controller. Organisations that do not yet have a process for accommodating such requests will need to establish one.
  9. Makes it clear that data controllers are liable for the actions of the data processors they choose. (The controller-processor relationship should be governed by a contract that details the type of data involved, its purpose, use, retention, disposal, and protective security measures. For US companies, think Covered Entities and Business Associates under HIPAA.)
  10. Increases parental consent requirements for children under 16.
  11. Enshrines “privacy-by-design” as a required standard practice for all activities involving protected personal data. For example, in the area of app development, GDPR implies that “security and privacy experts should sit with the marketing team to build the business requirements and development plan for any new app to make sure it complies with the new regulation”.

But there is much more…

All of the above points are noteworthy, but as a parent of three children, I think #10 is worth a special callout. If organisations are gathering data from underage people, they must have systems in place to verify ages and gain consent from guardians.

Article 8 of the GDPR requires that companies:

  • Identify who is, or is not, a child
  • Identify who the parents or guardians of those children are.

So as you see, GDPR puts an enormous onus on any organisation that collects, processes and stores personal data for EU citizens. I’ve got a feeling that 2018 will be the year metadata becomes even more important than we ever previously considered. For more on metadata management, see how Air France-KLM is using Talend Metadata Management to implement data governance with data stewards and data owners to document data and processes.


June 9, 2017  11:12 AM

Humans and machines: the partnership

Brian McKenna

This is a guest blogpost by Yasmeen Ahmad, Director of Think Big Analytics, Teradata

Will you be superseded by an intelligent algorithm? Despite the fear mongering of robots taking over the world (let alone stealing our jobs), it will be a significant length of time before machines are versatile enough to do the breadth of tasks that humans can. Automating the human mind is out of reach for now but we can leverage intelligent algorithms to support and automate certain levels of decision making. Our focus is thus on augmentation – machines and humans working collaboratively.

Here are three predictions for the ever-developing relationship of the human analyst and the algorithmic machine:

  1. The ongoing need for human expertise

In the future, there will be many jobs that exist alongside smart machines, either working directly with them or doing things they cannot. But which jobs are going to get displaced, and which enhanced?

Although algorithms and AI can automate more and more labour-intensive roles, they are not yet able to complete complicated tasks such as persuading or negotiating, and they cannot generate new ideas as efficiently as they can solve problems. As a result, jobs that require a certain level of creativity and emotional/social intelligence are not likely to be superseded by algorithms any time soon. It’s likely job titles such as entrepreneur, illustrator, leader and doctor will stay human for now.

In addition, acting upon the intelligence of machines will still require a human in many cases. A sophisticated algorithm may predict a high risk of cancer, but it is the doctor who will relay that information to a patient. Self-driving cars may move us from point A to point B, but it is the human who will be the ultimate navigational influence, deciding the destination of the journey and changes along the way.

As these applications of intelligent machines develop, the most advanced technology companies have kept their human support teams. When there is an issue with automated processes, the fixing is often carried out by a human. The need for onsite human expertise dealing with smart machines is not being eliminated: the new systems require updates, corrections, ongoing maintenance and fixes. The more we rely on automation, the more we will need individuals with the relevant skills to deal with the complex code, systems and hardware. This creates a raft of new careers, disciplines and areas of expertise not existing today.

  2. Humans will adapt skillsets to sync with machines

The jobs of tomorrow do not exist in the job ads of today. We will need to change human skillsets, and become digital-industrial people. But what exactly does this mean?

In the future machines will automate a range of tasks and survival will belong to those who are most adaptable and learning agile. Digitalisation and algorithms are creating a new interface between the worker and end task. Humans are now faced with dashboards providing indicators of machine performance. Interpreting, understanding and acting upon the data in this dashboard becomes the task of the future. A new interface is emerging for the human.

This new interface drives a change in the skillsets required. In order to adapt to the possibilities that Artificial Intelligence creates, businesses globally will have to hire a multitude of individuals who are data and digital savvy, as well as understand how to interact with machine interfaces. We will see the continued rise of new teams with data and analytical expertise to create the intelligent algorithms of the future.

Not only has technology opened up new jobs and departments within businesses, but it’s also created the requirement for completely new organisations and business models. Siemens is an example of a traditional rail business transforming from selling trains to providing an on-time transportation service. The need for data and analytical expertise is only likely to increase as analytical automation grows: autonomous vehicles will still need mechanics, as will the self-driving systems within those vehicles.

  3. Humans in the loop: a new role will be established for analysts and business users

As we embrace AI and deep learning algorithms that automate the detection of insights, we must not lose sight of the importance of the analyst who deploys the algorithm and the user who consumes insights.

Analysts explore data, generating new ideas, being creative and solving problems by using algorithms. Machine learning and AI models will be able to harness complex data and make more accurate predictions, but it is still the human analyst that will make the decisions on what type of data to feed the algorithm, which algorithms to deploy and how best to interpret the results.

As algorithms create more and more predictions, can we leave all decision making to automated algorithms? Is there a danger that this automation will become a crutch for business users – allowing human judgement to be overlooked? It is crucial that business users are equipped to understand the value of human judgement and how to manage algorithms making questionable decisions.

If CIOs want to take the lead in introducing AI to their organisations, they should begin to identify which business processes have cognitive bottlenecks, need fast and accurate decisions or involve too much data for humans to analyse. These are the areas that can be positively impacted by human analysts leveraging algorithmic machines.

When it comes to the next step for businesses globally, augmentation with smart humans alongside smart machines is the most likely future.


May 22, 2017  10:35 AM

Multi-model databases as a way to tame data management complexity

Brian McKenna

This is a guest blogpost by Luca Olivari, president, ArangoDB.

Data is one of today’s biggest assets, but organizations need the right tools to manage it. That’s why, during the last decade, we’ve seen new data-management technologies emerging and getting past the initial hype. The so-called “big data” products (Hadoop and its surrounding ecosystem first, and NoSQL immediately thereafter) promised to help developers go to market faster, administrators reduce operational overhead and spend less time on repetitive tasks, and ultimately companies innovate faster.

The need for change

One could argue there’s been some success, but the craving for new and different approaches has given birth to hundreds of products all addressing a specific niche. Key value stores are blazingly fast with extremely simple data, document stores are brilliant for complex data and graph solutions shine with highly interconnected data.

Every product solves part of the problem in modern applications. But beyond a steep learning curve (in truth, many steep learning curves), keeping your data consistent, your application fault-tolerant and your architecture lean becomes all but impossible.

Teams were forced to adopt far too many technologies, resulting in the same issues they faced at the start: complexity and inelasticity. NoSQL companies realized they were narrowing their use cases too much, and started to add new features in the direction of relational data models.

Relational incumbents reacted: vendors added document-based or graph-based structures and features, removed schemas and generally tried to mimic the characteristics that made NoSQL successful at first, and only semi-successful in the end.

Relational databases have been so successful for one main reason: broad applicability and adaptability. Every developer knows relational, and it comes to mind as the first underlying data-management technology when building something new, regardless of the application itself. True, relational databases are good for almost everything, but they rarely excel.

There’s something wrong with two worlds colliding

Of course, you can take a petrol car, find a way to put a battery plus electric motor in and call it an electric car. You’ll end up with low range, low performance and no users. The huge success of Tesla is by design. A Tesla is superior because it is designed from scratch for e-mobility.

The reality is that the underlying architecture is so important that something not built from the ground up will never be as effective as a product conceived with the end in mind. Exponential innovations are architected in a different way, and that’s why they are disruptive.

This is happening in the database world as well. There’s a new category of products that’s solving old issues in a completely different way.

Native multi-model databases

Native multi-model databases, like my own company’s ArangoDB, are built to process data in different shapes: key/value pairs, documents and graphs. They allow developers to naturally use all of them with a simple query language that feels like coding. That’s one language to learn, one core to know and operate, one product to support, thus an easier life for everyone.
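As a minimal sketch of what that looks like in practice, the Python below stores documents, links them with an edge, and runs a graph traversal through a single query language. It assumes a local ArangoDB instance and the python-arango driver; the database name, collections and credentials are invented for illustration.

```python
from arango import ArangoClient

client = ArangoClient(hosts="http://localhost:8529")
db = client.db("example", username="root", password="passwd")

if not db.has_collection("users"):
    db.create_collection("users")                 # document (and key/value) model
if not db.has_collection("knows"):
    db.create_collection("knows", edge=True)      # graph model

users = db.collection("users")
users.insert({"_key": "alice", "name": "Alice", "city": "London"})
users.insert({"_key": "bob", "name": "Bob", "city": "Leeds"})
db.collection("knows").insert({"_from": "users/alice", "_to": "users/bob"})

# One query language (AQL) covers key/value lookups, document filters
# and graph traversals; here, who does Alice know?
cursor = db.aql.execute(
    "FOR friend IN 1..1 OUTBOUND 'users/alice' knows RETURN friend.name"
)
print(list(cursor))   # ['Bob']
```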

Let’s quantify the benefits for a fictitious Fortune 500 customer. When you have an army of tens of thousands of developers and so many different on-premise or cloud based databases to administer, even a small improvement in productivity means a lot. An approach like ours allows you to build more things with fewer things and simplify your stack in the process. You could say the mission is to improve the productivity of every developer on earth.


April 21, 2017  10:24 AM

Beware the AI black box

Brian McKenna

A guest blog by Matt Jones, Analytics Strategist at Tessella 

Big tech vendors have been piling into analytics and now AI. IBM’s Watson has been charming the media with disease diagnosis and Jeopardy prowess, and Palantir has been finding terrorists. Other major vendors such as Oracle and SAP have also been joining the scene, albeit with slightly less fanfare.

These companies have been leading tech innovation for years; one would expect them to offer a credible AI solution, and indeed their technology is good.

But the black box approach that many such companies currently offer presents problems. The first is that analytics is not a plug and play solution; it needs to be built with an understanding of data and context.

The second is that, in buying a black box solution, you lose control of your data. This means you are not sure what it’s telling you, you allow others to benefit from it for free, and you may not be able to access it in future.

And now you may even be in for a big bill for the privilege. A recent court case ruled that drinks company Diageo had to pay SAP additional licence fees for all customers that indirectly benefitted from SAP software used in its organisation, costs that could run to £55m.

If upheld, this could have a chilling effect on the analytics platform industry. Could every beneficiary of data insights across and beyond your organisation now need a licence? Companies should now think a lot harder before committing to a platform on which their whole business relies.

Your data is your company’s lifeblood – value it

Even before this case, it was stunning how much control of their data companies were willing to give up to vendors, and how little thought went into the consequences of over-reliance on one technology.

Vendor lock-in is nothing new, of course: consumers have been allowing Apple and Google to track their every move, and global businesses have been putting everything in the Microsoft cloud, for years.

But data and AI represent this problem on steroids. Data projects done right are embedded throughout the entire organisation. Some solutions will suck in all your data from every system, lock it away, and even refuse you access to it. So these platforms have all your data, the context, and the insights it provides. And now they have even more power to charge you to benefit from it.

How to plan for an analytics solution

So, how can you benefit from data analytics without storing up future problems?

Before you even think about technology, get the right mix of technical and business people to look at what your business needs to achieve and how data can help you. Then get people in with data science expertise who can explore how your data can be used to support those needs.

Only then should you look at what platforms you need to achieve this. The most powerful is not necessarily the most suitable; find one suited to your needs. In doing so, consider licensing models and what is demanded of your data – does it leave your site? Can you access it when you need to? Look at the company’s overall culture – are they transparent or opaque? This will guide you in how they are likely to handle your data.

Perhaps more important is to consider whether you need a black box at all. Google, Microsoft and Facebook, amongst others, all offer openly available Artificial Intelligence (AI) APIs on which anyone can build bespoke AI or machine learning platforms – ones as sophisticated as any black box on the market. Furthermore, this gives you complete control over, and transparency into, how the data is fed in, processed and presented, so you can identify causal links between data and outcomes rather than having to trust that someone else’s insights into your business are correct.

If you do need a black box solution – and there are times when they are the right option – you should ask whether the vendor is a partner or just a platform. Do they understand your business context? Do they integrate with your particular data setup? Do they leave you with control of your data? Do they make the data analysis process clear, so you can understand whether your business insight is based on causal links or just an unsupported pattern spotted in the data?

The approach you take should be driven by the most appropriate approach to solving your challenge or finding the insight you require to make better decisions – not by the platform itself – and it should consider what level of data control and oversight you are willing to give up. Once you have properly defined that, then you can make the best decision about how to use your data to meet those goals.


March 23, 2017  1:59 PM

Optimisation challenges in manufacturing: avoiding swamps and siloes

Brian McKenna

This is a guest blogpost by Stuart Wells, Chief Technology Officer, FICO.

Recent research by Industry Week found that 73% of manufacturing companies recognise the potential benefits of successful supply chain optimisation (SCO) projects. The same report highlights that while manufacturers have been investing heavily in SCO projects for a long time, and continue to do so, not all of these projects achieve the return on investment they potentially could.

Even though SCO has been around for years, manufacturing firms still struggle to implement optimisation effectively. Each organisation faces its own challenges carrying out those projects, but there are two key issues that most manufacturers share: data swamps and siloed legacy systems.

Data and integration: the key challenges

The ‘Big Data’ hype pushed firms to invest in storage tools and solutions that quickly collected as much data as possible. However, very few businesses thought ahead, and having a lot of data without a plan for how to use it does not solve any issues, or add any business value.

Manufacturing firms are also struggling to find off-the-shelf applications that fit their existing environment. Ripping and replacing all current tools and platforms is not an option, as many contain critical business data. Many manufacturers struggle with data dispersed across disconnected systems and platforms. This limits their ability to optimise business and process decisions.

Over 85% of organisations tell Gartner that they are unable to exploit Big Data for competitive advantage, while at the same time 95% of IT decision makers expect Big Data volumes and the number of data sources they use to grow further. There is a critical industry need to evolve from classic data management to an approach that leverages information and knowledge across all platforms and systems.

Shell and Honeywell: how optimisation paved the way to success

One company that has taken this productive approach and implemented a successful optimisation project is Honeywell Process Solutions. In many oil refineries, and other companies in the continuous process industries, the production schedule is created through a surprisingly low-tech approach: humans working with spreadsheets. Manual scheduling is restricted by the analytical limitations of the human brain and is prone to error, so Honeywell Process Solutions developed an analytics-powered optimisation tool.

The company used mathematical algorithms to analyse hundreds of variables in a short time to determine the best solution out of many thousands of possible scheduling scenarios. This has driven significant economic impact. For example, the downstream effect of scheduling demand-driven production of 100,000 barrels equates to an annual profit increase of more than £2.3 million.
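To give a flavour of what such an optimisation engine does, here is a deliberately tiny Python sketch using SciPy’s linear programming solver to pick a production schedule that maximises profit within capacity and demand limits. It is not Honeywell’s tool; the products, margins and constraints are invented.

```python
from scipy.optimize import linprog

# Decision variables: thousands of barrels of product A and product B to schedule.
# Profit per thousand barrels (negated because linprog minimises).
profit = [-4.2, -3.1]

# Capacity constraints: total crude intake and processing hours.
A_ub = [[1.0, 1.0],     # barrels of A + barrels of B <= crude available
        [2.0, 1.5]]     # processing hours per thousand barrels
b_ub = [100.0,          # 100k barrels of crude available
        180.0]          # 180 processing hours available

bounds = [(0, 80), (0, 70)]   # demand caps per product

result = linprog(profit, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("schedule (thousand barrels):", result.x)
print("expected profit (arbitrary units):", -result.fun)
```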

Optimisation is not only useful in production scheduling. Shell, a global group of energy and petrochemicals companies, deployed powerful optimisation tools to improve asset utilisation and the management of maintenance requirements at its chemical plants, while improving plant stability and profitability. Shell has nearly 600 advanced-process control applications around the globe, with each processing plant using its own set to run plant operations. It needed a solution that would provide plant-wide control and help maximise the economic operating benefits for each plant.

For Shell, it was all about making the plants as safe as possible. The team at Shell deployed a platform that runs calculations in real-time by balancing numerous constraints and objectives. After the system calculates the best actions, the onsite team interacts with a visual tool that gives them the control and flexibility to explore trade-offs and make the best possible decisions for the plant. The tool provides recommended actions for every situation, as well as the optimal values for flows and temperatures, which keeps the plant and its crew safe.

Innovate to stay in the game

Manufacturers who use data and analytics in innovative ways will be ahead of their competitors, according to the Centre for Data Innovation. The application of analytics can help many business areas gain significant advantage, in particular when it is used for optimisation as analytic models help organisations to reduce inefficiencies.

Optimisation can help to enhance your workforce and line scheduling, better mix products to meet quality standards, improve production planning, enhance truck loading, optimise production across plants, and rebalance inventory. Like Honeywell and Shell, manufacturers must embrace advances in technology and innovate if they are to thrive in the data-driven marketplace.


March 15, 2017  2:44 PM

Team messengers: productivity enhancer or productivity killer?

Brian McKenna

This is a guest blog post by Bhavin Turakhia, Founder and CEO of Flock

Today, collaboration is key to a company’s success. Yet it is critical to understand that collaboration itself can be a roadblock to optimum productivity. This has always been the case. For years, we have spent our time and energy on not-so-productive collaborative activities such as emails and meetings. For example, meetings have been known to consume a large share of our working hours, in many cases 40-50% of the workday.

In an environment where organisations are increasingly going global and teams are dispersed across time zones, teammates are required to be far more agile and quick when responding to one another to complete tasks in a timely manner. Email isn’t the right tool for employees needing instant responses from their global teammates, and thus, team messaging apps have emerged as a collaboration tool that’s much more conducive to productivity. Team messengers deliver exactly what most organisations need to stay relevant and competitive in today’s business landscape — faster sharing of information, increased efficiency, and the ability to make quicker decisions.

In fact, team messengers are touted as the collaboration tool of choice for the workplace of the future. No wonder some of the biggest companies in the world have jumped into the enterprise and team messaging chat space.

However, one of the most common concerns around the use of team messengers is whether they are really helping us become more productive or doing just the opposite by way of regular interruptions to our workflow. So should we stop collaborating to keep ourselves more productive at work? Of course not! Without collaboration, a business will stop working like a well-oiled machine. The answer lies in collaborating more effectively and ensuring that team messengers are used in a way that optimises productivity.

Eliminating workplace communication challenges

One of the commonly cited problems with the use of team messengers is that when conversations flow in real time, it’s difficult to pretend that a message went unnoticed (and therefore unanswered), whereas most people do not feel obliged to respond to emails immediately. In the world of fast, real-time communications, not responding immediately is unacceptable, and worse, it can even be perceived as rudeness – just like ending a face-to-face conversation abruptly. However, team messengers help workers tackle this problem by allowing them to create a time-blocking schedule through a simple ‘Do Not Disturb’ feature and notify the teams they’re part of that they do not wish to be disturbed at that time. This way, if they receive a message during this ‘blocked out’ time, they can delay their response without offending anyone.

One of the main reasons why people remain glued to their messaging apps or emails – even when messages do not involve them directly – is the fear of missing something important. Doing so hurts their productivity to a great extent. This is another area in which certain team messengers emerge as a better way to coordinate and communicate with colleagues.

Enterprise messaging apps allow users to simultaneously be part of multiple teams. They can access all conversations that concern them from a single interface and set notifications for critical projects or discussions. At the same time, they can also mute less important conversations or groups that they can respond to at leisure, for example the office football group. Or they can simply activate the ‘do not disturb’ mode. These features stop unnecessary messages diverting users’ attention and allow them to focus on their most critical tasks.

Blocking distractions: using team messengers effectively

Unlike email, which is both interruptive and slow to elicit responses, team messengers not only help users increase their productivity but also help them work more efficiently by offering time-saving features such as sharable to-do lists, file sharing, team polls and much more.

Team messaging apps also offer an effective platform for teams to share knowledge that often gets lost among big teams. This is a major plus considering that the average knowledge worker experiences a 20% productivity loss while looking for company information or colleagues who can help with specific tasks.

Unlike other traditional forms of collaboration such as email, team messengers allow various app integrations. For example, Google Drive can be integrated within a team messenger, allowing users to collaborate on documents while accessing their messages within one place. Interestingly, businesses can also build and integrate customised apps into their team messenger, to suit their unique needs. For example, companies can customise the Meeting scheduler app that helps users schedule meetings, invite participants, get participant feedback on meeting slots, and view team calendars at a glance – all from within their team messenger.

Thus, when used properly, team messengers are not a productivity killer. Their features enable users to disconnect from the world at large when they want to focus on work, and to be notified if a message concerns something urgent and collaborate in real time when they need to. Some tools also allow users to pin their most important conversations to the top of their chat window, so that they can filter chats by level of importance. This way, nothing important gets missed.

As most users will attest, interruptions are a necessary evil when it comes to collaboration. Therefore, it is important for employees across teams and functions to learn to balance collaboration with task management. When used properly, team messengers enable employees to manage their time while collaborating with their teammates seamlessly, in real time. In this way, collaboration tools can be seen as a solution to the productivity vs. collaboration conundrum.


March 14, 2017  4:04 PM

Customer experience driving the convergence of AI and BPM

Brian McKenna

This is a guest blogpost by Gal Horvitz, CEO, PNMsoft

Artificial Intelligence (AI) refers to a wide variety of algorithms and methodologies that enable software to improve its performance over time as it obtains more data. This technology is currently the hottest thing in the business process management (BPM) industry and it is continuing to heat up. In fact, it’s on fire!

But, the funny thing is that although we see a lot of innovation with AI algorithms and methods, the concept of AI isn’t new. What is really new is how businesses are using AI. We like to call it the new generation of AI – one example is deep reasoning – which breaks AI’s traditional dependency on known datasets. Deep reasoning performs unsupervised learning from large unlabeled datasets to reason in a way that can be applied much more broadly. In other words, AI can “learn to learn” for itself.

Forrester Analyst Rob Koplowitz’s February 2017 report, “Artificial Intelligence Revitalizes BPM,” – where PNMsoft was one of the companies interviewed – explains that:

“The primary driver for BPM investments just two years ago was cost reduction through process optimization. Today it is customer experience, with enterprises expecting to put top priority on digital automation in two years.”

AI has the ability to take human cost and latency out of processes, as well as provide new interfaces that customers enjoy. With faster and more user-friendly operations, customers are happier, stay loyal to the business and are more likely to buy more products and/or services from their current provider.

That’s why it’s no surprise that customer experience (CX) and business transformation are expected to skyrocket to the top two primary focuses of businesses looking to improve their processes.

But, in order to progress the practice of AI, companies must first feed the system initial data for the AI algorithms to analyze and suggest data-driven improvements from deep reasoning. Forrester reports 74% of firms say they want to be “data-driven,” but only 29% are actually successful at connecting analytics to action. If these companies want to drive positive business outcomes from their data, they must have actionable insights. Enterprises are realizing this is the missing link and have begun to invest in and grow large sandboxes of data sets that will ultimately help build the AI algorithms that can inspire significant digital transformation for their businesses.

We agree with Brent Dykes, director of data strategy at Domo, that there are several key attributes of actionable insight. We will break them down for you so that you can put them into motion and begin to build the infrastructure needed to put AI and BPM to work.

  • Alignment – Make sure the data you’re gathering directly feeds into the success metrics and key performance indicators (KPIs) you desire.
  • Context – Determine why you need the data in the first place. Do you need it to make comparisons or benchmark your success? Context will enable your AI to make more accurate predictions.
  • Relevance – Pulling the right data at the right time will help narrow down which actions need to occur and when.
  • Specificity – The more specific the data is, the better sense of the next action to take will be.
  • Novelty – When analyzing customer behavior, it is easier to spot a one-time occurrence over something that has repeatedly happened. Novelty occurrences are areas to pay close attention to and AI will be able to pinpoint them quickly.
  • Clarity – How the data is communicated determines whether it can be acted on. If the data is not communicated well, its meaning can be lost in translation.

AI or machine-learning technology can bring huge benefits to any industry. Here’s an example of a healthcare organization that has adopted data-driven AI and has connected analytics to action.

  • By analyzing large amounts of medical data, the healthcare organization’s AI is helping clinicians give faster and more accurate treatment to their patients, and has the ability to learn to make better decisions going forward. For patients, the AI-driven healthcare system alleviates some of the burdens on a system struggling to keep up with ever-growing demand. By implementing these technologies, the organization can make better health decisions, diagnose disease and other health risks earlier, avoid expensive procedures, and help their patients live longer – which are all actionable insights driven from data and analytics.

Moving forward, we will continue to see companies adopt new technologies, like AI, as a means to improve their bottom lines and their efficiencies.


March 3, 2017  11:41 AM

Practical Machine Learning: let’s demystify the tech revolution

Brian McKenna

This is a guest blogpost by Kirk Krappe, author, CEO and chairman of Apttus

The thing about technology evolution that the movies don’t quite tell you is that it desperately relies on adoption. The most amazing invention on the planet could be created, but if there’s no widespread consumer demand, or if it’s prohibitively expensive for businesses, that breakthrough isn’t going to actually break through. The old Henry Ford quote is more relevant than ever: “If I had asked people what they wanted, they would have said faster horses.” However, by demonstrating the value and efficiency of new tools, we can push modern business forward exponentially – and unignorably.

To wit, the most exciting thing going on right now is the rapid acceleration of the machine learning space, which often gets associated with artificial intelligence and advances in bot technologies. Here we have a topic that has been researched, built out, and implemented for many years now, but which is finally breaking through into mainstream usage. Why is this? We can look principally to two related advancements in the field.

  1. Machine learning is more practical, accessible and applicable than ever before.
  2. As a direct result, it is more affordable and useful than it’s ever been, increasing its adoption and making it part of mainstream business.

If you’re looking to demonstrate value, the most direct example is that of sales. Particularly in the enterprise world, where many transactions rank in the six-, seven- or eight-figure range, getting an extra bit of speed and efficiency into the equation makes a tremendous difference. Now consider this: CRM systems and other financial tools employed by enterprise companies still rely on manual input from their sales teams to pick out the best products, the most appropriate discounts, and perfect pricing. Machine learning has changed the game here. By accessing historical and predictive data, the tools to create a perfect sale, optimized for both the salesperson and their customer, are at their fingertips. Product recommendations no longer rely on either party knowing everything about their business – which is how opportunities get missed on both sides. Geo-specific, industry-specific, and even future needs can be acknowledged and addressed in moments, creating a better experience for everyone involved.
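As a rough illustration of the idea (and not Apttus’s implementation), the Python sketch below suggests products and a discount for a new opportunity by looking at what similar customers bought historically and the typical discount they received. All deal records here are made up.

```python
from collections import Counter
from statistics import median

# Invented historical deals: industry, region, products sold, discount given.
historical_deals = [
    {"industry": "retail", "region": "EMEA", "products": ["CPQ", "CLM"], "discount": 0.12},
    {"industry": "retail", "region": "EMEA", "products": ["CPQ"], "discount": 0.10},
    {"industry": "retail", "region": "APAC", "products": ["CLM"], "discount": 0.08},
    {"industry": "banking", "region": "EMEA", "products": ["CLM", "Billing"], "discount": 0.15},
]

def recommend(industry: str, region: str, top_n: int = 2):
    """Suggest products and a discount based on the most similar past deals."""
    similar = [d for d in historical_deals
               if d["industry"] == industry and d["region"] == region]
    if not similar:   # fall back to industry-only, then to everything
        similar = [d for d in historical_deals if d["industry"] == industry]
    if not similar:
        similar = historical_deals
    product_counts = Counter(p for d in similar for p in d["products"])
    suggested_products = [p for p, _ in product_counts.most_common(top_n)]
    suggested_discount = median(d["discount"] for d in similar)
    return suggested_products, suggested_discount

print(recommend("retail", "EMEA"))   # (['CPQ', 'CLM'], 0.11)
```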

Think about how that might affect a business – if every deal had a safety net ensuring that it addressed every need of the customer, and maximized the deal for a company, all while speeding up the entire cycle, that makes the entire sales process significantly faster and cheaper. It creates business advantages faster. In the B2B world, it takes a notoriously slow-moving operation and turns it into an asset – then passes along that advantage to an entire new ecosystem of partners and customers. One adoption benefits everyone who touches the business. Slowly but surely, the technology revolution continues on until it’s unquestionably part of the mainstream.

This is how we experience breakthroughs on a global scale; they blatantly improve the way we, and our businesses, perform in our daily lives. It’s not a flying car or a lightsaber (yet), but it’s incredibly exciting all the same.


February 15, 2017  4:28 PM

Movement and data analytics are at the heart of the driverless vehicle revolution

Brian McKenna

This is a guest blog by Ben Calnan, head of the smart cities practice at people movement consultancy Movement Strategies

Autonomous vehicles will soon be a reality – in fact, industry commentators believe that the first fully autonomous cars will be available to the UK public in five years’ time and perhaps sooner in other countries. However, for car manufacturers, the transition from prototype development to mainstream deployment is full of challenges. While navigating the issues surrounding data ownership will prove difficult, for those who successfully establish a role in the AV data eco-system, commercial opportunity awaits.

For driverless cars to work effectively in the real world, they’ll have to integrate with existing infrastructure and transport modes. Modelling the potential demand for AV ownership vs. rental, the effect on public transport usage, the requirement and location of parking and charging facilities, and capacity for cars to ‘circulate’ are all essential. However, while the datasets required to unlock this insight are available, in practice, accessing the relevant information to inform this modelling will not be easy.

Integration and sharing of data is crucial. One of the challenges for smart cities in recent years has been the interface between information and services provided by the public and private sectors. Guaranteeing the quality of data from different parties, as well as navigating the issues surrounding data ownership, is a challenge. For example, mapping AV demand would require access to in-vehicle data, public transport usage data and cellular tracking data. No one organisation can access all this information without purchasing from or collaborating with others.

There are now a series of projects and organisations seeking to address these issues, such as the Fiware consortium for interoperability standards, and major technology companies are building online data brokerages and promoting API integration. These projects are the real enablers of effective collaboration, as competing automotive industry stakeholders bring their products to market, and cities attempt to facilitate the introduction of this new transport mode.

As well as supporting the predictive analyses needed to accelerate the mainstreaming of AVs in our cities, data collected by AVs themselves will also prove a valuable commodity. Driverless cars need, and therefore collect, analyse and combine, vast quantities of data as they navigate the road network and the hazards it entails, constantly sending information back to servers to help improve their algorithms. In this sense, they are the ultimate environmental data collectors, frequently updating a virtual picture of our world. The quantity of data being collected will increase significantly, with gigabytes of lidar, radar and camera footage acquired every second.
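To put rough numbers on that claim, here is a back-of-envelope calculation in Python. The per-sensor rates, fleet size and operating hours are illustrative assumptions, not figures from the article or any manufacturer.

```python
GB = 1e9  # bytes per gigabyte (decimal)

# Assumed raw sensor rates per vehicle, in bytes per second.
sensor_rates = {
    "lidar": 0.07 * GB,    # ~70 MB/s (assumption)
    "radar": 0.01 * GB,    # ~10 MB/s (assumption)
    "cameras": 0.50 * GB,  # ~500 MB/s across several cameras (assumption)
}

per_vehicle = sum(sensor_rates.values())       # bytes per second per vehicle
fleet_size = 10_000                            # assumed fleet
hours_per_day = 8                              # assumed daily operation

daily_fleet_bytes = per_vehicle * 3600 * hours_per_day * fleet_size

print(f"per vehicle: {per_vehicle / GB:.2f} GB/s")
print(f"fleet per day: {daily_fleet_bytes / GB:,.0f} GB "
      f"(~{daily_fleet_bytes / 1e18:.2f} exabytes)")
```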

The information generated by a fleet of AVs will present numerous opportunities – real-time journey information, not just speed data, might enable improved network management, which will benefit all road users, including emergency services. Alternatively, live in-dash cameras could be remotely accessed to detect and monitor crime, or inform emergency services of the required level and scope of assistance. However, a key risk is that the size and nature of the data collected by AVs will make it too difficult to interrogate securely and too expensive to manage. Organisations looking to extract value from this information must invest in analytics tools and skill sets, and also in information security processes and awareness, to efficiently capture and process this data for applied use.

We’re on the cusp of a hugely disruptive technology, the impacts of which will permeate our society – movement data and analytics will be at the heart of this innovation.


