CW Developer Network


September 5, 2017  3:34 AM

Cloudy DevOps optics? Should have gone to CloudBees

Adrian Bridgwater

Enterprise Jenkins and DevOps outfit CloudBees has stuck a bit of extra Ops vision in DevOps with what it calls DevOptics (did you see what they did there?) as its latest play in the application delivery stream tools market.

We’re all aiming for a holistic view of the software delivery process, so what does this new offering promise?

Live data pipeline

CloudBees DevOptics aggregates live data from software pipelines to derive metrics (as a live view) that can help steer application delivery. Essentially, it helps reduce project status meetings and checklists.

It allows developers to drill down to individual commits so they can identify failed jobs, bottlenecks in the process and critical downstream dependencies. It also captures build events directly from development systems.
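By way of illustration only (this is a generic sketch, not the DevOptics API or data model), the sort of delivery metrics in question can be derived from raw build event records along these lines:

    from datetime import datetime, timedelta
    from statistics import mean

    # Hypothetical build events, as a CI pipeline might record them.
    builds = [
        {"commit": "a1b2c3", "start": datetime(2017, 9, 4, 9, 0),
         "duration": timedelta(minutes=12), "result": "SUCCESS"},
        {"commit": "d4e5f6", "start": datetime(2017, 9, 4, 11, 30),
         "duration": timedelta(minutes=14), "result": "FAILURE"},
        {"commit": "d4e5f6", "start": datetime(2017, 9, 4, 12, 10),
         "duration": timedelta(minutes=13), "result": "SUCCESS"},
    ]

    # Two of the metrics a delivery dashboard typically surfaces.
    failure_rate = sum(b["result"] == "FAILURE" for b in builds) / len(builds)
    avg_minutes = mean(b["duration"].total_seconds() for b in builds) / 60

    print(f"failure rate: {failure_rate:.0%}, average build time: {avg_minutes:.1f} min")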

“DevOps has clearly gained tremendous adoption as the way to deliver software faster and align IT to the needs of the business. However, despite the investments made in DevOps, enterprises do not fully experience the benefits. This is because there is a lack of visibility in software delivery processes that inhibits making informed decisions,” said Harpreet Singh, vice president of products, CloudBees.

Users can also create benchmarks to determine the best performing teams while identifying poor resource allocations.

The firm has also announced a new free service, CloudBees Jenkins Advisor, which analyzes Jenkins environments continuously, identifies potential issues and advises on corrective actions before they impact business-critical software delivery, ensuring improved uptime, performance and productivity.


September 5, 2017  3:02 AM

Huawei Connect 华为全联接大会 2017, ‘applied’ AI is key for cloud

Adrian Bridgwater

Chinese telecommunications giant Huawei has hosted its Huawei Connect 2017 conference at the International Expo Centre in Shanghai this September.

After a breakfast of black eggs boiled in tea, fermented bean curd in chilli and an appropriately bracing portion of duck blood potstickers and pork buns, the telco-cloud-data cognoscenti were welcomed to the main stage event itself.

Grow with the cloud

Under a tagline banner of ‘grow with the cloud’, Huawei’s rotating CEO Guo Ping explained that what this really means is the application of cloud computing inside more devices, more applications and more business scenarios. Specifically, Huawei wants cloud to be applied within the context of Artificial Intelligence (AI) in many use cases.

Huawei insists that today, it can work with large enterprises in cloud development precisely because it is itself a large enterprise.

This grow with the cloud line might be more accurately put as ‘grow with the applied AI-centric enterprise cloud in business specific application scenarios’ … but that would be too long.

“We are designing hybrid cloud solutions because we are aware of the fact that many companies will not be able to (for example) rip and replace their complete ERP systems quickly. Flexibility in deployment will be key,” said Ping.

Now the focus for Huawei is across verticals that include: gaming, transportation, manufacturing, medicine, finance, government and (very importantly for Huawei) smart city. The firm insists that its ‘platform’ will be an open one and the company operates 20 of what it calls OpenLabs around the world. It also has a Developer Enablement Plan to work with ground level app creation as well as being a principal contributor to open source communities.

From the firm’s news stream, the latest AI offering is the Kirin 970 — powered by an 8-core Central Processing Unit (CPU) and a new generation 12-core Graphics Processing Unit (GPU). Built using an advanced 10nm process, the chipset packs 5.5 billion transistors into an area of only one cm².

The Kirin product is described as a ‘platform’ – although it is undeniably hardware not software in the first instance. In fairness, this is hardware with a considerable degree of embedded intelligence and Huawei is opening its use to developers in the same way that we might more usually refer to a software platform with its related toolsets and functions.

Missing children solution

As an example of Huawei cloud applications in action, one such application has been used to find missing children through a public video network.

A child was abducted from a holiday party in Longgang, Shenzhen on January 26 2017. The police found fuzzy video images of the child being taken and were able to cross-reference those images against a database of clearer pictures. This allowed them to trace the abductor to her hotel, establish her identity and track her journey out of the city to a train, where she was intercepted and the child was rescued. All of this happened within 15 hours.

This is perhaps some validation of how an ‘applied’ cloud can be brought to bear on specific use cases with real world data streams.

Value-driven cloudification

But it’s no good just going to cloud; it has to be a question of what Huawei calls value-driven cloudification.

A key part of this will be the simple application of a single AI technique such as text recognition, image recognition, dumb device identification (labelling up stock items and knowing where they are) or speech synthesis. This is AI that can create more value for enterprises as it can be applied at a specific point of business focused need.

“We believe that AI can be applied to more areas of business – this is what Huawei calls its Enterprise Intelligence services,” said Ping. “We then move to a point where AI becomes more ubiquitous inside organisations and then becomes used in scenario specific business solutions.”

Traffic management

Intelligent traffic management is key to Huawei’s work in smart cities. As part of the opening keynote session, the firm explained that smart city traffic management has to be characterised by four elements — it must be:

  • law based,
  • precise,
  • intelligent and
  • standardised

Building what the firm calls the ‘traffic brain’ for these functions, Huawei says that it is now working to capture urban traffic data using an ultra-high bandwidth network. It will use AI to assist law enforcement.

Let’s remember that ‘policing’ the streets has been a largely manual process in years gone by. Now we see the firm looking to use image recognition to distinguish between lighter cars and heavy duty trucks — if a heavy truck is not supposed to be on a particular road, then fines can be applied through AI systems using smart cameras.

Additional layers here will see traffic being analysed so that central systems are aware of which cars are going where — this data can then be fed into the central traffic brain to then control traffic lights so that the city is kept flowing more fluidly.

The Shenzhen Traffic Brain Project was officially announced as part of the wider presentation at this show — the project consumes 1 Petabyte of data per second.

CERN elements

Jan van Eldik from CERN (the European Organisation for Nuclear Research) also spoke during this event’s day one keynote.

Van Eldik explained the core operations at CERN and told the audience how the particle collider has always worked with what was initially just the (rather massive) CERN private cloud and all the application servers and databases that it serves.

At 8,500 servers, this was enough back in 2013, but the organisation has known for some time now that it would need a huge amount of additional storage and processing power. Huawei and CERN have worked together on the OpenStack cloud to create its public cloud layers for the years ahead.

Huawei in the future

Huawei has ambitions to become one of ‘just five’ major cloud players in the world in the years to come. The firm summarises its position as: Mobile AI = On-Device AI + Cloud AI.

The company insists that it is committed to turning smart devices into intelligent devices by building end-to-end capabilities that support coordinated development of chips, devices and the cloud… and, going forward, that’s always an applied AI-centric cloud for enterprise.


Image: Wikipedia



August 30, 2017  4:57 PM

Okta to developers: get identity out of shadow IT

Adrian Bridgwater

Cloud identity authentication specialist Okta has used its Oktane 2017 conference and exhibition to attempt to explain how software application developers should approach identity as a key element of user (and device) authentication control in the way modern applications are being built.

The firm is aiming to make its Okta Identity Cloud the ‘linchpin’ that brings together the tools and services needed to bring these controls forward into developers’ toolsets and application lifecycle controls.

Out of shadow

It is, if you will, a call to bring identity out of so-called ‘shadow IT’ and bring it forward into all development.

As a means of underlining the importance of secure access to data, devices, datasets and data channels, Okta featured Megan Smith as a keynote speaker. Smith was the third chief technology officer of the United States, serving under President Obama. Smith spoke volubly about the challenges she had seen working inside the administration and presented a view on how identity could (and should) form part of the way IT systems (in particular those that might be deployed in contemporary smart city environments) are now developed.

The assertion (and yes, it’s a big claim) from Okta is that properly controlled identity authentication functions can help with eight key technology zones (comments in bullets from CWDN):

  • Modernize enterprise IT (yeah, that thing everybody says)
  • Reduce IT friction (systems not interconnecting)
  • Be agile during M&A (i.e. the ability to keep working securely if one happens)
  • Build 100% cloud and mobile IT (a given, but it has to be said)
  • Secure workforce (what good identity authentication should lead to)
  • Work with partners (when identity controls have to sit inside other apps)
  • Enable mobile workforce (cheeky, we’ve already had mobile above)
  • Protect against data breaches (an all encompassing comment on what identity is there for)

As a tangential note here, it is interesting to look at who in the media picks up on Okta stories: is it the ‘cloud computing’ media or the ‘security’ press? Interestingly, it is a mix of the two. Given that Okta never describes itself as a security company per se, this is perhaps logical.

A digital front door

If we look at the application of identity more widely, it can be applied to all objects that we own and interact with. What happens now, in the digital world, is that our ‘things’ have an identity stamp to denote that we own them and they are part of our authenticated identity sphere.

Look back at your physical front door key and you can see that this is an identity pass, in a sense, because it opens access, but it does not have the ability to know who its owner is. In the future, with digital front doors and digital keys, each device will carry data to denote the identity of its owner.

By way of further clarification on this point, an Oktane 2017 speaker from Dignity Health (Dr Shez Partovi) referred to the ‘rotary circular dial’ phones of yesteryear.

“You really can’t optimise a rotary phone, you have to [digitally] transform it,” said Partovi.

We can see perhaps from this comment that the application of identity authentication for developers is something that will need to be a process of significant re-engineering. Okta will tell us that its platform makes things easy in the face of this complexity, but we know that’s the corporate mantra by now.

A common identity core

Looking forward then, what Okta of course hopes is that software application developers now start to engineer-in and architect-in enough of these technologies to build what could be called a ‘common identity core’ in the future.

The suggestion here from Okta is that now is the time for identity authentication to come out of wider systems design (and its existence in shadow IT) and become a formal dedicated control that all developers understand, use and implement.

Okta’s position as a dedicated identity player has drawn comparatively little questioning so far; it will be interesting to see how the rest of the industry reacts to its technology proposition and interacts with this technology.

With show partners including Fuze, Zylo, ServiceNow, Google, Box, Palo Alto Networks and F5 (and more) all signed up to drink the identity Kool-Aid, the industry appears (for now at least) to really be listening.


Okta Oktane: who knew ‘identity’ was so much fun?

 

Girls Who Code: speaker at Okta Oktane 2017


“We have to close the gender gap so we don’t leave powerful technical solutions on the sidelines,” said Reshma Saujani on #womenintech at Oktane.


August 29, 2017  4:56 PM

Okta drives a new trajectory for directories

Adrian Bridgwater

Cloud identity authentication specialist Okta has used its Oktane 2017 conference and exhibition to extend the capabilities of its Okta Identity Cloud platform.

To clarify, the firm now brings forward advancements designed to power a new independent directory standard and integration ecosystem.

Before we explain what these things mean, let’s also note that the updates here include a Lightweight Directory Access Protocol (LDAP) interface for Okta Universal Directory, an expanded, richer Okta Integration Network, automated access for customers and partners and end-to-end auditing.

What’s all this directory direction?

So hang on, the ‘problem’ here is that firms who specialise in directory technologies are often so keen to tell you about updates, they forget to explain what role they play.

A directory in this sense (and Okta Universal Directory (UD) specifically here) is a platform that delivers user profiles and fine-grained control over how attributes flow between applications. This enhancement is supposed to make it easier for organisations to create and maintain a single source of truth for their users, enabling secure authentication and provisioning scenarios.

Going deeper then, as defined by TechTarget, LDAP (Lightweight Directory Access Protocol) is a software protocol for enabling anyone to locate organisations, individuals and other resources (such as files and devices in a network), whether on the public Internet or on a corporate intranet.
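For developers who have never touched a directory, a minimal sketch of an LDAP lookup using the Python ldap3 library looks something like the following (the host, bind account and base DN are hypothetical placeholders, not Okta-specific values):

    # pip install ldap3
    from ldap3 import ALL, Connection, Server

    # Hypothetical directory host, bind account and search base.
    server = Server("ldaps://ldap.example.com", get_info=ALL)
    conn = Connection(server,
                      user="cn=reader,dc=example,dc=com",
                      password="secret",
                      auto_bind=True)

    # Look up one user's profile attributes.
    conn.search(search_base="ou=people,dc=example,dc=com",
                search_filter="(uid=jdoe)",
                attributes=["cn", "mail"])

    for entry in conn.entries:
        print(entry.cn, entry.mail)

    conn.unbind()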

“While the benefits of the cloud have been well established for years, many organisations are still unable to take full advantage of new services due to their reliance on legacy infrastructure, which adds complexity and cost to implement and use,” said Eric Berg, chief product officer at Okta. “Modern IT requires a dynamic system that can help [it] match the racecar pace of technology innovation.”

These extensions are intended to make it easier for software application developers and IT administrators to manage the breadth of on-premises and cloud-based applications, devices… and people, all involved in modern business.

Updates to the Okta Identity Cloud include the news that Okta now supports LDAP-enabled applications to directly authenticate against Okta Universal Directory. This (so says Okta) eliminates the need for on-premises directories for small and mid-sized organisations.

According to Okta, “Cloud and mobile IT teams can authenticate developer tools, databases, or other legacy apps and can effectively use Okta Universal Directory as their core directory. Enterprises can accelerate their move off legacy on-prem directories, replacing them with Okta Universal Directory as the connection point to traditional LDAP-enabled applications such as Atlassian on-prem, Github on-prem, and popular VPNs.”

Solar energy company and Okta customer Vivint Solar has explained that it uses the Okta Identity Cloud to push in-house and third-party developed tools through its cloud-hosted environment to 4,500 employees throughout the company.

“Universal Directory provided us with one place to manage our users, groups and devices from any number of sources. Since starting with Okta, we have added more than 20 apps,” said Mark Trout, CIO and CTO at Vivint Solar.

Okta has also expanded and deepened the set of integrations to the Okta Identity Cloud, providing a unified identity layer across diverse business networks and systems and new solutions around workflow management, business analytics, security automation and hybrid IT.

Through integrations with technology partners such as Palo Alto Networks, F5, IBM QRadar, and Splunk, the Okta Integration Network offers joint solutions to solve the breadth of challenges that IT departments face moving to the cloud.

The company is also extending the Okta Lifecycle Management service with self-service registration and lifecycle policies that enable IT to automate access for external users such as customers or partners, from registration to audit.

Automation advancements

Essentially this is all about enhancing the amount of automation in the product itself… if a new set of partners (or any other kind of user) were starting to use Okta, automation takes the form of self-service registration, replacing what would otherwise be a very time-consuming manual provisioning process.

Along with rogue account detection, these are the layers that put the A in Adaptive for the Okta Adaptive Multi-Factor Authentication product, claimed CEO Todd McKinnon at the firm’s Okta Oktane 2017 conference this year.

This is a story that is still crystallising and developing, i.e. developers don’t traditionally care too much about identity and access authentication issues, but specialists in this space are trying hard to change that.


August 29, 2017  4:56 PM

HDS aims to unify compute for hybrid clouds

Adrian Bridgwater

Hitachi Data Systems (HDS) is a wholly owned subsidiary of guess who? Yes, Hitachi, Ltd. But the fine folks at HDS do like to remind us of this fact.

Now very much a software company (even if you do have a Hitachi television), HDS is focused on its Hitachi Unified Compute Platform (UCP) RS series Software-Defined Data Centre (SDDC) Rack-Scale platform, powered by VMware Cloud Foundation.

Surely that should be the HDS-UCP-SDDC-RS on VMware, right?

Overloaded acronyms notwithstanding, Hitachi UCP RS (for short) claims to be able to enable firms to embrace hybrid cloud faster.

Pay-as-you-go economics

On the path to what HDS likes to call cloud-powered pay-as-you-go economics, this technology aims to give firms the option either to deploy an integrated SDDC stack or to build their own using Hitachi’s vSAN ready node and VMware software.

Alongside the launch of Hitachi UCP RS, the company has enhanced its hyperconverged system, Unified Compute Platform (UCP) HC.

Questions over Software-Defined Infrastructure (SDI)

 “Open questions exist regarding the extent to which advances in SDI technology can support a common virtual datacentre foundation upon which improved end-to-end IT service interoperability, resilience, elasticity and agility across a hybrid cloud infrastructure can be effectively realized,” according to Market Trends: Software-Defined Infrastructure — Who Can Benefit? (Gartner, June 2017).

In addition, the report states that, given the current state of public cloud service deployment and the maturity state of SDI, one of two options is possible.

The first is an application-independent, common virtual datacentre infrastructure that can run on top of an existing private datacentre infrastructure, a public cloud service or a combination of the two.

This variation is called infrastructure-upward – the second option was not clarified in any depth here.

“Our deep, collaborative partnership with VMware has led to the creation of powerful systems and innovative solutions that help our customers modernize their IT environments and put data at the center of their business,” said Bob Madaio, vice president of integrated solution marketing at Hitachi.

This is deep infrastructure technology for sure. Essentially HDS is working as hard as it can (with help from VMware) to attempt to make hybrid cloud easier. Has it worked yet? It may be too soon to say.


August 29, 2017  3:52 PM

Okta Oktane 2017: keynote noteworthies

Adrian Bridgwater

Enterprise identity specialist Okta kicked off its Oktane 2017 conference and exhibition this year with an appropriately charged women in technology breakfast.

Welcoming female executives from McKinsey & Company, Experian, Catholic Health Initiatives, Google and Symantec… introductions and stories were tabled to help share confidence, understand how to overcome obstacles and help women in technology understand how to create a path (in any company) for themselves.

Women in tech, the male role

The audience was made up of perhaps 60% females and 40% males — and this was (arguably) a good mix given that much of the discussion gravitated towards the kinds of networks and mentorship programmes people tend to engage in.

  • Men’s networks = mostly men.
  • Women’s networks = mostly women… and this ends up developing into a network that is ultimately narrower with less access to senior leadership.

Other key topics covered in this session included the need to combat ‘unconscious bias’ and the need for women to get involved in areas of work practice that moved outside of their central comfort zones.

Main keynote

Into the main keynote then, CEO Todd McKinnon explained that it has been a big year for Okta with the firm’s IPO and the rising understanding of what identity is in technology circles.

Looking at the contemporary cloud-centric services-driven world of computing, McKinnon says that, “Integration is everything, but the perimeter of our networks has been redefined. Given the sheer volume of users now interacting with our networks [we can say that] people are the new perimeter.”

McKinnon also explained the firm’s new ‘Business @ Work Dashboard’ product. This software allows firms to see which applications are the most accessed (and therefore the most popular) inside any given organisation.

The software is capable of breaking down ‘different types’ of applications so that firms can start to focus on a) which apps will need the most identity access provisioning and b) which apps will need the most work in terms of getting them to the point where they can integrate with other pieces of software.

The extended enterprise

Okta execs spent some time running demo sessions during the opening keynote… the firm is attempting to show how its software could become the identity layer for any application on any device.

“The rapid adoption of mobile devices and cloud services, together with a multitude of new partnerships and customer-facing applications, has extended the identity boundary of today’s enterprise,” states the September 2012 Forrester Research, Inc. report titled Evolve Your Identity Strategy For The Extended Enterprise.

Legacy approaches to IAM are failing us because they can’t manage access from consumer endpoints, they don’t support rapid adoption of cloud services and they can’t provide secure data exchange across user populations, claims Forrester Research principal analyst Eve Maler in a September 2012 blog post.

All the products on the Okta Identity Cloud are built using the same set of core services.

Developer empathy

But can’t any developer build a password page into any app? Not so much, says CEO McKinnon, because it quickly becomes a ‘high stakes component’ that has to offer richer layers such as reset functionality, more complex authorisation capabilities and so on.
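To make the point concrete, even a minimal home-grown sketch (in Python, using the bcrypt library, with hypothetical helper names) already has to cover salted hashing and reset tokens before the harder parts such as token storage, expiry, rate limiting and multi-factor support are even addressed:

    # pip install bcrypt
    import secrets
    import bcrypt

    def hash_password(password: str) -> bytes:
        # Store a salted bcrypt hash, never the plain password.
        return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

    def verify_password(password: str, stored_hash: bytes) -> bool:
        return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

    def new_reset_token() -> str:
        # A hard-to-guess, single-use token to email to the user; storage,
        # expiry and invalidation still have to be built on top of this.
        return secrets.token_urlsafe(32)

    stored = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", stored)
    print("reset token:", new_reset_token())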

Okta rounded out its first morning of this two day show by admitting that when it first opened up its platform four years ago it failed to put enough focus on the developer proposition.

“If we want to become the identity layer for every developer to be able to build identity [and secure sign in] for every application, then we know we need to embrace the needs of software engineers [of all disciplines] very clearly,” said McKinnon.

Deeper dive developer demos designed to showcase how the Okta platform works closed what was an undeniably deep-geek keynote for a conference that isn’t actually billed as a developer event.



August 26, 2017  8:43 PM

What to expect from Splunk .conf 2017

Adrian Bridgwater

Okay so yes, we’ve heard of so-called digital transformation, whatever that is supposed to mean.

Ah alright then, digital transformation is the movement towards cloud-centric services-driven analytics-empowered mobile-enabled IoT-aware always-on continuously-delivered automation-enriched computing power, right?

If that is so, then exactly who is behind all those layers of the technology stack?

In truth, it’s everybody, every player, every technology protagonist and every firm who seeks to get high on a slice of the new digital-transformation pie.

Machine data specialist Splunk (yes really, it’s a caving reference) thinks that its own particular position and trajectory on the digital transformation vortex is somewhat special. But then doesn’t everybody?

So can Splunk substantiate, validate and  calibrate its claims?

Machine data backbone

Last year at Splunk .conf 2016 the firm claimed that, “Machine data is the backbone of digital transformation,” no less.

Now, a year on, at the eighth staging of this event, Splunk .conf 2017 sees the firm welcome machine data fans to Washington DC to examine issues relating to real time ‘operational intelligence’ (i.e. search, monitoring and analytics functions) from machine-generated big data.

One year on the message is still there, that is – Splunk wants to ‘weave a machine data fabric’ into every aspect of a firm’s business.

What is machine data? This is data created by the activity of computers, mobile phones, embedded systems and other networked devices.

As defined by TechTarget, “Application, server and business process logs, call detail records and sensor data are prime examples of machine data. Internet clickstream data and website activity logs also factor into discussions of machine data.”
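As a simple illustration of what turning machine data into something searchable involves, the following Python sketch parses one (hypothetical) web server access-log line into structured fields:

    import re

    # One line of (hypothetical) web server access-log machine data.
    line = '203.0.113.7 - - [26/Aug/2017:20:43:01 +0000] "GET /pricing HTTP/1.1" 200 5123'

    pattern = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
        r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\d+)'
    )

    match = pattern.match(line)
    if match:
        event = match.groupdict()               # structured, searchable fields
        print(event["path"], event["status"])   # -> /pricing 200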

Talk tracks

Enough background then, so what can we expect from .conf 2017? We know that Splunk has an (arguably) solid women in technology track, but what else?

In fact, talk tracks include:

  • Business analytics
  • Developing
  • IoT
  • IT Ops
  • Security / Compliance / Fraud
  • Foundations

A whole lotta Splunk

As many as 5,000 Splunkers (no, that’s what they call them, honest) are expected to attend Splunk .conf 2017 and the firm promises over 200 technical sessions in total.

Keynote speakers will include: Doug Merritt, president and CEO, Splunk; Nathaniel McKervey, director of technical marketing, Splunk; Billy Beane, exec VP of Baseball Ops for Oakland A’s; and Michael Ibbitson, exec VP for business and technology at Dubai Airports.

The firm promises new T-shirt slogans this year. If anyone can improve on “I like big data and I can not lie” then this could be a good event.

The hashtag for this event is 



August 22, 2017  10:48 AM

What to expect from Huawei Connect 华为全联接大会 2017

Adrian Bridgwater

Chinese telecommunications company Huawei (pronounced ‘wah-way’) is about to stage what is billed as one of its biggest exhibition, symposium and conference events to date.

Combining what were three shows into one mega show (as was also the case last year in 2016), the firm will now host its Huawei Connect 2017 event at the International Expo Centre in Shanghai this September.

Vegas, Barcelona… Shanghai?

While the perfume bottles in duty free will still read Paris, New York, London, Milan… could the global tech conference circuit now be about to add Shanghai to its ‘usual suspects’ list of locations which we know and love (or hate) to be a seemingly endless stream of industry get-togethers staged in Las Vegas & Orlando & San Francisco, Cannes & Paris, Barcelona, Berlin and London?

Huawei certainly hopes so and will no doubt be following up on some of the market and technology positioning it has openly stated already this year.

Grow with the cloud

This year’s event is themed around a ‘grow with the cloud’ tag, but that doesn’t tell us much. Looking more closely, Huawei will obviously have a clear message for its carrier partners: it wants to help them achieve so-called ‘data dividends’ from high value services where it can.

But, pleasingly, the firm is also kicking off the day with a ‘Developer Enablement Plan’, so there is (arguably) some diversity and ground level technology in its total message set here.

Looking at the industry scope being targeted, Huawei breaks out its tracks into key zones which include: smart city, public safety, energy, manufacturing, finance, transportation, carrier, ISP, media, education, retail and government.

More than a box seller

Can we expect new device launches? Well, probably, it would be rude not to… but carrier business group president at Huawei, Zou Zhilei, has said before now that the firm is “so much more than a box seller” and is now looking at how it can help support video on mobile as the next trillion dollar market.

As previously detailed here, Huawei says that all connected worlds are being impacted by five factors, summed up in the acronym ROADS… an idea that the firm has actually been talking about since 2015:

  • R – Real time connectivity
  • O – On-demand i.e. customize services based on actual user needs
  • A – All-online applications
  • D – DIY service development and optimization
  • S – Social platforms for sharing

Global Industry Vision (GIV)

So what else can we expect? Huawei will talk about what it calls its Global Industry Vision (GIV), which encompasses all the technologies driving what we now understand to be the process of so-called digital transformation.

The firm’s vision (and/or prediction) is that most of the business processes of the enterprise will be digitised by 2025. Well, in those firms that survive, at least.

China now realises that the information and communications industry is the most dynamic in its national economy. It is working to help firms adapt to what has been called the “Internet +” revolution, where cloud, Internet of Things and intelligent devices become a part of all business processes.


You can read a Chinese-language description (with an English translation on the page) of some of Huawei’s current work across a range of industrial zones here.

Image: Wikipedia



August 22, 2017  9:45 AM

Clover: payments are the next developer frontier

Adrian Bridgwater

Software application development engineers (in all forms and guises) will always be interested in monetising the commercial aspects of their creations when and if they want to take them to market, obviously.

Moreover, commercial development projects with a market-facing proposition will also need to incorporate a payments route, function, engine, platform (call it what you will) in order to reap customer/user finances where appropriate, obviously.

A road less traveled?

Given the obvious importance of this sub-discipline or function, is it perhaps surprising to find that we so rarely discuss this aspect of the total developer universe?

Mark Schulze thinks it is, but then he would do… Schulze heads up the app marketplace at Clover, a payments startup working closely with Apple, Google, Samsung and Alipay to bring new payment technology to market.

Clover’s open app marketplace already includes 250+ apps.

The legacy payments hangover

According to Clover’s Schulze, payments are an integral part of every business — but an astounding amount of today’s payment transactions still take place via legacy systems.

“Despite all the hype about Bitcoin (and now, Bitcoin Cash), 85% of all transactions globally are still carried out using cash. The typical American credit card relies on a magnetic strip that was invented in the 1960’s and is notorious for how easy it is to hack,” said Schulze.

He contends that, from an engineering perspective, the Point-of-Sale (PoS) space poses a massive untapped opportunity to reach huge numbers of consumers and shape how we pay in the years to come.

“Payment app marketplaces will continue to emerge and grow and the opportunity will proliferate, allowing more developers to reach large audiences of small and medium sized businesses with a direct to market route, unlike that of enterprise ecosystems,” said Schulze.

As a company, Clover thinks that the needs of merchants will drive this change. The firm insists that the developers building for the payments space now will determine how technologies like blockchain, cashless payments, biometrics and more are incorporated into our day-to-day lives.

Other examples of payments apps include:

  • Davo – automatically collects, files and pays merchants’ sales tax.
  • Homebase Schedule – pulls in sales info to schedule employees based on what’s happening in the business.
  • SimpleOrder – reduces inventory with each transaction and orders replenishment in real time based on sales.

You can read more on this topic at the Clover blog.

Image source: Clover



August 21, 2017  12:08 PM

How to make software changes, properly

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network written by Anson Kelly in his role as senior developer for independent car buying website carwow.

Kelly discusses how software can be upgraded and changed with the least amount of disruption to the business and, ultimately, the customer experience.

Kelly writes as follows…

Software is a living entity and it is inevitable that it needs to change as businesses expand and mature. A solution that worked well six months ago may no longer be the best solution today, especially in a startup environment where things change at a rapid rate.

Every so often we need to change part of our infrastructure to meet new requirements from both our internal and external users.

It’s hard to gain users, easy to lose them. All the marketing and sales effort that goes into attracting and retaining users can easily be for nought if we deliver a bad experience. When we are upgrading components and making changes our users don’t want to see a maintenance page, an error page or, even worse, lose data.

This means it’s important to plan changes out, trial run them and then script them to be easily reproduced. Sometimes unforeseen things can happen when updating software, but that doesn’t mean we should not attempt to control the risk as much as we can.

Here are some principles that we use at carwow to manage the disruption caused by changes to our tech stack:

Don’t do it, unless you have to

This might seem counter-intuitive but it is important to consider.

Any change to infrastructure is risky no matter how well you plan and prepare.

If it is something that can be deferred until a time when the risk is lower, great — you have solved the problem and it’s time to move onto the next one.

Limit the scope

Don’t change everything at once.

Modern tech stacks consist of many different components and the more components that change at once, the greater the risk of services going down, data being lost and users having a bad experience.

All of which can result in the loss of customers.

Script all the things

Every step / command / action must be written down. Don’t rely on your memory to help you out – steps that are written down are much harder to skip or forget.

Go one step better and create a script that runs the changes — copying and pasting commands or clicking a user interface is prone to human error, especially when there are more than a few steps involved.

Automating the process removes the possibility of human error as the steps are always run in the same way, in the same order.

Writing it all down has another advantage as the process is kept for future use – saving much needed time and hassle. After all, there is no need to reinvent what you have already done.

We use bash scripts and rake tasks for smaller changes, while Ansible and/or Terraform are used to manage larger changes.
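As a minimal sketch of the idea in Python (the commands shown are hypothetical examples, not carwow’s actual steps), the change becomes an ordered list of commands that always run the same way, in the same order:

    import subprocess

    # The change written down as an ordered list of commands, rather than
    # remembered and typed (or pasted) by hand.
    steps = [
        ["pg_dump", "--schema-only", "appdb", "-f", "schema_backup.sql"],
        ["bundle", "exec", "rake", "db:migrate"],
        ["systemctl", "restart", "app.service"],
    ]

    for cmd in steps:
        print("running:", " ".join(cmd))
        subprocess.run(cmd, check=True)   # check=True aborts the run if a step fails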

Split the change into stages

If your changes are applied in multiple steps, break them up. Verify that each step has been successful before continuing on.

You don’t want to get to the end of the change process before realising that an earlier step hasn’t done what it was meant to do. This can take the form of performing counts, checking that a web request returns an expected response, or that a service has started (and stayed) running.

Log every step, even if it is something simple, including output from API calls and results from database queries. The more information you have, the better prepared you are to diagnose any problems that occur along the way.
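A rough sketch of the same idea in Python (the step and its health check URL are hypothetical) pairs each stage with a verification and logs the outcome before moving on:

    import logging
    import urllib.request

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
    log = logging.getLogger("change")

    def restart_api_service():
        pass   # hypothetical step: shell out to a deploy tool, run a migration, etc.

    def api_health_ok() -> bool:
        # Verify the step worked: the health endpoint must answer with HTTP 200.
        with urllib.request.urlopen("https://example.com/health", timeout=5) as resp:
            return resp.status == 200

    # Each stage pairs an action with a check; stop as soon as a check fails.
    stages = [("restart API service", restart_api_service, api_health_ok)]

    for name, action, check in stages:
        log.info("starting: %s", name)
        action()
        if not check():
            raise SystemExit(f"verification failed after '{name}', aborting")
        log.info("verified: %s", name)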

Give yourself a way out

You need to think about what happens when you are half way through a change process and realise it cannot be completed. This could be due to a step not working properly, the process taking too long for a maintenance window or something totally unexpected happening.

Being able to reverse the changes means you can get your environment back to where you started (read: undo the damage). With some changes, however, this is not always easy to do: database migrations where large datasets are being manipulated, for instance. So it is important to think about how something like this can be fixed before it’s too late.
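One simple way to build in a way out, sketched here in Python with hypothetical step names, is to pair every applied step with an action that undoes it and to roll back in reverse order if anything fails:

    # Pair every step with an action that undoes it, so a half-finished change
    # can be reversed from the point of failure (step names are hypothetical).
    def add_column():      ...   # e.g. ALTER TABLE ... ADD COLUMN
    def drop_column():     ...   # reverse of add_column
    def backfill_column(): ...
    def clear_column():    ...

    steps = [
        (add_column, drop_column),
        (backfill_column, clear_column),
    ]

    undo_stack = []
    try:
        for apply_step, undo_step in steps:
            apply_step()
            undo_stack.append(undo_step)
    except Exception:
        # Roll back in reverse order to get the environment back where it started.
        for undo_step in reversed(undo_stack):
            undo_step()
        raise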

Practice makes perfect

Rather than applying a change straight away, test it first.

Have a duplicate environment (cloud services make this easy) set up like your production environment so that if something goes wrong it’s not the end of the world, simply a case of try again.

Run in parallel / phased changeover

Flicking a switch from one service to another might work but it’s less risky to have a phased changeover (aka soft launch).

If it is possible, run both old and new services side-by-side so you can switch between them to test that the new service / component works.
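A phased changeover can be as simple as deterministically routing a fixed share of users to the new service, as in this Python sketch (the user ID and rollout percentage are illustrative):

    import hashlib

    def use_new_service(user_id: str, rollout_percent: int) -> bool:
        # Deterministically place each user in a bucket from 0-99, so the same
        # user always hits the same backend while the rollout percentage grows.
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
        return bucket < rollout_percent

    # Send 10% of users to the new component, everyone else to the old one.
    backend = "new" if use_new_service("user-42", 10) else "old"
    print(backend)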

Be retrospective

After the change has happened, it’s important to reflect on the process and share your experience with team members. What worked? What didn’t work? How can the process be improved? After all, there’s no point in you or anyone else making the same mistakes again.



