CW Developer Network


January 16, 2018  9:17 AM

Sage CTO: Rise of the ‘application cloud’ & other stories

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network written by Klaus-Michael Vogelberg in his capacity as chief technology officer at financially-focused Enterprise Resource Planning (ERP) company Sage.

Vogelberg is perhaps somewhat unimpressed by the gadgets and gizmos being displayed at the Consumer Electronics Show (CES) in Las Vegas this month.

He insists that it is vitally important that we aren’t fooled into thinking that technology innovation in 2018 is all connected cars, talking robots and face-recognition smartphones.

Instead, Vogelberg is rather more impressed by the code that goes into these shiny new devices.

He insists that the ‘constant evolution’ will be in the software developed to connect those devices and the new forms of software engineering that will underpin the devices launched in 2019, 2020, 2021 and beyond.

So with that in mind, here’s a snapshot of Vogelberg’s software trends to watch out for in 2018 — Vogelberg writes from this point forward.

The rise of the application cloud

Cloud computing has been a game changer for consumers and businesses across the globe over the past decade. However, this year we will see the market for cloud platforms compete on customer benefits rather than technology capability.

Few cloud platforms are pure technology platforms; most could be more accurately described as application clouds delivering app-centric user experiences. The Apple iPhone pioneered this concept of an application cloud and Salesforce adopted it for business with its Lightning Platform (aka Force.com) and AppExchange. Microsoft is taking Office 365 and elements of Azure in a similar direction, while Facebook and Google remain customer experience platform providers to watch.

The implication of this shift in 2018 is that platform choices in the cloud will only be partly driven by technology considerations; application clouds provide access to customers and markets. This becomes just as important a consideration for an ISV as the technological merits any given platform may provide.

End of the architecture monoliths

Software used to be designed for a given technology platform, leveraging and extending the architectural features of the ‘stack’, resulting quite literally in monolithic software products.

The shift to customer-centric ‘experience’ platforms makes the monolithic approach less attractive as it creates a dilemma.

Implement an application on a particular platform and it will only ever work in that environment; design in a monolithic fashion, and it will lack the architectural flexibility to embrace customer-experience platforms.

Expect to see this debate played out in software labs across the world this year.

Serverless event-driven programming

Microservices require infrastructure to operate in a layer typically referred to as ‘platform as a service’ (PaaS). 2018 will see a generational shift in PaaS to ‘serverless’ environments, a technology in which the cloud provider dynamically manages the allocation of machine resources.

Serverless applications do not require the provisioning, scaling and management of any servers, and pricing is based on the actual processing consumed, not on the capacity provisioned. AWS Lambda and Microsoft Azure Functions are two leading examples of this technology. Serverless, event-driven programming models are set to revolutionise software architecture; this is the secret sauce behind many of the headline-grabbing technology exhibits in Vegas.
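To make the model concrete, here is a minimal sketch of an event-driven function in the shape of an AWS Lambda Python handler: the platform invokes it once per event and bills only for execution time. The event fields (detail, orderId) and the local test payload are illustrative, not from any real service.

```python
# A minimal sketch of an event-driven serverless function, in the shape of
# an AWS Lambda Python handler. The event fields (detail, orderId) and the
# local test payload are illustrative assumptions.
import json

def handler(event, context):
    # The platform invokes this per event (queue message, HTTP request, etc.)
    # and charges only for the execution time consumed.
    order_id = event.get("detail", {}).get("orderId", "unknown")
    return {"statusCode": 200, "body": json.dumps({"processed": order_id})}

if __name__ == "__main__":
    # Local smoke test with a fabricated event.
    print(handler({"detail": {"orderId": "A-123"}}, None))
```

Notice there is nothing here about servers, capacity or scaling; that is precisely the appeal, and also the source of the lock-in worry discussed next.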

Yet this move is not without controversy; one market observer noted that “serverless is one of the worst forms of proprietary lock-in that we’ve ever seen in the history of humanity” — perhaps rather strong words, but they illustrate the force of change sweeping through the software world.

All of this helps make technology smarter, more connected and of greater value to the users.

When we talk about invisible accounting taking advantage of artificial intelligence, machine learning and natural language processing, it is the innovation happening in software architecture and application programming that makes it all possible.

January 15, 2018  8:42 AM

Chicken & waffles for data scientists, hello DataOps

Adrian Bridgwater

It had to happen.

DevOps consolidated the developer and operations functions into one ‘workplace culture’ and so came together the previously separately plated chicken & waffles of the software engineering world into a new (apparently harmonious) union.

Some even talk of DevSecOps, with security as the special sauce… such has been the apparent industry acceptance of the DevOps term.

But this is the age of data analytics, big data, data governance, data compliance, data deduplication, log data, machine data (Ed – we get the point) and so on… shouldn’t data itself get a departmentally unifying portmanteau all of its own?

The answer is yes, it should… and when you add operational excellence to data you (obviously) get DataOps.

DataOps is here

Data convergence player MapR has come forward with its DataOps Governance Framework.

The ‘framework’ (well, company initiative with custom-tuned software services relating to the core MapR stack) will integrate the MapR Converged Data Platform with selected partner technologies.

It aims to help companies meet compliance requirements for data governance beyond traditional big data environments such as Apache Hadoop.

According to a MapR press statement, this technology is tailored for organisational data transformation and data lineage requirements — further, it focuses on data quality and integrity to help meet obligatory compliance, including data privacy requirements.  

“By providing a comprehensive open approach to data governance, organisations can operate a DataOps-first methodology where teams of data scientists, developers and other data-focused roles can train machine learning models and deploy them to production. DataOps development environments foster agile, cross-functional collaboration and fast time-to-value that benefits the entire enterprise,” said Mitesh Shah, senior technologist at MapR.  

Shah finishes by claiming that the MapR DataOps Governance Framework lets organisations extend required data lineage and other governance needs across clouds, on-premises and to the edge with all data in any application – even those considered outside of the big data realm.


January 10, 2018  10:48 AM

Chef: The DevOps tools arms race is over, time for application-centrism

Adrian Bridgwater

The Computer Weekly Developer Network asked, is 2018 the year of the DevOps backlash?

Well, it was only meant to be an open question, a hypothesising supposition, a point of informed speculation that might lead to an informal pub discussion at best.

But, as is often the way with these things, the industry has taken it as a clarion call for commentary and deeper analysis… and who are we to turn down the opportunity for deeper inspection of the DevOps state of the nation?

It’s time to hear from Chef, the DevOps-focused automation infrastructure specialist. 

Mainstream monetisation

Mandi Walls, technical community manager for EMEA at Chef Software, thinks that DevOps itself is approaching the point of mainstream monetisation. So much so, she suggests, that we are at the point of seeing the monetisation of [DevOps] buzzwords and certifications, which happens when a fashionable technology or culture gains speed.

But the road ahead is not just DevOps simples.

“There are still industries which are dependent on technology yet haven’t embraced it as a primary strategy for growth and improvement, such as sectors heavily reliant on outsourcing for software development, who may consider it to be a back office function. These kinds of businesses are not yet perceiving or applying many of the most valuable changes possible through DevOps,” argues Walls.

DevOps tools arms race

Walls describes the current state of the ‘DevOps tools arms race’ and says that (in purer, cleaner, arguably better times) we were more focused on technological tooling and workflow – to deliver customer and user benefits, as a way to boost the bottom line.

She says that while this new era helps companies deploy a wider range of features and fixes for customers, it doesn’t account for what tools and methods the development and operations teams use. So, improving the staff experience is a massive benefit they’re missing out on.

“Overall, we’ve seen technology take an increasingly important role in everything from banking and healthcare to education and construction, while lagging industries include insurance and utilities – partly because they’re heavily regulated environments where the constraints have bred an ecosystem of specialised practitioners. These industries may eventually move en masse once their baseline concerns have been satisfied by specific, custom solutions,” said Chef’s Walls.

Also keen to throw opinions into this discussion is Chef VP of marketing Mark Holmes.

Infrastructure to app-centrism

Holmes contends that the current growth in containerisation is creating a new era of what he calls ‘application-centrism’.

“Backed by the general shift to the cloud and the increasing distribution and composition of applications, [the current growth in containerisation] means that there is a gradual shift from ‘infrastructure-centrism’ — where the unit of value is a server and the unit of work is a configuration — to ‘application-centrism’, where the unit of value is a service and the unit of work is a deployment. This modal shift also requires automation at scale, though with different jobs to be done,” said Holmes.

Chef Software’s wider position states that DevOps tools need to match these new modes, so that we can do the new things the right way, vs trying to force the old way.

The company says that this current change is constant (and so will be long term) and that the rise of serverless, or ‘service-centrism’, will again adjust the mode and requirements for automation.


January 8, 2018  9:31 AM

Qualys: How to be cool with DevOps

Adrian Bridgwater

The Computer Weekly Developer Network asked, is 2018 the year of the DevOps backlash?

Well, it was only meant to be an open question, a hypothesising supposition, a point of informed speculation that might lead to an informal pub discussion at best.

But, as is often the way with these things, the industry has taken it as a clarion call for commentary and deeper analysis… and who are we to turn down the opportunity for deeper inspection of the DevOps state of the nation?

First among a small group of spokespeople invited to deconstruct modern DevOps is Chris Carlson in his role as vice president of product management at cloud security, compliance and information services company Qualys.

Carlson writes as follows…

All processes and technologies have a hype cycle, growing pains and detractors even as they become mainstream and commonplace – DevOps is no exception.

DevOps drives much more value than software development methodologies like Agile, because DevOps extends beyond and spans much more than just the development function in any given organisation.

DevOps extends very much into operations, of course it does [that’s why it’s called DevOps], but also and just as importantly, DevOps extends its influence into business strategy, competitive strategy, financial strategy, product management, release planning, customer service, even employee recruitment and retention.

As with any (new) process utilising (new) technologies in a different fashion, the ‘cultural transformation’ is as important as the tool or process transformation.

Being cool with DevOps

All stakeholders, constituents and consumers need to have bought into the approach, process, tool usage, metrics, monitoring, feedback and continual improvement. Doing DevOps because it’s new or cool will create more failure cases, which don’t necessarily prove that DevOps is not successful or valuable.

Successful DevOps – and DevSecOps – implementations aren’t only driven top-down by executives; they can also be driven bottom-up by practitioners to create incremental and continual improvements in existing development and operational processes. Even top-down initiatives need a successful cultural transformation for successful DevOps/DevSecOps projects.

While an organisation can have a successful initial DevOps project without cultural transformation, the likelihood that the success and benefits are sustained becomes much lower.

Tactics, strategy, objectives

The metrics of successful DevOps projects are more tangible if they are driven by [intelligent tactics that lead to well composed] business strategy and objectives.

Development teams becoming more efficient in a vacuum might save some costs within that one department, but successful business initiatives fulfilled by a successful DevOps implementation drive revenue and market share increases for the organisation as a whole.

Cyber issues (information risk management) become even more important in DevOps than improvements in isolated, standalone development methodologies. If a development organisation goes Agile but still uses waterfall methods to build, package and release its business applications, there are still checkpoints for IT security to assess, evaluate and approve new code prior to deployment to production.

DevOps accelerates the release of applications into production that can completely bypass IT security assessments.

This is where DevSecOps becomes even more important – not as a way to slow down DevOps to force in or bolt on security – but rather a way to seamlessly build security into the fabric of the DevOps people, process and tools.

Dude, be cool, this is DevOps. Image: Qualys


January 3, 2018  7:14 PM

Blockchain for developers, where do we start?

Adrian Bridgwater

The Computer Weekly Developer Network wants to know what’s next for software engineers, that much you already know.

In that regard then, we want to know how developer-programmers should be thinking about blockchain technologies in relation to the builds they are currently working on and the projects they will take on in the near future.

Why? Because blockchain is widely lauded as one of the key driving factors influencing tech in 2018 and beyond.

For those that need a reminder, blockchain is a type of distributed ledger for maintaining a permanent and tamper-proof (we often use the term ‘immutable’) record of transactional data.

A blockchain functions as a decentralised database that is managed by computers belonging to a peer-to-peer (P2P) network. Each of the computers in the distributed network maintains a copy of the ledger to prevent a single point of failure (SPOF) and all copies are updated and validated simultaneously.
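As a toy illustration of that tamper-evidence, the minimal Python sketch below links blocks by hashing each block’s contents together with its predecessor’s hash; it deliberately omits the consensus and peer-to-peer replication that a real blockchain adds on top.

```python
# Toy sketch of the hash-linking that makes a ledger tamper-evident.
# Consensus and P2P replication are omitted entirely.
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Build a block whose hash commits to its contents and its predecessor."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block("genesis", "0" * 64)
block1 = make_block({"from": "alice", "to": "bob", "amount": 5}, genesis["hash"])

# Rewriting genesis would change its hash and break the link held in block1,
# which every peer's copy of the chain would detect on validation.
print(block1["prev_hash"] == genesis["hash"])  # True
```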

Blockchain for developers

With countries like the United Arab Emirates working to migrate their entire public sector records bases to blockchain over the next decade (or much sooner), what should software application development professionals be cognisant of in relation to this fast-growing technology standard?

Sandy Carielli is security technologies director at Entrust Datacard, a company specialising in trusted identity and secure transaction technologies.

Carielli points out that today, in 2018, we are at the stage where dozens of possible blockchain applications are being thrown against the wall… and we’re not yet sure what will stick.

“When you’re building a blockchain application, the first step is to ensure there is a clear understanding of how blockchain adds value to the application. Like PKI 25 years ago, blockchain is a hyped technology that investors and technologists apply to almost every problem that they see,” said Carielli.

She insists that it’s important for developers to take a step back and ask themselves how solving their problem with blockchain makes things better.

The big blockchain developer question

The question to ask is: does ‘solving the problem with blockchain’ actually introduce new complications that didn’t exist before?

“Additionally, developers must consider how to address some of the risks and limitations that blockchain introduces. It’s well understood that blockchain has a scalability problem: when every node has a replica of the entire blockchain, it starts to get unwieldy at a higher scale. Scalability is a focus of many researchers and start-ups, but until they solve the problem, developers still have an application to build. In order to have a useful and cost-effective app, they must assess the amount of scalability their application requires, and reconcile it with blockchain’s current scalability limitations,” said Carielli.

She adds another consideration for developers — they should have a disaster recovery plan in case things go wrong.

Carielli reminds us that blockchain is notoriously inflexible, so developers must be able to make tough decisions when problems arise.

“For example, back in November, Parity wallet owners found themselves locked out of their ETH wallets due to a code flaw that was accidentally triggered. Parity now finds itself in discussions with its user community on the possibility of a hard fork to recover those funds. If developers’ applications include the use of blockchain for cryptocurrency (or anything else of value), they need to consider all worst-case scenarios up front and develop a policy for how to handle them should the need arise,” said Carielli.

The guiding comment to go away with from Carielli and the Entrust Datacard team is that developers building blockchain applications should think about sharing their contingency plans with the user community and user base — this way everyone (in theory) knows what they are signing up for.

Image: Entrust Datacard


January 2, 2018  9:21 AM

Is 2018 the year of the DevOps backlash?

Adrian Bridgwater

The DevOps honeymoon is over – well, it could be… and here’s one reason why.

As we know by now, DevOps is a portmanteau term used to describe the notion of a more connected, cyclical, integrated and holistically-aware way of working between Developers (the Dev in DevOps, obviously) and the Operations team, which could encompass sysadmins, DBAs, configuration specialists, testers and other key supporting operational staff.

DevOps origins

The term itself [arguably] arose not because developers decided they needed to get more friendly with operations (that would never really happen anyway), but because the software industry saw a tier into which it could feed new tools that would attempt to connect the Dev function to the Ops function and produce a more polished, more cost effective, more functional, more robust, less flaky end product.

But DevOps (as a term) has been around for a decade now… popular science seems to agree that the term was coined in 2008, so what happens next?

Sources are murmuring on this topic and some suggest that a DevOps backlash is imminent – but why?

Backlash clouds form

The reason DevOps itself could implode is because of DevOps, that is – in order to embrace DevOps, developers need to use DevOps… but hang on, that’s not quite as tautological as it sounds.

In order to benefit from the Continuous Delivery (CD) dream that DevOps promises, software application development professionals need to use a) their core development platform and environment of choice and b) DevOps tools.

That’s development tools, plus also DevOps tools, just in case you weren’t counting.

What developers would like to use is a more singular integrated toolset that removes the frustrations they feel when they have to change spanners several times to complete an entire development life cycle.

Nails in the DevOps coffin

Could this multi-tooling issue be one of the signs that signals the death of DevOps?

Some of the so-called ‘digital transformation’ [yawn!] projects that we heard so much about in 2017 and before will now logically start to fail — they have to, not everything can work — so will that add another nail in the DevOps coffin? Will we hear people say that not even DevOps can carry you into digitally transformed bliss?

Perhaps a new breed of more competent ‘full stack’ developers will rise up that can handle operations functions and this too will dampen the DevOps furore?

There was DevOps before DevOps anyway i.e. elements of IBM Rational tooling were tackling the issue of ‘code being thrown over the wall’ before the turn of the millennium.

The honeymoon might be over, or, at least, some serious marriage guidance counselling might be needed this year.

Free image: Wikipedia


December 19, 2017  6:56 AM

Okta and others host ‘Iterate’ developer event

Adrian Bridgwater

Has the role of ‘identity developer’ now been formalised?

Perhaps not; identity and user authentication controls will probably fall at the feet of specialist security developers and systems architects seeking to place a higher-level lockdown on software applications in production.

Regardless of this truism or suggestion, cloud identity and authentication specialist Okta is hosting a dedicated developer conference named Iterate to augment its core Oktane 2018 event, which will again be held in Las Vegas.

Over and above Okta’s core expertise in identity, the event itself will focus on a wide spectrum of interrelated technologies.

Scheduled for February 27, 2018 as a one-day event, Iterate backs up Okta’s core identity developer story, which the Computer Weekly Developer Network covered earlier this year here.

Iterate is a joint effort between Okta, Twilio, the JS Foundation, Atlassian and Algolia.

“Our new developer conference is named Iterate. [The event] is split across two tracks: Build and Evolve. In Build, we’ll explore the ever-changing field of technical best practices (backend, security, front-end, etc.) and in Evolve we’ll talk developer culture: how to automate and improve your tooling, improve your productivity, stay passionate, etc,” said Okta developer Randall Degges – @rdegges.

Okta insists that Iterate is not a vendor conference – that is, there will be no vendor talks and Iterate isn’t about promoting the work being carried out at Okta.


December 18, 2017  9:42 AM

WiFi will kill mobile, developers to focus on ‘no voice’ apps

Adrian Bridgwater

Gavin Wheeldon, CEO of WiFi analytics platform Purple, has joined futurist Mike Ryan in an assertive prediction: by 2025, a surge in data use stimulated by unlimited mobile contract plans and the roll-out of 5G will push the networks to breaking point.

The duo predict that cellular networks, which already depend on WiFi to cope with increasing data consumption, may fail to hold up under the pressure of unlimited plans and 5G.

The upshot, for software application developers working to produce mobile apps (if the predictions ring true) could be that the connectivity and data consumption aspect of building apps is less of an issue.

There is also a ‘no voice’ driver for developers, but we’ll get to that in a moment.

Building for mobile is typically a space where developers have to accommodate smaller screen sizes, battery life considerations, lighter processing power and a defined degree of data Input/Output (I/O) connectivity due to the nature of the device. In a world of WiFi only (or at least WiFi first), although the core I/O capability of the device and the size of its data pipe remain the same, the flow of data is always there and is more accessible.

The developer’s app, therefore, can always eat more data this way.

Mobile’s dependency on WiFi

“Mobile networks rely on being able to offload data onto WiFi,” says Wheeldon, “More traffic was offloaded from cellular networks on to WiFi than remained on them in 2016. As we enter the Zettabyte era, with annual global IP traffic expected to reach 3.3 ZB per year by 2021, it will be increasingly necessary for WiFi to take more of the strain.”

People in urban areas are already using WiFi on their mobile devices even when they think they are using 4G. In the next few years, wearable devices, such as digital contact lenses, could replace mobile phones and data usage will increase.

“WiFi, or an evolution of the current technology with unlimited speed, will be the go-to-choice for data delivery, potentially removing 4G and 5G networks altogether by 2025,” said Ryan.

Purple says that the number of UK venues using its guest WiFi service increased year-on-year (2016-2017) by 36%, while the number of unique users logging in to the WiFi at these venues is up by 57%.

“We are at a tipping point,” says Wheeldon, “If growth of WiFi continues at this rate, people in urban areas will soon be able to use fast, free WiFi exclusively for Internet access and data downloads.”

Purple has found that there are already almost 12 million commercial and community WiFi hotspots in the UK, with the majority offering free access. With mobile habits changing, it begs the question: could WiFi serve all our connectivity needs?

Developers, focus on ‘no voice’ apps

A report by Deloitte in 2016 found that 31% of smartphone users make no voice calls in a given week – compared to 4% in 2012. Similarly, following a peak in 2011, the use of texts in the UK has plummeted by almost half.

Calls and texts have been overtaken by instant messaging and calling through apps like FaceTime and WhatsApp.

Purple expects WiFi to outlive mobile networks — the implications (if this is true) for developers and the applications they are building could be interesting. The takeaway here is (arguably) to focus on so-called ‘no voice’ apps and prepare for a big WiFi data pipe to serve the mobile app of the future.


December 15, 2017  8:28 AM

YugaByte: 7 core IoT developer skills

Adrian Bridgwater

YugaByte is a newly established company that sets out to deliver what it describes as a turnkey, distributed, consistent and highly available database delivering data access with cache-level performance.

The core YugaByte database offering (logically called YugaByte DB) aims to reduce the learning curve associated with big-brand, well-known databases by combining the best of the SQL and NoSQL paradigms in one unified platform.

In essence, YugaByte says it is purpose built for agility inside cloud-native infrastructures — the firm’s founders have suggested that this product represents the new breed of ‘distributed’ systems.

Recently emerged from stealth mode [as in corporate launch, not as in video game], YugaByte is co-founded by ex-Facebook engineers Kannan Muthukkaruppan, Karthik Ranganathan and Mikhail Bautin.

7 core IoT developer skills

Providing the Computer Weekly Developer Network with some insight into its views on software application development for the Internet of Things (a key potential use case for YugaByte, claims the company), the co-founders have suggested 7 core IoT developer skills that programmers need to embrace if they choose to work in the IoT space.

Muthukkaruppan, Ranganathan and Bautin write from this point onwards…

1 – Data Collection:

Typically, data agents are deployed on various devices which can preprocess the raw data if necessary. These agents then send the data to a well-known endpoint (which is a load-balancer) using a persistent queue. These persistent queues, with the store and forward functionality, are often implemented using an “emitter” component of a messaging bus solution such as Apache Kafka.
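A minimal sketch of such an edge “emitter” using the kafka-python package might look as follows; the broker address, topic name and reading fields are all illustrative assumptions.

```python
# Sketch of an edge agent acting as the "emitter": it preprocesses a raw
# reading and publishes it to Kafka, which provides the persistent,
# store-and-forward queue. Broker, topic and fields are illustrative.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker.example.com:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

raw = {"device_id": "sensor-42", "temp_c": 21.7}
if raw["temp_c"] is not None:           # trivial preprocessing at the edge
    producer.send("iot-readings", raw)  # queued durably and forwarded
producer.flush()
```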

2 – Data Ingestion:

The data received by the load-balancer is sent to the “receiver” component of the messaging bus, again with Apache Kafka being a popular choice. Very often, these massive streams of data coming from the edge are written to a database for persistence and sent to real-time data processing pipelines.
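Continuing the illustrative kafka-python setup above, a sketch of the receiver side might look like this; persist() and process() are hypothetical stand-ins for the database write and the real-time pipeline hand-off.

```python
# Sketch of the "receiver" side: consume the edge stream, then hand each
# record to persistence and to a real-time pipeline. persist() and
# process() are hypothetical stand-ins for those downstream tiers.
import json
from kafka import KafkaConsumer

def persist(reading):
    print("stored:", reading)      # stand-in for the database write

def process(reading):
    print("processed:", reading)   # stand-in for the analytics hand-off

consumer = KafkaConsumer(
    "iot-readings",
    bootstrap_servers="broker.example.com:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for record in consumer:
    persist(record.value)
    process(record.value)
```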

3 – Data Processing & Analytics:

The data processing and analytics stage derives useful information from the raw data stream. The data processing may range from simple aggregations to machine learning. Examples of applications these data processors may power include recommendation systems, user personalisation, fraud alerts, etc. Common choices for tools here include Apache Spark and TensorFlow.
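At the simple-aggregation end of that spectrum, a minimal PySpark sketch might compute an average reading per device; the column names and sample rows are illustrative.

```python
# Minimal PySpark sketch of a simple aggregation over a batch of readings:
# average temperature per device. Columns and sample data are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("iot-aggregate").getOrCreate()

readings = spark.createDataFrame(
    [("sensor-42", 21.7), ("sensor-42", 22.1), ("sensor-7", 19.4)],
    ["device_id", "temp_c"],
)

# Derive a per-device summary from the raw stream of readings.
readings.groupBy("device_id").agg(F.avg("temp_c").alias("avg_temp_c")).show()
```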

4 – Data Storage:

A transanalytic (hybrid transaction/analytical) database is needed to store data in serveable form as well as for deriving business intelligence from the collected data. The database needs to be efficient at storing large amounts of data over many servers, and highly elastic to meet the growing demands of the data sets. The database must be capable of powering user-facing low-latency requests, web-applications/dashboards, etc. while simultaneously being well-integrated with real-time analytics tools (such as Apache Spark). Databases such as YugaByte DB and Apache Cassandra are good choices for this tier.
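As a sketch of a write to this tier, the DataStax Python driver below speaks CQL, which also applies to Cassandra-compatible interfaces such as YugaByte DB’s YCQL; the host, keyspace and table are illustrative assumptions.

```python
# Sketch of a low-latency write to the storage tier with the DataStax
# Python driver; the same CQL works against Cassandra-compatible
# interfaces such as YugaByte DB's YCQL. Host/keyspace/table illustrative.
from cassandra.cluster import Cluster

cluster = Cluster(["db.example.com"])
session = cluster.connect("iot")

# User-facing writes land in a table that analytics tools (e.g. Spark
# connectors) can also read, satisfying the "transanalytic" requirement.
session.execute(
    "INSERT INTO readings (device_id, ts, temp_c) "
    "VALUES (%s, toTimestamp(now()), %s)",
    ("sensor-42", 21.7),
)
```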

5 – Data Visualisation:

Mobile and web applications need to be built to power the end-user applications, such as a performance indicator dashboard or a customised music playlist for a logged-in user. Frameworks such as Node.js or Spring Boot, along with WebSockets, jQuery and bootstrap.js, are some popular options here.

6 – Data Lifecycle Management:

Some use cases need to retain historical data forever and hence need to automatically tier older data to a cheaper store. Others need an easy, intent-based way to expire older data, such as specifying a Time-To-Live (aka TTL). And last but not least, for business-critical data sets, it is essential to have data protection/replication for disaster recovery and compliance requirements. The database tier should be capable of supporting these. YugaByte DB is a good option for some of these requirements.
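That intent-based expiry maps directly onto CQL’s TTL clause; the sketch below continues the illustrative schema from the storage example, writing a row that expires automatically after 30 days with no application-side cleanup job.

```python
# Intent-based expiry via CQL's TTL clause: the row below disappears
# automatically after 30 days. Continues the illustrative host, keyspace
# and table from the storage sketch above.
from cassandra.cluster import Cluster

session = Cluster(["db.example.com"]).connect("iot")
THIRTY_DAYS = 30 * 24 * 60 * 60  # CQL TTL is specified in seconds

session.execute(
    "INSERT INTO readings (device_id, ts, temp_c) "
    "VALUES (%s, toTimestamp(now()), %s) USING TTL %s",
    ("sensor-42", 21.7, THIRTY_DAYS),
)
```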

7 – Data Infrastructure Management:

The number of deployed devices and the ingest rate can vary rapidly, requiring the data processing tier and the database to scale out (or shrink) reliably and efficiently. Orchestration systems such as Kubernetes and Mesos are great choices for automating deployment, management and scaling up and down of infrastructure as a function of business growth.


December 15, 2017  7:33 AM

Stream feed platform APIs reflect Features-as-a-Service trend

Adrian Bridgwater

Stream is an activity feed platform for developers and product owners – it is used by programmers to build newsfeeds (think: how your Twitter or Instagram feed populates, or how YouTube recommends videos to watch).

Stream 2.0 is built on Google’s Go programming language (Python is still used to power the machine learning for Stream’s personalised feeds).

Stream offers an alternative to building feed functionality from scratch, by simplifying implementation and maintenance.

Features-as-a-Service

The new APIs currently coming out of Stream have been built to reflect the trend for so-called Features-as-a-Service, that is – delegating the integration and maintenance of common application functionality to a service, often delivered via APIs.

“We’re helping our customers [developers] focus on what makes their app unique instead of wasting dev cycles reinventing feed technology. Our platform improvements allow us to continue enhancing our feed technology, specifically around performance and machine learning,” said Thierry Schellenbach, Stream CEO and co-founder.

With the announcement of Stream 2.0 also comes complete multi-region support. Developers can now select from four geographical locations from which to run their feed functionality on Stream’s API: US East; Ireland; Tokyo; Singapore.

This enables developers to optimise for network latency by mapping their usage to the region closest to their users.
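As a hedged sketch of how that region selection surfaces in code, the stream-python client accepts a location parameter at connection time; the key, secret, feed group and activity payload below are placeholders.

```python
# Sketch of region pinning with the stream-python client; key, secret,
# feed group and activity payload are placeholder assumptions.
import stream

# The location parameter routes API traffic to the chosen region
# (e.g. "us-east"), keeping it close to the app's users.
client = stream.connect("YOUR_API_KEY", "YOUR_API_SECRET", location="us-east")

feed = client.feed("user", "42")
feed.add_activity({"actor": "user:42", "verb": "post", "object": "photo:1"})
```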


