CW Developer Network


April 16, 2019  7:17 AM

Deeper into DataOps: NetApp & PagerDuty

Adrian Bridgwater

The Computer Weekly Developer Network decided to cover the emerging area of DataOps… and we were overloaded.

After an initial couple of stories to define the term here and another exploring DataOps further here, we embarked upon a full feature for Computer Weekly itself.

After all that, the industry still wanted to comment more, so this post includes some of the additional commentary that we think is valuable and worth voicing.

What is DataOps?

As a reminder, DataOps is a close relative of data science where we use machine learning to build more cohesive software applications — DataOps concentrates on the creation & curation of a central data hub, repository and management zone designed to collect, collate and then onwardly distribute application data.

The concept here hinges around the proposition that an almost metadata type level of application data analytics can be more widely propagated and democratised across an entire organisation’s IT stack. Then, subsequently, more sophisticated layers of analytics can be brought to bear such as built-for-purpose analytics engines designed to help track application performance in an altogether more granular fashion.

NetApp UK

Grant Caley is chief technologist for UK and Ireland at hybrid cloud data services and data management company NetApp UK.

Caley suggests that we’re seeing a continued blurring of the lines between job functions, which has been driven by significant developments in technology… and by the rising value of data.

“DataOps addresses the unique challenges of enterprise data workflows – with one being Software Defined Data Centre strategies. As organisations and employees at all levels have become more digitally proficient, the birth of DataOps derived from the growth in disruptive technologies. This has meant the need for closer collaboration from job roles including software developers, architects and security and data governance professionals in order to evolve the people and process paradigm,” said Caley.

He further states that accelerating DataOps will require the use of cloud as well as on-premises IT.

“To deliver and protect data across this essentially hybrid landscape, organisations will need to develop a data fabric to ensure enterprise hybrid cloud data management,” he added.

PagerDuty

George Miranda is DevOps advocate at PagerDuty, a provider of digital operations management technologies.

Miranda says that similar to DevOps, the goal of DataOps is to accelerate time to value where a “throw it over the wall” approach existed previously. For DataOps, that means setting up a data pipeline where you continuously feed data into one side and churn that into useful results (models, views, etc) on the other.

“This is essentially the same concept used by developers continuously delivering new features to production. The keys in both of these models are reproducibility and automation. Properly validating every new development before it goes into the hands of users requires a lot of stringent analysis and governance,” said Miranda.

He continues, “The myth of DevOps is that teams simply don’t have to meet the same governance requirements as teams operating in more traditional models. But we’ve seen from years of data that this simply isn’t true. What development teams have learned to do is codify those stringent requirements into automated tests that are automatically applied every time a new development is submitted.”

Miranda concludes by saying that similarly, when it comes to managing data, continuous testing must be applied to any new data intended for use by your users.

He thinks that making that process easier for developers can mean using containers, which offer a simple way to apply consistent tests all the way from local development on a workstation through to making that data available in production.
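
A minimal sketch of what that continuous data testing might look like in practice (the column names and rules below are illustrative assumptions, not anything PagerDuty prescribes) is a small validation script that runs identically on a developer's workstation, in a CI pipeline or inside a container:

```python
# Hypothetical data validation gate: the columns and rules are invented
# for illustration; the point is that the same checks run everywhere.
import pandas as pd

def validate_batch(df: pd.DataFrame) -> list:
    """Return a list of failure messages; an empty list means the batch passes."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
    if df["user_id"].isnull().any():
        failures.append("user_id contains nulls")
    if not df["event_time"].is_monotonic_increasing:
        failures.append("event_time is not ordered")
    if (df["amount"] < 0).any():
        failures.append("amount contains negative values")
    return failures

if __name__ == "__main__":
    batch = pd.read_csv("incoming_batch.csv", parse_dates=["event_time"])
    problems = validate_batch(batch)
    if problems:
        raise SystemExit("Rejecting batch: " + "; ".join(problems))
    print("Batch passed validation")
```

Because the same script can be baked into a container image, the tests a developer runs locally are, by construction, the tests that gate data on its way to production.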

April 15, 2019  7:11 AM

Deeper into DataOps: Morpheus & Talend

Adrian Bridgwater

The Computer Weekly Developer Network decided to cover the emerging area of DataOps… and we were overloaded.

After an initial couple of stories to define the term here and another exploring DataOps further here, we embarked upon a full feature for Computer Weekly itself.

After all that, the industry still wanted to comment more, so this post includes some of the additional commentary that we think is valuable and worth voicing.

What is DataOps?

As a reminder, DataOps is a close relative of data science where we use machine learning to build more cohesive software applications — DataOps concentrates on the creation & curation of a central data hub, repository and management zone designed to collect, collate and then onwardly distribute application data.

The concept here hinges around the proposition that an almost metadata type level of application data analytics can be more widely propagated and democratised across an entire organisation’s IT stack. Then, subsequently, more sophisticated layers of analytics can be brought to bear such as built-for-purpose analytics engines designed to help track application performance in an altogether more granular fashion.

Morpheus Data

Brad Parks is VP of business development at Morpheus Data, a unified Ops orchestration tool company.

Parks says that the requirements of a developer requesting a new web application environment are not that different from those of a data scientist requesting a new database.

He thinks this because the flow of work is similar and, in both cases, the elimination of workflow bottlenecks is a key consideration.

“In a DataOps context, enabling the rapid creation and destruction of environments for the collection, modeling and curation of data requires automation and must acknowledge that just like developers, data scientists are not infrastructure admins. The right automation and orchestration platform can enable DataOps self-service, whereby data scientists can request a data set, stand up the environment to utilise that data set… and then tear-down that environment without ever having to talk to IT Ops,” said Parks.

At the same time, he suggests that the Ops side of DataOps should ensure that proper data governance and data protection policies are in place to manage security and risk.

This is the core use case that EUMETSAT had for its meteorological data when it selected Morpheus as its next-generation cloud automation platform.
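
To make that request-use-teardown loop concrete, here is a deliberately generic sketch written against a hypothetical orchestration API; the client class, endpoints and parameters are invented for illustration and are not the Morpheus API itself.

```python
# Hypothetical self-service workflow: OrchestrationClient and its endpoints
# are invented for illustration; this is not the Morpheus API.
import requests

class OrchestrationClient:
    def __init__(self, base_url: str, token: str):
        self.base_url = base_url
        self.headers = {"Authorization": f"Bearer {token}"}

    def provision(self, blueprint: str, dataset: str) -> str:
        """Request an environment from a governed blueprint and attach a dataset."""
        resp = requests.post(
            f"{self.base_url}/environments",
            json={"blueprint": blueprint, "dataset": dataset},
            headers=self.headers,
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["environment_id"]

    def teardown(self, environment_id: str) -> None:
        """Destroy the environment once the analysis is finished."""
        requests.delete(
            f"{self.base_url}/environments/{environment_id}",
            headers=self.headers,
            timeout=30,
        ).raise_for_status()

# Usage: a data scientist stands up a sandbox, works against it, then tears
# it down, without ever filing a ticket with IT Ops.
client = OrchestrationClient("https://orchestrator.example.com/api", token="...")
env_id = client.provision(blueprint="postgres-sandbox", dataset="weather-2019")
# ... run models against the environment ...
client.teardown(env_id)
```

The governance point Parks raises sits in the blueprint: because data scientists can only provision from pre-approved templates, the Ops side keeps its data protection policies in force without sitting in the request path.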

Talend

Thibaut Gourdel is technical product manager at data integration tools company Talend.

“The race to the cloud among enterprise companies has been putting pressure on DevOps teams for some time now… and DataOps is a variant of this, but much more. Multi-cloud raises that pressure to a whole new level and the emergence of DataOps is demanding more from developers managing data infrastructure,” said Gourdel.

Gourdel suggests that, as a new approach, DataOps is driven by the advent of Machine Learning (ML) and Artificial Intelligence (AI) specifically. He says that the growing complexity of data and the rising need for data governance and ownership are huge drivers in the emergence of DataOps.

“[In the context of DataOps], it is important to have the right people (engineers, IT staff, scientists etc.) to get the most value from technologies like ML and AI, but also to have these people responsible for the data,” said Gourdel.

Coming next as our final comment here, we hear from NetApp and PagerDuty.

 


April 12, 2019  9:40 AM

Deeper into DataOps: Altran, Moogsoft & Puppet

Adrian Bridgwater

The Computer Weekly Developer Network decided to cover the emerging area of DataOps… and we were overloaded.

After an initial couple of stories to define the term here and another exploring DataOps further here, we embarked upon a full feature for Computer Weekly itself.

After all that, the industry still wanted to comment more, so this post includes some of the additional commentary that we think is valuable and worth voicing.

What is DataOps?

As a reminder, DataOps is a close relative of data science where we use machine learning to build more cohesive software applications — DataOps concentrates on the creation & curation of a central data hub, repository and management zone designed to collect, collate and then onwardly distribute application data.

The concept here hinges around the proposition that an almost metadata type level of application data analytics can be more widely propagated and democratised across an entire organisation’s IT stack. Then, subsequently, more sophisticated layers of analytics can be brought to bear such as built-for-purpose analytics engines designed to help track application performance in an altogether more granular fashion.

Altran

“The fact is, DevOps enables developers and operation teams to be efficient in managing the software lifecycle by using automation – this is valuable for DataOps. This is done using models (expressed as configuration or code), that are maintained in source code systems,” said Jitendra Thethi, AVP of Technology at Altran.

Thethi says that patterns of such implementations can be seen as Pipeline as Code, Infrastructure as Code, Deployment Playbooks or Automated Test Suites — and that this is why and how data scientists and data managers can use DevOps practices.

“They need to do this by moving to model-driven approaches for data governance, data ingestion and data analysis so that it can be managed by version control systems and enforced by an automated database system. Placing them into containers provides an environment where it is easy to test and deploy these models. For example, it can be launched over container cluster infrastructure when in production,” said Thethi.
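
As a purely illustrative take on that ‘model as code’ idea (the table and field names are assumptions made up for this sketch, not Altran’s own tooling), a data ingestion rule can live in source control as plain code and be enforced by an automated test on every change:

```python
# Illustrative 'model as code': the ingestion schema lives in source control
# and an automated test enforces it; field names are invented for the example.
INGESTION_MODEL = {
    "table": "sensor_readings",
    "fields": {
        "sensor_id": "string",
        "reading": "float",
        "captured_at": "timestamp",
    },
    "required": ["sensor_id", "captured_at"],
}

def check_record(record: dict, model: dict = INGESTION_MODEL) -> bool:
    """Reject any record missing a required field declared in the model."""
    return all(field in record and record[field] is not None
               for field in model["required"])

def test_model_rejects_incomplete_record():
    assert not check_record({"reading": 21.5})

def test_model_accepts_complete_record():
    assert check_record({"sensor_id": "s-17", "reading": 21.5,
                         "captured_at": "2019-04-12T09:40:00Z"})
```

Run under a test runner such as pytest inside a container, the same checks apply whether a data engineer is working locally or the pipeline is deploying to production.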

Moogsoft

Will Cappelli, CTO EMEA and global VP of product strategy at Moogsoft, argues that the crux of this topic is less a question of DevOps processes generating varying data sets at sufficient velocity to speed the model learning process than it is a question of DevOps teams and data scientists learning how to work together more effectively.

“DevOps professionals are all too often impatient. They don’t want to wait for the results of a rigorous analysis, whether that analysis is carried out by humans or by algorithms. Of course, data scientists can be overly fastidious – particularly those coming from maths as opposed to a computer science background. The truth is, though, that DevOps needs the results of data science delivered rapidly but effectively, so both communities need to overcome some of their bad habits. Perhaps it is time for an agile take on data science itself,” said Cappelli.

Puppet

Nigel Kersten is VP of ecosystem engineering at Puppet. Kersten says that while DataOps is more than just DevOps applied to data, they do share the same methodology around agile processes, automated pipelines, automated testing and lifecycle optimisation.

“What I’m most heartened by however is seeing the DataOps movement focus on the people in addition to processes and tools, as this is more critical than ever in a world of automated data collection and analysis at a massive scale. If we don’t focus on people, as well as ensuring that we include a diverse range of perspectives and examine our own biases before we go all out encoding them in the form of algorithms and then weaponising them using automation, we’re going to end up amplifying and ossifying some of the very worst aspects of human society,” said Kersten.

Coming next, we hear from Morpheus and Talend… and then finally from NetApp and PagerDuty.

 


April 11, 2019  11:37 AM

Pluralsight on course for more Google Cloud courses, of course

Adrian Bridgwater

You can love or hate Google Cloud.

Some users have adopted the full suite of online apps, use a Gmail address as their mail client of choice and even sign up as regular contributors to Google Maps.

Others find Google’s ‘ownership’ of a user’s estate a little overbearing… and so only use the bare minimum.

But love or hate ‘em, you can’t avoid Google Cloud.

Enterprise technology skills platform company Pluralsight knows which side its bread is buttered and has this month announced a new partnership with Google Cloud to provide companies with an on-demand and role-based approach to skills development on Google Cloud technologies.

Already partnering to reskill developers in India and Africa, the companies are now joining forces to increase Google Cloud expertise worldwide.

“Google Cloud’s suite of products and services are critical to companies across all industries around the world,” said Nate Walkingshaw, chief experience officer at Pluralsight.

Pluralsight will now grow its Google Cloud course collection by adding some 50 or so Google Cloud-authored courses to its education library.

Walkingshaw has explained that Pluralsight and Google Cloud plan to create Google Cloud Role IQs to provide technology pros with a way to measure their expertise level across the skills they need to be successful in their roles.

Through analytics, CIOs and CTOs will then be able to see their teams’ technical abilities on Google Cloud technologies to ensure they have the right people on the right projects.

“Our new partnership with Pluralsight will provide enterprises with a role-based, measurable approach to skill development. This approach goes one step beyond providing training, and delivers insight into individuals’ expertise levels, letting companies know where they are excelling and where more professional development is required,” said Jason Martin, VP, professional services, training and support, Google Cloud.

According to an Indeed survey, Google Cloud is the skill that is seeing the fastest-growing demand in job listings with a 66.74% increase over the past year.

 

 


April 4, 2019  2:24 PM

Tanium: IT needs to get ‘don’t mess with Texas’ serious on critical updates

Adrian Bridgwater

They do say… if it’s not broke, don’t fix it.

But that mentality doesn’t really cut it in the ‘let’s make everything perfect’ world of the software application developer, right?

How frustrating is this reality when the IT bosses fail to carry through with implementing critical updates, leaving developers’ applications at risk of corruption or attack by some form of malware?

Answer: very.

Endpoint visibility and control platform company Tanium thinks that chief information officers (CIOs) have held back from implementing critical measures that keep them resilient against disruption and cyber threats.

Tanium makes this assertion on the back of some recent research.

The company says that over eight out of ten (84%) respondents said that they have refrained from adopting an important security update or patch due to concerns about the impact it might have on business.

Which research study was this? We’re glad you asked.

The Global Resilience Gap study, which surveyed 500 CIOs and CISOs across the United States, United Kingdom, Germany, France and Japan in companies of 1,000+ employees, explores the challenges and trade-offs that IT operations and security leaders face in protecting their business from a growing number of cyber threats and disruptions.

What really is an endpoint?

The study discovered that a lack of visibility across endpoints (laptops, servers, virtual machines, containers or cloud infrastructure) is preventing organisations from making confident decisions, operating efficiently and remaining resilient against disruptions.

Why is all this happening then?

Tanium says that it’s because too many departments work in silos, leaving them with a lack of visibility and control over IT operations.

Matt Ellard, managing director for EMEA at Tanium, says that IT chiefs today must maintain compliance with an evolving set of regulatory standards — and, at the same time — track and secure sensitive data across computing devices while they also manage a dynamic inventory of physical and cloud-based assets.

“But, in fragmented environments, where organisations use a range of point products for IT security and operations, there are regular compromises taking place among these priorities,” said Ellard.

Ellard says that as organisations look to build a strong compliance and security culture, it is essential that IT operations and security teams unite around a common set of actionable data for true visibility and control over all of their computing devices.

Keepin’ the lights on 

When asked about the key reasons for making compromises, 35% of the IT people questioned cited pressure to keep the lights on, with almost a third (31%) suggesting that being hamstrung by legacy IT commitments restricted their security efforts.

Additionally, nearly a third (30%) said that a focus on implementing new systems takes precedence over protecting existing business assets, and over a quarter (28%) stressed that inconsistent and incomplete datasets were a key driver.

The end result of all this?

Tanium says that IT needs to get ‘don’t mess with Texas’ serious on critical updates. Yee-haw partner.


April 1, 2019  11:35 AM

Finding the cure for application downtime

Adrian Bridgwater

Being able to present a fully operational and bug-free app experience is, obviously, just as important as having a website for any business, be it a small coffee shop or a fledgling e-commerce site selling speciality equipment.

While design and UI are not necessarily the most difficult part of app development, what developers oftentimes have to contend with is building an app that performs as expected, every single time it is opened.

Security patches, regular updates and adding features to apps are also necessary, considering how much the ‘design language’ of both iOS and Android in the mobile space changes year after year.

This is a guest post for the Computer Weekly Developer Network written by Jordan Piepkow, the original founder of AppDoctor.

Piepkow writes as follows…

Why test an app after release?

This comes as a no-brainer to many, but there are developers out there who seem not to see the point of testing an app after release.

Firstly let’s talk variety: there are but two major phone operating systems out in the market — Android and iOS.

Consider developing an app for the Google Play Store: there are strict rules set up by Google on content, on how the app works within the Android ecosystem and on how the app interacts with the user’s phone.

An app is not simply done once these rules are followed — there are numerous OEMs that use Android, it being open source.

Apart from the Pixel devices, few others offer the ‘pure Android’ experience; most have their own versions of the OS acting as a skin over Android. All major phone makers, including Samsung, OnePlus and Huawei, have skins that each work differently.

Apps need to keep up with all these devices and at times, developers might not have the means to test their apps on every single product out there.

Complexity conundrums

Next let’s talk complexity: mobile apps do more than serve as banners for websites or as a list of products for sale. They are complex and have a lot of built-in functionality that developers have to deal with.

There are apps that manage products, interlinked with wallets, maps, email and other sensitive user info that cannot be lost or leaked. With added complexity comes the need for regular testing, not just before launch, but also when live and in use.

Let’s also consider geography: once an app hits the store, it is out there for the whole world to see and possibly use.

There is an argument to be made that one of the reasons why apps are made in the first place is to expand the reach of the product or service sold. Herein lies the problem.

Different regions of the world use and manage data, offer internet connectivity, and even regulate apps in their own way, so being prepared for such eventualities is an absolute requirement.

Testing matters

Here comes the need for regular testing.

Testing an app that is already live and in the market is not something that is simple or even cost-effective.

Finding the cure for application downtime is not impossible; there are cures… but we need to take our medicine if we really want to stay healthy.

Looking at AppDoctor’s own tools, its automated API Testing allows developers to ‘smoke-test’ an API after release. It models real-world scenarios for an app to run, making sure that specific transactions on any endpoint are always available. Automated status pages provide a place for developers to look into the availability of their API while they monitor multiple endpoints. Request Proxy, on the other hand, offers developers searchable data from which they can gain insights and create dashboards. It is also possible to search for specific requests using this tool.
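
As a rough sketch of what such a post-release smoke test can look like in plain code (the endpoints and payloads are invented for illustration, and this is not AppDoctor’s own tooling), the idea is simply to replay a realistic transaction against a live API and fail loudly if any step breaks:

```python
# Hypothetical post-release smoke test; the endpoint paths and expected
# payload fields are invented for illustration.
import requests

def smoke_test_checkout(base_url: str) -> None:
    """Model a real-world transaction: fetch a product, then price a basket."""
    product = requests.get(f"{base_url}/products/sku-001", timeout=10)
    assert product.status_code == 200, "product endpoint unavailable"

    quote = requests.post(
        f"{base_url}/basket/price",
        json={"items": [{"sku": "sku-001", "qty": 2}]},
        timeout=10,
    )
    assert quote.status_code == 200, "pricing endpoint unavailable"
    assert "total" in quote.json(), "pricing response missing total"

if __name__ == "__main__":
    smoke_test_checkout("https://api.example.com")
    print("Smoke test passed")
```

Scheduled to run every few minutes against production, a script like this is what feeds the availability figures a status page reports.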

 

 

 


March 24, 2019  12:07 PM

Moogsoft CTO Cappelli: hats off to understanding AI understanding

Adrian Bridgwater

The Computer Weekly Developer Network team spoke this month to Will Cappelli in his role as CTO for EMEA and VP of product strategy at Moogsoft on the subject of just how far we need to understand Artificial Intelligence’s ability to understand, reason and infer on the world around it. 

Moogsoft AIOps is an AI platform for IT operations that aims to reduce the ‘noise’ developers and operations teams experience in day to day workload management — the technology aims to detect incidents earlier and fix problems.

Cappelli writes as follows…

Opinion makers and, more unforgivably, academics tend to confuse two very different ideas about the source of AI’s lack of transparency [and how it understands what it understands].

Any complex algorithm (including the ones that run your accounts payable systems, for example) is difficult to understand — not because of any inherent complexity, but because it is made up of hundreds and thousands of simple components put together in simple ways.

The human mind balks at the so-called complexity only because it cannot keep track of so many things at once. AI systems, like most software systems, have this kind of complexity. But some AI systems have another kind of complexity.

Neural obsessions

The market is currently obsessed with deep learning networks i.e. multi-layer neural networks — a 1980s vintage technology that does a great job recognising cat faces on YouTube.
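
To make the bare mechanics concrete (a sketch of the standard feed-forward formulation, not anything specific to Moogsoft’s platform), each layer of such a network computes nothing more exotic than a simple function of the previous layer’s output:

$$h^{(0)} = x, \qquad h^{(\ell)} = \sigma\left(W^{(\ell)} h^{(\ell-1)} + b^{(\ell)}\right) \ \text{for } \ell = 1,\dots,L, \qquad \hat{y} = h^{(L)}$$

where each weight matrix $W^{(\ell)}$ and bias $b^{(\ell)}$ is learned from data and $\sigma$ is an elementary non-linearity such as a sigmoid or ReLU. Every individual step is trivial; what resists mathematical explanation, as Cappelli notes below, is why composing many such steps works as well as it does.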

Neural networks are notable because no one has yet been able to figure out, from a mathematics perspective, what makes them work so well.

There are hints here and there but, at the end of the day, there is no way of mathematically showing how neural networks arrive at the results they arrive at.

This is not a question of limited human powers of memory and concentration. This is a question of a seeming lack of basic mathematical structure from which one can infer the effectiveness of neural network mechanics.

Beyond cat-face recognition

Since many, at the moment, identify AI with deep learning, they slide from a genuine lack of intelligibility attributable to a very specific way of doing AI to more general statements about AI transparency.

Now, everything should be made clearer to users but, at the end of the day, how many people who drive cars or use mobile phones can actually tell you how they work? Is it really that important?

I do think that the mathematical lack of intelligibility of neural networks IS a long term issue – not the fact that users don’t understand their behaviours – and do harbour the suspicion that their effectiveness beyond cat-face recognition has been way over-stated.

Moogsoft’s Cappelli: let’s get beyond cat-face recognition.

 


March 22, 2019  9:15 AM

Infor tunes into (and visualises) high-fidelity self-orchestrated data

Adrian Bridgwater

Infor builds business cloud software, this much we know.

But this is software that is ‘specialised by industry’… a term used to convey its tuning to specific industry use cases, with a particular focus on digital supply chain networks.

The firm this month rebranded its GT Nexus supply chain network as, more simply, Infor Nexus.

A move, perhaps, which suggests that Infor is looking to play up its wider data intelligence capabilities and make supply chains execute in a way that is more customer-centric.

Infor Nexus combines the firm’s GT Nexus, IoT and Coleman AI products into one bright and shiny package that is supposed to help firms create an autonomous supply chain.

Key features include the fact that Infor Nexus combines AI, IoT and advanced visualisation through an end-to-end collaborative network.

What is advanced visualisation?

Infor (above) talks about so-called ‘advanced visualisation’ on the road to being able to provide real-time visibility and predictive intelligence… but what is advanced about advanced visualisation?

As detailed nicely here on Dataversity, “Advanced data visualization refers to a sophisticated technique, typically beyond that of traditional Business Intelligence, that uses the autonomous or semi-autonomous examination of data or content to discover deeper insights, make predictions, or generate recommendations. Advanced data visualization displays data through interactive data visualization, multiple dimension views, animation and auto focus.”

Infor explains that its Nexus software connects companies’ enterprise systems, network partners and IoT devices in a single-instance, multi-enterprise business network platform.

“The new Infor Nexus brand culminates the past three years we’ve spent transforming GT Nexus – leveraging digital technologies such as IoT, in-memory processing, mobile, advanced visualisation and AI,” said Rod Johnson, EVP of manufacturing & supply chain at Infor. “Today, we’re delivering a next-generation supply chain network that is real-time, intelligent and self-orchestrating. Our customers are empowered to optimise service levels, costs and inventory through a digital environment that is hyperconnected and data-driven, with a path to the autonomous supply chain.”

The Infor Nexus re-brand comes on the heels of the recent launch of Infor Control Center, which aims to provide a high-fidelity picture of global supply chain flows.

The company insists that its Coleman AI function enables customers to predict potential issues, identify opportunities to act, and place their focus on situations projected to have the greatest impact on business.

So you thought data just had to be worried about its Velocity, Volume, Value, Variety and Veracity… now we need high-fidelity self-orchestrated AI-enriched advanced visualisation data… at least that’s the card Infor is playing here.

 

 


March 19, 2019  11:16 AM

Ribbon ties developers to connected comms on AT&T API marketplace

Adrian Bridgwater

US telecoms stalwart AT&T is expanding its Application Programming Interface (API) marketplace connectivity points.

The AT&T API Marketplace (it’s an online store zone, not a real market, obviously) hosts pre-packaged software code for developers to embed into communications-related websites and applications.

Built on Ribbon Communications’ Kandy platform, the AT&T API Marketplace offers turnkey applications and self-service APIs for developers to create custom applications.

Enterprise software application developers will be able to add or upgrade services to their websites, such as click-to-connect voice, video and text as well as 2-factor authentication, conferencing, virtual directories and contact centre functions.

API selections

Self-service APIs are available for customers who prefer to integrate these communications capabilities into their own environment — but AT&T also offers support for more customised integrations.

These communication APIs give businesses the tools they need to provide potential customers and existing users the customer service capabilities they desire.

AT&T Business chief product officer Roman Pacewicz says that it’s all about creating the kind of omnichannel communications environment that contemporary businesses now need to operate with.

Pacewicz explains that business customers can use AT&T network integration services to design customer solutions — and that this includes developer-for-hire services to assist customers in building tailored software applications.

Sweet as Ribbon Kandy

According to Ribbon, “Kandy is a cloud-based, real-time communications software development platform, built from Ribbon’s communications, presence and security software. The solution framework includes APIs, Software Development Kits (SDKs) and pre-built applications (Kandy Wrappers) to improve customer engagement with click to call, visual Interactive Voice Response (IVR) etc.”

For the uninitiated, Visual Interactive Voice Response is conceptually similar to traditional voice-driven IVR. Visual IVR uses web applications to instantly create an app-like experience for users on smartphones during contact centre interactions, without the need to download any app.

Ribbon further explains that Kandy is offered under an outcome-based, pay-as-you-grow business model to help with the implementation of multi-channel communications environments.

The Kandy APIs and SDKs allow developers to integrate real-time communications into their applications. For example, with Kandy, programmers can integrate calling capabilities into CRM (or any other business process/application) using an enterprise’s phone lines, numbering and voice features and preserving the existing investment in communications infrastructure.

Connected click-to-chat

As an example use case of AT&T on Ribbon Kandy, a business could use an app to embed a click-to-chat button on their website to give customers an easier way to connect with support representatives.

As another example… a customised app could also help banks improve customer data security and reduce fraud by enabling the automatic 2-factor authentication of users via text messages.
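
As a rough illustration of that second example (not the Kandy SDK itself, whose calls are not reproduced here), 2-factor authentication by text message boils down to generating a short-lived one-time code, sending it over SMS and checking it on the way back; the send_sms helper below is a stand-in for whichever messaging API a developer actually wires in.

```python
# Hypothetical 2FA-by-SMS flow; send_sms() is a placeholder for a real
# messaging API, not the Kandy SDK itself.
import secrets
import time

PENDING = {}  # phone number -> (code, expiry timestamp)

def send_sms(phone: str, message: str) -> None:
    """Placeholder: hand the message to whatever SMS service is in use."""
    print(f"SMS to {phone}: {message}")

def start_verification(phone: str, ttl_seconds: int = 300) -> None:
    code = f"{secrets.randbelow(1_000_000):06d}"   # six-digit one-time code
    PENDING[phone] = (code, time.time() + ttl_seconds)
    send_sms(phone, f"Your verification code is {code}")

def check_verification(phone: str, code: str) -> bool:
    expected, expiry = PENDING.get(phone, (None, 0))
    return expected == code and time.time() < expiry

# Usage
start_verification("+447700900123")
# ...the user types in the code they received...
print(check_verification("+447700900123", "000000"))  # False unless it matches
```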

AT&T Business says it is now looking to 5G and the company thinks that, as 5G technology becomes more widespread, it will work to position its marketplace in a way that helps businesses create video-intensive apps via next-generation wireless technology.


March 15, 2019  9:49 AM

Progress offers low-code-for-pros with Kinvey

Adrian Bridgwater

Progress puts forward another (arguably weighty) hand in the low-code game this week.

The Boston, MA-based application development firm has upped the spec on its Progress Kinvey platform with extra features.

The Progress Kinvey platform includes Kinvey Studio: a low-code visual development tool for building mobile, web and chat-based apps — and, the difference is, it doesn’t sacrifice full developer control over the application and code.

Round-trip code

Kinvey offers what Progress calls ‘round-trip’ code generation, so developers can move in and out of the low-code environment seamlessly.

It works with existing source control, testing and other dev processes and offers access to all application source code for inspection and alteration.

So this is perhaps a new form of low-code – one that’s focused on the professional developer – with productivity capabilities that speed the app development process, but that still enables the delivery of differentiated multi-channel experiences that run natively.

Handcuffs off

Progress says that while traditional low-code offerings are effective for simple apps, they often fall short for more complex apps as they ‘handcuff developers’ to a proprietary framework that may be ineffective or could stifle innovation.

Based on JavaScript, the Progress Kinvey platform enables developers to build app experiences across a variety of channels – native iOS and Android, web, chat and others.

It provides a serverless backend for operational efficiency, full control of the application code and end-to-end data management capabilities.

“Traditional low-code platforms are adequate for helping companies rapidly roll out many tactical apps. However, they lack the ability to deliver high scale, truly engaging, consumer-grade experiences that offer consistency across channels,” said Dmitri Tcherevik, CTO, Progress.

“At the other end of the spectrum, DIY development projects deliver high-touch experiences, but are highly inefficient, slow to market and costly to maintain. IT teams need to focus on innovation, not infrastructure. Our focus on high productivity for professional developers changes all of that by bringing a solution to market that offers the best of both worlds, low-code development that results in flexible, scalable, omnichannel apps,” added Tcherevik.

Also included here is Kinvey Chat: an Artificial Intelligence (AI)-driven technology for creating and deploying guided task chatbots that integrate with existing enterprise and legacy systems.

Finally here, Kinvey Data Pipeline provides end-to-end data and authentication management from device to source enterprise systems, engineered to protect, virtualise and synchronise data and authentication layers from disparate enterprise systems.

