CW Developer Network


December 15, 2017  8:28 AM

YugaByte: 7 core IoT developer skills

Adrian Bridgwater

YugaByte is a newly established company that sets out to deliver what it describes as a turnkey, distributed, consistent and highly available database, delivering data access with cache-level performance.

The core YugaByte database offering (logically called YugaByte DB) aims to reduce the learning curve associated with the big-brand, well-known databases by combining the best of the SQL and NoSQL paradigms in one unified platform.

In essence, YugaByte says it is purpose-built for agility inside cloud-native infrastructures — the firm’s founders have suggested that this product represents a new breed of ‘distributed’ system.

Recently emerged from stealth mode [as in corporate launch, not as in video game], YugaByte is co-founded by ex-Facebook engineers Kannan Muthukkaruppan, Karthik Ranganathan and Mikhail Bautin.

7 core IoT developer skills

Providing the Computer Weekly Developer Network with some insight into its views on software application development for the Internet of Things (a key potential use case for YugaByte, claims the company), the co-founders have suggested 7 core IoT developer skills that programmers need to embrace if they choose to work in the IoT space.

Muthukkaruppan, Ranganathan and Bautin write from this point onwards…

1 – Data Collection:

Typically, data agents are deployed on various devices and can preprocess the raw data if necessary. These agents then send the data to a well-known endpoint (which is typically a load-balancer) using a persistent queue. These persistent queues, with their store-and-forward functionality, are often implemented using the “emitter” (producer) component of a messaging bus solution such as Apache Kafka.
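As a purely illustrative sketch of that pattern (the broker address, topic name and reading format are assumptions, not YugaByte recommendations), a device-side agent publishing to Kafka with the kafka-python client might look like this:

```python
# Minimal device-side agent: preprocess a reading, then publish it to a
# well-known Kafka endpoint with store-and-forward semantics.
import json
import time

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="kafka-lb.example.com:9092",   # hypothetical load-balanced endpoint
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    acks="all",    # wait until the brokers have persisted the record
    retries=5,     # ride out transient network failures from the edge
)

reading = {"device_id": "sensor-42", "temp_c": 21.7, "ts": time.time()}
producer.send("iot-readings", value=reading)
producer.flush()  # block until the queued record has actually been delivered
```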

2 – Data Ingestion:

The data received by the load-balancer is sent to the “receiver” (consumer) component of the messaging bus, with Apache Kafka again being a popular choice. Very often, these massive streams of data coming from the edge are written to a database for persistence and sent on to real-time data processing pipelines.
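Again purely as a sketch (the topic, consumer group and the save_to_database() helper are hypothetical), the receiving side of that pipeline could be a Kafka consumer that persists each record before acknowledging it:

```python
# Minimal ingestion worker: consume the device stream and hand each record
# to a persistence layer before committing the offset.
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "iot-readings",
    bootstrap_servers="kafka-lb.example.com:9092",
    group_id="ingest-workers",            # add more workers to scale ingestion out
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    enable_auto_commit=False,             # only commit once the write has succeeded
)

def save_to_database(record):
    """Hypothetical stand-in for the write into the persistent store (see step 4)."""
    print(record)

for message in consumer:
    save_to_database(message.value)
    consumer.commit()  # acknowledge the record now that it is safely persisted
```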

3 – Data Processing & Analytics:

The data processing and analytics stage derives useful information from the raw data stream. The processing may range from simple aggregations to machine learning. Examples of applications these data processors may power include recommendation systems, user personalisation and fraud alerting. Common tool choices here include Apache Spark and TensorFlow.
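To make the aggregation end of that spectrum concrete, here is a minimal sketch of a Spark Structured Streaming job computing per-device averages over one-minute windows; the broker, topic and record schema are assumptions carried over from the earlier sketches:

```python
# Minimal streaming aggregation: read the Kafka topic, parse the JSON readings
# and average temperature per device over one-minute windows.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("iot-analytics").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("temp_c", DoubleType()),
    StructField("ts", DoubleType()),          # epoch seconds from the agent
])

readings = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka-lb.example.com:9092")
    .option("subscribe", "iot-readings")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("r"))
    .select("r.*")
    .withColumn("ts", F.col("ts").cast("timestamp"))
)

averages = (
    readings.withWatermark("ts", "2 minutes")
    .groupBy(F.window("ts", "1 minute"), "device_id")
    .agg(F.avg("temp_c").alias("avg_temp_c"))
)

# Write to the console for illustration; a real job would feed the serving store.
averages.writeStream.outputMode("update").format("console").start().awaitTermination()
```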

4 – Data Storage:

A transanalytic (hybrid transactional/analytical) database is needed to store data in a servable form as well as for deriving business intelligence from the collected data. The database needs to be efficient at storing large amounts of data across many servers and highly elastic to meet the growing demands of the data sets. It must be capable of powering user-facing low-latency requests, web applications and dashboards, while simultaneously being well integrated with real-time analytics tools (such as Apache Spark). Databases such as YugaByte DB and Apache Cassandra are good choices for this tier.
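As a small sketch of what that serving tier can look like, the DataStax Python driver can create and populate a per-device time series; the same CQL also runs against YugaByte DB’s Cassandra-compatible YCQL API. Host, keyspace and table names are illustrative:

```python
# Minimal serving-store write: a per-device time series keyed for low-latency reads.
from datetime import datetime, timezone

from cassandra.cluster import Cluster  # pip install cassandra-driver

cluster = Cluster(["db.example.com"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS iot
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS iot.readings (
        device_id text,
        ts timestamp,
        temp_c double,
        PRIMARY KEY (device_id, ts)
    ) WITH CLUSTERING ORDER BY (ts DESC)
""")

insert = session.prepare(
    "INSERT INTO iot.readings (device_id, ts, temp_c) VALUES (?, ?, ?)"
)
session.execute(insert, ("sensor-42", datetime.now(timezone.utc), 21.7))
```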

5 – Data Visualisation:

Mobile and web applications need to be built to power end-user experiences such as a performance indicator dashboard or a customised music playlist for a logged-in user. Frameworks such as Node.js or Spring Boot, along with WebSockets, jQuery and Bootstrap, are popular options here.

6 – Data Lifecycle Management:

Some use cases need to retain historical data forever and hence need to automatically tier older data to cheaper storage. Others need an easy, intent-based way to expire older data, such as specifying a Time-To-Live (TTL). Last but not least, for business-critical data sets it is essential to have data protection and replication for disaster recovery and compliance requirements. The database tier should be capable of supporting all of these; YugaByte DB is a good option for some of these requirements.
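The TTL route in particular is a one-liner in CQL (supported by both Apache Cassandra and YugaByte DB’s YCQL); the table and values below are the illustrative ones from the earlier sketch:

```python
# Intent-based expiry: let the database delete old raw readings automatically.
from datetime import datetime, timezone

from cassandra.cluster import Cluster

session = Cluster(["db.example.com"]).connect("iot")

# Expire this individual reading after 30 days (TTL is expressed in seconds).
session.execute(
    "INSERT INTO readings (device_id, ts, temp_c) VALUES (%s, %s, %s) USING TTL 2592000",
    ("sensor-42", datetime.now(timezone.utc), 21.7),
)

# Or set a table-wide default so every row expires without per-insert bookkeeping.
session.execute("ALTER TABLE readings WITH default_time_to_live = 2592000")
```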

7 – Data Infrastructure Management:

The number of deployed devices and the ingest rate can vary rapidly, requiring the data processing tier and the database to scale out (or shrink) reliably and efficiently. Orchestration systems such as Kubernetes and Mesos are great choices for automating deployment, management and the scaling of infrastructure up and down as a function of business growth.
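For a flavour of what that automation looks like, the official Kubernetes Python client can resize an ingestion tier in a few lines; the deployment name and namespace are hypothetical:

```python
# Minimal programmatic scale-out of an ingestion Deployment as the ingest rate grows.
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()   # use config.load_incluster_config() when running in a pod
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="ingest-workers",
    namespace="iot",
    body={"spec": {"replicas": 8}},   # shrink again later by patching a lower count
)
```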

December 15, 2017  7:33 AM

Stream feed platform APIs reflect Features-as-a-Service trend

Adrian Bridgwater

Stream is an activity feed platform for developers and product owners – it is used by programmers to build newsfeeds (think: how your Twitter or Instagram feed populates, or how YouTube recommends videos to watch).

Stream 2.0 is built on Google’s Go programming language (Python is still used to power the machine learning for Stream’s personalised feeds).

Stream offers an alternative to building feed functionality from scratch, by simplifying implementation and maintenance.

Features-as-a-Service

The new APIs currently coming out of Stream reflect the trend for so-called Features-as-a-Service, that is, common application functionality that is integrated and maintained as a service, often delivered via APIs.

“We’re helping our customers [developers] focus on what makes their app unique instead of wasting dev cycles reinventing feed technology. Our platform improvements allow us to continue enhancing our feed technology, specifically around performance and machine learning,” said Thierry Schellenbach, Stream CEO and co-founder.

With the announcement of Stream 2.0 also comes complete multi-region support. Developers can now select from four geographical regions in which to run their feed functionality on Stream’s API: US East, Ireland, Tokyo and Singapore.

This enables developers to optimise for network latency by mapping their usage to the region closest to their users.
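As a purely illustrative sketch of how a developer consumes feeds as a service rather than building them (the key, secret, region string and activity fields are placeholders), Stream’s Python client boils the job down to a few calls:

```python
# Minimal feed usage with the stream-python client: connect to a chosen region,
# write an activity into a user's feed and read the timeline back.
import stream  # pip install stream-python

client = stream.connect("YOUR_API_KEY", "YOUR_API_SECRET", location="us-east")

jack = client.feed("user", "jack")
jack.add_activity({"actor": "jack", "verb": "post", "object": "picture:42"})

# Fetch the latest activities, e.g. to render a newsfeed in the application.
latest = jack.get(limit=10)["results"]
```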


December 8, 2017  9:16 AM

A low-code User eXperience ‘design language’

Adrian Bridgwater

Mendix Atlas UI is a low-code tool intended for software application developers who find themselves bereft of any perceptible level of ‘front-end’ development or UI design skills.

Atlas can be used to define a standardised design language (to promote design best practices) across multiple autonomous development teams — this can help ensure that all apps conform to the organisation’s own design principles.

Magical analyst house Gartner has estimated that there may be as few as one User eXperience (UX) designer for every 17 developers, a survey result that Mendix is keen to highlight.

Layered componentised UI

Mendix Atlas is described as a layered componentised UI framework with a web-based modelling tool featuring ready-to-use UI elements.

UIs are constructed through a combination of navigation layouts, page templates, building blocks (pre-configured assemblies of UI components) and widgets. Atlas assembles widgets in pre-configured layouts with optimal proportions, spacing and individual design properties.

The resulting app’s visual style automatically reflects a selected visual theme – either one provided by Mendix or a custom theme created with the Theme Customiser.

Enterprise design language

According to Mendix, “All design elements can be packaged as enterprise design language modules and distributed via a private App Store, ensuring that the organisation’s UI and design best practices are automatically employed across all Mendix development teams.”

The Mendix Web Modeler is meant for developers (and power users) wanting to materialise their app ideas using Atlas design elements.

An integrated Toolbox enables discovery and drag-and-drop consumption of reusable building blocks and widgets. In addition, the Web Modeler bi-directionally syncs with the Desktop Modeler so developers can enhance prototypes with more complex logic and integrations, turning them into production applications.


December 7, 2017  12:42 PM

TeamViewer 13: lucky for some, who need remote support

Adrian Bridgwater

Digital networking and collaboration software company TeamViewer has reached the version 13 iteration of its central product.

In terms of basic function, TeamViewer is a software product designed to support the need for remote control, desktop sharing, online meetings, web conferencing and file transfer between computers.

With slightly more colour, the firm also describes itself as a player in the IoT, connectivity, monitoring, support and collaboration markets.

Instant Connect box

The software application development team behind TeamViewer 13 has highlighted the new Instant Connect box at the top right corner of the new TeamViewer client, which lets support staff establish a remote connection from wherever they are working within the client.

The product also has a new Recent Connections list, which is supposed to help users resume recently closed connections more quickly. This, insists the company, is “particularly helpful” when a support job is put on hold and needs to be picked up again at a later point in time.

“Once more we took a step back to look through the eyes of our users. The result is TeamViewer 13, the best TeamViewer version ever,” said Kornelius Brunner, chief innovation officer at TeamViewer. “TeamViewer 13 is a gateway to tap into the future of remote support. It is a statement that reflects the market’s and our users’ needs alike.”

A new Essential Asset Management function allows users to resolve certain issues without having to connect to a device.

According to TeamViewer, this feature provides “crucial information” about associated devices – such as device name, operating system and hardware specifications.

Essential Asset Management can be accessed via the web-based TeamViewer Management Console.

The extended device dashboard in TeamViewer is supposed to help users get a head start when resolving issues. This latest extension provides information that may be residing deep in a system – such as the BIOS version, disk health, battery state or the current uptime of a device. The extended dashboard also features a shortcut to open the event log.

Emergency patch, fixed

Other recent news has seen TeamViewer issue an emergency bug patch to fix an issue which could potentially open a gateway for malicious hackers to gain control of another user’s PC during desktop sessions.

As reported here by Charlie Osborne, “The vulnerability first came to light when Reddit user xpl0yt told other Redditors to ‘be careful’ after discovering the security flaw. The user linked to a Proof-of-Concept (PoC) example of an injectable C++ DLL which takes advantage of the bug to change TeamViewer permissions.”

Other core news emanating from TeamViewer sees the company note its implementation of Hardware Accelerated Scaling as a performance boost. This new feature uses the rendering power of CPU and GPU to allow for faster remote connections, reduced reaction time and a reduced load on the CPU.

“TeamViewer’s new Identity and Access Management feature enables centralised control of TeamViewer accounts in organisational groups. The feature provides an additional authentication factor and requires an identity provider that supports SAML 2.0. After the feature activation in the web-based management console, administrators activate or deactivate a TeamViewer account via the identity provider,” said the company, in a press statement.

As many organisations have a need to document what has and has not been done during a remote session, TeamViewer 13 will allow for enforced session recordings. This feature requires a corresponding policy to be set up in the web-based management console.

Upon activation, supporters will be recorded during remote control sessions and have no means of opting out. The session will then be saved and can be used to clarify questions about what has been done during a remote control session.

Linux goodness

There’s open source goodness here too: TeamViewer runs on a range of platforms and operating systems. Upon the release of version 13, TeamViewer also announced a preview of a new native Linux client, to be available shortly.

TeamViewer’s Mobile Device Support also comes with iOS 11 screen sharing capabilities.

But probably the key differentiator here is that this software includes the ability to access and control IoT devices from anywhere in the world. With the IoT growing at speed, the need to build remote support elements (and functionality) into software engineering has (arguably) never been greater.


December 5, 2017  5:10 PM

Redgate SQL tool sniffs out ‘code smells’

Adrian Bridgwater
SQL

More and more developers are now writing SQL code as part of their roles – for example, research shows that 75% of developers now work in teams responsible for both the application and the database.

In circles where developers are less experienced at writing SQL code, this predicament has been argued to lead to potential performance, reliability and maintenance problems.

Code smells

In an attempt to rectify this issue, the latest version of Redgate Software’s SQL Prompt tool (version 9) automatically analyses code as it is written, detecting what have been called ‘code smells’ and alerting developers to known code issues and pitfalls as they type.

Code smells are usually not bugs—they are not technically incorrect and do not currently prevent the program from functioning. Instead, they indicate weaknesses in design that may be slowing down development or increasing the risk of bugs or failures in the future. Bad code smells can be an indicator of factors that contribute to technical debt.
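By way of a hypothetical illustration (not taken from Redgate’s own rule library), the snippet below shows the sort of smell a static analyser flags: the first query runs, but SELECT * and the function wrapped around the filtered column make it fragile and hard for the optimiser to index; the rewrite expresses the same intent cleanly. The SQL is held in Python strings purely for presentation:

```python
# A 'code smell': technically correct SQL that invites performance and maintenance pain.
smelly_query = """
    SELECT *
    FROM dbo.Orders
    WHERE YEAR(OrderDate) = 2017;   -- function on the column defeats index seeks
"""

# The same intent, written so an index on OrderDate can be used
# and so the column list is explicit and stable.
cleaner_query = """
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= '20170101' AND OrderDate < '20180101';
"""
```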

Not just a smell detector, Redgate claims that its tool will also offer instant solutions – and it also features IntelliSense-style code completion – hence the firm talks up (spins up, even) the productivity gain factor here.

SQL Prompt’s productivity spin (sorry, capabilities) also comes from its ability to ensure every team member follows the same best practice rules. Developer shops can choose which analysis rules they want to follow from the library included within SQL Prompt.

Other distractions

SQL Prompt is already a favoured plug-in for SQL Server Management Studio and Visual Studio because it autocompletes code and takes care of formatting, object renaming and other distractions.

“SQL Prompt has been regarded as the leading SQL coding productivity tool for years, and typically lets users code 50% faster. The latest feature makes it a learning tool as well because it will help a whole new generation of developers with suggestions and standards that can be implemented immediately,” said Jamie Wallis, Redgate product marketing manager.

Wallis adds a comment saying that the addition of static code analysis is a major step forward and makes it a tool that improves the quality of SQL coding as well as increasing the speed of that coding.


December 4, 2017  9:06 AM

Is low-code a cop out, or a leg-up to automation?

Adrian Bridgwater

The jury may currently be quite firmly “out” on whether software application developers are attracted to so-called low-code platforms.

Does a software engineer have to feel they are stepping away from their systems engineering expertise when adopting low-code?

Surely it would be too cruel to suggest that there is some kind of “cop out” in low-code development?

Part of the problem (it is often argued) is that a good proportion of low-code tools still require convoluted and complicated setup procedures. This problem is compounded by the need to install plug-ins and to make sure that configuration settings and scripts are all in order.

This responsibility takes some low-code toolsets (the debuggers in particular) outside of the realm of capabilities and skills possessed by your average user looking for a low-code option.

Further, we might also consider that low-code could be argued to be simply a route to an ever-decreasing circle of returns and an ever-increasing circle of technical debt — an opinion offered by coding aficionado Theo Priestley.

Age of automation

But low-code naysaying aside, shouldn’t these tools be working better by now?

In this age of [software] automation and best-practice playbooks, templates, reference architectures, established workflows and code automation, surely low-code tools should be regarded as a leg up towards code build and architecture efficiencies which can now be more quickly brought to bear upon a deployed piece of software – shouldn’t they?

Despite its name, OutSystems would insist that its tools are anything but a cop out — the firm this month comes forward with a new low-code visual debugger.

The software is designed to troubleshoot server-side and mobile code while running.

“Organisations [need] a low-code approach for building rich mobile experiences, but to expect [these same organisations] to resort to complex developer tools when it is time to debug [the software built to deliver these experiences], completely defeats the purpose of a low-code platform,” said Gonçalo Borrêga, head of product at OutSystems.

Borrêga says he sees developers modelling complex interactions and logic that runs on the device, taking advantage of native capabilities. By providing a seamless and visual debugging experience, whether the code is running on an iPhone, on Android or server-side, OutSystems says it ensures teams get the benefits of low-code throughout the entire development lifecycle.

Point of code complexity

Magical analyst firm Gartner has said that, increasingly, Mobile Application Development Platforms (MADPs) are adding support for wearables, chatbots, virtual personal assistants (VPAs) and conversational UI endpoints through the same services and APIs they create and orchestrate for mobile apps and web.

These capabilities enrich the experience for the user, but arguably create complexity for the developer.

OutSystems says that by providing a consistent low-code experience whether you are debugging server-side code or a complex mobile app with offline data synchronisation patterns and native device integration, OutSystems solves two major challenges:

  • First, the same low-code skillset can be used to create and troubleshoot any type of application, providing teams with more resourcing options for projects.
  • Second, with low-code, the knowledge transfer times are significantly decreased, reducing the risk of critical mobile initiatives.

Low-code is growing. The question for software engineers and computer scientists today is… how much low-code is in your own code — and would you debug your low-code (or indeed high-code) with more low-code tools as you trundle down the low-code road from node to node?


December 1, 2017  12:15 PM

What to expect from Alfresco DevCon 2018

Adrian Bridgwater
ECM

Hang on – a conference preview in 2017? Are we not done with events season already?

Well yes, developer conferences do actually happen in January – at least they do if you’re Alfresco Software.

Alfresco is an open source alternative that exists in the Enterprise Content Management (ECM) space. The company claims to have 3.5 million downloads and over 2,500 companies as customers around the world.

ECM label aside, Alfresco would rather describe itself as a firm that develops software for ECM as well as process automation, content management and information governance… if that wider definition serves to provide extra colour and context for you.

Alfresco DevCon 2018

Firmly targeting software application developers, Alfresco DevCon 2018 takes place in Lisbon, January 17-18, 2018 at the Museu Fundação Oriente. The event will offer sessions on Alfresco’s forthcoming Application Development Framework (ADF) 2.0.

“We couldn’t be more excited or proud to host the complementary DevCon 2018 in Lisbon,” said Alfresco VP of product management Thomas DeMeo. “We look forward to deep technical conversations, seeing all the exciting solutions from our developer community, providing a glimpse of the future direction and gathering feedback on how Alfresco can add value to their organisations.”

The Alfresco Application Development Framework is a set of reusable and customisable web components. It is intended to allow developers to build mobile-ready applications that interact with content and process services running on the Alfresco Digital Business Platform.

Hard core new features

What’s actually new in the software being presented is the File Viewer component. The viewer was one of the first components to be created, but in 2.0 it gets a complete rework. It has a new design, along with new ways to customise it, including the info drawer, toolbar and thumbnails.

To support the new File Viewer, Alfresco has added a new component based on the Info Drawer to show metadata from Content Services. This component can be externally configured to include/exclude metadata and offers inline editing of properties as well.

“We are embracing Angular CLI and in 2.0 the Yeoman Generators will give you three options: Create a Process + Content app, a Content app or a Process app. These apps will have a very small footprint and have the basic setup you need to get up and running with ADF and Angular,” said the company, in a communications statement.

Last but not least, Alfresco has made big improvements to its documentation. On GitHub, the firm now presents one Markdown file per component and an index page that gives clearer detail about what is available in the docs. Internally, the Markdown files now have a more consistent structure that should make them easier to read, write and maintain.

Developers can read extended feature set details here.


November 28, 2017  10:36 AM

Overcoming common roadblocks to ‘data vault’ development

Adrian Bridgwater
Development

This is a guest post for the Computer Weekly Developer Network written by Barry Devlin, founder and principal of 9Sight Consulting.

Devlin believes that data warehouse developers (Ed – is that actually a role and a ‘thing’ already?) have historically walked a narrow line between data quality and business agility.

At the same time, Devlin points out that they very clearly need to balance the needs and relationships of both IT itself and internal business clients [users, operators]. 

So what to do?

The argument put forward here is that technology has answered this dilemma with two separate approaches:

  • the data vault optimised for data warehouse agility and
  • data warehouse automation for faster and more reliable development.

Devlin writes as follows

Data vault modeling is designed for long-term historical storage of data from multiple operational systems, looking at data associated with auditing, tracing of data, loading speed and resilience. Data vault inventor, Dan Linstedt, first conceived this approach in the early 2000s.

Data vault modeling is now in its second generation.

While data vaults grow in popularity and expand with features like a new development methodology, developers who want to implement one continue to face numerous challenges. Here are a few of the most challenging obstacles and tips on how to overcome them.

Rift between IT & business

The age-old struggle between IT and business explicitly challenges data vault projects.

An overly engineering focused mindset in IT may alienate business interests. As the data warehouse staff concentrates on implementing a new data vault model, they could reduce their face-to-face time with the business, leading to poorer, less detailed, or delayed delivery of specific business solutions.

Delays can widen the rift between business and IT and prompt the business to look elsewhere for quick-fix solutions. An automated approach to data vault design, development, deployment and operation can both accelerate time to data vault delivery, as well as provide new abilities to iteratively collaborate with business users early in the project – increasing engagement, trust and success in delivering value to the business the first time.

Data sources

The first step in adhering to data vault principles is to understand the source systems, their structures, relationships and underlying data quality.

Although a time-consuming task, it is necessary to validate the model design and implementation approach. Automated discovery and data quality profiling will reduce the design time and population process.

Business and IT can collaborate in compressed time windows to iterate on model designs and validate with live data. This approach eliminates assumptions, enables the model to be validated before deployment and ensures the data warehouse can evolve at the pace needed by the business.

Set rules and follow them

The data vault model involves an extensive framework of rules and recommendations. A data vault’s data objects—from common hubs, links and satellites to the lesser-known point-in-time and bridge helper tables—must adhere to specific standards and definition rules to ensure data vault agility and ease of maintenance. When developers “re-invent” these structures, problems arise that demand reworking, both in the initial build and in ongoing operation.
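To make the hub/satellite distinction concrete, here is a hypothetical, minimal sketch of the two most common structures: the hub carries only the business key, while the satellite carries its descriptive, history-tracked attributes. Table and column names are invented for illustration, and the DDL is held in Python strings purely for presentation:

```python
# A hub stores the immutable business key plus load metadata.
HUB_CUSTOMER = """
    CREATE TABLE hub_customer (
        customer_hk    CHAR(32)     NOT NULL PRIMARY KEY,  -- hash of the business key
        customer_id    VARCHAR(50)  NOT NULL,              -- the business key itself
        load_dts       TIMESTAMP    NOT NULL,
        record_source  VARCHAR(50)  NOT NULL
    );
"""

# A satellite stores descriptive attributes, insert-only, so history is preserved.
SAT_CUSTOMER_DETAILS = """
    CREATE TABLE sat_customer_details (
        customer_hk    CHAR(32)     NOT NULL REFERENCES hub_customer (customer_hk),
        load_dts       TIMESTAMP    NOT NULL,
        name           VARCHAR(200),
        email          VARCHAR(200),
        hash_diff      CHAR(32)     NOT NULL,              -- detects attribute changes
        record_source  VARCHAR(50)  NOT NULL,
        PRIMARY KEY (customer_hk, load_dts)                -- full history, no updates
    );
"""
```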

Whether sharing tasks between diverse teams or onboarding new team members, ensuring sustained best practices requires strict design standards, documentation, error handling and auditing.

By eliminating the idiosyncrasies of each developer’s coding style, the generated code is consistent across the team and adheres to the same naming standards, resulting in ease of maintenance and future upgrades as well as quick on-boarding of new developers.

Automated culture of maintenance

Perhaps the most under-appreciated challenge for a data warehouse team is the ongoing operation, maintenance and upgrade of the environment.

Prepare for it now.

Data vaults, like data warehouses, require ongoing operations overhead to schedule, execute and monitor the data feeds—including handling failed jobs and restarts, while ensuring everything is processed in the correct order. Data vaults are also challenged by the added complexity of scheduling and managing numerous data and processing objects.

Manual approaches are inadequate to address this. A particular challenge is that in manual deployment the necessary logging and auditing capabilities are often sidelined when projects fall behind schedule.

Capabilities in automation software, such as integrated scheduling tools and automated logging and auditing capabilities, help IT teams to meet the complexity and continuous need for operational attention head on.

Non-stop, high-speed change

Businesses are inundated with change—constant, rapid and unpredictable change—sometimes even before the first data warehouse iteration is rolled out. A key driver of the data vault model and methodology is to ease the problems associated with such ongoing change.

At the level of practical implementation, response to change first requires the ability to carry out extensive and effective impact analysis. What tables and columns will be affected by changing this code? What are the unintended, downstream consequences? How can we reduce risk and simultaneously expedite necessary change?

Documentation is supposed to provide answers, but the reality is that manual approaches to development are seldom accompanied by complete, up-to-date documentation.

Beyond the productivity and standardisation gains associated with eliminating the vast majority of hand-coding required to deliver a data vault, documentation automation may be the most visible and impactful contribution to a project seen by IT teams. With code and documentation tied to metadata, change management can be automated and reduced to hassle-free review rather than decoding ancient programming. Such metadata-driven automation is key to keeping pace with the ever more rapidly changing business needs.

During the last decade and a half, businesses have been gradually adopting the data vault model as a new foundation for their data warehouses. Its design and approach has been instrumental in successfully addressing the growing need for agility in business analytics and decision-making support.

However, many companies have found that the structural complexity of the model can challenge the IT teams charged with implementation. Automation software built to tackle data vault development, such as WhereScape Data Vault Express, can improve collaboration between business and IT, boost developer productivity, increase organisational consistency and standardisation, better position teams for change and help organisations reap the benefits of Data Vault 2.0 much more quickly.

About the author

Barry Devlin has worked in the IT industry for more than 30 years, many of those years as a distinguished engineer at IBM. He is now founder and principal of 9Sight Consulting, specialising in the design and the human, organisational and IT implications of deep business insight applications.


November 23, 2017  8:48 AM

Airtame is a (wireless) streaming dream

Adrian Bridgwater

Software runs the world, yes – but it typically needs hardware to run on, and even the Computer Weekly Developer Network is occasionally tempted to focus on a piece of kit that ‘does what it says on the tin’ – usually something with a good dose of intelligent embedded software in it anyway.

Such it is then with Airtame.

We have looked at wireless streaming devices before and struggled with installation, cross-platform compatibility, core usage and GUI effectiveness, and their ability to ‘hold onto’ a connection between the transmitting device and the object for broadcast.

Airtame plays in this same wireless streaming device market, but is more effective than other similar products by virtue of not appearing to suffer from the above-mentioned performance limitations.

This device slots into the HDMI port of a television, monitor or projector and once connected to the Wi-Fi network, anyone who has downloaded the app will be able to present – or, in our case, stream either live or recorded (or ‘burnt’) TV programmes and movies to a TV screen.

Streaming dream

Full screen mirroring allows users to share work from Windows, Mac, Linux and Chromebook – so, none of those cross-platform incompatibility issues.

With multiple Airtames, one display screen can be shared across many screens, which could be a use case for large auditoriums.

Customers include schools and businesses — but we found this product ‘did what it says on the tin’ in terms of being able to connect both tablet and PC to a television some 10 yards away, although the range is wider than that.

The only drawback? Airtame needs its own power supply (via USB to an electrical socket) to operate, where others often draw enough power from the HDMI port itself… but hey, your television (or auditorium stage) probably has a power supply near it, doesn’t it?

Cloud-friendly

According to the company, “Our option to remotely manage all your Airtame devices through Airtame Cloud, is greatly appreciated by all IT admins. They can quickly check to see whether devices are online and diagnose what needs to be done, in case something is not working. They will be able to access and edit device settings directly from the Cloud without even having to open up the Airtame app, saving them a lot of time and effort.”

When no one is actively streaming, Airtame can be turned into a digital signage solution, putting valuable information in front of students or employees including sales figures or upcoming lessons.

Airtame says its target markets for product availability are United States, Canada, United Kingdom, Germany, Australia, Netherlands, Belgium, Denmark, Norway and Sweden.

The product is priced at €299.


November 22, 2017  12:25 PM

Druva heeds, don’t just ‘plonk’ data into the cloud

Adrian Bridgwater
Druva

This is a contributed article for the Computer Weekly Developer Network written by Druva’s Dave Packer in his role as VP of products and alliances marketing at the company.

Druva is a data management and protection specialist that offers Backup-as-a-Service (BaaS) to the cloud across endpoints, servers and cloud applications — the firm augments this core offering with integrated workflows, global deduplication and encryption key services.

Packer asks, why would we build our IT and applications in the cloud?

He surmises that the reason behind the shift to cloud is simple [enough]: it allows organisations to meet business or consumer needs faster.

However, the move to cloud infrastructure can have further consequences, particularly when it comes to managing all the data our new applications and infrastructure produce over time. More traditional IT management tasks like data protection, archiving and disaster recovery can’t be solely left to the cloud platform.

Packer writes as follows 

Adopting a cloud-native strategy opens up more options. As the volume of data we create every day grows, it makes more economic sense to use cloud services to host that data and run those applications compared to building up more internal IT infrastructure.

This can vary from full cloud-based apps that run across public cloud, to more specific deployments that replicate more traditional IT designs and use cloud to help with scale. The cloud implementation that suits you best will be based on what you want to achieve.

For example, it’s possible to run specific VMware virtual machines on Amazon Web Services (AWS) alongside internal data centre workloads; alternatively, you can implement a full Software-Defined Data Centre on AWS or host these applications natively on AWS using EC2 and requisite additional services.

Don’t plonk into cloud

However, applications are not just ‘put [i.e. plonked down] in place’ in the cloud, or in virtual machines.

Let’s remember that AWS offers a plethora of ways to store data created by applications – from the object storage of AWS Simple Storage Service (S3) and traditional block storage of Elastic Block Store (EBS) through to more specific database services like Relational Database Service (RDS).

Alongside this, you can implement applications based on the full set of compute services contained in EC2 that use S3 and EBS alongside Glacier for long-term, ‘cold’ data storage. With these new services being used to store potentially huge amounts of data, additional data management considerations should be made.
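As a small, purely illustrative sketch of what those additional considerations look like in code (the bucket name, prefix and retention periods are assumptions), data landing in S3 can be given a lifecycle rule that tiers it to Glacier and eventually expires it, rather than leaving it sitting there indefinitely:

```python
# Application data lands in S3, then a lifecycle rule manages it over time.
import boto3  # pip install boto3

s3 = boto3.client("s3")

# The application writes its data into an (assumed) bucket and prefix.
s3.put_object(Bucket="my-app-data", Key="events/2017/12/events.json", Body=b"{}")

# Tier objects under that prefix to Glacier after 90 days, expire them after two years.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-app-data",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-and-expire-events",
                "Filter": {"Prefix": "events/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 730},
            }
        ]
    },
)
```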

For some IT teams, moves like VMware and AWS working together will support their long term goals of migrating their workloads to the cloud, while maintaining system consistency with their on-premises environment. For other IT teams, this is a chance to review their infrastructure strategies from their choice of hypervisor through to more widespread strategic changes. Picking the right approach here will depend on previous decisions, and the process will probably result in a hybrid world for years to come.

Whatever the specific architecture choices, moving to the cloud will mean that data is spread across more physical locations.

You may have islands of data held in different locations and used for different business applications. With more mobile employees, cloud applications and remote branch offices in place at enterprises too, it’s harder to maintain that complete picture and consolidate how the data is used and managed over time. It’s therefore important to look at how all this information can be viewed and managed from a single control plane.

Keeping up with data in the cloud

Each cloud service can fulfil a role for managing data created by an application. However, without a full overview of each set of data that is being created, it can be difficult to help the business meet all its requirements. While these applications may fulfil a specific business goal, wider issues like compliance can be compromised if they are not considered.

For example, all IT teams will have seen the initials GDPR bandied about. If your customer-facing app holds customer records, then managing this data is going to be a necessary requirement over time. Establishing if your public cloud platform can help you support tracking data over time – or whether this is solely left up to you – should be thought about at the beginning, rather than added on afterwards.

Whether multiple sets of data are created on S3 or EBS – or stored within hybrid cloud infrastructure that sits across public and private cloud – business applications have to be managed and the data they produce protected. Not only should this help your overall move to the cloud, it should support any efforts taking place around compliance when it comes to customer data and GDPR.

Alongside moving applications built on cloud into production, IT teams have to look at where that application data is hosted or moved to. Whatever public cloud platforms you use, or hybrid environment you deploy, all this data will have to be protected and managed over time.

All cloud-native application designers will therefore have to think hard about how they use data, and how they maintain security, auditability and management over this.


