CW Developer Network


November 6, 2019  8:00 PM

CI/CD series – Sauce Labs: Soft skills are the key to highly functional pipelines

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network in our Continuous Integration (CI) & Continuous Delivery (CD) series.

This contribution is written by Marcus Merrell in his role as director of technical services at Sauce Labs – the company is known for its continuous, automated and live testing capabilities which can be used for cloud, web and mobile applications.

The Sauce Labs director says it can be tempting to look at the mechanics of the modern CI/CD pipeline and think it’s a black and white process, the success or failure of which is defined by coding skills and technology purchases.

Merrell writes as follows…

The reality for those on the ground, however, is considerably different, and considerably grayer than any notion of a black and white process.

While having skilled coders and arming them with the right development, testing and integration technologies is indeed important, soft skills and intangibles are often what make or break most development teams.

Sustainable soft skills

Take adaptability, for example. If there’s one constant in the development world, it’s change. Changes to organisational structure and business priorities happen all the time. Changes in customer behaviour and product requirements are equally recurrent. As are changes to the overall market. In the midst of change, it’s more important to have a development team that adapts well to new processes, leaves their collective ego out of it and takes constructive criticism from peers, than to have a team of superior coders and developers who are unable or (worse) unwilling to adapt to the inevitable cycles of change. That’s why, when building out your CI/CD team, soft skills are every bit as important (if not more) as coding acumen.

The good news for tech leaders and practitioners alike is that adaptability is a skill that can be honed and advanced by spending time to understand the roles and needs of other functional teams within the organisation, as well as the roles and needs of your customers.

Try to get your developers close to the customer, whether through your ‘customer success’ team, or by participating in customer advisory boards.

Empathy epiphany

The more CI/CD teams understand and empathise with the challenges their customers and colleagues face, the more adaptable they’ll become.

But even adaptability won’t be enough to overcome modern development challenges if everyone on your team has the same resume or CV. The most successful CI/CD teams in the world embrace diversity and foster a culture of inclusion.

It’s impossible to overstate how important it is to have varying perspectives and life experiences on your development and delivery teams. Your customers are diverse, and your development team needs to be as well. You can’t put yourself into the mind of someone with a completely different background and set of life experiences from your own. To develop and deliver software that meets your customers’ needs, you have to understand their needs. To understand those needs, you need people on your team who share their perspectives.

Most teams already have the requisite skills, invest in the right technologies, and instill the right processes and procedures.

What they typically do not have are the critical soft skills, and that will make the difference between a struggling team and a high-performing one.

Merrell: the road to good code starts with empathy.

 

 

November 5, 2019  9:51 PM

Inside the new ScyllaDB feature toolbox

Adrian Bridgwater

ScyllaDB, the firm behind the Scylla NoSQL database, claims to be thinking big — and so the organisation has used its annual Scylla Summit conference to detail a whole selection box of new features designed to serve real-time big data applications.

Scylla has unveiled new capabilities including Lightweight Transactions, Change Data Capture (CDC) and an Incremental Compaction Strategy.

Dor Laor, CEO and cofounder of ScyllaDB, explains that his firm has become known for its approach to speed and reliability for latency-sensitive big data applications… and for its ability to help reduce storage costs.

Latency-sensitive applications are (perhaps obviously) chunks of enterprise software that cannot work effectively with any extended period of latency (or wait time), typically because they serve a real-time data need in some live operational/transactional deployment.

“But performance [for low-latency] is just part of what makes Scylla so powerful. With these latest features, we’re extending Scylla’s benefits to exciting new use cases and opening the door to a wide range of new functionality,” said Laor.

Lightweight Transactions

Among the new features is Lightweight Transactions (LWT), a development that is already committed into Scylla’s main source code branch.

LWT works to deliver ‘transaction isolation’ similar to that of traditional relational databases, which the company says will help bring Scylla to a new range of business-critical use cases. In database systems, isolation determines when the changes made by one transaction become visible to other users and systems… so it’s a good form of lock-down and control where needed.

Going deeper here… LWT ensures, with a reasonable degree of certainty, that whenever you [i.e. your application] read a record, you see the version that was last written. Without LWT, you might read a record (call it Record A) off one node of the cluster just as someone was updating the record on another node. With LWT, the database updates Record A on all replicas at the same time, so the nodes don’t (or very rarely) disagree, and two applications querying the same record are much less likely to see two different versions.
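For illustration, conditional statements are how this kind of guarantee is typically expressed in CQL, the query language Scylla shares with Cassandra. The sketch below uses the open source Python cassandra-driver; the contact point, keyspace, table and column names are assumptions invented for the example rather than anything Scylla prescribes.

```python
# A minimal sketch of a lightweight transaction issued from Python.
# Contact point, keyspace, table and column names are illustrative assumptions.
from cassandra.cluster import Cluster

cluster = Cluster(["scylla-node-1"])       # assumed contact point
session = cluster.connect("inventory")     # assumed keyspace

# The IF clause makes this a conditional (lightweight) transaction: the write
# is only applied if Record A still holds the version this client last read.
result = session.execute(
    "UPDATE records SET value = %s, version = %s "
    "WHERE id = %s IF version = %s",
    ("new-value", 2, "record-a", 1),
)

if result.was_applied:
    print("Record A updated; the replicas agree on the new version")
else:
    print("Record A changed underneath us; re-read it and retry")

cluster.shutdown()
```

Without the IF clause this would be an ordinary last-write-wins update; with it, two clients racing to change Record A cannot both succeed against stale reads.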

The company will also soon release a Scylla Open Source version that includes this LWT feature.

Change Data Capture (CDC)

Scylla reminds us that the modern application stack is no longer a monolith.

Because of this core truth, we see microservices that need to constantly push and pull data from their persistence layer. CDC enables Scylla users to stream dataset updates to external systems, such as analytics or ETL tools.

Scylla CDC identifies and exports only new or modified data, instead of requiring a full database scan. Beyond its efficiency, CDC allows organisations to use Scylla tables interchangeably.

“[This opens] new possibilities for users to consume data. CDC is already committed into Scylla’s main source code branch. We will soon release a Scylla Open Source version that includes this feature,” noted the company in a press statement.
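As a rough sketch of how a consumer might use this, the snippet below assumes Scylla’s described design of a queryable change log sitting alongside the base table; the cdc option syntax, the keyspace and the log-table naming are assumptions for illustration, since the feature was still working its way towards an open source release at the time of writing.

```python
# A rough sketch of consuming change data via ordinary CQL, assuming Scylla's
# described design of a log table alongside the base table. The cdc option
# syntax and the log-table name are assumptions, not a finished API.
from cassandra.cluster import Cluster

session = Cluster(["scylla-node-1"]).connect("metrics")   # assumed host/keyspace

# Enable CDC on an existing table (option syntax assumed for illustration).
session.execute("ALTER TABLE readings WITH cdc = {'enabled': true}")

# New and modified rows are then exposed through the associated log table,
# which an ETL or analytics job can poll instead of scanning the base table.
for change in session.execute("SELECT * FROM readings_scylla_cdc_log LIMIT 100"):
    print(change)
```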

Incremental Compaction Strategy (ICS)

Reducing storage costs by what are said to be up to 40%, Incremental Compaction will soon be available with Scylla Enterprise and Scylla Cloud.

While compaction reduces disk space usage by removing unused and old data from database files, the operation itself typically suffers from space amplification, since old and newly compacted files coexist on disk until the operation completes. By working on data in smaller increments, ICS reduces that temporary overhead and so lowers costs significantly.

Finally, there’s a DynamoDB-compatible API for Scylla Cloud: Project Alternator, which presents an alternative to Amazon DynamoDB… and the technology is now available in beta for Scylla Cloud, the company’s powerful database-as-a-service (DBaaS).

Applications written for DynamoDB can now run on Scylla Cloud without requiring code changes. This enables DynamoDB users to quickly transition to Scylla Cloud to significantly reduce costs, improve performance, and take advantage of Scylla’s cloud and hybrid topologies.
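In practice, that transition can be as small as repointing the AWS SDK at a different endpoint. A minimal sketch using Python’s boto3 follows; the endpoint URL, port, credentials and table name are placeholders rather than real Scylla Cloud values.

```python
# A minimal sketch of pointing an existing DynamoDB application at Alternator:
# the application code stays the same, only the endpoint changes.
from decimal import Decimal

import boto3

dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="https://your-scylla-cloud-cluster:8000",  # placeholder endpoint
    region_name="us-east-1",                                # required by boto3, unused here
    aws_access_key_id="placeholder",
    aws_secret_access_key="placeholder",
)

table = dynamodb.Table("sensor_readings")                   # assumed table name
table.put_item(Item={"sensor_id": "s-1", "ts": 1572912000, "temp_c": Decimal("21.4")})
print(table.get_item(Key={"sensor_id": "s-1", "ts": 1572912000})["Item"])
```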

 

 


November 5, 2019  8:57 PM

CI/CD series – PagerDuty: the pipeline is only as good as you build it

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network in our Continuous Integration (CI) & Continuous Delivery (CD) series.

This contribution is written by Matty Stratton in his capacity as DevOps Advocate for PagerDuty, a provider of digital operations management.

In more detail, PagerDuty offers what it calls a Digital Operations Management Platform that aims to integrate machine data alongside human intelligence on the road to building interactive applications that benefit from optimised response orchestration, continuous development and delivery.

Stratton writes…

We need to remember that CI/CD is not just Agile iteration, but a direct comparison isn’t quite right either – it’s a little bit like asking for the differences between a car and an engine.

CI/CD can definitely enhance and support Agile methodologies, but it’s not directly linked. You don’t necessarily need to be doing Agile in a structured way to get the benefits of CI/CD. CI/CD is a great way to support feedback loops – but they aren’t required.

That’s not to say I don’t support or endorse good feedback loops – they are critical to software success. But they’re slightly orthogonal to CI/CD and shouldn’t be conflated.

You can absolutely do CI/CD without feedback loops – CI/CD is about how changes are tested and deployed. Your CI/CD implementation will still “work” even if you aren’t getting feedback – it’s about how the software is built and shipped. I can write a great CI/CD pipeline that has nothing to do with feedback, and it will work well and accomplish its goal, which is to ensure that the software is shippable at any given time.

No plug-&-play CI/CD

Nothing in CI/CD by itself will result in fewer bugs.

You can’t just ‘install some CI/CD’ and increase quality. Having useful, high-quality tests as part of your pipeline is what will help reduce bugs – or at least make those bugs cheaper to fix. But bugs will happen. We want to detect them as close to their introduction as possible and CI/CD provides the ability to do so; but it won’t do it ‘automatically’. You need to include proper tests (including security tests, for example) at all stages of the pipeline… and as early in the process as possible. Your pipeline is only as good as you build it – and you should continually review it just as you would review your application code. If a bug escapes into production, the postmortem on that incident should include any potential improvements to your deployment process, to catch the issue earlier next time.

So how frequent should CI/CD cycles be? Well, frequency is an easy point to answer – because every commit should trigger the pipeline. Smaller batch sizes always help.

That being said, executing your CI/CD pipeline doesn’t mean you release/deploy to production! But if you have rolled up a week’s worth of changes into one merge, it becomes a lot harder to see which change caused an issue. There is no magic frequency number – you should be able to deploy/release as quickly and as often as your business requires. CI/CD is about ensuring that all changes are shippable at any given time, and that they are available to be released to production when the business requires it.

A question of concurrency

If your deployments and builds require a lot of computing resources and are taking longer than you would like, looking at ways to parallelize them is definitely helpful, but by no means required. I would caution folks against over-optimizing for an issue until it actually comes up.

Start small and scale when you do hit the cases that require you to scale.

I find that I quite often have to reiterate the distinction between continuous delivery and continuous deployment. The only difference between the two is whether the last step – the push to production – is automatic or gated by a human. Not everyone is ready for continuous deployment (or may never need it!) but everyone can, and should, engage in continuous delivery. We should be testing the deployment of every change, just like we perform functional tests of every change. That’s continuous delivery.

It’s also important to note that the difference between functional and unit tests has nothing to do with whether they exist in CI/CD or elsewhere. The difference is in what is being tested.

That being said, unit tests tend to exist in the “CI” part, which is to say they test the code without it being deployed anywhere. Functional tests require the application to be running, which is more the “CD” side of the equation.
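To make that distinction concrete, here is a minimal sketch in Python using pytest conventions; the function, URL and endpoint are invented purely for illustration. The first test exercises the code in isolation, the second needs a deployed, running application to talk to.

```python
# Illustrative only: the unit test exercises code in isolation (the "CI" side),
# while the functional test talks to a running deployment (the "CD" side).
import requests

def apply_discount(price: float, percent: float) -> float:
    """The code under test; invented for the example."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_unit():
    # No deployed application needed, just the code itself.
    assert apply_discount(100.0, 15) == 85.0

def test_checkout_health_functional():
    # Needs the application running somewhere; the URL is a placeholder.
    response = requests.get("https://staging.example.com/api/checkout/health")
    assert response.status_code == 200
```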

A document trail is only as valuable as its readability to others and how often it is actually looked at.

Stack traces aren’t always helpful for someone other than the engineer who wrote the code – having your tests provide meaningful, human-readable output is key.

PagerDuty’s Matty Stratton: knows his code, all the way to the motherlode.


November 5, 2019  8:56 PM

Diffblue: How does ‘AI-assisted’ software development work?

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network written by Mathew Lodge in his capacity as CEO of Diffblue — the company is known for its work as it aims to ‘revolutionise’ software engineering through Artificial Intelligence (AI) and offer what it calls an #AIforCode experience.

Diffblue’s Lodge reminds us that, in the last decade, automation of software development pipelines has rapidly taken off as more teams have adopted DevOps practices and cloud-native architecture.

Automated software pipelines have led to The Rise of The Bots: robot assistants within the continuous integration loop that automate tedious and repetitive tasks like updating project dependencies to avoid security flaws.

Lodge says that today, bots can generate ‘pull requests’ (a pull request is a method of submitting contributions to an open source or other development project) to update dependencies; those requests are reviewed by other bots and, if they pass the tests, automatically merged.

Lodge writes as follows…

The crucial part that makes all of this [AI coding] work is tests.

Without tests to quickly validate commits, automated pipelines will risk automatically promoting junk – which is much harder and slower to fix later in the software delivery process.

In his canonical 2006 article on Continuous Integration, Martin Fowler pithily notes that: “Imperfect tests, run frequently, are much better than perfect tests that are never written at all.”

AI for code: developing at scale

Writing tests is like eating healthily and drinking enough water: everyone aspires to do it, but life tends to get in the way.

It’s often the least enjoyable part of development and it takes time and attention away from the more interesting stuff. So automation seems like it would be a great fit – except that the rules-based automation that works well for dependency-checking bots does not work well for automating test generation; it’s a much harder problem to solve.

AI-based test-writing approaches that apply new algorithms to the problem have emerged in the last few years. Machine learning-based tools can look at browser-based UI testing code, compare it to the Document Object Model… and suggest how to fix failing tests based on training data from analysing millions of UI tests.

But much more code will have to be written by AI to move the needle.

Gartner has estimated that by 2021, demand for application development will grow five times faster than tech teams can deliver. So we’re now seeing the emergence of AI that writes full unit test code, by analysing the program to understand what it does and connecting inputs to outputs.

While the tests aren’t perfect, as no one has solved the halting problem and other well-known challenges in program analysis, the tests are good enough – and infinitely better than perfect tests that were never written.
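Diffblue’s own tooling targets Java, but the general idea, analysing what a piece of code does and pinning its observed input-to-output behaviour in a test, can be sketched in a few lines of Python. The function and tests below are hand-written stand-ins for the kind of output such a tool might produce, not an example of any real product’s output.

```python
# Hand-written stand-ins for the kind of unit test an AI tool might generate:
# observe what the code does for concrete inputs and pin that behaviour.
def normalise_email(raw: str) -> str:
    """Example production code; invented for illustration."""
    return raw.strip().lower()

def test_normalise_email_strips_and_lowercases():
    assert normalise_email("  Alice@Example.COM ") == "alice@example.com"

def test_normalise_email_is_stable_for_clean_input():
    assert normalise_email("bob@example.com") == "bob@example.com"
```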

Benefits beyond automation

AI for code can do more than simply increase the speed at which developers work: it can actually improve the quality of the finished software product, and reduce the amount of required debugging. It can quickly complete repetitive tasks, without losing interest or making mistakes as humans sometimes do. Automating the boring (but necessary) parts of the job can also prevent burnout and increase job satisfaction at a time when companies have to compete for the best talent.

Diffblue CEO Lodge: AI for code can help shoulder the (work) load.

With AI for code, the developers of tomorrow will have more freedom to innovate in the way only people can – benefits that go beyond what’s possible with automation alone. Expect the future of software development to be increasingly AI-assisted.


November 5, 2019  8:56 PM

What is close-to-the-metal?

Adrian Bridgwater

Recent discussions with database company Scylla threw up the term close-to-the-metal or, as some simply say, close-to-metal.

But what does close-to-the-metal mean?

The Computer Weekly Developer Network team has gathered a handful of comments and definitions on this subject and lists them (in part, with full credit and links) below for your reference.

Essentially, close-to-the-metal means database software that works in close proximity to and with knowledge of the actual instruction set and addresses of the hardware system that it is built to run on.

This means that the database (or potentially another type of software program) can work to ‘squeeze’ out as much power as possible from any given hardware estate (the process of scaling up) before it then needs to expand with further processing and analytics nodes (the process of scaling out).

As noted by wikic2, close-to-the-metal (or close-to-the-hardware) means we’re deep in the guts of the system. “The C [programming language’s] memory management is considered close-to-the-metal compared to other application languages because one can easily see and do mathematics on actual hardware RAM addresses (or something pretty close to them).”

The above-linked definition suggests that close-to-the-metal can sacrifice hardware choice through lock-in and may introduce risk because there is no interface layer to protect against silly or dangerous ranges, settings or values.

Roger DiPaolo provides an additional (and much needed) piece of extra colour here when he says that close-to-the-metal (in programming terms) means that the language compiles (or assembles) all the way down to native machine code for the CPU it is to run on.

“This is so that the code has no ‘layers’ it has to go through to get to the CPU at run time (such as a Virtual Machine or interpreter). A close-to-the-metal language has the facilities for directly manipulating RAM and hardware registers. C and C++ can both do this.”

So trade offs or not, this is the approach Scylla has taken to building its core technology proposition.

The company claims that independent tests show a cluster of Scylla servers reading 1 billion rows per second (RPS) – and so the firm says that this is performance that ranks ‘far beyond’ the capabilities of a database using persistent storage.

“Everyone thought you’d need an in-memory database to hit [MOPS] numbers like that,“ said Dor Laor, CEO of ScyllaDB. “It shows the power of Scylla’s close-to-the-metal architecture. For 99.9% of applications, Scylla delivers all the power [a customer] will ever need, on workloads other NoSQL databases can’t touch and at a fraction of the cost of an in-memory solution. Scylla delivers real-time performance on infrastructure organisations are already using.”

Bare-metal platform provider Packet partnered with Scylla to conduct the test on 83 of its n2.xlarge servers, each running a meaty 28 physical cores.

The benchmark populated the database with randomly generated temperature readings from 1 million simulated temperature sensors that reported every minute for 365 days, producing a total of 526 billion data points.

Could close-to-the-metal be close-to-the-edge of where the art of database computing goes next? Some would say yes.

 


November 5, 2019  8:51 PM

Scylla’s ‘scale-up & scale-out’ mantra for monster database power

Adrian Bridgwater

There are databases. Then there are big data databases. Then, then there are super-performance high-speed big data databases.

And finally, there are real-time big data databases that run at hyper-speed built around an efficiency mantra that champions scale-up (and then…) scale out engineering.

This is the spin (the server hard disk kind, not the marketing kind) that Scylla wants to put forward to describe its performance of millions of operations per second (MOPS) on a single node.

The company claims that independent tests show a cluster of Scylla servers reading 1 billion rows per second (RPS) – plus (and here’s the point of all of this) the organisation says that this is performance that ranks ‘far beyond’ the capabilities of a database using persistent storage.

“Everyone thought you’d need an in-memory database to hit [MOPS] numbers like that,“ said Dor Laor, CEO of ScyllaDB. “It shows the power of Scylla’s close-to-the-metal architecture. For 99.9% of applications, Scylla delivers all the power [a customer] will ever need, on workloads other NoSQL databases can’t touch and at a fraction of the cost of an in-memory solution. Scylla delivers real-time performance on infrastructure organisations are already using.”

NOTE: As TechTarget reminds us, NoSQL (not only SQL) is an approach to database design that can accommodate a wide variety of data models, including key-value, document, columnar and graph formats. Over and above ‘traditional’ relational databases (where data sits in tables and the schema is designed before the database is built), NoSQL databases are useful for working with large sets of distributed data.

Close-to-the-metal

But let’s stop a moment. Scylla CEO Laor said ‘close-to-the-metal’… so what did he mean by that? We’ve detailed a complete definition commentary piece here, but essentially it’s all about database software that works in close proximity to and with knowledge of the actual instruction set and RAM addresses of the hardware system that it is built to run on.

In the company’s benchmark example, we see Scylla tell us that the test involved scanning three months of data (some served from cache and some from storage), which resulted in Scylla reaching speeds of 1.2 billion data points per second.

NOTE: Data points per second (DPPS) refers to the number of individual records, within any given database architecture or schema type, that the database query engine and management plane can accurately read in one second.

Scanning a full year’s data, all from persistent storage (with no caching), Scylla scanned the entire database at an average rate of 969 million data points per second.

With one day of data, scanning everything from memory, Scylla achieved 1.5 billion data points per second.

Scylla uses the power of modern hardware such as bare metal servers — its shared-nothing, lock-free and shard-per-core architecture allows it to scale up with additional cores, RAM and I/O devices.

NOTE: A shared-nothing architecture is an approach to distributed computing in which each update request is satisfied by a single node (where a node itself can represent a processor, block of memory or storage unit) so that there is (hopefully) no contention among nodes. In a shared-everything architecture, any data task (processing, memory or storage) can be served by an arbitrary combination of nodes, so there could be traffic and the potential for collisions.

Scylla says that it stands in ‘contrast’ to Cassandra, because “Cassandra’s ability to scale-up is limited, by its reliance on the Java Virtual Machine, which keeps Cassandra from interacting with server hardware. Where Cassandra’s threads and locks slow down as the core count grows, Scylla can take full advantage of all of a server’s resources.”

The company claims that the performance Scylla demonstrated in these (above noted) benchmarks has implications for real-world applications. For example, analytics jobs that previously took all day to complete can now be run continuously to provide ‘intraday’ (i.e. inside one day) reports.

As with any Scylla story, it’s a bit like drinking from a firehose and the company presents itself with a degree of unashamed swagger and confidence. There’s also a lot to cross-reference and learn (hence the three clarification notes above and the separate close-to-the-metal explanatory story) in order to take all this in.

Scylla (the company) takes its name from the Greek sea monster of the same name in Homer’s Odyssey… let’s hope this stuff is no fairy tale.


November 2, 2019  9:01 AM

CI/CD series – Plutora: best practices for continuous code

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network in our Continuous Integration (CI) & Continuous Delivery (CD) series.

This contribution is written by Jeff Keyes, director of product marketing at Plutora — the company is known for its technology which works to capture, visualise and analyse critical indicators of speed and quality of software delivery.

Keyes writes…

CI/CD done right will build high-quality software faster, as it’s one of the foundational parts of DevOps. To ‘do’ DevOps, teams need to focus on decomposing their applications as well as integrating automated checks as much as possible. In essence, this means building quality in, instead of inspecting for quality after the code is built.

By putting in checks to verify quality throughout the process, you’re verifying it closer to when the code was actually written, meaning there is a more frequent feedback loop between the developer and what is taking place. Therefore, best practice in continuous integration is for the developer to be the one who not only writes the code, but also writes the tests for that code to ensure it works effectively.

For this to be successful, these tests need to be run quite frequently. By frequently running the automated tests, fewer bugs slip through to follow-on phases of testing. By using CI/CD practices instead of traditional methods, the IT team will end up with higher-quality code overall.
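What ‘building quality in’ can look like in practice is a check that runs the developer-written tests on every commit and refuses to integrate the change if they fail. The sketch below is one minimal, assumed way to wire that up in Python; the linter, test runner and directory layout are placeholders, and any equivalent tools would do.

```python
# A minimal, illustrative check script for a commit-triggered CI job: run the
# linter and the developer-written tests, and fail the build on the first error.
import subprocess
import sys

CHECKS = [
    ["flake8", "src"],               # assumed linter and source layout
    ["pytest", "--maxfail=1", "-q"], # assumed test runner
]

for command in CHECKS:
    result = subprocess.run(command)
    if result.returncode != 0:
        print(f"Check failed: {' '.join(command)}")
        sys.exit(result.returncode)

print("All checks passed; the change is safe to integrate")
```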

Taking this a step further, when individual teams come together and collaborate, the code that they integrate is also of higher quality. This is because the individual sub-systems and components are of higher quality, and the team can focus on ensuring the quality of the integration points.

Buggy bugs

It can be difficult when an IT team is faced with trying to find a bug and they are not sure whether it’s in their own code, someone else’s code, or in the touchpoint between the two. These automated tests help pinpoint where the core problems originate.

This enables the team to take the next step of integrating the code lines more frequently. This is the foundation of Continuous Integration. Continuously integrating code lines closes the feedback loop between the teams. So when the IT team puts it all together, there is a reduced risk of any errors due to multiple bugs being present.

In traditional software development pipelines, another common point of failure is getting the built software onto preproduction and production environments. Automation of the deployment is an integral part of CI/CD pipelines.

Automation ensures consistency and speed and it means that the IT team can regularly deliver code together with minimal effort. Defects in the process are addressed in a permanent fashion. The Continuous Deployment portion of CI/CD again raises the overall quality of the applications being developed.

So when CI/CD is implemented correctly, it can lead to higher-quality code being produced. But while it can facilitate this, in reality it depends on what it is used for, and how individual teams work together to ensure that when the code is brought together, they are sharing the best version of it.


November 2, 2019  7:45 AM

Nutanix VP: IT evaluation challenges in the subscription transition zone

Adrian Bridgwater

Evaluating IT is complex.

As much as the press has something of a love-hate relationship with technology analysts (did somebody say Magic Quadrant?), we do know that enterprise tech market analysis is a long, complex and convoluted process.

One of the biggest issues is the question of metrics.

Just as we’re getting really clinically competent on machine file log data metrics, the higher tiers of the industry itself are (arguably) becoming ever-harder to track… primarily as a result of the industry (and its multifarious platform-level technologies) moving so fast.

The old technology tool metrics just don’t apply anymore.

Tonya Chin is vice president of investor relations and corporate communications at Nutanix.

Pointing to the way companies like Salesforce came to market with a cloud subscription model, Chin argues that for Salesforce, early growth was a little less striking than some of the traditional enterprise software firms saw when they were startups.

“They [Salesforce] started by seeing rapid adoption of their subscription model and the company then worked to develop a ‘land and expand’ dynamism. This meant that Salesforce could grow steadily, without worrying about the troughs (that inevitably come along with the spikes) of the traditional software sales and deployment model,” said Chin.

New KPIs, please

Nutanix’s Chin asserts that today, in the Hyperconverged Infrastructure (HCI) sector, we need a new set of Key Performance Indicators (KPIs) and ways to unravel the confusion caused by a complex IT world. She notes that some companies still prefer to deploy software and hardware on-premises — and pay upfront for it. Other customers wish to adopt the cloud as their default choice and pay via a subscription model. What is clear is nearly all customers would like the flexibility of doing both, allowing them to put workloads where they are best suited and pay for them how they choose.

“To serve customers, companies like Nutanix are moving from selling hardware to software. This is perhaps not such a revolutionary change; for many years, makers of appliances only wrapped their software in ‘tin’ for the convenience of buyers or to get a higher revenue number. But the hardware-to-software metamorphosis has certainly affected how influential industry analysts understand what’s going on,” said Chin.

As with moving from traditional software to cloud applications, moving from selling boxes to selling software can cloud (pun intended) what’s really going on.

Splintering the spikes

In short, just like Salesforce, Nutanix knows that it (increasingly) won’t get as many of the big revenue spikes large deals will cause, because the new model is based on providing customers with the flexibility of shorter contract durations, which means enterprise software organisations and their partners receive less revenue up front, but have the opportunity to expand over the life of the customer relationship so long as customers are happy.

“Making the transition from hardware takes time for analysts to register the change — and it becomes harder for watchers that aren’t immersed in what has happened in the enterprise software space to see the wood for the trees. Just as annual recurring revenue (ARR) came to be a big indicator of cloud service success, analysts need to see annual contract value (ACV) as the critical KPI of companies like Nutanix during times of transition, rather than focusing on billings and revenue growth in the near term. As the company completes its transition, traditional software metrics, including ARR, will also be important indicators of commercial success,” said Chin.

Nutanix’s Chin notes that Adobe has shown how to pull off a software-to-cloud transition. She also notes that Microsoft, under Satya Nadella, is doing something similar as part of a broader reinvention of its brand and, being such a closely watched company, the tech sector analyst community has picked up on that very quickly.

“But these things can take time. Remember, it’s not so long ago that some analysts considered Amazon to be a disaster in the making because it trades off profitability for growth. Today, it’s recognised as one of the world’s most valuable companies,” said Chin.

The Computer Weekly Developer Network spoke to Nutanix’s Tonya Chin to get her VP-level view on how we need to look at the longer-term signs of value and move away from short-termism in our quest to gain a 20:20 perspective on the way the market is going to move next.

Nutanix VP Tonya Chin: No short-termism allowed if you want a 20:20 perspective.

 

 

 


November 1, 2019  11:48 AM

CI/CD series – Contrast Security: pipeline tools must be robust, sharp & safe

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network in our Continuous Integration (CI) & Continuous Delivery (CD) series.

This contribution is written by David Archer, sales engineer at Contrast Security — the company’s technology works to automatically detect and fix application vulnerabilities as it identifies attacks.

Archer writes…

As we know, the purpose of a CI/CD pipeline is to automate the checks and processes involved with delivering software into production so that it can be performed in a consistent way and is not affected by human error.

What is sometimes overlooked is that the ‘CD’ part of CI/CD can refer to either Continuous Delivery or Continuous Deployment and there is a subtle difference.

The process of Continuous Delivery means delivering code into production-like environments in order that you can confidently push a release into production as and when required. With Continuous Deployment the process is completely automated, and code is deployed into production.

The process starts with a developer committing code to a repository which triggers the CI/CD pipeline to start. During this, a number of tests are run. These include integration tests that verify that the changes the developer made do not affect any other components of software.

There is no limit to the number (or size) of the changes. Having said that, developers are encouraged to keep changes small and commit code regularly so that should any tests fail, the root cause can be quickly isolated.

CI/CD pipeline

The use of a CI/CD pipeline arguably becomes more important as organisations move from deployment of a single monolith application to a loosely coupled architecture using microservices. When microservices are used there are exponentially more components to release and teams must be able to deploy changes into production independently of other teams.

By eliminating human error in the delivery process and running a large number of automated tests in the CI/CD pipeline, it is possible to significantly reduce the number of configuration problems or bugs seen in production environments. However, the benefits do not stop there.

The CI/CD pipeline also results in a shorter feedback loop for developers. This allows bugs to be remediated not only earlier, but faster, as the developer still has good context around the code that they wrote. When you couple these benefits with a fast, automated and reliable delivery process, then it is obvious why companies have embraced this approach to delivery.

Having said that, the learning curve for CI/CD can be steep. There are a number of CI/CD tools available, but figuring out the right tool for your organisation will depend on a number of factors, including your code languages, deployment environments and the complexity of your build process. When using microservices, each team will normally be able to choose the technology stack which best serves their requirements, and therefore a single build process will not be enough. The CI/CD tool needs to be able to accommodate each team’s preferred language, build process and test frameworks, so it is better to look for a flexible solution.

The bottom line is that a poorly configured or maintained CI/CD tool can result in intermittent failures which will quickly frustrate developers, so reliability of your pipeline is key. In order to get the most from your CI/CD tool it requires up-front effort to create a robust pipeline. However, this will yield numerous long-term benefits. If you have a few issues at the start it is important to deal with any failures early so that you maintain trust in the delivery process.

Once a CI/CD pipeline is in place there can be a temptation to jam additional tools into the process, but you should approach this with caution. There are a number of tools which can be disruptive to the pipeline including security scanners which may extend your pipeline duration past the magical 10-minute feedback loop for developers.

David Archer — knows his way around the (code) pipeline.


November 1, 2019  9:14 AM

CI/CD series – Sumo Logic: the pipelines provide continuous data – are you using it?

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network in our Continuous Integration (CI) & Continuous Delivery (CD) series.

This contribution is written by Mark Pidgeon, vice president of technical services at Sumo Logic — the company is known for its cloud-native machine data analytics platform.

Pidgeon reminds us that the model for CI/CD aims to make the process for getting software into production easier, faster and more productive. The pipeline model defines the stages that software has to go through and then makes those stages more efficient through automation.

Pidgeon writes…

The reality of the pipeline works well for getting software out.

However, it’s not the whole story when it comes to the data that this creates over time. Each stage of the development process will create data… and those applications will add data once they are in production.

While it is a by-product of running applications, this information acts like a ‘digital exhaust’ that developers can tap into and use over time. This data can – if approached correctly – support a more accurate feedback loop for development, for deployment and for business teams.

By looking at the impact of any changes on application usage, it’s possible to understand how well the application is performing from a technical viewpoint as well as from usage and business demand perspectives too.

Business-wide CI/CD

But truly effective CI/CD should inform the whole business.

This involves thinking about observability from the start. Application logs, metrics and traces can each provide insight into the application and any changes taking place, but the sheer amount of data coming through from these continuous changes has to be understood in context. Putting this data together from different sources – particularly from software containers – can be difficult as it involves a lot of normalisation, after which it can be linked up to data from any CI/CD pipeline.

Linking data sets from application components ahead of release into production provides a strong ‘before and after’ set of data. This can be useful for looking at the impact of changes as part of any move to production.

This data is created continuously throughout development, and it can be used over time to make better decisions across the software process.
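One lightweight way to put that into practice is to stamp every application log line with the identifiers the pipeline already knows about, so that production telemetry can later be joined back to the build and deployment that shipped it. The Python sketch below is a minimal illustration; the service name and environment variable names are assumptions, not a Sumo Logic requirement.

```python
# An illustrative sketch: stamp every log line with the identifiers the CI/CD
# pipeline already knows, so production data can be joined back to the release.
import json
import logging
import os
import sys

class ReleaseContextFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "message": record.getMessage(),
            "level": record.levelname,
            "service": "checkout",                            # assumed service name
            "build_id": os.getenv("CI_BUILD_ID", "unknown"),  # assumed variable names
            "git_commit": os.getenv("GIT_COMMIT", "unknown"),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(ReleaseContextFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order placed")   # every line now carries the release context
```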

However, it does not end there – this data can show where more work is needed in security and compliance, where business decisions affect IT and where IT choices affect business performance too. This continuous stream of data can be made useful for more people over time.

As more companies expand their CI/CD processes to speed up their businesses, the end result should be more efficiency across the whole operation. This approach should help companies improve decisions using metrics and data. However, this is a mindset change. Moving to a continuous intelligence approach can help.

Sumo Logic’s Pidgeon: effective CI/CD should inform the whole business.

