Hsu argues that modern software architectures have changed the nature of the build vs. buy question: in his view, build vs. buy used to be an either/or proposition.
That is to say, firms could opt either to build custom software or to buy a complete solution that addressed all of their current needs, and there wasn’t much of a middle ground.
But, says Hsu, the iterative and modular nature of modern software development is changing that previous balance.
He contends that many companies now offer fine-grained services [and microservices… and Application Programming Interfaces (APIs) that connect smaller component pieces of functionality] that address a variety of areas: computation, storage, e-commerce, security, machine learning, payments and more.
These services can be easily integrated with existing software components to help you accomplish your business objectives more efficiently.
“I’ve found that the most effective approach is to bias [your software purchasing] towards third-party components and only build the components that provide your business with a significant competitive advantage. This helps your current engineers focus on what’s most important. Additionally, it makes it much easier to onboard new engineers — instead of teaching them the intricacies of your proprietary code, they can benefit from the documentation and training materials that third-party service providers typically offer,” said Cimpress’ Hsu.
Hsu’s three laws of build vs. buy
- Doing this [above approach] requires your own software architecture to be modular. It’s much harder to plug third-party services into monolithic architectures. If your company still hasn’t made this transition, it’s highly advisable that you make it so. Modular architectures enable your business to be more agile.
- It’s often tempting to build your own service if the third-party service doesn’t do exactly what you need. Let’s say that it accomplishes 90% – but ask yourself if the missing 10% is worth taking your engineers away from delivering unique value to your customers.
- Even if you decide that building your own service is the right thing to do, it can be useful to build prototypes using third-party services. It gets your innovations to market faster and helps you learn.
This is a guest post for the Computer Weekly Developer Network written by Nathan Wright in his role as SwaggerHub product marketing manager and API evangelist at SmartBear.
SmartBear Software is known for its application performance monitoring (APM), software development, software testing and API management tools.
Wright defines API standardisation (or API standardization, depending on which side of the pond you sit on) for us in light of the prevalence of this technology approach, which essentially exists to ‘glue’ different parts of application execution streams together.
SmartBear’s state of API 2019 report uncovers API standardisation as the top challenge facing API teams as they support the continued growth of microservices… so what is it? Wright writes as follows…
What is API standardisation?
As organisations support a continually growing number of services (both internal and external), a common trend is for overarching requirements to be set down and followed by the smaller, distributed teams building and supporting those services.
We see API standardisation as a way to enforce guidelines on how these services communicate and govern how the data itself is being exposed — something like a common definition of what information a ‘user’ object includes regardless of which service is returning it.
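As a toy illustration (the field names here are hypothetical, not drawn from any particular organisation’s standard), a shared ‘user’ contract could be enforced with a check as simple as this, applied to every service’s responses:

```python
# Hypothetical organisation-wide 'user' definition: every service that
# returns a user must include these fields with these types.
USER_SCHEMA = {
    "id": str,
    "display_name": str,
    "email": str,
}

def conforms_to_user_schema(payload: dict) -> bool:
    """Check that a service response matches the shared 'user' contract."""
    return all(
        field in payload and isinstance(payload[field], expected_type)
        for field, expected_type in USER_SCHEMA.items()
    )

# A response from a billing service and one from an auth service should
# both pass the same check, regardless of extra service-specific fields.
billing_user = {"id": "u-123", "display_name": "Ada",
                "email": "ada@example.com", "plan": "pro"}
auth_user = {"id": "u-123", "display_name": "Ada"}  # missing 'email': violates the standard

print(conforms_to_user_schema(billing_user))  # True
print(conforms_to_user_schema(auth_user))     # False
```

In practice this kind of rule would more likely live in an OpenAPI definition or a shared schema registry rather than hand-rolled code, but the principle is the same: one definition, validated everywhere.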
One of the major benefits of enforcing these standards is speed of delivery – when teams don’t have to redefine existing assets, they can focus on the core requirements of the new project. Additionally, this not only simplifies work such as a new integration and shortens the time it takes, but also reinforces a positive developer experience regardless of the service being used.
What defines it and makes it so?
What we find in many cases is that there is a high-level group, or team, who defines overarching standards and rules that development groups follow. It could be an architecture team or something like a centre of excellence in an organisation.
These rules are communicated out to the groups that are tasked with developing, testing and implementing services.
Often, the most successful companies have a very open feedback loop between these various groups. The standards that are defined should be open to evolve like other services – requirements change, and the standards need to grow with them.
Is it a never-ending task?
So is API standardisation (and classification) something of a never-ending task, as new standards constantly need to be built?
While it might seem like a never-ending task, we find there is typically a core set of standards that are laid out through a larger architectural project.
An example would be something like an organisation-wide shift to microservices, tied to a standard for how paths are defined or the content types that services must support. These generally don’t change much after an initial sign-off, as doing so would in many cases mean re-architecting an entire system.
Where we see a shorter evolution of standards is in the format of data being served – this is very much in line with traditional development, where a team takes on feedback, adjusts requirements, roadmaps changes, gets sign off and implements.
As the scale of the amount of data that is being consumed continues to grow, being flexible in matching that growth, and recognising what the line is between a breaking and non-breaking change, becomes critical.
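To make the breaking/non-breaking distinction concrete, here is a minimal sketch (the payloads and field names are invented for illustration): adding an optional field leaves old consumers working, while renaming a field they depend on does not:

```python
# A consumer written against version 1 of a hypothetical 'order' payload.
def summarise_order(order: dict) -> str:
    return f"{order['id']}: {order['total']}"

v1 = {"id": "o-1", "total": "9.99"}

# Non-breaking change: adding an optional field. Old consumers ignore it.
v2_additive = {"id": "o-1", "total": "9.99", "currency": "EUR"}

# Breaking change: renaming a field old consumers rely on.
v2_breaking = {"id": "o-1", "grand_total": "9.99"}

print(summarise_order(v1))           # o-1: 9.99
print(summarise_order(v2_additive))  # o-1: 9.99 (still works)
try:
    summarise_order(v2_breaking)
except KeyError as missing:
    print(f"breaking change: consumer expected {missing}")
```

The line Wright describes is exactly this one: additive evolution is usually safe, while removals and renames force every consumer to change in lockstep.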
Who agrees on the standards?
We find in many cases there is a small group of practitioners who will lay out new rules and standards, but there will be buy-in or a ‘sign off’ from a larger group of key stakeholders, such as a group of team leads who will be the primary consumers of a service.
While they may not be laying out the initial standards, in successful organisations there is a very short feedback loop between these groups and most importantly it is a relationship that is defined by collaboration and fast iteration, not push back or top down implementation.
Oh yes, AIOps… hadn’t you heard?
That’s Ops (as in operations… as recently ‘shown some love’ by DevOps) driven by Artificial Intelligence (AI): a world in which many of the core operational application and data management functions that Ops staff might perform across roles such as Database Administration (DBA), sysadmin, penetration testing and security provisioning (and so on) are actually executed by AI.
You can’t look far without hitting AIOps news and this week we see software analytics company New Relic acquire Israel and USA based SignifAI, an ‘event intelligence’ firm specialising in artificial intelligence (AI) and machine learning (ML).
How will New Relic bring AIOps to bear?
By offering software teams technology to predict and address performance issues — it’s all about detecting issues early to reduce alert noise.
“To deliver reliable software at scale, DevOps teams need to leverage machine learning to help them predict and detect issues early and reduce alert fatigue,” said Lew Cirne, CEO and founder of New Relic.
Cirne contends that what’s really exciting about SignifAI’s open platform is that it sits above a customer’s existing set of monitoring tools.
“With more than 60 integrations ranging from open source and commercial monitoring tools to popular services found in many DevOps toolchains, SignifAI automates correlation and enriches incident context so that software teams can get answers quickly during incidents and ultimately reduce mean time to resolution,” said Cirne.
It’s true that as modern systems become increasingly complex, the incident response process becomes more complex, too.
Plus… with microservice architectures, containers, and serverless technologies, companies face issues of cascading failures and alert noise.
SignifAI delivers AI and ML-powered correlations to provide:
- Faster mean time to resolution (MTTR) with automatic correlation, aggregation and prioritisation of alerts to help teams focus on what matters most.
- Automated predictive insights and recommended solutions to resolve issues faster.
- Efficient root cause analysis, with automatically enriched issues containing all the relevant logs, events and metrics that teams need, regardless of the timeframe.
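To be clear, SignifAI’s actual correlation engine is far more sophisticated than anything shown here; but as a rough sketch of the idea, alerts can be grouped by service and time window so that a flood of related alerts surfaces as one prioritised incident:

```python
from collections import defaultdict

# Minimal sketch of alert correlation: group alerts that concern the same
# service within a short time window, then prioritise the busiest groups.
WINDOW_SECONDS = 60

def correlate(alerts):
    """alerts: list of (timestamp, service, message) tuples.
    Returns incidents sorted by how many alerts they correlate."""
    incidents = defaultdict(list)
    for ts, service, message in sorted(alerts):
        # Bucket by service and coarse time window.
        incidents[(service, ts // WINDOW_SECONDS)].append(message)
    return sorted(incidents.items(), key=lambda kv: len(kv[1]), reverse=True)

alerts = [
    (10, "checkout", "high latency"),
    (15, "checkout", "error rate spike"),
    (20, "checkout", "pod restart"),
    (500, "search", "cache miss ratio up"),
]
for (service, _), messages in correlate(alerts):
    print(f"{service}: {len(messages)} correlated alert(s)")
```

Three checkout alerts within a minute collapse into one incident at the top of the list, which is the noise-reduction effect Cirne describes, in miniature.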
The SignifAI team will continue to work from offices in Sunnyvale, California and Tel Aviv, Israel.
NanoVMs has announced the first unikernel tool for developers that loads any Linux application as a unikernel.
But hang on… what is a unikernel?
The company says that unikernels are unique single process systems that run in a single address space.
Instead of deploying a Linux operating system and then an application on top of it… the application and the operating system become one secure isolated unit.
To run a unikernel system, a developer selects (from a modular stack) the minimal set of libraries that correspond to the OS constructs required for their application to run.
These libraries are compiled with the application and configuration code to build sealed, fixed-purpose images (unikernels) which run directly without the need for an operating system.
Because unikernels are a system with no users, there is no need for usernames or passwords, which are a major contributor to the average data breach.
A system with no shells means no one can remotely log in to the system and start running random programs on it or, worse, enlist a lowly camera or edge device into a botnet.
The NanoVMs tool, called “Ops”, requires no complex coding or configuration; it needs only a simple command to execute.
The company claims that running an application as a unikernel is beneficial in many ways and can be superior to containers. Unikernels are faster, more secure, smaller and come provisioned as virtual machines, which gives them much greater density.
To drill into this, unikernels embrace a four-point security model:
- No Users
- No Shell
- Single Process System
- Massively Reduced Attack Surface
According to NanoVMs, the fact that unikernels are a single process system is vital to solving cyber security vulnerabilities.
“A traditional multiple process system such as Linux has the inherent capability of running multiple programs concurrently. With single process systems by design the system can only run your program not anyone else’s. This immediately stops a lot of remote code executions,” noted the company, in a press statement.
With Ops, developers need no prior experience or knowledge of how to build unikernels, thereby [in theory] removing the barriers that may have prevented unikernel use in the past.
Ops can be used to build and run unikernels locally on a laptop — no account needs to be created and there aren’t multiple installations to sit through, just a single download and one command.
“We have numerous software issues that are reaching critical mass – security and cloud efficiency to name a few – and moving from outdated operating system-based applications to a unikernel system could have a radical impact,” said NanoVMs CEO Ian Eyberg. “Unikernels have been challenging to deploy in the past, but with our new Ops tool any developer can immediately begin implementation and reap the benefits.”
NanoVMs will also be offering several premade Ops packages for common programs that users would run, but not necessarily code themselves, in addition to databases and webservers.
Millions of lines of code
As additional background here: a unikernel is usually measured in the tens of thousands of lines of code. Compare that to a bloated system with hundreds of printer drivers, USB drivers, audio drivers and so on that are never used inside a virtual machine. The Linux kernel is around 15M lines of code, and 7-9M of that is drivers… and that is just the kernel. The operating system itself can be 50M lines of code on the low end to 200M lines of code on the high end.
A unikernel, at the end of the day, only needs a handful of drivers: something to talk to the disk, something to talk to the network, a clock and that’s about it. The opportunities for a hacker to hide malicious code in software running a Linux operating system are almost endless; not so for a minimalistic unikernel.
Skuid (pronounced squid) is a low-code cloud-based UX design-and-deploy platform for users to unite data, other applications and processes.
Duensing contends that mobile applications are the driving force behind microservices, but why?
Part of the reason for this is the pace of required change.
“The process of modifying a mobile app and getting the update installed on each user’s device is not a speedy one. So there needs to be a way to move more functionality to back end services and rapidly make enhancements without requiring a full mobile app refresh deployment,” said Duensing.
He also rams home the need for scaling (Ed – nice subtle fishy reference there).
According to Duensing, mobile applications are popular because of the freedom of movement they provide while still being able to access the information users need. This, he says, means that there is a greater opportunity for more concurrent users to exist on a back end system that these apps connect into.
“Monolithic systems that have most or all of the server side functionality on a single system are not scalable and reactive enough for large concurrent user loads typical with many mobile deployments,” said Duensing.
So how do microservices make data more accessible for developers to continuously iterate and deploy enterprise applications?
“A microservices architecture is very conducive to continuous development and deployment iterations. By breaking down a large suite of features into discrete functions, where each runs as its own service that is not tied to any specific server or dependencies, developers are able to create a loosely coupled system of independent functionalities,” said Duensing.
In all, each microservice can do one or a few things very well.
Duensing reminds us that this (above) fact allows each microservice to have specific development teams that are responsible for the functionality on that service. They can enhance the microservice without impacting other services. Mobile apps can benefit from new functionality without the need for a new app deployment.
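As a toy sketch of that loose coupling (an in-process stand-in for what would really be separately deployed services; the service names are invented), each capability lives behind its own handler, so one service can be swapped out without touching the others:

```python
# Each "service" independently owns one capability. The router knows only
# names, not implementations, so services stay loosely coupled.
def inventory_service(request):
    return {"sku": request["sku"], "in_stock": True}

def pricing_service(request):
    return {"sku": request["sku"], "price": 9.99}

SERVICES = {"inventory": inventory_service, "pricing": pricing_service}

def dispatch(name, request):
    return SERVICES[name](request)

print(dispatch("pricing", {"sku": "abc"}))

# "Redeploying" the pricing service is just swapping its entry; the
# inventory service and the router are untouched, mirroring how one team
# can enhance its microservice without impacting the others.
SERVICES["pricing"] = lambda request: {"sku": request["sku"], "price": 8.49}
print(dispatch("pricing", {"sku": "abc"}))
```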
So how do no-code platforms enable microservices architecture?
According to Duensing, the microservices architecture still has to be built with coding. This is important as a scalable architecture has to return data quickly.
“The advantage of leveraging no-code/low-code platforms with microservices architecture is that it adds more functionality that can be leveraged by front end applications. No-code/low-code platforms with microservices allows developers to quickly access data and build front end applications that can easily consume the back end services,” concluded Duensing.
Andi Grabner works as the DevOps Activist (real job title, not joking) for software intelligence company Dynatrace.
Activist eh? Why’s that Andi?
“Because I want to activate people into becoming change agents and embrace DevOps,” said Grabner.
Grabner’s comments were made at Dynatrace’s annual user conference Perform, held this January in Las Vegas… he explains that his firm’s approach to cloud-centric Application Performance Management (and allied layers of software intelligence) means that DevOps practitioners will be able to use Dynatrace technology to track what he calls application performance signatures.
What is a performance signature?
“After functional tests have been carried out [on an application] and unit tests have been carried out [and so on]… Dynatrace can show developers [by feeding information into a Continuous Integration (CI) Continuous Delivery (CD) engine such as Jenkins] how the actual performance signature of their application code has changed after updates have been carried out,” said Grabner.
For Grabner, DevOps is all about getting the right [application performance] data to the right people [programmers and operations staff] at the right time in order for them to know what they need to do next.
This can be done through pre-defined channels [could be Slack… or could even just be email]… but however it is conveyed, it is crucial that performance signature data be ‘pushed’ to the practitioners responsible for looking after the app or service in question.
Where it lives
The performance signature itself lives next to the developer’s code in GitHub (or another repository) and is created to express the metrics upon which an app’s success should be measured.
So, to explain the above statement in more colour, the performance signature is based on metrics that could be application speed, application resource consumption, end user experience… or some business use case Key Performance Indicator (KPI) such as an app’s ability to help convert sales and so on.
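Dynatrace’s own signature format isn’t shown here, but as a hypothetical sketch, a performance signature kept next to the code might be a set of metric thresholds that a CI stage evaluates after each build:

```python
# Hypothetical performance signature kept alongside the code in the repo:
# the metrics and thresholds on which this app's builds are judged.
PERFORMANCE_SIGNATURE = {
    "response_time_p95_ms": {"max": 250},
    "cpu_percent": {"max": 70},
    "conversion_rate_percent": {"min": 2.5},  # a business KPI, per Grabner
}

def evaluate(signature, measured):
    """Compare metrics measured during a CI run against the signature."""
    violations = []
    for metric, bounds in signature.items():
        value = measured[metric]
        if "max" in bounds and value > bounds["max"]:
            violations.append(f"{metric}={value} exceeds {bounds['max']}")
        if "min" in bounds and value < bounds["min"]:
            violations.append(f"{metric}={value} below {bounds['min']}")
    return violations

build_metrics = {"response_time_p95_ms": 310, "cpu_percent": 55,
                 "conversion_rate_percent": 3.1}
for violation in evaluate(PERFORMANCE_SIGNATURE, build_metrics):
    print("FAIL:", violation)  # the CI engine would flag this build as degraded
```

The point of keeping the signature in the repository is exactly what Grabner describes: the definition of “good performance” is versioned with the code it measures.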
Dynatrace has developed a Pipeline State UFO to help express the state of an application performance signature and show the status of any single project. This IoT gadget can be used to alert the development team to problems if they commit code that impacts the performance signature.
“The whole ethos of DevOps is based on collaborating and sharing, so we wanted to find a way of fostering this spirit,” explained Grabner. “The UFO is giving visual status updates on the quality of code both in development and in production.”
The UFO monitors progress at every stage of the continuous delivery process, with a separate device for each team or feature. If there are issues that impact getting a build completed, such as code that doesn’t compile, the LEDs turn red.
Your app has a performance signature… now you need to know what shape (and colour) it is.
This is a guest post written for the Computer Weekly Developer Network by Eran Kinsbruner in his role as chief evangelist for Perfecto.
Acquired by Clearlake Capital-backed Perforce in October 2018, Perfecto is a specialist in cloud-based automated mobile and web application test software solutions.
Kinsbruner reminds us that in the era of Agile and DevOps, organisations have to accelerate the delivery of value [through working, valuable, effective, productive software] to their customers — and they need to do it quickly and at high-quality.
The upshot – or direct corollary – of this reality is that implementing continuous testing processes and reducing the number of software iterations have become more essential.
Put simply, this also drives a clear need for continuous monitoring.
Kinsbruner writes as follows…
The Continuous Monitoring (CM) phase of the DevOps cycle is one of the most important because it continuously delivers real customer feedback from production back to the developers’ workstations.
This provides valuable end-user experience feedback from the app.
Developers implement continuous monitoring (often named Application Performance Management – APM) through either synthetic monitoring or Real User Monitoring (RUM).
Synthetic monitoring is when the team builds a ‘production-like’ environment that mimics the production environment as closely as possible.
Against this environment, developers execute test automation code (through the scheduler) that is connected to an alerting dashboard e.g. Splunk.
The main disadvantage of this method is that this, by design, is a ‘synthetic’ rather than ‘real user’ environment.
This can result in issues being missed, particularly in mobile apps that are specific to unique devices and environmental conditions.
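As a minimal sketch of the synthetic approach (a real setup would script whole user journeys against the production-like environment and push results to a dashboard; the transaction below is a stand-in), a check simply times a scripted transaction and flags any breach of the target:

```python
import time

# Minimal synthetic check: time a scripted transaction and alert if it
# fails or breaches the service-level objective.
SLO_SECONDS = 0.5

def checkout_transaction():
    """Stand-in for a scripted user journey against the test environment."""
    time.sleep(0.01)  # simulate the round trip
    return 200        # HTTP-style status code

def run_synthetic_check(transaction, slo=SLO_SECONDS):
    start = time.monotonic()
    status = transaction()
    elapsed = time.monotonic() - start
    healthy = status == 200 and elapsed <= slo
    return {"status": status, "seconds": round(elapsed, 3), "healthy": healthy}

result = run_synthetic_check(checkout_transaction)
print(result)  # a scheduler would run this periodically and forward failures
```

Kinsbruner’s caveat applies directly to this sketch: however faithfully the scripted transaction mimics a user, it is still synthetic, so device- and environment-specific failures can slip through.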
Continuous monitoring via RUM is done via an SDK or an agent that is a piece of code bundled within the application in production.
This code continuously sends metrics and other important properties from the app back to the monitoring dashboards.
The advantage of this method is the unlimited coverage that developers get from the real user environment.
The risks include security (this code acts like a backdoor from the app to the developers’ machines), reduced application performance due to the additional payload that the SDK adds to app transactions… and compliance with global regulations in areas such as privacy.
Triggered CI jobs
With both approaches, DevOps teams automate key business transactions that are executed through triggered CI jobs – a few times a day or even hourly.
If an anomaly appears in the results of the continuous monitoring tests, an alert is thrown and sent back to a monitoring dashboard notifying developers about its severity and in some cases the root-cause of the incident.
Both approaches also enable Dev and Ops teams to tune their production environment during peak usage and / or other unique market events.
You know Cisco, that networking company that makes hardware for deep internal systems and spends all its time developing switches, routers, hubs and token ring adapters and suchlike, right?
No, that was Cisco, that was the 1990s… this is Cisco in the post-millennial software-defined universe.
This is a Cisco that would rather be known as a software company (altogether now… software runs the world and every company is a software company tra la la) with a focus on software application developers and all strains of programming professionals.
There’s been a lot of focus on rearchitecting at Cisco… and much of it has been focused on rearchitecting towards the new mantra for technologies such as cloud-age Network Function Virtualisation (NFV) and software-defined networking… and that’s rearchitecting internally as well as rearchitecting Cisco’s core customer base.
So where’s the meat here then?
The firm used its Cisco Live Europe show this January 2019 to roll out a specific set of programmer tools aligned to the Internet of Things (IoT) space.
Cisco’s developer program, which is DevNet (developer stuff for networks – get it?), features a new set of IoT developer tools to help build and manage applications at the edge and enable extra flexibility.
The new IoT Developer Center is presented with learning materials, developer tools and support resources.
“In IoT, the conversation is about business outcomes. It starts with secure connectivity as the foundation of every IoT deployment. By providing scale, flexibility and security, we’re turning the network into a secret weapon for our IoT customers,” said Liz Centoni, senior vice president and general manager, IoT at Cisco. “And, with a new DevNet IoT developer center, we’re empowering thousands of partners and developers around the world to build upon our IoT platform.”
Technologies showcased here include IOx for edge compute development.
In the data and network management space, but also specifically presented for developers, Cisco here showcases Cisco Kinetic, Cisco Kinetic for Cities, Control Center and Blockchain.
A dedicated open source subsection of the site features tools including Joy, NeXt UI Toolkit, TRex, YANG Development Kit (YDK) and YANG Explorer.
DevNet Sandboxes provide zero-cost access to infrastructure and platforms to develop and run code against, 24×7… and DevNet Ecosystem Exchange makes it easier for developers to both find and share applications and solutions built for Cisco platforms.
Clearly, in reality, we know that Cisco has been a software-focused, developer-centric company for years and the firm has always had a pedigree in programmer functions of many disciplines. What we now see is Cisco working to present a greater degree of developer accessibility to its toolsets, for people who code across a far broader spectrum.
In the world of technology, there’s open source, but it is often open source served up in a somewhat limited or restricted format under an essentially proprietary overhang that leads quite quickly into a commercially supported (essentially paid for) supply model.
There is true open source too… and we often call this free and open source software (FOSS), meaning software that is, for whatever functionality is presented, fully open source and free for use.
Then there is free for life (or FFL, if an acronym were needed)… and this is software, tooling, training (or any other elemental aspect of technology) that is both completely open and also completely free.
For any firm to provide free for life resources, there has to be a reason (after all, there’s no such thing as a free lunch)… so one example that has come forward this year is the Free for Life Developer Program from cloud software intelligence company Dynatrace.
The firm has provided this initiative to deliver resources devoted to the recruitment, education and growth of (customer and partner) developers who are extending the Dynatrace software intelligence platform across hybrid cloud ecosystems.
Why is Dynatrace doing this?
Because (says the company) there is a concerted effort across the IT industry to enable autonomous cloud operations, so that functions such as cloud-scale DevOps can be carried out with automated intelligence and with little or no human intervention.
You could call this NoOps, or AIOps… as in Ops directed by AI (or what feels like nobody at all)… and Dynatrace often does.
Dynatrace’s open software intelligence platform can be integrated with third-party offerings to automate operations, drive smart process and workflow and to ingest new sources of data and events.
The Dynatrace Developer Program provides a centralised community for developers and enterprise application architects.
It consists of training materials, best practices and information on how to develop integrations and extensions on the Dynatrace software intelligence platform, to become more productive using Dynatrace’s own developer tools.
Connect & collaborate
All Dynatrace Developer Program members gain free access to a developer instance on the platform, where they can create new value on the platform. They can also use it to connect and collaborate with the Dynatrace community for as long as they are licensed to use the Dynatrace platform.
“The Dynatrace Developer program is a one‑stop shop for developers to innovate and create with Dynatrace,” said Steve Tack, senior vice president, product management at Dynatrace. “We’re empowering enterprises to extend the AI and automation core of the Dynatrace software intelligence platform to third-party tooling, platforms and workflows across the enterprise cloud to garner greater insight and automation, including autonomous cloud operations.”
So that’s free for life, as long as you work just on the platform, stay licensed to the platform and pledge allegiance (and your first-born child, presumably) to the platform. We jest… there are wider third-party benefits here, as Dynatrace CTO and founder Bernd Greifeneder explained at Dynatrace Perform 2019 in Las Vegas. Those benefits could come in the form of helping to uncover efficiencies across wider aspects of the cloud stack that customers could be using… and also helping to create plug-ins and other additional elements that might not have been discovered without engagement with the Dynatrace fabric.
The Computer Weekly Developer Network is in the engine room, covered in grease and looking for Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) tools for software application developers to use.
This post is part of a series which also runs as a main feature in Computer Weekly.
With so much AI, ML & DL power in development and so many new neural network brains to build for our applications, how should programmers ‘kit out’ their AI toolbox?
How much grease and gearing should they get their hands dirty with… and which robot torque wrench should they start with?
The following text is written by Peter Silvio in his capacity as vice president of engineering for platform solutions at Shutterstock — the company hosts millions of stock images, photos, videos and music on its portal website.
Silvio writes on the evolution and democratisation of AI/Deep Learning development as follows…
When it comes to the works of AI, much of the focus has been in the realm of pure science and mathematics, as well as researching, developing and training models.
Many businesses are attempting to leverage AI and deep learning in their applications… however, engineering and executing a production-ready build which is scalable and performant is a difficult challenge.
This presents interesting challenges as the applications move to the cloud.
Some microservices architectures have robust infrastructure capabilities and tooling, such as Docker and Kubernetes; however, the same toolset support for deep learning-based applications is still nascent.
Offline Tasks & Online Tasks
In computer vision application development, for example, there are generally two types of tasks needed to complete an application: ‘Offline Tasks’ and ‘Online Tasks’.
Offline tasks are those which do not impact current production environments, including model design, development, training and even initial testing and model validation. In terms of tooling, this is the most mature space within deep learning, with diverse frameworks from TensorFlow and Keras to PyTorch and a growing ecosystem of supported libraries.
Additionally, model development and training are beginning to see democratization through the availability of commercial software and managed services that reduce the overhead of the model development lifecycle.
One key tool many data scientists have used for years is the Jupyter notebook. In recent years, many vendors, including the major cloud providers, have focused their attention on providing managed services which aim to make machine learning/deep learning design and development more efficient, as well as accessible to data scientists and engineers alike.
Amazon SageMaker, Google Datalab and Azure ML Studio all provide fully managed cloud Jupyter notebooks for designing and developing machine learning and deep learning models by leveraging serverless cloud engines.
By leveraging these platforms, businesses can take advantage of a full spectrum of capabilities, allowing developers to focus on building truly differentiated models where the business need or opportunity requires it. They can also leverage pre-trained models or managed APIs for more standard, commoditised capabilities, or sit somewhere in between by leveraging transfer learning.
Given the time and resource required to build and train deep learning models, transfer learning is an incredibly efficient and powerful method, which has become popular in deep learning.
Transfer learning allows us to use a pre-trained model on new problems. Leveraging models such as Inception V3 for image recognition reduces overall development time while still producing highly accurate results.
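Running InceptionV3 itself requires a deep learning framework and pre-trained weights, so here is a framework-free toy illustration of the principle instead: the ‘pretrained’ feature extractor below stays frozen, and only a small task-specific head is trained.

```python
# Toy transfer learning: a frozen "pretrained" feature extractor plus a
# trainable linear head. In real transfer learning the frozen part would
# be a backbone such as InceptionV3.
def pretrained_features(x):
    """Frozen feature extractor; its parameters are never updated."""
    return [x, x * x]

# Task: learn y = 3*x^2 + 1 from the frozen features with a linear head.
data = [(x, 3 * x * x + 1) for x in [-2, -1, 0, 1, 2]]

weights, bias, lr = [0.0, 0.0], 0.0, 0.02
for _ in range(2000):  # train only the head; the backbone never changes
    for x, y in data:
        feats = pretrained_features(x)
        pred = sum(w * f for w, f in zip(weights, feats)) + bias
        err = pred - y
        weights = [w - lr * err * f for w, f in zip(weights, feats)]
        bias -= lr * err

print(round(weights[1], 2), round(bias, 2))  # converges to 3.0 1.0
```

Because only the small head is optimised, training is cheap and fast; that is the efficiency Silvio points to, scaled down to a few lines.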
Online tasks consist of the productionisation and operations of AI / deep learning capabilities into business applications and platforms. At Shutterstock we have applied our research in image recognition to develop a platform which offers our customers rich features such as ‘Reverse Image Search’ and ‘Similar Image Search’.
When it comes to executing on a production build for deep learning services such as custom computer vision models the tool-chain available to developers is still fairly nascent and relies heavily on more generalized tools and custom development.
Until recently, deploying graphics processing unit (GPU)- and input/output (I/O)-heavy deep learning applications generally required expensive, intricate configurations and did not take full advantage of modern capabilities such as containerization and autoscaling, or tools such as Kubernetes.
This will change rapidly: there is already strong support across Python, TensorFlow, MXNet and more, and as the offline tasks are simplified, so too will the online tasks be. Now, with the continued evolution of open source projects such as Kubernetes and Istio, building, deploying, testing and scaling are becoming streamlined.
About the author
Peter Silvio is a passionate technology leader with over 20 years of experience. He has spent the past decade architecting and building distributed service platforms and designing data architecture, not just for internal applications but also defining API products to power future company solutions.