Load balancing and application services used to rely on traditional (i.e. old fashioned) appliance-based web application firewalls.
That day has now passed, according to Avi Networks… a firm known for its so-called Intelligent Web Application Firewall (iWAF) technology.
Where appliance-based firewall technologies were (admittedly) constrained by the operational and performance limitations of the hardware upon which they sat, Avi Networks says it has come forward with a software-only solution that operates as a centrally managed fabric across datacentres, private clouds and public clouds.
It is, by nature then, scale-out… as big as the web needs.
Wall of knobs
The assertion here is that traditional web application firewalls suffer what the company calls ‘wall of knobs’ complexity, that is – they present a massively complex surface of dials and switches all operating as controls for black boxes that provide little to no intelligence.
Avi’s elastic scale-out platform enables iWAF to perform at what is claimed to be 50X faster than legacy appliances, processing hundreds of Gbps of throughput and over a million transactions per second.
Also… iWAF protects web applications from common vulnerabilities identified by Open Web Application Security Project (OWASP), such as SQL Injection and Cross-Site Scripting, while providing the ability to customise the rule set for each application.
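Neither iWAF's rule engine nor its signatures are detailed here, so purely to illustrate the idea of a customisable, signature-based rule set, below is a minimal, hypothetical sketch. The rule names and regular expressions are this writer's own inventions, not Avi's.

```python
import re

# Hypothetical signatures in the spirit of the OWASP SQL injection
# and cross-site scripting categories. A production WAF rule set is
# far more sophisticated than these two toy patterns.
RULES = {
    "sql_injection": re.compile(r"('|\")\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE),
    "xss": re.compile(r"<\s*script\b", re.IGNORECASE),
}

def inspect(request_param: str) -> list[str]:
    """Return the names of any rules the input parameter matches."""
    return [name for name, pattern in RULES.items() if pattern.search(request_param)]
```

Customising the rule set per application, as the article describes, would then amount to adding, removing or tuning entries in a structure like `RULES` — for example, `inspect("1' OR 1=1")` flags the SQL injection signature, while an ordinary query string matches nothing.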
Don’t fly blind
“In the face of ever-increasing cyber threats, enterprises need to protect their most important revenue-generating assets: their web applications. We designed iWAF to be robust and secure for any application, traditional or cloud-native, under any amount of traffic,” said Guru Chahal, vice president of product at Avi Networks. “With the Avi Networks iWAF, IT teams no longer need to fly blind when writing and enforcing security policies.”
Additionally, iWAF analyses the security rules that match a particular transaction, providing this in real-time as applications and attack patterns evolve.
Based on this intelligence, one-click customisation of rules and exceptions could help reduce the problem of false-positive fatigue.
Essentially this product provides centralised management, elastic scale and closed-loop security analytics.
The Avi Networks iWAF is the latest component of the Avi Vantage Platform, which features a Smart Load Balancer and an Application Service Mesh.
This is a guest post for the Computer Weekly Developer Network blog written by Ben Newton, analytics lead at Sumo Logic.
Sumo Logic is a firm known for its cloud-native machine data analytics platform designed for what has been called 'continuous intelligence' in the always-on world of cloud.
Newton writes as follows…
So, what does it mean to be cloud native?
It reminds me of that age old cocktail party question – “Where are you from…originally?”
In most fast moving urban areas, that answer is almost always somewhere else… and then you meet that rarest of creatures – the native.
They grew up in the neighborhood.
They are ‘street smart’ i.e. they know where to go to get the best sandwiches and where to vacation away from the tourists. They may never have visited all the tourist attractions – because, you know, it just never occurred to them to do it. Being cloud native is like this and more.
So, what are the assumptions and secrets of the cloud native?
Assumption #1: What’s a server anyway?
This is really the most basic and the most profound question.
I remember racking my own servers. I could see the blinking lights and hear the fans. To the cloud native, a server is just a line on an administrative screen. It is a unit of compute. When a better unit of compute comes along, the cloud native presses a button and upgrades.
They don’t name their units of compute after Simpsons characters or care if there are blinky lights. They don’t even know what a server looks like.
Assumption #2: Why build – that sounds painful!
In the pre-cloud world, Oracle clusters were lovingly configured and nurtured – hours were spent deciding on network device specs and hardware configurations. The cloud native doesn't understand such drudgery.
These are just services to be configured and billed monthly on per-second usage.
Why on God’s green earth would you build a database cluster when you stand one up in seconds with a simple command – and scale it by the mere power of thought? In the time it took you to ponder that thought, the cloud native has already scaled their database cluster to 3 continents.
Assumption #3: Is your scaling organic & conflict-free?
In the sad, gray world of the pre-cloud, scaling was something that required thought, planning, and purchase orders.
The cloud-native thinks of their application as organic, like the milk they buy at Whole Foods (which is now on the same bill as their cloud services and can be delivered by drone. Nice!). It starts in a small plot (without using pesticides, of course), and grows to fill the space. Scaling is just something that happens when you shine sun on plants and feed them compost, silly.
The cloud native uses services that are built with scaling as a given, not something to be planned. Of course, there might be hiccups along the way, but you feed the misshapen tomato to your inordinately happy, free-range chickens and move on with your life. It’s 2017 for crying out loud.
Assumption #4 – Geography is for slackers
In the olden days, moving to a new datacenter was like building a new town from scratch. It was painful and lasted for years. The cloud native sees no boundaries to their international ambitions. They can replicate their responsibly grown, modern application to Dublin without even going there to drink their Guinness.
The beauty of the cloud world is the consistency – everywhere. And unlike certain unnamed fast food joints, the cloud native enjoys their non-mediocre, artisanal, conflict-free services the world over.
Just like anyone that moves to a new place, you can assimilate and try to fit in. But you will never know New York like the native New Yorker, or understand Chicago like a native of the windy city.
You will never understand technology as well as your kids. That’s life…
About the Author
Ben Newton has spent the last decade and a half of his working life in the world of IT. He is machine data analytics lead for Sumo Logic and part of a team focused on new approaches to machine data/big data analytics.
Follow Ben on Twitter @benoitnewton
This is a guest post for the Computer Weekly Developer Network written by Eric Sigler in his capacity as head of DevOps at PagerDuty.
PagerDuty offers what it calls a Digital Operations Management Platform that aims to integrate machine data alongside human intelligence on the road to building interactive applications that benefit from optimised response orchestration, continuous development and delivery.
Sigler writes as follows:
These days it’s rare that I speak with a customer building on-premises only infrastructure.
Most companies I speak with are moving to the cloud [model] in some capacity, have already migrated, or were built from scratch in the cloud. As a result of this huge shift in how companies approach their infrastructure, many are inevitably and necessarily adopting a cloud-native architecture.
Going cloud native acts as a ‘forcing function’ for how applications are built on top of infrastructure.
Because you're standardising on the behaviour of lower-level components like compute and network, what you're effectively telling individual teams working on these smaller, more agile units of software (be it 'service oriented' or 'microservices') is…
“Please stop spending all of your time reinventing the wheel [in respect of] everything below the application layer.”
This is where traditional or virtualised application design tends to fall down, as you end up spending lots of time reinventing how you ship that software. Which is a painful process, and one that doesn’t often result in useful business value.
Failure (sh*t) happens
There are a couple of related, useful mindsets to remember here. The first is simply to accept that failure happens, yet this is an area where many software developers tend to treat their software as a precious, unique snowflake.
If, for instance, the network breaks, it’s assumed that the software had nothing to do with that failure – but that’s not the case. Failure happens at all parts and in all aspects of operating a cloud-native infrastructure. In my experience, you ignore this reality at your own peril.
A dependency on dependencies
The second mindset to adopt is that you can’t ignore external application dependencies, such as DNS providers or SMTP delivery services.
Own your dependencies or they will own you.
This isn’t a new concept, but whatever you are building, it’s the responsibility of the service under your control to deliver. Your customers won’t care what downstream dependencies you use, or what problems they had. You must have that engineering in place so that when a dependency fails, you can rapidly recover.
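Sigler's "own your dependencies" point can be sketched as a retry-with-failover wrapper around an external service call. The sketch below is a minimal, generic illustration — the retry counts, backoff and the idea of a secondary provider are assumptions, not PagerDuty's actual engineering.

```python
import time

def resilient_call(primary, fallback, retries=2, backoff=0.1):
    """Try the primary dependency a few times, then fail over.

    `primary` and `fallback` are zero-argument callables wrapping,
    say, two hypothetical SMTP or DNS providers. If the fallback
    succeeds, the customer never sees the primary's outage.
    """
    for attempt in range(retries):
        try:
            return primary()
        except Exception:
            time.sleep(backoff * (2 ** attempt))  # simple exponential backoff
    return fallback()  # last resort: the secondary provider
```

The design point is that the recovery path lives in *your* service, where you control it, rather than being left to the downstream dependency's goodwill.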
Shifting to a cloud-native approach absolutely changes an organisation’s collaboration among its developer and operational teams. With cloud native comes the move to a consistent, standardised set of primitives that a team can use. This is closer to total ownership of that service, which is uncomfortable for some teams.
The operations team in most companies should be shifting toward providing those platforms and the primitives to their developer teams.
- Ops teams are no longer charged solely with spinning up infrastructure; they’re also now there to be enablers for developers.
- Conversely, the developer teams must effectively use these primitives and platforms provided by operations.
One final aspect that I consider foundational to a cloud-native approach is the importance of small teams working together on smaller services that are then composed together. This leads to far more agility within the business and to a better ability to address the multitude of problems that come innately with software development.
Keeping these points in mind should ease your move to a cloud-native approach and hopefully make it a little less painful.
PagerDuty explains more on digital Ops management here.
Enterprise cloud company Nutanix has announced new capabilities for its Acropolis File Services (AFS) software. This is new technology and it has only been around for a couple of quarters. In that time Nutanix states that around 10% of its customers are choosing Nutanix AFS for their file services needs.
Delivered as part of the Nutanix Enterprise Cloud OS, AFS software is intended to streamline IT operations by ‘natively converging’ Virtual Machines (VMs) and file storage into a single computing platform that is inherently engineered for cloud computing with its ‘scale-out’ characteristics.
Nutanix (as is common with all firms in this space) is usually economical in its explanation of what the mechanics of native convergence might mean. We can safely assume that this is about engineering file structures with the kind of compute and data management DNA needed to operate in the cloud, i.e. an environment where storage comes from services-based streams rather than from any single host machine.
Essentially this is all about:
- getting to cloud computing
- mixing best of public and private cloud
- doing it without creating data silos
Cloud portfolio services
The Nutanix Enterprise Cloud OS architecture has been engineered to provide ‘portfolio’ services for cloud applications. Again we ask, what’s a portfolio service?
Nutanix is using this label to express a notion of application building blocks that come with on-demand compute resources. That makes sense, that’s what cloud is.
The kind of things found here are elastic storage services for block and file-based data. The concept is that IT managers could use these tools to build and operate enterprise datacentres that rival public clouds.
Pleasure from your partner
The firm says that its AFS partner ecosystem will also provide tools to help with security (simple in-line antivirus scans using software from Symantec); backup (partners Rubrik and Comtrade); and detailed audits of file system access (partners Varonis and STEALTHbits).
“Rapid AFS adoption shows that customers are clamouring for more ways to converge and simplify all their IT operations, and to replace legacy products with more flexible software-driven solutions,” said Sunil Potti, chief product and development officer at Nutanix.
Potti further insists that his firm is reducing operational complexity with a common software layer. He explains that in addition to simplifying IT infrastructure by removing silos created by dedicated file services offerings, AFS software gives customers a public cloud experience, but with the security and control of a private cloud deployment.
AFS customers can manage file services via Prism, delivering a unified, consumer-grade management interface for their full infrastructure stack and freeing IT resources to focus on critical, business-driving activities.
Deeper dive info on this subject can be found on the Nutanix blog.
There are software application development professionals in every industry from oil & gas to cake baking, obviously.
There are core characteristics and functions that will bind all developers together — such as the need to adopt web-scalable composable containerised architectures in the modern cloud era along with Agile programming practices… and so on.
But could the need to engineer payment mechanisms into modern app structures in our oh-so very customer centric society form a new ‘must have’ in contemporary programming environments?
Paysafe thinks the answer to the above technology propositions is yes.
But then it would do… the firm is a developer-focused API-first global payments company with a programming platform that it sees as one of the key channels for receiving direct feedback to optimise its product range.
Paysafe says it has long-held strong relationships with developers within medium-to-large enterprise merchants.
The aim of its Developer Centre is to allow developers to understand the breadth of Paysafe's product range and how they can integrate its APIs and payment services.
It also aims to encourage engagement through its blog and Stack Overflow.
The ‘checkout experience’
Paysafe.JS is a hosted payment product that allows merchants to download an SDK to create a customised payment form that adheres to a firm’s own brand and so-called ‘checkout experience’ (Ed – oh, that’s really a ‘thing’ now then).
“The Developer Centre is the main platform for Paysafe to showcase product innovation to the developer community, it also allows merchants to integrate their new product innovation/APIs. Paysafe has a number of new features in development, including its Global Product Roadmap, greater interactive product tutorials and demonstrational videos; allowing merchants and developers to subscribe to real time APIs statuses updates via Status.io. Paysafe also plans to further simplify its application process so that merchants and developers can onboard even faster,” said the company, in a press statement overview.
Look & feel
The Paysafe Card Payments API appears to offer a full suite of credit and debit card functionality for card-processing needs… plus, the Paysafe 3D Secure API offers control of the cardholder authentication process for a developer’s e-commerce platform of choice.
Paysafe… hmm, it almost sounds like a good excuse for a padlock and keyboard picture, oh okay, go on then.
Software application developer-focused revision (and version) control system company Perforce Software has nipped down to IKEA and bought itself Sweden-based Hansoft Technologies.
Hansoft is known neither for its effective implementation of flat-packed furniture nor for its development of delicious low-cost meatballs.
Instead, Hansoft is a provider of enterprise-level Agile planning tools.
Perforce then, for its money, gets additional project management tools as the core component of this deal.
“At Perforce, we’re on a mission to connect teams to the efficiency, insights, support and security they need to innovate. With this latest acquisition, we’re pleased to build on that undertaking with a solution that provides both macro- and micro-level views into enterprise Agile projects,” said Janet Dryer, Perforce CEO.
Hansoft software is said to be applicable across entire teams — from developers to managers to executives — to provide planning, management and collaboration functions.
Perforce already had a good degree of planning and management tools inside its Helix ALM (Application Lifecycle Management) product to help centralise and link development artifacts across the application lifecycle… but presumably the firm wanted an extra dose of Agile, which it will get from the Hansoft boys and girls.
The software achieves its team-wide functionality by synchronising two entities that often have conflicting needs in Agile enterprises: the development team and the management team.
Agile + Waterfall
“Using Hansoft, team members are free to use their preferred combinations of Agile and Waterfall project management methodologies, including changing methodologies in flight, while managers and executives can make informed decisions with real-time insights and status provided by comprehensive visual dashboards,” explained Patric Palm, Hansoft co-founder and former CEO.
The announcement comes weeks after Perforce unveiled its latest software, Helix TeamHub Enterprise, a code hosting and collaboration solution for multi-repository types that offers build performance for large-scale Git environments.
Customer Relationship Management (CRM) and Relationship Intelligence (RI) software company SugarCRM has launched a mobile Software Development Kit (SDK).
The SDK can be used to develop purpose-built CRM mobile applications for individuals, roles and teams.
What it means is that SugarCRM customers can build customisations for Sugar Mobile, akin to what can be done today for the Sugar desktop experience.
Slow mobile uptake
While smartphones are firmly entrenched in today’s business world, SugarCRM argues that truly valuable mobile apps from enterprise software providers are rare.
“This is because businesses are still learning how to take advantage of the contextual and real-time information provided by smartphone capabilities like geolocation, push notifications and the camera,” said the company, in a press statement.
According to the findings of a Nucleus Research report, 65 percent of companies that consistently take advantage of mobile CRM are meeting or exceeding their current sales quotas.
“While the ‘mobile-first’ era is powering modern businesses, the key to continuing that trend is to fuel the mass customisation of the app development process,” said Rich Green, chief product officer at SugarCRM. “The mobile SDK’s formalised, public APIs and guidelines will help developers extend Sugar Mobile in an upgrade-safe manner. SugarCRM has done the heavy lifting so customers do not need to build their own applications from scratch.”
Common examples of customisations are: integration with enterprise-ready mobile device management tools, custom fields, views and buttons; integration with native device capabilities like GPS and the camera; and custom styling, theming and navigation.
The Sugar Mobile app is available for all Sugar Professional, Enterprise and Ultimate customers. The app includes features such as ‘offline storage mode’, single-sign on connectivity and mobile device management capabilities.
Splunk is using its .conf2017 annual conference to explain how governments and higher education institutions are using its machine data and machine learning technologies to analyse, correlate and act on machine information streams to serve (and in some cases underpin) their operations.
Could this be the rise of (or even the coining of the term) g-data?
G-data is not a formal term yet by any reckoning, but perhaps it soon could be… and surely G-data-as-a-Service (GdaaS) will follow.
The news from Splunk is that thousands of public sector organisations including all three branches of the U.S. government, 15 cabinet-level departments, 43 out of 50 U.S. states and over 750 higher education institutions are using the firm’s IT and security products.
Being a North American event, Splunk did not offer corresponding UK or European stats at this time… although the firm’s VP for public sector did say that this trend is being seen in public sector and higher education organisations worldwide.
According to the 2017 Splunk Public Sector IT Operations Survey, at least 60 percent of public sector IT professionals feel no more confident in carrying out their responsibilities than they did last year.
Examples of g-Splunk in use
In terms of working examples of Splunk in use in the public sector – or if you will allow us – g-Splunk in use, we know that the US Defense Health Agency (DHA) relies on a secure and always-on infrastructure to ensure that armed forces are healthy and ready to serve around the world.
Splunk is helping the DHA successfully monitor the newly deployed MHS GENESIS Electronic Health Record (EHR) system and will soon provide the foundation for DHA’s enterprise Log Consolidation, Analysis and Retention Service (LCARS). “We believe the Splunk platform can help to deliver significant savings by identifying tools we no longer need and automating reporting and IT processes,” said Wayne Speaks, chief operating officer, Defense Health Agency Infrastructure and Operations Division.
Other examples of g-Splunk for g-data include the UK Ministry of Defence (MoD).
Air Commander Chris Moore is ISS director of service operations. Moore says that Splunk provides the MoD with access to real-time protective and network performance monitoring as part of its ‘Enterprise Security & Service Management’ programme.
Splunk has cited other use cases in the City of Los Angeles public operations division and in Georgetown University.
Machine data operational intelligence platform specialist Splunk has hosted its .conf 2017 conference and exhibition in Washington DC.
Many of the firm’s partners (Splunkners, perhaps?) also attended.
The Splunk global partner ecosystem is now said to be some 950 partners strong – it is a union of system integrators, distributors, value-added resellers, technology alliance partners, OEMs and managed service providers.
What the partners said
Speaking in relation to this event, CTO of xMatters Abbas Haider Ali told the Computer Weekly Developer Network that his firm works to integrate activity across the tools and people that build and run enterprise applications.
“Gleaning real time intelligence across all of these activities is a critical component of performance and secure applications and Splunk is the tool of choice for our customers to make that happen. The Splunkbase platform makes it easy to build and distribute integrations for the Splunk community and connect them to the full ecosystem of xMatters integrations. As a partner, we benefit from new use cases and applications of Splunkiness to hard operations problems and alerting people of notable events with xMatters,” said Haider Ali.
VP for business & corporate development at Digital Shadows is Alex Seton. A Splunk partner for some time now, Digital Shadows monitors & manages so-called digital risk across a range of data sources to protect a business.
“The app we have developed for Splunk Enterprise customers means they can now use Digital Shadows’ solution to help manage and mitigate their digital risks across the open, deep and dark web alongside Splunk’s real time operational intelligence. This will enable customers to manage their digital risk from cyber threats, data loss, brand exposure, VIP exposure, infrastructure exposure, physical threats and third-party risk, and create an up-to-the minute view of their organization’s digital risk with tailored threat intelligence.”
“Splunk users know that operational intelligence makes outsized demands on file storage infrastructure. These workloads have new requirements for scale that legacy storage appliances are unable to meet. It’s no longer just the storage capacity that matters; it’s also the number of files that can be stored and managed and here’s where legacy storage runs out of gas,” said Ben Gitenstein, senior director, product management at Qumulo.
At Qumulo, Gitenstein says they have taken a different approach for a completely different level of scale. They know that users of large-scale storage need control over (and insight into) file system usage and performance in real time. Gitenstein says he also knows that software developers and engineers need complete programmability of infrastructure.
Operational intelligence isn’t just storage as usual, he said.
V for VictorOps victory
VictorOps develops a full-stack DevOps incident management platform that ingests real-time operational intelligence from Splunk (and other monitoring tools) into a timeline of activity for people watching the systems.
“Whereas Splunk delivers intelligent insights throughout the delivery chain, VictorOps connects those insights to the people who have the expertise to take the right action. As more modern development organizations invest heavily in continuous deployment, microservices and agile practices, they use these rapid cycles to deliver value to customers faster. Splunk is an important partner in collecting data across that delivery chain. VictorOps takes that information from Splunk, disseminates it, and facilitates continuous learning so that teams retro on what went wrong and don’t make the same mistakes twice,” said Joni Klippert, VP of product, VictorOps.
Other comments will follow…
Amido is a technical consultancy specialising in customer identity, search and cloud services.
Gray writes as follows…
Companies are attracted to cloud-native applications because going cloud native takes them further away from the ‘bare metal’ maintenance associated with traditional infrastructure: installing software packages, managing updates and so on.
When companies decide to go cloud native, they can essentially move to a serverless architecture and choose to go with a managed service such as Azure App Service or AWS Elastic Beanstalk/ECS. By doing so, you remove the overhead of having multiple teams – one team in-house writing code to create the application or solution, and another making sure it operates well – and instead have a number of skilled workers who can develop and operate simply by using platform tools.
Additionally, you don’t need to hire as many specialists to implement the more complicated elements of data processing – e.g. data pipelines or machine learning algorithms; these now come as drag-and-drop interfaces, so developers can focus on the insights the data brings rather than on the infrastructure and algorithm build.
What changes in cloud native?
So what elements of programming architecture change when you go cloud native?
It can be difficult to remain vendor agnostic when you embrace some cloud-native platforms because they work in very different ways and do not offer feature parity. It is, therefore, worth consulting a technology specialist firm that is not tied into specific architectures, as it is more likely to have DevOps teams that can help design a system that can be deployed over multiple clouds while also removing the layers associated with legacy architecture designed for monolithic applications.
When you find a trusted partner/consultant, the process to move into the cloud becomes exciting. You realise that you can introduce microservices and containers to help build services and minimise disruption when upgrades are needed, or if there is an issue.
Design for failure
An important change you need to consider when going cloud native is designing for failure.
Cloud native PaaS components usually have lower individual SLAs (typically 99-99.9% as opposed to 99.99%) which is a large increase in potential downtime. Designing for failure means being able to recover elegantly if an individual component is not responsive, or, returns an error.
Alternatively, it can also mean being more creative when designing a system which needs high uptime. For example, we recently built an application designed for 99.99% uptime on top of component services that individually offer 99.9%. The trick is to understand the failure scenarios and design for them, to ensure that the system keeps responding when there is an outage of a sub-component.
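The arithmetic behind that kind of claim is worth checking: availabilities of components that must all be up multiply together, while a redundant pair is down only when both halves are down. A quick sketch (the 99.9% figure mirrors the PaaS SLAs mentioned above; the function names are ours):

```python
def serial_availability(*components):
    """Availability of a chain where every component must be up: multiply."""
    result = 1.0
    for availability in components:
        result *= availability
    return result

def redundant_availability(a, b):
    """Availability of a redundant pair: down only if both are down."""
    return 1 - (1 - a) * (1 - b)

# Two 99.9% services chained in series fall short of 99.99%...
chained = serial_availability(0.999, 0.999)    # 0.998001
# ...but a redundant pair of the same services comfortably exceeds it.
paired = redundant_availability(0.999, 0.999)  # 0.999999
```

This is why designing for failure, rather than simply buying components with bigger SLA numbers, is what gets a composite system to four nines.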
Microservice architecture allows businesses to manage parts of larger projects individually, avoiding blanket updates or uploads, which can result in system delays or downtime. Its purpose is to increase flexibility throughout the business offering, allowing an enterprise to be more competitive and increasingly agile, as it can adapt parts of the business in isolation. This way of operating is increasingly important for today’s High Street retailers, for instance, as they look to compete with, and adapt to, omnichannel operating models.
The path to adopting a microservices architecture does not require wholesale digital transformation, as it doesn’t have to be an all-or-nothing proposition. It is entirely possible for businesses to simply dip their toes in the microservices world without starting from scratch. This is likely to be music to businesses’ ears as they look to keep up with, and exceed, their customers’ growing expectations and demands when it comes to online capabilities.
Containers, meanwhile, give you more control over the infrastructure you are deploying. Because you are not creating a virtual machine for every instance of an application, deployments are rapid and the overhead of the operating system is significantly lower. This combination is powerful: it means we can issue an upgrade or change that will take effect almost immediately, without disruption to the general use of the portal.
However, despite the advantages of containers, they are not a magic fix for all. Not every organisation can benefit from their use, as some applications are not suitable for containerised deployment, so the decision to containerise software must be considered carefully.
Our monolithic past
Monolithic applications, favoured by traditional enterprises, are not as well suited to containerisation owing to the considerably different tooling required by microservices. Containers are far better suited to a microservices environment, where large projects can be broken down into a set of manageable, independent and loosely coupled services.
For some legacy or monolithic solutions, the decision to containerise software needs to be considered carefully. Containers are valuable when monolithic applications can be split into smaller components which can be distributed across a containerised infrastructure; but this is not to say that any application will work, just that care needs to be taken to see if it is suitable for containerised deployment.
Amido is an independent technical consultancy that specialises in implementing cloud-first solutions. We help our clients build resilience at scale, flexibility for the future and differentiation of customer experience. And, we do this while minimising business-risk and build-cost.