CW Developer Network


July 11, 2018  7:41 AM

What is a software defined perimeter?

Adrian Bridgwater

Information access and identity management controls have never been more prominent in the developer news-stream than they are today.

Software Defined Networks (SDN) have a massive part to play in the way cloud architectures are now constructed and operated, many of which will be multi-tenant deployments with complex integration gateway challenges.

Building virtual barriers between different services, different computing instances, different analytics operations, different data workflows and different layers of a complex cloud infrastructure requires that we create a system of delineation — so how do we draw those lines?

Increasingly prevalent in this space is the notion of Software Defined Perimeter (SDP) principles.

As previously discussed on Computer Weekly, a Software Defined Perimeter approach can form part of a multi-layered approach to network security using a zero-trust model.

Black cloud, device posture

Software Defined Perimeter has also been called a ‘black cloud’ — it evolved from work done at the Defense Information Systems Agency (DISA) in the USA.

Connectivity in a Software Defined Perimeter is based on a need-to-know model, in which ‘device posture’ and identity are verified before access to application infrastructure is granted.
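
In practical terms, the SDP controller acts as a gatekeeper: the client presents its identity and a device posture report, and only if both pass is a connection to the protected application brokered at all. A minimal sketch of that decision logic in Python follows; the attribute names, entitlements and checks are illustrative assumptions, not any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class DevicePosture:
    os_patched: bool          # security patches up to date
    disk_encrypted: bool      # full-disk encryption enabled
    av_running: bool          # endpoint protection active

@dataclass
class AccessRequest:
    user_id: str
    user_verified: bool       # identity confirmed by the IdP (e.g. MFA passed)
    posture: DevicePosture
    target_app: str

# Hypothetical need-to-know policy: which users may even 'see' which apps.
ENTITLEMENTS = {
    "alice": {"clinical-records"},
    "bob": {"payments-gateway"},
}

def grant_access(req: AccessRequest) -> bool:
    """Verify identity and device posture before brokering a connection."""
    if not req.user_verified:
        return False                      # identity check failed
    p = req.posture
    if not (p.os_patched and p.disk_encrypted and p.av_running):
        return False                      # device posture check failed
    # Need-to-know: the application is invisible unless explicitly entitled.
    return req.target_app in ENTITLEMENTS.get(req.user_id, set())

request = AccessRequest("alice", True, DevicePosture(True, True, True), "clinical-records")
print(grant_access(request))  # True -> the controller would now broker the session
```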

You might even call a Software Defined Perimeter a means of creating an ‘air gapped network’ in many ways.

As noted at the above link on TechTarget, examples of this type of application access are legion and might encompass healthcare clinical networks, industrial control networks, broadcast networks, retail payment networks etc.

Working in this space is Luminate.

Azure integration

The firm, which provides technology for secure access to corporate applications in hybrid cloud environments, has now announced that its agentless Secure Access Cloud platform is integrated with Microsoft Azure.

What this means for users is direct, secure access to applications and services deployed on Azure.

“Traditional security tools that were effective for on-premises datacentres can take months to configure when the datacentres are distributed and involve cloud hosting. Once in place, these tools may provide users with excessive access to the entire network and increase the network attack surface,” said Luminate CEO Ofer Smadari. “Luminate provides fast and secure access to applications that are hosted on Microsoft Azure without backhauling traffic through the VPN or DMZ. Luminate’s platform takes only five minutes to configure and can be dynamically managed with ease.”

Luminate’s platform also integrates with Azure Active Directory for authentication and policy management throughout the lifetime of the user’s session.
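
Under the hood, that kind of Azure Active Directory integration typically means validating the OpenID Connect token issued for the user’s session. As a rough, hedged illustration only (using the PyJWT library; the tenant and client IDs are placeholders, and this is not Luminate’s actual implementation):

```python
import jwt                      # PyJWT (pip install pyjwt[crypto])
from jwt import PyJWKClient

TENANT_ID = "<your-tenant-id>"          # placeholder
CLIENT_ID = "<your-app-client-id>"      # placeholder

def validate_session_token(token: str) -> dict:
    """Check the token's signature and claims against Azure AD before honouring a session."""
    jwks_url = f"https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys"
    signing_key = PyJWKClient(jwks_url).get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=CLIENT_ID,
        issuer=f"https://login.microsoftonline.com/{TENANT_ID}/v2.0",
    )
```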

As cloud momentum continues, Software Defined Perimeters will (very arguably) now grow… what will (again, arguably) be interesting is how the major cloud platform players work with specialist providers to create secured zero-trust architectures and drive us towards what could be 100% API-driven infrastructures that are capable of integrating with automation and orchestration tools.

July 10, 2018  10:10 AM

What’s inside a Microsoft Intellimouse?

Adrian Bridgwater

Yes, this is a software column (blog), but a) it is summer, b) all hardware has an element of embedded software in it, c) some of these stories just need to be told… oh okay, and d) it’s the summer ‘silly season’, when news is supposed to get a little thin on the ground.

Readers will note that the technology ‘silly season’ doesn’t actually exist and that the technology industry runs constantly, 24x7x365, but… you get the point.

So then, inspired by the Microsoft Intellimouse Explorer 3.0 from 2003, Microsoft has recently released the new Microsoft Classic Intellimouse.

The Intellimouse 3.0 was lauded for its ergonomics due to its asymmetric form, sculpted buttons and finger rests. The new model is also nice to hold (or so we’re told), but has more intelligence inside and additional features made possible by today’s technology.

Microsoft’s Mark Rowland is category lead for accessories on the firm’s devices team. Rowland says that we’ve come a long way since the scroll wheel.

New tracks on Bluetrack

Although Rowland wouldn’t be drawn on specifics in terms of the embedded software code that sits inside a Microsoft Intellimouse, he did tell the Computer Weekly Developer Network that perhaps the greatest piece of software engineering associated with this product is its Bluetrack abilities.

“Microsoft created Bluetrack, a technology that allows for more accurate tracking than optical and laser mice. The technology uses an improved optical architecture that generates a wider beam of light on the lower side of the mouse. This light enables the mouse to cover more surface and gives it the capability to support almost any surface, which laser mice cannot support. This is perfect for gamers who need accuracy,” said Rowland.

How much code in a mouse?

Although Microsoft’s Rowland wouldn’t offer an estimate on how many lines of embedded software sit inside the new mouse, we do know that Ford indicated that it now has over 150 million lines of code in the F150 pickup.

We also know that an average smartphone app has around 50,000 lines of code… a pacemaker has around 100,000 lines of code. But then the Large Hadron Collider uses 50 million lines.

All in all, we can probably estimate that the new Microsoft Intellimouse runs somewhere around a figure comparable to a smartphone app.

The mouse itself is plug and play for USB 2.0/3.0 slots.


July 3, 2018  8:27 AM

MongoDB World 2018: what you (and we) missed

Adrian Bridgwater

The Computer Weekly Developer Network shamefully missed its chance to attend MongoDB World 2018 for the USA leg of the schema-fluid developer-focused database company’s contribution to this year’s conference season.

Sorry MongoDB, there was just so much on, yes we’d like to come again and we’ll see you in London in November for the UK leg of the tour.

Extenuating concessionary apologies notwithstanding, what did MongoDB CTO and co-founder Eliot Horowitz talk about this year?

“MongoDB has always been about giving developers technology that helps them build faster,” said Horowitz, in a kind of upfront ‘yeah, we’re really all about the developers’ message.

Horowitz spoke of MongoDB Stitch, the company’s new serverless platform for rapid application development (as in RAD) of mobile and web applications. The services provided in Stitch are meant to give developers access to database functionality while providing security and privacy controls.

“MongoDB Stitch brings our core strengths — the document model, the power of distributed databases… and the ability to run on any platform — to an app in a way we’ve never done before. Stitch is serverless, MongoDB-style: it eliminates much of the tedious boilerplate so many apps require to get off the ground and keeps you focused on the work that matters.”

MongoDB also announced Global Clusters in Atlas, which allow users to create ‘sophisticated policies’ to carefully position data for geographically distributed applications. By dynamically configuring these policies, MongoDB says that users can ensure that data is distributed and isolated within specific geographic boundaries.

Similarly, by using Global Clusters, relevant data can be moved close to end-users for worldwide, low-latency performance.

MongoDB Mobile was also announced (in beta) to allow users to run MongoDB anywhere. In terms of ‘anywhere’, MongoDB means all the way out to the edge of the network on IoT assets, as well as iOS and Android devices.

According to a press statement, “MongoDB Mobile, a new mobile database, allows developers to build faster and more responsive applications, enabling real-time, automatic syncing between data held on the device and the backend database.”

Previously, this could only be achieved by installing an alternative or feature-limited database within the mobile application, which resulted in extra management, complicated syncing and reduced functionality.

MongoDB Atlas

We also hear news this month of advancements in MongoDB Atlas, the company’s database-as-a-service.

MongoDB Atlas is a fully automated cloud service engineered and run by the same team that builds the database. It offers options to automate time-consuming administration tasks such as infrastructure provisioning, database setup, ensuring availability, global distribution, backups etc.

MongoDB is extending the Atlas Free Tier to Google Cloud Platform, in partnership with Google, obviously.

This allows the developer community relying on GCP services to build their applications using fully-managed MongoDB.

The Atlas Free Tier offers 512 MB of storage and is said to be ‘ideal’ for prototyping and early development.  
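
Getting started on the free tier is largely a matter of pointing a driver at the cluster’s connection string. A minimal sketch with PyMongo, the official Python driver, is below; the connection string is a placeholder for the one Atlas generates for your cluster.

```python
from pymongo import MongoClient

# Placeholder connection string, copied from the Atlas UI ('Connect your application').
client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net/")

db = client["prototype"]                 # databases and collections are created lazily
db.events.insert_one({"type": "signup", "plan": "free-tier"})
print(db.events.count_documents({"type": "signup"}))
```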

“Every business today is focused on digital transformation, which is all about leveraging modern digital technologies to drive superior business performance, but this is far easier said than done,” said Dev Ittycheria, president & CEO, MongoDB. “With the product announcements made [here], MongoDB not only provides a compelling database platform for the most sophisticated use cases, but also extends the power of MongoDB to a mobile database and a new serverless platform.”

In addition, the company announced a number of new security features for Atlas, such as encryption key management, LDAP integration, and database-level auditing, to offer the most security conscious organisations more control over their data.


July 2, 2018  6:43 AM

Compuware divvies up ‘diverse’ data, for DevOps

Adrian Bridgwater

Diversity matters, in all walks of life, obviously.

Data diversity is also an issue because data comes in many ‘types’, that is – structured, unstructured, semi-structured, big, dark, geo-tagged, time-stamped and so on.

Aiming to define, distill, divvy up and deliver a route through all these types of data diversity for DevOps teams this month is Compuware with its Topaz for Enterprise Data product.

The product provides data visualisation, extract and load, and advanced data masking capabilities.

NOTE: Data masking is a method of creating a structurally similar but inauthentic version of an organisation’s data that can be used for purposes such as software testing and user training. The aim is to protect the actual data while having a functional substitute for occasions when the real data is not required.
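
For illustration, a very small masking routine might replace PII fields with structurally similar but fake values while leaving non-sensitive fields intact. This is a simplified sketch, not Compuware’s actual masking engine; the field names are assumptions.

```python
import hashlib
import random

def mask_record(record: dict) -> dict:
    """Return a structurally similar copy of a record with its PII fields masked."""
    masked = dict(record)
    # Deterministic pseudonym: the same input always maps to the same fake name,
    # so referential integrity across test datasets is preserved.
    digest = hashlib.sha256(record["name"].encode()).hexdigest()[:8]
    masked["name"] = f"Customer-{digest}"
    # Format-preserving phone mask: keep the length and shape, randomise the digits.
    masked["phone"] = "".join(
        str(random.randint(0, 9)) if c.isdigit() else c for c in record["phone"]
    )
    return masked

print(mask_record({"name": "Ada Lovelace", "phone": "020 7946 0958", "plan": "gold"}))
```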

Have you got PII?

Compuware says that data masking is a top concern today, given the importance of protecting personally identifiable information (PII) and complying with regulatory mandates.

CEO Chris O’Malley argues that Topaz for Enterprise Data can be used by enterprises with large diverse datasets of high business value residing on the mainframe that contain sensitive business or personal information.

“As senior mainframe professionals retire, large enterprises must transfer responsibility for stewardship of this data to a new generation of DevOps artisans with less hands-on mainframe experience,” he said.

Topaz for Enterprise Data allows DevOps users to understand relationships between data even when they lack direct familiarity with specific data types or applications, to ensure data integrity and resulting code quality. It can also convert file types as required.

Topaz users can access all these capabilities from within an Eclipse development environment.


June 29, 2018  10:33 AM

Who’s your Cloud Daddy developer daddy?

Adrian Bridgwater

Who’s your daddy? — sang the Zombies in their late-1960s hit Time Of The Season. Then came Ray Winstone in Scum in 1979; he was certainly the daddy.

Fast forward to 2018 and who’s the daddy?

Secure AWS-native data protection company Cloud Daddy thinks it could have the answer.

Available on the AWS Marketplace, this so-called ‘solution’ (Ed – yawn, aren’t they all?) is intended to coalesce backup and disaster recovery, advanced security as well as infrastructure management into one total package.

Cloud Daddy founder and CEO Joe Merces does not, mercifully, call himself the Cloud Daddy daddy.

He does however claim that his firm’s move to become a unified AWS-native solution reflects the changing IT landscape.

“Not long ago, disaster recovery was built around the notion of natural disasters impacting an on-premises datacentre,” said Merces. “In today’s world, cybercrime and cyber threats join natural disasters in making secure backup and recovery a constant concern.”

Why should developers care?

Cloud Daddy sets out its developer proposition as follows:

“New business initiatives such as new application development and testing benefit from using the AWS elastic-cloud in combination with Cloud Daddy Secure Backup. [This technology] provides agility and security to protect your IP and application development and testing environments with extremely fast recovery times with no impact on production servers and networks. Full backups of development and testing environments can be scheduled with extreme prejudice with repeated snapshots on an automated basis, ensuring those continuously changing environments and temporary workloads are not only secure, but recoverable at a moment’s notice.”
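
For a rough sense of what automated, repeated snapshots of an AWS environment look like at the API level, here is a hedged sketch using boto3. The tag filter and region are assumptions for illustration; Cloud Daddy’s own scheduler is a packaged product, not this script.

```python
import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2", region_name="eu-west-1")  # hypothetical region

def snapshot_tagged_volumes(tag_key: str = "env", tag_value: str = "dev-test") -> list:
    """Snapshot every EBS volume carrying the given tag (e.g. dev/test environments)."""
    volumes = ec2.describe_volumes(
        Filters=[{"Name": f"tag:{tag_key}", "Values": [tag_value]}]
    )["Volumes"]
    snapshot_ids = []
    for vol in volumes:
        snap = ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"Automated backup {datetime.now(timezone.utc).isoformat()}",
        )
        snapshot_ids.append(snap["SnapshotId"])
    return snapshot_ids
```

Run on a schedule (cron, CloudWatch Events or similar), a routine like this is the ‘repeated snapshots on an automated basis’ the quote describes; a full product adds retention, replication across regions and restore workflows on top.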

As part of this news, we also note that AWS specifically calls out a ‘Shared Responsibility Model’, where AWS is responsible for security of the cloud, but the customer is responsible for security in the cloud.

Cloud Daddy says that its Secure Backup gives users an at-a-glance understanding of their AWS infrastructure, navigated by tabs and incorporating a dashboard with a visualisation of protected instances and job status.

Users can select backups and replications quickly and easily, anywhere on the globe where AWS has a presence. Assets can be backed up, managed and recovered from one AWS region or account to another, providing layers of disaster recovery with superior restore speeds using AWS over on-premises solutions.


June 26, 2018  8:01 AM

Software AG bags Belgian IoT bargain, no waffle

Adrian Bridgwater

German technologyhaus Software AG has for some time now openly focused on technologies that populate the Internet of Things (IoT).

The man steering much of the company’s development in this space is Bernd Gross in his role as senior VP of IoT & cloud.

Gross now heads up Software AG’s Cumulocity division (a company Software AG first partnered with in March 2017 and then subsequently acquired in March 2018).

As noted here, Cumulocity enables developers at Software AG customers to build what could be called a ‘service wrapper’ around each of their IoT enabled products — and this means that they themselves can deliver new services to their own customers.

Software AG’s Gross has previously warned that over 50% of IoT projects fail, so at this still-nascent, undeniably embryonic stage, what is the company doing to solidify its wider approach to IoT development?

Belgian buyout

News this month sees Software AG snap up Belgian visual data analytics company TrendMiner NV.

TrendMiner specialises in visual data analytics for the manufacturing and process industry — its technology will be directly integrated into Software AG’s Cumulocity Internet of Things (IoT) and Industry 4.0 product portfolio.

TrendMiner works to mine trends, surprisingly.

The technology uses all available time-series IoT data and delivers findings in a ‘user-friendly’ (presumably awesome) format.

Karl-Heinz Streibich, Software AG CEO stated, “TrendMiner provides an ideal fit into our Cumulocity IoT portfolio at a strategically decisive moment. We are in a phase of dynamic market development for IoT applications. Together with TrendMiner, we will be able to offer a leading streaming and visual time-series analytics platform – a unique combination.”

In terms of use, TrendMiner is designed to enable manufacturing companies and the process industries to recognise patterns and trends in their process data, identify production irregularities and make the necessary process adjustments.
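
To picture what recognising ‘production irregularities’ in time-series process data means in practice, a crude approach flags points that drift outside a rolling band around the recent trend. The sketch below (pandas, with illustrative window and threshold values) shows the idea; it is not TrendMiner’s actual algorithm.

```python
import pandas as pd

def flag_irregularities(series: pd.Series, window: int = 24, k: float = 3.0) -> pd.Series:
    """Mark points further than k standard deviations from the preceding rolling mean."""
    baseline = series.shift(1).rolling(window, min_periods=window)
    return (series - baseline.mean()).abs() > k * baseline.std()

# Example: hourly sensor readings indexed by timestamp; the 35.7 spike gets flagged.
readings = pd.Series(
    [20.1, 20.3, 19.9, 20.2, 35.7, 20.0],
    index=pd.date_range("2018-06-01", periods=6, freq="H"),
)
print(flag_irregularities(readings, window=3, k=2.0))
```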

Software AG CEO Streibich has said that TrendMiner has specific expertise in developing and consulting on pattern recognition and analytics functionality for the oil and gas, life sciences and manufacturing sectors.


June 25, 2018  8:42 AM

Electric Cloud develops ‘credit score’ for application success

Adrian Bridgwater

There are applications and application delivery… and then there’s the arguably more upmarket notion of Adaptive Release Orchestration & Continuous Delivery (AROCD – not a real acronym).

Placing itself unabashedly in the latter category is Electric Cloud.

The firm this month brings forth its ElectricFlow DevOps Foresight product as a piece of software designed to apply machine learning algorithms to the massive amounts of data generated by tool chains.

The software is capable of producing a ‘risk score metric’ that predicts the outcomes of releases before they head into production.

Taking predictive analytics even further, it will also illustrate where to improve pipelines based on metrics related to ‘developer influence’ (based on past performance and behaviour, presumably) and ‘code complexity’.

ElectricFlow DevOps Foresight is supposed to reduce bottlenecks and inefficiencies as it provides the ability to understand resource allocation for new and complex application and environment requirements.

A ‘credit score’ for apps

Much like a credit score, a release’s risk score is a numerical value based on developer, code and environment profiles; it gives everyone a visual way to interpret the likelihood of success for a particular build or pipeline.
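
The exact model is Electric Cloud’s own, but conceptually a score of this kind is a weighted blend of profile signals normalised onto a single scale. A purely illustrative sketch follows; the weights and feature names are assumptions, not the DevOps Foresight algorithm.

```python
# Illustrative only: not Electric Cloud's actual model.
WEIGHTS = {
    "developer_inexperience": 0.4,   # past performance/behaviour of the change authors
    "code_complexity": 0.4,          # e.g. normalised complexity of the changed code
    "environment_drift": 0.2,        # deviation of the target environment from the last good release
}

def release_risk_score(profile: dict) -> float:
    """Combine 0-1 normalised profile signals into a 0-100 'credit score' style risk value."""
    score = sum(WEIGHTS[k] * min(max(profile.get(k, 0.0), 0.0), 1.0) for k in WEIGHTS)
    return round(100 * score, 1)

print(release_risk_score({
    "developer_inexperience": 0.2,
    "code_complexity": 0.7,
    "environment_drift": 0.5,
}))  # -> 46.0 on this toy weighting; higher means riskier
```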

If the score is high, DevOps teams can look at those profiles to determine what, specifically within those profiles, is driving up the risk.

Electric Cloud says that in order to illustrate areas for improving the pipeline, DevOps Foresight looks at contributing factors and what has helped to improve them in the past and will suggest appropriate changes in teams, code or environments.

CEO of Electric Cloud Carmine Napolitano has said that improving the pipeline is often based on trial and error or best guesses.

“What we aim to do with ElectricFlow DevOps Foresight is provide data-driven insights much earlier in the process by looking at past successes, build complexity, author profiles and then show where the pipeline can be improved based on facts,” said Napolitano.

Managers will be able to proactively answer questions such as:

  • Are we going to finish our release on time?
  • Can we move faster or can we do more?
  • Will this release cause more or fewer quality issues?
  • What’s the likelihood of a production deployment failure?


June 20, 2018  9:54 AM

Cast analysis: cold war era ‘Thatcher’ apps still rock, Agile more flaky

Adrian Bridgwater

Cast is a company whose core bread and butter is code analysis. Not a security company as such, Cast describes itself as a software intelligence specialist.

How much code analysis? Cast’s latest report analyses over 700 million lines of code.

The Software Intelligence Report on Application Age attempts to identify primary outcomes of old vs. new software that’s still in use today.

Product owner for Cast’s Highlight tool (and co-author of the report) Michael Muller argues that enterprise applications are often laden with software risk, as different pieces of functionality have been tacked-on over time.

Muller says that without adequate software intelligence, it is very difficult for executives to get an accurate assessment of resilience and agility risk — but then, he would say that, because he works for a software intelligence specialist, so where’s the meat in this sandwich?

“Smaller development teams and faster release cycles improve immediate outcomes for end-users, but without active portfolio-level management Agile teams can turn the codebase into a legacy headache for the next CIO,” said Muller.

What does all this mean?

Where Cast is going with this argument is that a) a lot of legacy apps (let’s remember that legacy means software that still works) are still very important and serve a lot of functional use and that b) Agile and Lean are fabulous, but they may not always lay down enough code annotation and other strains of application information to facilitate good application portfolio management.

Key ‘findings’ of this report include:

  1. Thatcher-era apps still have a big impact: 75% of applications built in the 1980s still have a high impact on the service continuity of today’s businesses
  2. Older apps cause the most financial damage if they fail: 62% of apps built in the 1980s will have a higher financial impact than modern apps if they fail today
  3. New app development methods aren’t designed for long-term success: applications built using the Agile methodology are less agile, meaning they are harder to integrate with new systems. A cause of TSB’s IT issues was the inability of legacy applications to work on modern platforms.

“IT still struggles to transform an ageing, monolithic app-centric environment into a nimble, outcome-driven engine needed to drive digital business,” said Dan Hebda, CSO at Mega, whose firm also contributed to the report. Mega builds visual modeling tools for application planning, design and development.

“In Mega’s experience, those most successful in modernising their infrastructure start with IT portfolio management to establish a baseline of software intelligence. This includes aligning resources to business outcomes, reducing infrastructure complexity, understanding technical debt and opening the road to accelerated innovation and growth,” said Hebda.

The Software Intelligence Report on Application Age looks at 2,067 applications, representing 733 million lines of code from 14 different technologies that are developed and maintained by more than 12,000 people across multiple verticals.

A link to the full free report is listed for download here.


June 18, 2018  9:16 AM

Can Hitachi Vantara build ‘ecosystem, stack, platform’ universe with enough granular clout?

Adrian Bridgwater

Hitachi Vantara continues its development as the post-HDS, post-Pentaho, post-‘the other bit’ (okay, we know it was Hitachi Insight Group) amalgam that has seen the company build its current stack (it would say platform) of combined software and hardware engineering.

Platform indeed… the firm now focuses on its Hitachi Unified Compute Platform (UCP) family of converged, hyperconverged and rack-scale systems.

All of which combined hardware and software power is intended to run what Hitachi Vantara calls its Application Ecosystem Solutions — or ‘apps’, for short.

So hang on, that was ecosystem, stack and platform and we’ve barely touched on product. Has Hitachi Vantara become so weighty that it forgets how to be granular?

Is Hitachi Vantara building the universe… and everything too?

To clarify what the company is doing, there are new certified applications optimised for SAP HANA, Oracle databases, VMware and big data analytics frameworks.

Hitachi UCP systems use modular building blocks of infrastructure that are pretested and validated to meet specific needs. It’s all about automation (minimising the human planning element) and combining compute, network and storage components to optimise performance.

Moving apps, moving workloads

Also in the mix now is the Hitachi Unified Compute Platform Advisor (UCP Advisor), an IT management and orchestration software offering. This software supports ‘full automation’ over server, network and storage components.

“UCP Advisor with customised workflows lets IT staff move applications and workloads between clouds and UCP systems in a smart and automated way to rapidly deliver new IT services. The latest release of UCP Advisor continues to enhance automation, including policy-based provisioning to speed initial deployment with less risk,” noted the company, in a product statement.

The firm also offers a modular converged architecture: Hitachi Unified Compute Platform CI (UCP CI) systems provide tooling for modern datacentre infrastructures.

“UCP CI systems, with a management and automation toolset that uses UCP Advisor, enable the operational efficiencies of virtualisation and increase the performance of mission-critical applications. The systems simplify the control of both virtual and physical infrastructure to support a wide range of enterprise and cloud workloads,” noted the company, in a product statement.

Extending the product notes here, there are also Hitachi Unified Compute Platform HC (UCP HC) systems to make hyperconverged infrastructure (clue: that’s what HC stands for) easier to deploy.

Last to note here is the Hitachi Unified Compute Platform RS (UCP RS), a rack-scale (did you see the RS?) system designed to simplify the deployment of an agile data infrastructure at scale.

Integrated, optimised and certified

Where Hitachi Vantara is going with all of this is the development of pre-integrated, optimised and certified infrastructure software and hardware configurations.

There are also reference architectures, combined with application-centric professional services… so it’s tested systems and best practices for data management… and this (in our opinion) is where Hitachi Vantara would insist that, yes, it is still granular down to individual application and data workflow needs.

Granular optimisation and integration intricacies in application ecosystem and data workflow health are tough to get right once you start going converged, hyperconverged or any shape of rack-scale — and that’s before you start debugging. Hitachi Vantara is playing for big wins if it gets all this right.

Image: Hitachi


June 15, 2018  9:46 AM

LzLabs: source code dependency dragged down mainframe rehosting, until now

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network written by Mark Cresswell in his capacity as CEO of LzLabs.

LzLabs develops what it calls its Software Defined Mainframe (SDM) package — the tool is intended to ‘liberate’ legacy applications to run unchanged on both Linux hardware and cloud infrastructures.

Cresswell writes…

The halcyon days of mainframe application development started in the 1970s and continued until the arrival of distributed computing in the early 1990s. It was during this period that most mainframe applications, which still form the backbone of over 70% of the world’s electronic commerce, were originally written.

Few programmers who developed these applications thought their work would still be in use in the year 2000, let alone 50 years later.

Source code management in these pioneering days relied heavily on local system knowledge. The source code management disciplines we take for granted today only became part of the application development lifecycle as the job market for programmers exploded, and staff turnover increased.

The loose relationship between source code, documentation and the actual programs in use has been a developing problem for half a century. Whilst businesses in all manner of industries are increasingly looking to rehost mainframe applications in modern Linux environments in order to achieve cost savings and agility, the difficulty in identifying precisely which source code was used to build aspects of long-running applications is central to some of the challenges of these rehosting projects.

Unravelling source code dependencies

Without an application dependency map, which is often missing from legacy mainframe application documentation, users must undertake the arduous task of identifying which other applications and files the target application depends on before they can confidently approach rehosting it off the mainframe.

Once a user starts looking, it is very common for a list of program dependencies to grow exponentially. The initial program may reference two others, which may in turn reference another two, one of which may be called by other programs. The files used by the first program may also be used by an entirely different set of programs, all of which have their own dependencies.
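
That ever-growing list is, in graph terms, the transitive closure of the program and file dependencies. A small sketch of how one might walk it, assuming a dependency map can be extracted at all (which is precisely the problem being described); the program names here are hypothetical.

```python
from collections import deque

# Hypothetical dependency map: program/file -> things it references.
DEPENDENCIES = {
    "PAYROLL01": ["TAXCALC", "EMPFILE"],
    "TAXCALC": ["RATETBL", "AUDITLOG"],
    "EMPFILE": ["AUDITLOG"],
    "RATETBL": [],
    "AUDITLOG": [],
}

def rehosting_scope(start: str) -> set:
    """Everything that must move with 'start': its full transitive dependency set."""
    seen, queue = {start}, deque([start])
    while queue:
        for dep in DEPENDENCIES.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(rehosting_scope("PAYROLL01")))
# ['AUDITLOG', 'EMPFILE', 'PAYROLL01', 'RATETBL', 'TAXCALC']
```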

This Gordian knot of interconnections can exponentially extend the scope and duration of workload rehosting projects beyond all forecasts.

When the rehosting strategy is based on recompilation of source code, the situation becomes dramatically worse. For each program, the source code and any copybooks must be found.

As an IT executive in one large enterprise lamented, “We’ve got about 100 million lines of COBOL in our source code repository, but it’s OK as we only have 10 million lines active. We just don’t know which 10 million.”

To consider recompiling such applications to run on a new platform requires the daunting task of correctly identifying all the active source code, and making it available for migration.

Also, some early mainframe programs simply cannot be compiled to run on modern architectures. Such programs were written during a time when no one cared about the lock-in caused by direct reference to underlying system level functions such as operating system control blocks and unique file implementations. There are a surprising number of these programming needles in many organisations’ application portfolio haystacks.

However, source code availability is far from the only issue at play when looking at rehosting mainframe workloads. The way in which the operating environment was implemented, and has evolved over more than half a century, has resulted in fundamental differences in mainframe encoding, collation and database processing models that cannot be recompiled away. Stated differently, even if the source code is available, insurmountable problems remain. But this is a story for another time.

The solution?

The above paints a rather bleak picture of the challenge facing organisations who are desperate to liberate their legacy applications. Until now, businesses have seen no feasible way to run these old applications on newer hardware which appears on the surface to be entirely incompatible.

But what if a legacy mainframe application could be rehosted without the need to find, or even worry about the source code – without recompilation?

This is where the concept of a Software Defined Mainframe (SDM) comes in.

(Ed — this is a bit of a shameless plug, but as it is in context and the wider piece has developer reader value, it is permissible.)

SDM makes application rehosting possible by making certain mainframe specific features available to applications. Using this approach, applications and associated data can be moved, with no reformatting, or requirement for original source code. Using other methods, the program migration would require time-consuming, risky and expensive rewrites, recompilation and extensive regression testing.

Modern Linux + legacy

Yet, even in this modern Linux environment, there is no escaping the need to work with some of the legacy source code. Many mainframe applications are regularly updated for business or regulatory changes.

However, an SDM enables ongoing development using modern tools. Graphical development environments, modern source-code management, and DevOps processes and toolchains make ongoing legacy application maintenance far easier, and open it to a younger generation of programmers.

Mainframe workload rehosting is increasingly popular, but the legacy of 50 years of mainframe programming has created a bewildering array of barriers to success for any project that relies on source code – barriers made insurmountable by the mainframe skills shortage. With an SDM approach, we’re seeing the first low-risk, practical option to enable businesses to continue to use their investments in legacy application software as part of modern strategies.

LzLabs has offices in Switzerland and the UK – you can reference the firm’s website here.

LzLabs’ Cresswell: today, people care about lock-in… and we’re fixing that.


