CW Developer Network


November 14, 2017  4:31 PM

SAP digs deeper roots for machine learning with ‘foundation’ expansion

Adrian Bridgwater
Machine learning, SAP

German enterprise software company SAP has used its TechEd Europe 2017 conference and exhibition to detail news of its SAP Leonardo Machine Learning Foundation (which is based on the SAP Cloud Platform) and the foundation’s current expansion.

The firm uses the term ‘foundation’ in this case to describe a technology, a software system and a team of technologists putting forward a platform — so this is not a foundation as in a charitable foundation for donkeys, for example.

In terms of working with machine learning in general, SAP earlier this year updated its Hybris Marketing Cloud with machine learning technologies for facial recognition and other Internet of Things (IoT) augmentations.

Current expansions within the SAP Leonardo Machine Learning Foundation see three major new capabilities added:

  • ready-to-use services
  • retrainable services
  • a bring-your-own-model option

Customers, partners and developers can now rely on additional ready-to-use services such as optical character recognition (OCR) and multidimensional time series forecasting to predict the optimal price for a product, for example.
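The forecasting capability is consumed as a managed service rather than as code the customer writes, but to illustrate the underlying idea of fitting a trend to a series and projecting it forward, here is a toy sketch in Python (the sales figures and the straight-line model are invented purely for the example):

```python
import numpy as np

# Eight weeks of (invented) sales for one product.
weekly_sales = np.array([120.0, 135, 128, 150, 162, 158, 171, 180])
weeks = np.arange(len(weekly_sales))

# Fit a straight-line trend and project one step ahead.
slope, intercept = np.polyfit(weeks, weekly_sales, 1)
forecast = slope * len(weekly_sales) + intercept
print(f"forecast for week {len(weekly_sales)}: {forecast:.1f} units")
```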

Additionally, SAP Leonardo Machine Learning Foundation now allows customers and partners to tailor generic models with company-specific data. The new image classification service can be retrained to recognise customer-specific objects such as products, components or spare parts.

Predefined, or retrained

Using predefined services or retrained models requires only minimal machine learning experience.

“SAP Leonardo Machine Learning Foundation puts digital intelligence into the hands of all customers, partners and developers – whether they are just getting started or already have dedicated data science teams. We aim to deliver offerings that are the easiest to use and have the most impact on companies,” said Markus Noga, head of machine learning, SAP.

Customers and partners with their own data science teams can now deploy and run their own custom TensorFlow models from Google LLC on SAP Leonardo Machine Learning Foundation. These advanced users can benefit from interoperability with SAP Cloud Platform.
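SAP does not spell out the exact deployment mechanics in its announcement, but a bring-your-own-model flow logically starts with a standard TensorFlow export. A minimal sketch using today's TensorFlow 2 API (the toy model and export path are illustrative, not SAP-prescribed):

```python
import tensorflow as tf

# A toy classifier standing in for a real customer-built model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... model.fit(...) on company-specific data would happen here ...

# Export in the portable SavedModel format that a hosting platform
# can then serve; the directory path is purely illustrative.
tf.saved_model.save(model, "export/my_custom_model/1")
```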

As SAP has explained before, the intention with SAP Leonardo Machine Learning Foundation is that application developers with no AI expertise can use ready-to-use services to jumpstart the development of what the firm calls more intelligent applications.

These services are pre-trained to work on real data, readily available on SAP Cloud Platform and accessible via standard RESTful APIs… and this, says SAP, makes them fit for easier consumption.
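SAP's documentation carries the real endpoints and authentication details; purely as a sketch of the consumption pattern, calling such a RESTful service from Python might look like this (the URL, header and response shape are hypothetical placeholders):

```python
import requests

API_URL = "https://api.example.hana.ondemand.com/ml/v1/ocr"  # hypothetical
HEADERS = {"Authorization": "Bearer <OAUTH_TOKEN>"}          # placeholder

# Post an image to the (hypothetical) OCR endpoint.
with open("invoice.png", "rb") as image:
    response = requests.post(API_URL, headers=HEADERS,
                             files={"files": image})

response.raise_for_status()
print(response.json())  # the recognised text, per the service contract
```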

A working example

SAP’s pre-trained model for image recognition can classify a variety of different objects, telling apart anything from cars to people to trees. However, if a car manufacturer wants to facilitate visual shopping by accurately matching a given picture to one of its products, it needs to teach the foundation’s Image Recognition service to recognise different car models. In the same way, the wallpaper retailer mentioned before had to teach the image recognition model how to recognise different kinds of surface textures.

Image credit: SAP
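SAP has not published the internals of its retraining pipeline, but the general technique it describes is standard transfer learning: freeze a pre-trained feature extractor and train a new classification head on the customer's own labelled images. A generic Keras sketch, not SAP's actual pipeline (the class count and dataset are hypothetical):

```python
import tensorflow as tf

# Reuse a pre-trained feature extractor; train only a new head for,
# say, 12 customer-specific car models (a hypothetical class count).
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg")
base.trainable = False  # keep the generic visual features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(12, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# 'train_ds' would be a tf.data.Dataset of labelled product images:
# model.fit(train_ds, epochs=5)
```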

November 9, 2017  10:08 AM

Nutanix .NEXT 2017 partner podium: Veritas, Veeam & Druva

Adrian Bridgwater
Nutanix

Enterprise cloud company Nutanix has hosted its .NEXT 2017 conference and exhibition on the balmy shores of the Côte d’Azur in the French city of Nice.

Alongside key announcements focused on developer tooling and the firm’s wider approach to cloud infrastructure management, Nutanix has also welcomed partner updates from firms including Veritas, Veeam and Druva.

Leading the news on the partner podium was multi-cloud data management company Veritas.

Veritas backup certification

Veritas has cemented its relationship with Nutanix in a move designed to help joint customers get protection for virtualized workloads running on Nutanix, as well as the ability to move applications across clouds, while remaining open to a choice of hardware, hypervisor or cloud.

As part of the partnership, the Veritas NetBackup 8.1 data protection solution is now certified to protect workloads virtualised on AHV, Nutanix’s native hypervisor.

Joint customers will also now be able to optimise and protect the movement of data and workloads faster within the Nutanix Enterprise Cloud or to other public, private or hybrid cloud environments.

According to Veritas, “The collaboration is designed to address two key challenges joint customers face today. First, how to spend less time worrying about datacentre infrastructure and more time on the applications and services that power business. Second, the partnership helps to ensure critical data and workloads running on Nutanix are further protected with a single, unified solution that is the foundation for end-to-end 360 data management.”

Additionally, as a result of the enhanced partnership, there are new opportunities for joint go-to-market and support initiatives from both companies, designed to help customers accelerate their adoption of next-generation cloud approaches for a wide spectrum of workloads by leveraging the combined enterprise expertise of Veritas and Nutanix.

“Organizations today need a data management strategy for data spread across public, private, and hybrid cloud environments that require a proven, high-performance backup and recovery technology designed to accommodate the most demanding workloads,” said Rama Kolappan, vice president, product management and alliances at Veritas. “With the combined power of Nutanix and Veritas that blends a leader in hyperconverged infrastructure (HCI) and Enterprise Cloud with the premier backup and recovery solution on the planet, customers can now protect a vast array of enterprise workloads.”

Veeam, neat features

Also vocal on work with Nutanix this month is backup and replication specialist Veeam.

Michael Cade, technology evangelist at Veeam, has said that the extended Veeam-Nutanix partnership means Nutanix AHV customers can back up virtual machines and, more importantly, get a level of recovery not previously available.

“We give users the ability to recover what they want, when they need it. This neat feature is enabled by an Agent or Proxy VM hosted on the AHV cluster, which provides connectivity and authentication through Nutanix’s data protection APIs. As security is paramount, we’ve also integrated with Nutanix’s Protection Domains for the tightest possible mesh between Veeam and Nutanix VM Protection,” said Cade.

Cade specifies that although the above is the ‘main item’, another detail of the partnership means a user can also use Veeam Backup & Replication (VBR) to store those backup files in the regular Veeam VBK format.

“That might sound obvious, but it allows the backup files to perform tasks through VBR such as tape backup, backup copy, with application item level recovery now possible as a result. At the Nutanix .NEXT event in Nice we’ve been able to offer a real exclusive – showing how this all works, via a user interface which is compatible with Nutanix’s easy to use Prism Central single pane of glass management interface. All that five months after it was announced at Nutanix’s .NEXT in Washington,” added Cade.

Druva in the groove

Cloud data protection company Druva was also in attendance at Nutanix .NEXT 2017.

Druva’s VP of global channel sales Timm Hoyt has said that the Druva and Nutanix solution narrative for end users is highly complementary, combining a seamless hybrid cloud offering for business-critical workloads with rich data protection.

“Nutanix aims to provide a single point of control for enterprise IT and cloud infrastructure – we provide a similar approach for data management, allowing IT teams to maintain their processes across clouds, for cloud apps and for those IT assets that remain centrally. Combining the two approaches allows companies to get that single control plane strategy in place for data to be created, then managed as a service over time. We think this approach works as it does not require any additional hardware to put full data protection and management into practice,” said Hoyt.

So then, what does the future hold for IT and for data as all the elements get converged?

Hoyt argues that it will be more important to see where things are getting created, whether that is on hyperconverged infrastructure, on cloud apps like Office 365 or Salesforce, or on an endpoint.

Together, Hoyt argues that Druva and Nutanix can provide those points of management.


November 8, 2017  11:17 AM

Keep Nutanix Calm and carry on cloud computing

Adrian Bridgwater

In addition to its core product updates released in line with the European leg of its .NEXT conference and exhibition series, enterprise cloud company Nutanix has added to its set of cloud tools, services and operating system components with some additional items.

Over and above new services designed for developers and cloud architects, Nutanix has also announced a new capability for CPU-intensive applications.

These would be CPU-intensive applications such as distributed analytics workloads, large scale front-end web services, Citrix XenApp deployments and the more advanced breed of in-memory analytics.

Automation & orchestration

Nutanix App Marketplace services are also being added to Nutanix Calm, the company’s multi-cloud application automation and orchestration solution.

New (and existing) applications can be defined via standards-based blueprints and then published to a marketplace.

Nutanix Calm will also provide pre-integrated and validated blueprints that streamline the adoption of key infrastructure and developer tools, such as Kubernetes, Hadoop, MySQL, Jenkins and Puppet.

Application blueprints

These application blueprints can be applied by application teams so that new workloads can be developed and deployed into multiple cloud environments.
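Nutanix has not published a blueprint schema alongside these announcements, so the following is a purely illustrative Python sketch of what a blueprint expresses: services, their dependencies and the cloud targets a published blueprint can be deployed to.

```python
# Purely illustrative -- a generic structure, not Nutanix Calm's format.
blueprint = {
    "name": "three-tier-web-app",
    "services": {
        "db":  {"image": "mysql:5.7",     "depends_on": []},
        "api": {"image": "myapp/api:1.2", "depends_on": ["db"]},
        "web": {"image": "nginx:1.13",    "depends_on": ["api"]},
    },
    # Published once, deployable to multiple cloud environments.
    "targets": ["nutanix-ahv", "aws", "gcp"],
}

def deploy_order(services):
    """Tiny topological sort (assumes no cycles): a service starts
    only once everything it depends on is already up."""
    done, order = set(), []
    while len(order) < len(services):
        for name, svc in services.items():
            if name not in done and all(d in done for d in svc["depends_on"]):
                done.add(name)
                order.append(name)
    return order

print(deploy_order(blueprint["services"]))  # ['db', 'api', 'web']
```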

Nutanix also announced new AHV (Acropolis Hypervisor) capabilities and performance enhancements planned for its upcoming v5.5 software release.

The firm insists that this could make its built-in hypervisor the de facto choice for customers seeking enterprise clouds that work with the simplicity of public clouds.

According to Nutanix, “The new Acropolis Object Storage Service will be built into the Enterprise Cloud OS and provide an Amazon Web Services S3-compatible API to enable application development teams using Nutanix to consume storage as a high performance on-demand service – just like public cloud offerings. Acropolis Object Storage Service will collect, store and manage billions of objects in a single namespace, providing a storage fabric for a variety of use cases, including data archival.”
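If the API really is S3-compatible as described, any standard S3 client should work against it. A minimal sketch with Python's boto3, where the endpoint URL, bucket and credentials are hypothetical placeholders:

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal:9440",  # hypothetical
    aws_access_key_id="<ACCESS_KEY>",
    aws_secret_access_key="<SECRET_KEY>",
)

# Standard S3 calls work unchanged against an S3-compatible API.
s3.create_bucket(Bucket="archive")
s3.put_object(Bucket="archive", Key="logs/2017-11-08.txt", Body=b"...")
print(s3.list_objects_v2(Bucket="archive")["KeyCount"])
```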

This release will include support for Citrix Provisioning Services (PVS), a popular technology for virtual desktop (VDI) deployments. It will also include integrated support for virtual Graphics Processing Units (vGPU), which will accelerate rendering of complex graphics common in high-resolution medical imaging, 3D geospatial applications and other demanding workloads.

Intel Skylake

Additionally, the company formally announced that its Enterprise Cloud OS software will run on Intel CPUs based on the new Skylake architecture, driving faster performance and higher scale.

To complete the story here we should also note that Intel Skylake support extends to Nutanix-branded appliances, server-based platforms from its OEM partners Dell EMC and Lenovo and qualified servers from HPE and Cisco. Nutanix customers can continue to scale their Enterprise Cloud deployments by seamlessly combining newer generations of CPU and storage technology with existing deployments, eliminating what Nutanix would call expensive ‘forklift’ upgrades.


November 8, 2017  11:16 AM

Nutanix hosts French ‘leg’ of .NEXT tour

Adrian Bridgwater

Hyperconverged enterprise cloud company Nutanix holds the French leg of its .NEXT conference series this week on the balmy shores of the Riviera down in Nice… so what do we need to keep in mind as the festivities (and keynotes and breakout sessions) commence?

Some of the bigger news items this year saw the firm cement a new relationship with search (and now cloud) giant Google.

If you’re a Google Cloud Platform (GCP) customer and also a convert to the Nutanix Acropolis Hypervisor… then that’s okay.

That’s because joint Nutanix-Google customers are now able to deploy and manage both cloud-based and traditional enterprise applications as a unified public cloud service, while blending the Nutanix environment with Google Cloud Platform (GCP).

Nutanix vs. VMware

Other tasty morsels at the show, inevitably, gravitate towards the Nutanix vs. VMware discussion. Initially billing itself (a couple of years ago) as ‘the VMware killer’, Nutanix makes much of its supposed superiority, saying that legacy hypervisors were designed for a world of monolithic, non-VM-aware storage arrays and switch fabrics and were built to accommodate thousands of combinations of servers, NICs and drivers.

These ‘legacy hypervisors’ require multi-pathing policies and complex designs to mitigate issues such as storage congestion and application resource contention while still accommodating both high availability and scalability.

Here’s the Nutanix key summary on its own approach to hypervisor technology, “Nutanix’s AHV Hypervisor was built from the ground up to provide a much simpler and more scalable hypervisor and associated management platform by leveraging the software intelligence of the hyperconverged architecture. AHV changes the core building block of the virtualised datacenter from hypervisor to [software] application and liberates virtualisation from the domain of specialists – making it simple and easily manageable by anyone from DevOps teams to DBAs.”

Jambe de grenouille (frog’s leg)

Nutanix also rests a big part of its play on Prism, its management interface.

Prism is supposed to provide a single view to manage an entire infrastructure stack whether in a single datacentre or spread throughout datacentres and offices. Deploying, cloning and protecting (DR) VMs is done holistically as part of the hyperconverged architecture rather than utilising disparate products and policies.

As we have said before when looking at the Nutanix Enterprise Cloud OS, the firm wants all public, private and hybrid clouds to look like one single fabric. Nutanix wants to be THE cloud platform, the Windows, the Android, the one single OS that can provide a truly homogenised operations fabric.

Other key product areas include Nutanix Calm (the firm’s DevOps, abstraction and software operations control offering) and the Nutanix performance analysis tool X-Ray — plus also Nutanix Xi Cloud Services, which allow users to provision and consume Nutanix infrastructure on demand as a native extension of the enterprise datacenter.

Recent news includes updates to the Nutanix Acropolis File Services (AFS) software, a product which is intended to streamline IT operations by ‘natively converging’ virtual machines and file storage into a single computing platform.

A portfolio of ‘portfolio’ services

The firm has engineered its Nutanix Enterprise Cloud OS architecture to provide ‘portfolio’ services for cloud applications. The term is used to express the creation of application building blocks that can access on-demand compute resources.

This means that cloud apps can get hold of elastic storage services for block and file-based data and, ultimately, that IT managers could use these tools to build and operate enterprise datacentres that rival public clouds. Theoretically, at least, this is how it is supposed to work.

“AFS software gives customers a public cloud experience, but with the security and control of a private cloud deployment,” said Sunil Potti, chief product and development officer at Nutanix.

So it’s all about a single means of controlling use of cloud computing in highly distributed multi-cloud environments while using software to get a public cloud experience, but with the security and control of a private cloud deployment.

As Peter High details on Forbes, this is the shape of CEO Dheeraj Pandey’s view for the operating system of the future.

“In the multi-cloud era, data and applications are dispersed not just across enterprise private and public clouds, but also distributed remote office/branch office (ROBO) and disaster recovery (DR) environments, as well as edge computing use cases. Today’s enterprises want to build these diverse deployment options into their end-to-end cloud designs, without disjointed IT operations or lock-in to any one virtualisation or cloud stack,” wrote High.

The total technology proposition here is then… a single software OS that unifies multiple clouds across the full compute, storage and network stack with simple operations and common IT tooling, enabling application mobility across clouds, while remaining open to any hardware, hypervisor or cloud.

Big fat hybrid challenge

Looking forward, we know that Nutanix is going to talk more and more about the mechanics of how hybrid cloud functionality can be brought to bear inside its customers’ installations.

We know that making hybrid ‘just happen’ is difficult because it’s a question of converging clouds, converging operating systems, converging runtimes, converging user interfaces (and management interfaces) and converging data processing allocation among converged system resources… this, it appears, could be the next enterprise software headache that Nutanix is aiming to provide a solution for.


November 3, 2017  9:03 AM

Cloud native series: Avi Networks defines pros & cons

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network written by Ides Vanneuville in his capacity as director of systems engineering for EMEA at Avi Networks.

Avi Networks is known for its elastic software defined application services fabric that provides per-app load balancing, predictive autoscaling, application insights, security, and automation in any datacentre or cloud.

The firm has come forward with a software-only solution that operates as a centrally managed fabric across datacentres, private clouds and public clouds.

Vanneuville writes as follows…

Cloud native applications are capturing the hearts and minds of strategic IT leaders by changing the narrative around how applications are developed and consumed.

The growth in container and microservices adoption frees developers from the shackles of the underlying infrastructure so they can consume limitless amounts of resources and services in the cloud.

Developers understand the benefits of going cloud native more than most, as it makes for much shorter development cycles and the ability to update and maintain applications with minimal impact on other systems and the customer experience.

The agility and flexibility of cloud native applications also give the enterprise a renewed sense of innovation, as developers embrace speed, automation and scale to drive the business forward.

Ops in DevOps gets tougher

While the pros of cloud native applications are apparent, there are a number of issues that enterprises need to address. This is especially true when it comes to the Ops side of the DevOps equation, where cloud native applications introduce a new level of complexity.

For example, an Ops team may currently manage perhaps a few hundred virtual machines in order to stay on top of application performance and security. With cloud native applications, it could be faced with tens of thousands of containers, adding considerably to the support and security workload.

Consider also the exponential rise in the number of network connections needed for containers, each with the potential to become a performance bottleneck or a security vulnerability, exposed both to outside threats and to each other.

Granular crunch

Providing services and security for cloud native applications requires automation and a far more granular approach (per-app or per-container, instead of per-tenant) than currently practiced.

The complexity of cloud native applications is also why so much emphasis is being put on the use of advanced analytics and machine learning.

Analytic tools lend themselves to delivering visibility, making it easier for operations teams (with the assistance of machine learning) to identify and address anomalies and inefficiencies in real-time across complex cloud native deployments.

Bottoming out

The bottom line here is that the cloud native approach is gaining traction due to its ability to jumpstart the enterprise’s ability to innovate and deliver business agility.

Moreover, while this shift in architecture isn’t without its complications, a commitment to cloud native—from both developers and operations teams alike—can deliver significant business value and can only gain in popularity going forward.


November 1, 2017  10:11 AM

Ross Clark ‘war against cash’, reality check for developers?

Adrian Bridgwater

For software application developers and analytical Database Administrators (DBAs) in the retail, banking, government services and related commercial ‘shopping’ space, the promise of a cashless society is good news.

More electronic payments means more terminals creating more data feeding off more apps and that’s just in store… when we consider the amount of e-payments also flying through the web, the number of transactions is spiralling.

But is it all good news?

Journalist Ross Clark wants us to stand back and consider the wider implications of the cashless society. Yes, it’s good news for programmers building Business Intelligence (BI) apps, but what about us as individuals?

War Against Cash

Clark’s new book The War Against Cash argues that commercial interests want us to pay electronically in order to collect valuable data on our spending habits, while governments would love us to move to cashless payments in order to control the economy in ways which suit them, not us. If we choose to pay electronically, that is one thing, but we will regret it if we do not defend the right to pay with cash.

Clark considers if a government’s desire to go cashless is a symptom of contempt for democracy — big words indeed.

He explores the consequences for small businesses and scrutinises the link between abolishing cash and fraud prevention. Looking at the global landscape, he asks why, of all countries, it is Sweden’s banks that seem keenest to eradicate the use of cash whilst the Japanese are determined to stick with it.

He also considers the problems with the growth of mobile money in developing countries – where data protection laws are a lot looser than in the developed world.

Software, dependency

Software developers might focus particularly on Clark’s chapter #9 entitled ‘Dependency’ – not dependency as in software architecture structure dependencies and threading parallelism for concurrency, but dependency on software at a higher level.

Citing examples of banks inputting data incorrectly and meltdowns of electronic systems that have affected supermarkets and hospitals throughout the British NHS system, Clark points to the level of dependency we currently place on systems that have grown up in the Internet age… could this again provide some epiphany for those who happily conform to a life ruled by code?

“While I am hugely appreciative of the freedom and opportunities which have derived from the world being connected technologically, I am also grimly aware of how technology can let us down,” writes Clark.

A bit clingy

Concluding the book, Clark argues that we should cling dearly to our notes and coins and protest loudly whenever anyone tries to stop us using them.

“We will regret it if we do not,” he says. “We should boycott businesses that try to ban cash and only accept electronic methods of payment. We should run up bills with them and, if they try to decline cash, go and dump bag-fulls of pennies in the foyers of their head offices. So long as cash remains legal tender they will have to accept it. We should protest when public bodies try to run services cashless.”

The message and the takeaway for programmers, arguably, is… think just a little more about the extreme long term effects of creating a networked IT framework that completely digitises our world, because some of the consequences may end up on our own doorstep.

 


October 31, 2017  10:41 AM

Microsoft Future Decoded 2017: A new ‘culture’ for application transformation

Adrian Bridgwater

Microsoft UK COO Clare Barclay kicked off what is her firm’s major London event (for the autumn season at least) by taking the stage for Microsoft Future Decoded 2017 staged at the ExCeL convention centre.

After some lip service duly paid to so-called ‘digital transformation’ (no surprise there then) the pleasantries gave way to more technical issues.

Taking over from Barclay, Microsoft exec VP Jean-Philippe Courtois started his presentation by emphasising how important ‘culture’ (as in company culture, attitude and all-round approach) is as firms strive to empower employees, engage customers, optimise operations and transform products.

(Positive) culture shock

The firm says that all of these changes will only come about when it can enable the modern workplace by re-inventing modern business applications (which will largely be cloud-first), all running on a data and Artificial Intelligence backbone.

Microsoft VP of devices Panos Panay took us into a ‘look at the future’ segment driven by the so-called intelligent cloud. His passion is focused on exactly how people are working and what they are able to create in the modern workplace… it is this that creates the new culture of work that the firm wants to champion.

Clearly massively enthused about the Microsoft Surface line of products, Panay spent a good deal of his session time clarifying the functions of various devices. Arguably rather more of a ‘live advertisement’ than a clarification of work culture, Panay is clearly so impressed with his firm’s own products that the love of the brand overtakes him at times.

Ultimately, it all comes back to a new culture of working says Microsoft… and this means using Office 365 not just as a product, but rather more as a ‘platform’ and the firm has gone to pains in the past to explain how the ‘suite’ can establish itself and indeed operate as a wider platform.

Other keynotes on day 1 also included Julia White in her role as corporate VP for Azure and hybrid cloud. White alluded to the list of technical breakouts and sessions staged throughout this two-day event.

In this regard then, Microsoft has used this event to stage a number of technical sessions (including those wholly dedicated to open source) and the breakdown of these forms a reading list on its own.

IoT realities

Of particular note was a session hosted by Microsoft developer favourite Paul Foster. This session was entitled, ‘Real world IoT: Making real things in the Intelligent Edge’.

This session was designed to demonstrate how to build the ‘Intelligent Edge’ to deliver service to millions of consumer electronics devices, instrument commercial buildings, factory floors and geographically distant sites.

According to Foster’s intro, “Using Azure IoT Hub, its cross-platform SDKs and the latest, low cost LPWAN technology, this technical session [runs] from radio modulation to coding to inspire [developers] to build the intelligent device solutions. Technologies discussed: LoRaWAN, IoT Edge SDK V1 on embedded Linux and IoT Hub Client SDK on Linux and embedded SoC.”
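Foster's session referenced the cross-platform SDKs on embedded Linux; for a feel of the programming model, here is a minimal sketch of a device sending telemetry to Azure IoT Hub using the current Python device SDK (azure-iot-device), with a placeholder connection string:

```python
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder connection string, issued per device by IoT Hub.
CONN_STR = ("HostName=<hub>.azure-devices.net;"
            "DeviceId=<device>;SharedAccessKey=<key>")

client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
client.connect()
client.send_message(Message('{"temperature": 21.5}'))  # telemetry event
client.shutdown()
```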

Session selection pack

Sessions hosted across the agenda at Microsoft Future Decoded 2017 included: Azure state of the nation today; Quantum computing keynote; The best bots out there; Containers and end-to-end DevOps; delivering apps and desktops from Azure; The agents of digital change; and Machine Learning (ML) at scale.


October 26, 2017  2:40 PM

Hitachi Vantara PentahoWorld 2017, major trends in data clarified

Adrian Bridgwater

Hitachi Vantara has hosted one final event under the Pentaho banner before the ‘company’ finally becomes part of the deeper ‘brand’ that is Hitachi Vantara.

As the official company line states…

“This new company will unify the operations of Hitachi Data Systems, Hitachi Insight Group, and Pentaho into a single integrated business as Hitachi Vantara to capitalise on Hitachi’s social innovation capability in both operational technologies (OT) and information technologies (IT).”

Pentaho itself is 13 years old and was founded on the principles of being an embeddable, extendable data analytics platform.

President & COO of Hitachi Vantara Brian Householder set out the stall for the firm’s current approach to data analytics inside of the Hitachi parent.

“We are of course focused on helping our customers become more profitable… but we have a second mission as well… and that is all about focusing on helping society work better… and that is what we call our ‘double bottom line’ and this event is designed to help explain more about what that means,” said Householder.

Hitachi shifts to analytics

Looking back at the history of Hitachi with a view on Pentaho itself, Householder explained how the firm (Hitachi) went from being a software and services company to being a business that looked more directly at becoming a specialist in data and analytics… hence the acquisition of Pentaho back in 2015.

“We also really liked the fact that Pentaho was an essentially open source company and we really liked the way Pentaho worked to champion the open model of design and software development,” said Householder.

Now with a very strong focus on Internet of Things (IoT), Hitachi Vantara has worked to explain in detail how its Lumada IoT platform fits into the current picture for this new divisionally grouped wider organisation.

So are we digitally transformed yet? Householder says that his personal assessment of the market sees the following:

  • Many firms are only digitally analysing as little as 5% of their data.
  • Even at best, progressive firms are only digitally analysing perhaps 50% of their data.

Major trend #1: meta analytics

Hitachi Vantara sees a lot more analytics happening at the metadata layer… especially as we move from the world of applications that churn through terabytes onward to petabytes… and then even more so as we move to exabytes.

The firm says that a whole new tier of analytics will emerge at the meta layer (where exabytes of data is held) so that applications can feed from a more tuned, refined, analysed, intelligently automated (and essentially smaller and more accurate and de-duplicated) pool of data.

Major trend #2: application longevity

Another key trend highlighted by Hitachi Vantara is that the data will outlive the application that created it.

The average useful application will soon only last perhaps around one to three years… but the data that these apps channel, use, change, affect, ingest and output will last far longer than this.

Major trend #3: linear vs. non-linear

Businesses are thinking in linear terms… but technology is changing in a non-linear way. If we accept the law of accelerating returns we have to accept a new cadence of innovation.

As Ray Kurzweil has said, “An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense ‘intuitive linear’ view. So we won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate). The ‘returns’, such as chip speed and cost-effectiveness, also increase exponentially.”

Speaking as part of the Hitachi Vantara keynote, Forrester analyst Brian Hopkins asserted that adding IoT to a business requires a data fabric architecture… plus also, that architecture needs to be optimised for the public cloud… and, thirdly, a business needs to be architected in order to scale (to webscale) for growth.

This breadth of understanding in terms of where data storage exists and happens vs. where compute happens has significant implications for firms who are on the road to so-called digital transformation with its building blocks of cloud, big data analytics and perhaps also quantum computing when it finally arrives.

Pentaho’s future, inside Hitachi Vantara

The future for Pentaho under Hitachi Vantara appears to be positive: although the parent firm will no longer host a separate PentahoWorld conference (Pentaho content will form part of Hitachi Next 2018 in San Diego), it will now invest significantly in the analytics engine it has acquired.

Where some acquisitions are perhaps more cynically designed to absorb market share, kill off competitor brands or land-grab additional customer base — this (by all evidence presented) is not the case here, so let us hope the open computing credentials are also maintained.


October 19, 2017  2:36 PM

1000 stories on CWDN, a thank you

Adrian Bridgwater

This is the 1000th post on the Computer Weekly Developer Network in its seventh year of existence.

It would have been inappropriate not to thank all the firms and their representative communications engines who have supplied commentary since this ‘column’ style blog started way back in July 2010 when we were still discussing developments for Windows Phone 7.

So, thank you.

From mobile, to cloud

One of the most fundamental shifts we have seen over the past seven years has been the shift to cloud, obviously.

If you click back on the ‘July’ link above you’ll see that we ran a story entitled I am a software developer; therefore I am a mobile developer… running this today we would need to change that to I am a software developer; therefore I am a cloud developer?

Looking at 2024

If we consider where we will be in another seven years’ time, will 2024 see us write I am a software developer; therefore I am an AI developer… or I am a software developer; therefore I am a Quantum Computing developer… or perhaps I am a software developer; therefore I am a (not yet invented platform) developer?

Thank you once again to everyone who has read, contributed and been involved with this blog in every way.


October 19, 2017  2:16 PM

AppDynamics Summit New York 2017: keynote noteworthies

Adrian Bridgwater

Application Performance Management (APM) specialist AppDynamics hosted its 2017 Summit #AppDSummit event in New York this week and CWDN was listening to the proceedings in full.

AppDynamics CEO David Wadhwani took the stage to kick off proceedings.

Wadhwani noted that every [user or machine] action initiates millions of lines of code across multiple methods in a variety of datacentres and a multiplicity of different environments. For developers, Wadhwani says he sees more and more departments now working to adopt more Agile work methods that embrace the elastic nature of cloud.

Hardening up brittle services

With too many brittle services out there relying on outdated legacy structures, Wadhwani attempts to justify and validate his firm’s technology proposition by saying that the health and agility of any customer’s technology stack now also reflects and describes the health and agility of that same firm’s business.

Rowan Trollope, SVP and GM of IoT and Applications at Cisco, joined this event’s keynote to explain how he, as a core software application development professional at heart, understood that there needed to be a more vertical integration of the total [new] stack of application components now being built in the componentised world of cloud, containers and microservices.

“We want to build self-driving infrastructures where we appreciate that infrastructures will become more intelligent, more automated and more secure. We need to always understand what is important to the business owner and be able to drive application services to deliver on that basis,” said Trollope.

So as the vertically integrated software defined network now gets smarter, Trollope says that the most exciting part of how AppDynamics now works as part of Cisco is how the teams themselves are now working to build out a wider vision.

It’s all about end-to-end visibility (from the end user, through the applications, all the way down to the infrastructure).

The suggestion here is that we should imagine a world where every technical decision that is made makes a direct [ultimate] impact upon every user through applications and then onward therefore to a business. But, deeper, that world runs in a universe where systems are self-healing and self-optimising. This is what AppDynamics says it is building.

I/O importance

Keynote presentations at this event also touched on the role of I/O (Input/Output) in APM.

According to AppDynamics, the role of I/O professionals increasingly includes not only justifying the cost of running the IT infrastructure, but also, more importantly, providing business executives with metrics and data that support business operations.

However, the firm insists that while the I/O team is comfortable monitoring network availability, application uptime and application response times, the business ultimately cares about how those things translate into revenue, cost, risk or some other business value metric.

Unified monitoring vision

What AppDynamics stresses is highly important is creating a single unified vision for monitoring through one platform and one User Interface.

The one platform monitoring vision from AppDynamics extends across: end users; application performance; database; server visibility… and now with a new IoT visibility offering, plus also a new network monitoring capability.

Looking at the platform scope that AppDynamics says it now brings to market, the software here is able to monitor every business transaction being executed within a given network through the diagnostics that it offers.

Application fabric & topology

As we all now look to getting this whole cloud migration process to happen, we know well that migrating all applications with all their associated methods and dependencies will be tough. The whole ‘fabric’ described here then becomes an even more complex ‘topology’ if we consider the fact that cloud apps, once running, will be deployed under a variety of different post-deployment billing methods for every instance in hand.

With this cloud centricity in mind, AppDynamics welcomed Barry Russell from AWS to explain how AppDynamics has partnered with the cloud giant to extend its APM vision (and capabilities) into the kinds of cloud deployments now happening… but all with a view on user experience (which should also channel to business success, as we have said).

Business iQ

This event would be incomplete without a mention of Business iQ, the firm’s brand label for its line of application-centric and business-centric dashboards.

“Real time services inside Business iQ has allowed us to really see how changes in our IT stack [and its applications] really affects our end users,” said John Hill, CIO of AppDynamics customer and durable clothing seller Carhartt. “Most of the time we spend with the AppDynamics team was focused on understanding what each transaction in our system really meant and so then trying to model that out.”

Hill explains that his team works with AppDynamics on promotions management, analysing how promotions are working and [live] tuning and monitoring his company’s application propositions in real time.

Also noted in this main session was the notion of so-called ‘Business Journeys’ — this is the concept of being able to author, join, monitor and manage elements inside any given application in terms of its process steps.

If we take the example of a mortgage, this total process would involve loan applications, document verification, credit check, underwriting and then onwards to loan approval… all of which go together to form a total business journey.

In summary, AppDynamics has presented a keynote session with a surprisingly dovetailed mix of both corporate messaging (hey, we just got acquired, so here’s some roadmap and vision) alongside some deep dive practical product demos (hey, here’s some method architecture monitoring demos played out live by application owner developers)… and that’s a heady mix.

Transaction log message analytics alongside IPO stories, yeah really… now you know why people talk about distributed abstracted hybrid technologies, the same obviously goes for keynotes.


