Apple and IBM have been working together for a few years.
Thought to be an unlikely couple by some, the two firms established a partnership in 2014 to work on integration projects.
Fast forward to 2018 and the two tech giants are pairing up again, but at a granular developer focused level designed to help inject IBM’s Watson Artificial Intelligence (AI) platform and services directly into Apple iOS mobile applications.
The product/service launches are IBM Watson Services for Core ML and the IBM Cloud Developer Console for Apple.
In terms of what’s actually happening here, Core ML is Apple’s software framework for running machine learning models on iOS devices – the focus is very much on the iPhone in the first instance, the iPad second… and then on to Apple’s smart watch and television technologies.
Core ML, first released in iOS 11, is an Apple framework for running machine learning models locally on iOS-enabled devices, meaning it runs even when the device is offline. Existing models written in Caffe, Keras, Scikit-learn and others can be converted to Core ML.
Due to this agreement, developers using Core ML can now gain access to IBM Watson AI functions inside the apps they are building.
According to Apple, “A trained model is the result of applying a machine learning algorithm to a set of training data. The model makes predictions based on new input data. For example, a model that’s been trained on a region’s historical house prices may be able to predict a house’s price when given the number of bedrooms and bathrooms.”
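To make Apple’s house-price example concrete, here is a minimal sketch in Python (not Core ML code, and with entirely invented training data) of what ‘applying a machine learning algorithm to a set of training data’ produces – a model that can then make predictions from new input:

```python
# Toy training data (hypothetical): number of bedrooms vs. sale price.
bedrooms = [1, 2, 3, 4, 5]
prices = [110_000, 160_000, 210_000, 260_000, 310_000]

def fit(xs, ys):
    """Ordinary least squares on one feature: learn slope and intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

slope, intercept = fit(bedrooms, prices)

def predict(n_bedrooms):
    """The 'trained model': maps new input data to a prediction."""
    return intercept + slope * n_bedrooms

print(predict(6))  # extrapolates the learned price-per-bedroom relationship
```

In Core ML terms, a model like this would be trained offline (in Keras, scikit-learn or similar), converted to Apple’s model format and then evaluated locally on the device.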
Developers can now build AI-powered apps that securely connect to enterprise data, are optimised to run both offline and in the cloud… and that continuously learn, adapt and improve through each user interaction.
“IBM Watson is available as a set of cloud-based services for developers to build AI applications, and Core ML delivers advanced machine learning to apps on Apple devices. Together, these technologies can deliver faster, smarter insights with continuous learning capabilities, transforming AI for enterprise mobility,” said Mahmoud Nagshineh, general manager, IBM partnerships & alliances.
The Coca-Cola Company is said to be working with this technology to transform in-field capabilities for field service engineers. Initial functionalities being analysed are custom visual recognition problem identification, cognitive diagnosis and augmented reality repair.
Watson Visual Recognition is a service on IBM Cloud that enables you to quickly and accurately tag, classify, and train visual content using machine learning. It has built-in classifiers for objects such as faces and food.
The new IBM Cloud Developer Console for Apple provides tools, including pre-configured starter kits which aim to simplify Swift development, along with AI, data and mobile services optimized for Swift.
“For example, we’re introducing IBM Cloud Hyper Protect Starter Kit to enable iOS developers to safeguard credentials, services and data using the IBM Cloud Hyper Protect services,” said Nagshineh.
IBM’s focus here in the smartphone arena has been confirmed to extend to Apple-only devices at this time, i.e. Android does not appear to be in the picture. IBM is a fan of the way Apple has integrated its hardware and software platforms and says that this provides the level of enterprise-grade security and control the firm was looking for in order to make this agreement happen.
Director at IBM Research Arvind Krishna opened IBM’s Think 2018 conference in Las Vegas this week by introducing the firm’s ‘5 in 5’ presentation.
Delivered in an almost TED Talk style, this opening session was pleasing for three key reasons:
- Female speakers/scientists outnumbered the males.
- No up-front IBM product pitches were delivered.
- Absolutely nobody said ‘awesome’, or ‘super excited’ – not once.
Polymer chemist Jamie Garcia kicked off proceedings as master of ceremonies. A celebrated scientist in her own field, Garcia introduced the speakers one by one as they delivered their five-minute ‘pitch-explanations’.
The below notes are presented in paraphrased quote form in each speaker’s own voice – all the content represents quoted material, so quote marks have not been used.
#1 Blockchain crypto-anchors
Speaker: Andreas Kind, a specialist in crypto-anchors working at IBM Research.
From cinnamon to boiled eggs, almost everything in this world gets copied these days. In some parts of the world, as many as 40% of parts in the car aftermarket are fake. Drugs get recycled after bad storage procedures and so on. The root of the problem is that global supply chains have become very complex, because products are made in one country, assembled in others… and then sold in others still.
Blockchain can help us build a global provenance supply chain, but…
… anchors are needed to link cryptographic entries in blockchains to the physical objects [and services, presumably] in the real world.
We can use crypto-anchors to validate everything from car parts to medicine. Crucially, the DNA of every object [i.e. its physical shape and attributes] can be used to provide the information to track the provenance of everything recorded in the blockchain.
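A rough Python sketch of the crypto-anchor idea: hash an object’s measured attributes into a digest and chain the digests together, blockchain-style. This is an illustration of the concept only – the attribute values are invented and real crypto-anchors rely on far more sophisticated physical fingerprinting:

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Hash an object's measured physical attributes -- its 'DNA' -- into
    a fixed-length digest that can be recorded on a blockchain."""
    canonical = json.dumps(attributes, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def append_block(chain: list, anchor: str) -> list:
    """Link each provenance entry to the previous one."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"anchor": anchor, "prev": prev_hash}
    block["hash"] = hashlib.sha256((anchor + prev_hash).encode()).hexdigest()
    chain.append(block)
    return chain

# Hypothetical measured attributes of a car part, recorded at manufacture.
measured = {"weight_g": 412.7, "alloy": "AlSi9Cu3", "serial": "BP-1234"}
chain = append_block([], fingerprint(measured))

# Later, a suspect part is re-measured; a mismatch exposes a counterfeit.
suspect = {"weight_g": 399.1, "alloy": "AlSi9Cu3", "serial": "BP-1234"}
print(fingerprint(suspect) == chain[0]["anchor"])  # False -> check fails
```

The counterfeit fails the provenance check even though the serial number was copied, because its physical ‘DNA’ hashes to a different anchor.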
#2 Quantum encryption techniques
Cecilia Boschini, lattice-based cryptography specialist
The appeal of mathematics is based upon the fact that once you learn the formula, you can work out any problem. Cryptography is the art of designing protocols to protect your data. When you have to send your credit card number to a store, it is encrypted before it is sent. Today’s crypto systems are based upon logic that makes breaking the system a very long and time-consuming process requiring access to massive computing power.
So will quantum computing deliver that power… and should we panic?
Not yet – breaking today’s systems would need a quantum computer with thousands of qubits. In the meantime, we can design problems that grow with us: to prepare for the post-quantum era we need to produce quantum-resistant protocols.
This is where lattices come in… a two-dimensional lattice is a grid, where the challenge is to find a specific point on that grid. In higher dimensions, with many layers, the problem becomes extremely complex.
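A toy Python illustration of the closest-vector idea behind lattice-based cryptography. In two dimensions a brute-force search is trivial, which is precisely the point: the same search grows exponentially with the dimension, and that hardness (believed to hold even against quantum computers) is what quantum-resistant schemes build on. The basis vectors below are invented:

```python
import math

# A toy two-dimensional lattice: every integer combination of two basis vectors.
b1, b2 = (3.0, 1.0), (1.0, 2.0)

def lattice_point(i, j):
    return (i * b1[0] + j * b2[0], i * b1[1] + j * b2[1])

def closest(target, radius=10):
    """Brute-force closest-vector search over a window of the lattice.
    Feasible in 2D, but the candidate count grows exponentially with
    the dimension -- the hardness lattice cryptography leans on."""
    candidates = (lattice_point(i, j)
                  for i in range(-radius, radius + 1)
                  for j in range(-radius, radius + 1))
    return min(candidates, key=lambda p: math.dist(p, target))

print(closest((4.2, 2.9)))  # nearest lattice point is b1 + b2 = (4.0, 3.0)
```

Real schemes work in hundreds of dimensions with deliberately awkward bases, where no efficient closest-vector algorithm is known.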
#3 Robot powered AI microscopes
Tom Zimmerman, IBM research scientist – human/machine devices and paradigm scientist
By 2025, over half of the world’s population is going to be living in what we call ‘water stressed’ areas. Plankton produce two thirds of the oxygen that we breathe; they are also the greatest sequesters of carbon on the planet, as they consume it for us when we produce it as a waste product.
Scientists have traditionally studied plankton by collecting them and treating them with chemicals before study. We can instead combine remote sensing with AI to study plankton in their living environment.
#4 Bias in AI datasets
Francesca Rossi, IBM Researcher and university professor, global leader for AI ethics at IBM.
I think back 30 years to my first AI conference, when we looked at making machines smarter with very little discussion of the impact of those machines on our society.
Today I work with lawyers, economists, accountants and many other professionals to develop a multi-disciplinary view of what AI will do for us. Key concerns for AI are bias, explainability and value alignment.
Once a system learns something out of a certain set of data, it will then try and generalize and apply that knowledge to many different areas. But if that set of data is not diverse enough, then the AI system will not have enough info.
But not all forms of bias are bad… bias in domain-specific skills (such as a doctor’s skills in medicine) is a type of bias that we do not want to eradicate. We predict that in the next five years we will eradicate harmful bias, and that only systems showing a clear lack of bias will be the ones that get used.
AI needs to be multi-disciplinary, multi-gender and multi-stakeholder.
#5 Quantum Computing
Dr Thalia Gershon, senior manager for AI challenges and quantum experiences at IBM.
Simulating the bonding of large molecules is tough, because you have to simulate the relationship of every electron with every other electron. Quantum computing gives us the power to do this kind of thing, as it encodes information into quantum states.
Many of our challenges today in computing are interdisciplinary problems – so we are now building systems with quantum properties to experiment with these issues.
Quantum machines need to be cooled to around -460 degrees Fahrenheit – close to absolute zero – so the engineering task here is a big one.
Linear classical logical thinking does not help us in quantum exploration… we need to be able to think differently.
Computing classes will start to offer a quantum track within five years. Plus, we will also need to teach students in all disciplines what qubits are. Three different quantum computers are made available by IBM on the IBM Q Experience, where there are three core programming options. AI and QC are not fully independent; we need to join these two worlds.
NOTE: We can (arguably) take these five areas (and others) as key pointers for software application development growth over the next five years.
This is a guest post for the Computer Weekly Developer Network written by Eran Kinsbruner, lead technical evangelist at Perfecto.
Kinsbruner wants to uncover the mechanics behind an application stream being called Progressive Web Apps (PWAs) and examine why they could be the next big thing.
Noting that PWAs are hailed as a means of pushing the mobile web forward, Kinsbruner reminds us that they can potentially bring parity to web and native apps, while also providing mobile-specific capabilities to web users.
Perhaps unsurprisingly, Google came up with the term itself and PWAs are seen alongside Accelerated Mobile Pages (AMP) and Responsive Web Design (RWD) as key weapons in the fight for a slick mobile user experience.
Kinsbruner writes as follows…
Progressive Web Apps have one common source code base to develop for all platforms: web, Android and iOS – making them easy to maintain and fix. With Google behind the development, it’s perhaps no surprise that PWAs are relatively easy to adopt. So developers don’t need to gain new skills, but rather learn new APIs and see how they can be leveraged by their websites.
PWA apps exhibit two main architectural features for developers to use: Service Workers (which give developers the ability to manually manage the caching of assets and control the experience when there is no network connectivity) and the Web App Manifest (the file within the PWA that describes the app and provides app-specific metadata such as icons, splash screens and more) – and these present significant new opportunities for developers.
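As an illustration, a minimal Web App Manifest might look like this. The member names (name, short_name, start_url, display, icons and so on) come from the W3C manifest specification; the values here are invented:

```json
{
  "name": "Example PWA",
  "short_name": "Example",
  "start_url": "/index.html",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#2196f3",
  "icons": [
    { "src": "icons/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```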
With RWD, the primary challenge was the visual changes driven by form factor.
PWA introduces additional complexities due to more unique mobile-specific capabilities, such as offline operation, sensor-based functionality (location, camera for AR/VR and more) and cross-device functionality, as well as dependency on different test frameworks such as Selenium and Appium.
There may also be a need to instrument the mobile side of the PWA to better interact with the UI components of the app on the devices. Testers must be aware of what PWAs can access and keep quality assurance at the top of their priority list.
A checklist for a PWA test plan:
Step 1: Perform complete validation of the manifest.json file and check that it loads through the supported browsers (Opera, Firefox, Chrome and, soon, Safari)
Step 2: Test and debug your supported service workers. This can be done through the Chrome internals page. In this section, enable and disable each service worker and ensure it behaves as expected – common service workers enable offline mode and register the app for push notifications. Perform these validations across mobile and web.
Step 3: PWA adds support for camera (e.g. scanning QR codes), voice injection (recording a memo), push notifications and other more mobile specific capabilities. Make sure to test the use of these functions on your app across mobile devices – these will be an additional set of functional tests only relevant to mobile devices.
Step 4: As PWA is a superset of RWD, all the RWD testing pillars apply here also, which means you need to consider:
- UI and visual/layout testing across multiple form factors
- Performance and rendering of the content across platforms (load the proper code and images to the right platform – mobile vs. web)
- Network-related testing – in addition to the offline mode that is covered through service workers, make sure to cover the app behaviour under various network conditions (packet loss, flight mode, latency, 3G, 4G, 5G etc.)
- Functionality of the entire page user flows across platforms and screen sizes and resolutions
Step 5: Handle test automation code and the object repository. PWAs are JavaScript apps that add a .apk file to the device application repository once installed – especially on an Android device (Apple has limited support so far for such apps). When aiming to test the Android .apk on the device, the developer and tester will need a proper object spy that can identify both the app shell objects and the WebView objects within the app. Initial attempts on a subset of PWA apps suggest that the Appium object spy for Android will not work, leaving the user with only the DOM objects from the website. Automating the mobile parts of a PWA therefore remains a technological challenge.
Step 6: Be sure to cover Google’s recommended checklist for PWAs step by step, since it includes a lot of the core functionality of progressive apps and, when followed, can assure not just good functionality but also great SEO and a high score in Google’s search rankings.
Forward looking developers will look to overcome the challenges this kind of innovation brings and use PWAs as an opportunity to deliver a better user experience.
If you’re a developer just starting to move from a .com or a .mobi site to a cross-platform web environment, then PWA is a compelling option. Web developers should base any plan for change around an appropriate product or business milestone, such as the next big website release or a complete rebrand, making sure that a move to PWA makes sense and isn’t just a jump on the latest and greatest bandwagon.
Hitachi has aligned its data analytics divisions and fused them with its Pentaho acquisition to create a new entity called Hitachi Vantara. So… Vantara… kind of sounds like ‘advantage’ with a bit of Latin ‘avanti’ in there for added good measure, right?
Branding shenanigans aside, Hitachi Vantara (more usually pronounced in an American accent as Hitachi Ven-tera) has continued to roll out products aligned for that role which we can now quite comfortably define as the ‘data developer’.
These data developers (call them data management and data science focused software engineering professionals with an appreciation for the need to apply analytics and machine learning technologies to database and application structures… if you must – but it’s not as catchy) use machine learning (ML) functions, obviously.
As ML becomes the order of the day, these same data devs will also (arguably) need an increasing degree of orchestration functions with which to corral and manage the models they seek to build, execute and apply – and this is what Hitachi Ven-tera (sorry, Vantara) is now rolling out.
The company now offers machine learning orchestration to help data professionals monitor, test, retrain and redeploy supervised models in production.
Emanating from Hitachi Vantara Labs, these ‘machine learning model management’ tools can be used in a data pipeline built in Pentaho.
Once an algorithm-rich ML model is in production, it must be monitored, tested and retrained continually in response to changing conditions, then redeployed. This work involves manual effort and, consequently, is often done infrequently. When this happens, prediction accuracy will deteriorate and impact the profitability of data-driven businesses.
Pipeline wear & tear
Hitachi Vantara explains that once a machine learning model is in production, its accuracy typically degrades as new production data runs through it. To help avoid this, the company provides a new range of evaluation statistics that helps to identify degraded models.
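The monitoring idea can be sketched in a few lines of Python. This is not Hitachi Vantara’s actual tooling, just an illustration of the principle: track accuracy over a sliding window of recent predictions and flag the model for retraining once it degrades past a threshold:

```python
from collections import deque

class DriftMonitor:
    """Sliding-window accuracy monitor for a model in production
    (hypothetical sketch; window and threshold values are invented)."""

    def __init__(self, window=100, threshold=0.8):
        self.results = deque(maxlen=window)  # most recent hit/miss flags
        self.threshold = threshold

    def record(self, predicted, actual):
        self.results.append(predicted == actual)

    def needs_retraining(self):
        if not self.results:
            return False
        accuracy = sum(self.results) / len(self.results)
        return accuracy < self.threshold

monitor = DriftMonitor(window=5, threshold=0.8)
for predicted, actual in [(1, 1), (1, 1), (0, 1), (0, 1), (1, 0)]:
    monitor.record(predicted, actual)
print(monitor.needs_retraining())  # accuracy is 2/5 -> True, retrain
```

In a real pipeline the retraining flag would trigger the retrain-and-redeploy step automatically rather than relying on infrequent manual checks.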
More organisations are demanding visibility into how algorithms make decisions. Lack of transparency often leads to poor collaboration in groups deploying and maintaining models including operations teams, data scientists, data engineers, developers and application architects.
“These new capabilities from Hitachi Vantara promote collaboration, providing data lineage of model steps, and visibility of data sources and features that feed the model,” said the company, in a press statement.
Love your pipeline
Building out the ML-enriched ‘data pipeline’ appears to be a surprisingly non-sequential process, i.e. we can build our pipe and lay it down, but we will need to go back and look for leaks and other areas of weakness where the structure of the pipe itself may have become compromised as a result of the content we put through it.
In advance of Microsoft’s annual games developers event in San Francisco this March, Redmond’s key gaming division execs have spent time detailing some of the inner workings of the current developer tools specifically aligned for games programmers.
Brought together under what is now known as the Microsoft Gaming Cloud, the mix of technologies here sits at two levels i.e. while a specifically games-focused set of platforms and tools do exist, the bulk of what is available also plays (no pun intended) directly inline with what Microsoft offers in terms of Visual Studio and Microsoft Azure cloud platform.
Pointing out that it has been a games development company for some 35 years now (Microsoft Flight Simulator dates back to 1982 and Microsoft Adventure actually arrived way back in 1981), Microsoft has been building out its gaming stack with a number of acquisitions in recent years.
Notable developer-related purchases this decade include Microsoft’s acquisition of Havok in 2015, Simplygon in 2017 and PlayFab this year in 2018.
Havok provides tools for game physics, AI and environment. For cloud gaming, this means physics rendering in the cloud to help bring better graphics to devices that don’t have great graphics capability, like mobile.
According to Microsoft, Simplygon allows games developers to optimise 3D assets to run smoothly on every device they want to target.
Microsoft also reminds us that it created DirectX – the collection of multimedia-focused Application Programming Interfaces (APIs) for gaming and video. DirectX is now the most widely deployed games API in the world.
Nearly all PC games (roughly 17,000) and all Xbox One games are powered by DirectX.
Head in the (gaming) cloud
Where all of this strategising leads us is to cloud.
Microsoft Xbox lead Phil Spencer was promoted to report directly to Microsoft CEO Satya Nadella last year, so it appears fairly clear that the company ranks the gaming market as an important target – and it is a target that runs in line with the firm’s wider push for cross-platform compatibility… all roads leading (Microsoft wishes) to the Microsoft Azure cloud.
“There will soon be two billion gamers on the planet and it is our goal to reach all of them through our core strategy of content, customers and cloud,” state Kareem Choudhry, CVP of Microsoft Gaming Cloud, and Kevin Gammill, GM of Microsoft Gaming Cloud.
Ubisoft has used Microsoft cloud gaming technologies across PC and Xbox and PS4 consoles for ‘Rainbow Six: Siege’ – the company sought to achieve lower latency and expanded availability for hundreds of thousands of concurrent gamer users.
In Azure itself, Microsoft is offering games developers dedicated servers for multiplayer gaming. The suggestion here is that ‘cloud compute’ workloads can be executed to allow developers the ability to offload computational tasks to the cloud to provide better in game experience.
Azure also offers Azure Cognitive Services so that game developers can provide functionality that allows games to see, hear, speak, understand gamer needs through natural communication including image processing algorithms, intelligent recommendations, and speech to text and text translation.
Possibly the most tangible (interesting, even) aspect of what Microsoft is providing for games developers is called App Center. Described as a ‘mission control’ for apps, this cloud service allows developers to build a game in the cloud, test it on real devices, distribute to beta testers and monitor its usage with analytics data.
“For cloud gaming, this means developers can easily test and quickly debug games in development across multiple devices,” says Microsoft.
Interesting times indeed… we are seeing massively powerful enterprise-level software application development technologies being used for games development and we can see a certain strain of games DNA cross-fertilising and being applied in the other direction back into enterprise.
Gaming is big business… and games software development is big business programming aimed at immersive user experiences through use of content and use of cloud.
Press PLAY to start.
The software industry is always focused on acceleration and speed – but typically we’re thinking about processor performance and code execution efficiency.
But now, we are turning our focus to so-called software accelerators, so what are they?
In a world of software application development innovation where everything down to a database and a word processor is also a ‘solution’, accelerators are actually a kind of solution (don’t tell the marketing people) for getting a task done based upon portions of pre-defined logic and data behaviour.
When we talk about predefinition in this sense, it is the work of locating, defining, quantifying and qualifying predefined repeatable use cases for specific pieces of data inside applications, before we then move forward to manage that information.
As has been previously noted, software applications will usually have a shape that comes about as a result of their behaviour when they execute — and that shape and data topography will often be reinforced depending on the industry vertical that the software is applied to.
Big scale IT vendors (you can pick your favourite top five) are now all busy rolling out pre-defined templates and operations blueprints labelled as accelerators for specific vertical industries and for specific Line of Business (LoB) functions.
Jon & Clive weigh in
“I think the word is coming from the consulting world, as a part of language to encourage enterprises to innovate and otherwise do things quicker. It also has strong alignment with the recipes [and work to lay down orchestrated logic] coming from the Continuous Deployment (CD) world. Indeed, it also resonates with the inference models coming from AI,” said Collins.
Collins argues that the challenge here will be that these meta-level constructs (models, accelerators etc.) are an order of magnitude harder to keep up to date.
Let’s get ready for a new era of Accelerator Configuration Management (ACM) – not a three letter acronym (TLA) yet, but it might soon be.
In line with Collins’ stance on accelerators is Clive Longbottom in his capacity as founder at tech analyst house Quocirca.
Longbottom says we need to stand back and remember — there is open preconfigured and constrained preconfigured.
“Open preconfigured is where the software is easy to install and run, but it does not constrain what the business needs to do now and in the future. Then there is constrained preconfigured, where the software tries to decide for you what the business will need now and in the future – something that many [enterprise software] users have complained about for years with previous versions. Anything that hardcodes processes is bound to fail: today’s process only deals with today’s problems: tomorrow’s problems will need a completely different approach at the business – and therefore coded – process level,” explained Longbottom.
Now is the time of software application accelerators, but as our understanding of the term itself is (in some places) as flaky as our appreciation of where, when, why and how these technologies should be applied… perhaps don’t fully put the pedal to the metal quite yet in every instance.
Software truth #1 is: there is a global shortage of software application developers and skilled software engineers at all levels.
Software truth #2 is: users often have trouble communicating their software requirements to those software application developers who do exist and are working to deliver production-ready code for the applications that people need.
Software truth #3 is: there ought to be an easier way of doing all this – sometimes at least.
Taken together, the above three software truths (if we accept them to be so) are perhaps some of the factors that have given rise to the development of so-called ‘low-code’ software platforms… we are now entering the low-code motherlode.
Among the various players in this space, Appian is among the more vocal right now… so, as we prepare for the company’s Appian World 2018 event, what can we expect?
Appian’s approach to low-code development runs closely parallel to its description of itself as a firm that is essentially focused on Business Process Management (BPM) technologies. Its low-code technologies offer drag-and-drop, declarative*, visual development for UX design, process design, rules design and other related development functions.
NOTE* : As clarified on Stack Overflow, “Declarative programming is when you write your code in such a way that it describes what you want to do, and not how you want to do it. It is left up to the compiler to figure out the how.” Essentially, we users will express the logic of a computation without describing its control flow.
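A quick Python illustration of the distinction in the Stack Overflow definition above. The two (hypothetical) functions below compute the same thing, but the first spells out the control flow while the second simply describes the result:

```python
# Imperative: describe *how* -- loop, test, accumulate.
def evens_squared_imperative(numbers):
    result = []
    for n in numbers:
        if n % 2 == 0:
            result.append(n * n)
    return result

# Declarative: describe *what* -- 'the squares of the even numbers' --
# and leave the mechanics of iteration to the language.
def evens_squared_declarative(numbers):
    return [n * n for n in numbers if n % 2 == 0]

print(evens_squared_declarative([1, 2, 3, 4]))  # [4, 16]
```

Low-code platforms push this further still: the ‘what’ is expressed as drag-and-drop process components rather than as code at all.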
Appian aims to both extend and augment the notions of low-code and BPM – and part of this is its use of the Business Process Model and Notation (BPMN) standard. This technology is intended to form the ‘substrate layer’ and base from which software applications themselves can be created. In real terms, it manifests itself as a collection of ‘process components’ that can be used to map and model any business process and bring forward the software functions needed to serve it. BPMN is a graphical flowcharting technique – and if you’re thinking of Unified Modelling Language (UML) as a close comparison, then you are on the right track.
Appian has also been classified as a company that provides Dynamic Case Management (DCM) – technologies that automate and streamline aspects of each ‘case’ and in this context, a case is a collection of information about a particular instance of something, such as a person, company, incident or problem.
In respect of its DCM capabilities, a Forrester report has stated: “Appian’s data designer is the most improved [DCM related feature], with an auto-discovery feature that allows you to quickly associate data fields in an external system and an interface design model for RESTful APIs. These tools – combined with the ‘record type’ abstraction – allow data to be easily modeled and brought together at various stages of the case life cycle.”
Appian World 2018
So then, apologies for the contextualisation and clarification, but it would have been superficial to simply call low-code a drag-and-drop approach; there’s a lot going on under the surface here.
The event (held in Miami, April 23-25 2018) is described as suitable for C-suite execs, project managers, process improvement and business analyst professionals — and also for developers and software engineers.
Software engineers, yes… there will indeed be a code-fest. Appian is offering up to $10,000 for the winning projects in the Appian World 2018 hackathon, which is already open and accepting submissions through to March 27.
Finalists will be able to demo in-person to an ‘all-star’ panel of judges, including Apple co-founder Steve Wozniak.
Other aspects of the show include innovation workshops devoted to IoT, AI and blockchain.
We can expect keynotes and lots of focus on Continuous Delivery (CD) as we start to attempt to convince other (non developer) users to start driving the software delivery process.
Will the speakers mention ‘digital transformation’ and not be able to stop themselves saying they are ‘super excited’ about ‘awesome’ innovation?
Ah come on now, welcome to Miami.
This is the age of women in technology, male appreciation of feminism and the drive for equality (although of course the first computer programmer was indeed a woman). So should we be using the ‘she’ pronoun for technology products and services, as we sometimes do with countries and ships?
Cloud data management company Densify (the artist formerly known as Cirba) might think it’s okay, because the firm has named its latest product Cloe.
Actually not intended to convey any element of femininity, CLOE stands for CLoud Optimisation Engine – it is a software tool designed to provide machine-learning based analytics to optimise public cloud consumption.
But that makes this a channel story, not a developer story, doesn’t it?
No it doesn’t. Densify says that its technology enables applications to be self-aware of their resource needs. The implication for developers is that, with a larger percentage of architectural provisioning and application resource management taken care of by this layer, they will be able to focus on the things developers enjoy, like functional features along with look and feel.
Densify claims that Cloe beta customers are now saving an average of 40% on public cloud costs, with some exceeding 80%.
As we move to cloud (and the notion of cloud native application development) it is important to remember that application demands can fluctuate every day, hour and minute of the week.
Add this complexity to the fact that in parallel, every single compute instance of cloud can have millions of permutations.
What Cloe does is analyse cloud usage patterns so that Densify (the higher-level product from which the company takes its brand name) can proactively make applications self-aware of their resource needs – matching application need to available cloud resources.
“While applications perform an infinite number of critical tasks, they aren’t resource-smart and they are often allocated massive amounts of unneeded public cloud resources,” said Gerry Smith, CEO, Densify. “Cloe becomes the resource intelligence of an application, allowing that application to be self-aware of its resource usage patterns and to re-align public cloud use to its needs.”
According to analyst house Gartner, cloud services can [typically] have a 35% underutilisation rate in the absence of effective management, as resources are oversized and left idling.
Densify Cloe offers multicloud support so that applications are provided with the right resources even when simultaneously using multiple cloud vendors; Cloe recommends the best cloud technologies for the given application.
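Stripped to its essentials, the right-sizing logic behind a tool like Cloe can be sketched as follows. Everything here – the instance names, the prices, the catalogue itself – is invented for illustration, and the real product learns usage patterns with machine learning rather than taking a single peak figure:

```python
# Hypothetical cloud instance catalogue (names and prices invented).
CATALOGUE = [
    {"name": "small",  "vcpus": 2,  "ram_gb": 4,  "usd_per_hr": 0.05},
    {"name": "medium", "vcpus": 4,  "ram_gb": 16, "usd_per_hr": 0.10},
    {"name": "large",  "vcpus": 16, "ram_gb": 64, "usd_per_hr": 0.40},
]

def right_size(peak_vcpus, peak_ram_gb):
    """Pick the cheapest instance type that still covers observed peak demand."""
    fits = [i for i in CATALOGUE
            if i["vcpus"] >= peak_vcpus and i["ram_gb"] >= peak_ram_gb]
    if not fits:
        raise ValueError("no instance type covers the observed peak")
    return min(fits, key=lambda i: i["usd_per_hr"])

# An app peaking at 3 vCPUs / 10 GB may have been provisioned on 'large';
# analysis shows 'medium' covers the peak at a quarter of the cost.
print(right_size(3, 10)["name"])  # medium
```

The claimed 40%-plus savings come from applying exactly this kind of demand-to-catalogue matching continuously, across every instance and every cloud.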
Databricks styles itself as an analytics software company with a ‘unified’ approach – the unification in this sense is supposed to suggest that the software can be applied across a variety of datasets, exposed to a variety of programming languages and work with a variety of call methods to extract analytical results from the data itself.
This UAP (as in unified analytics platform) product also does the job of data warehouse, data lake and streaming analytics in one product, rather than three different ones.
So, more unifying unity, essentially… and that unification spans three basic data domains.
- Data warehouse – trusted data that has to be reliable, kept for years – but is difficult to get everyone using
- Data lake – great for storing huge volumes of data, but data scientists are typically the only ones able to use it
- Data that exists in streaming analytics engines – great for working on data mid-stream – can’t do the jobs of the other products.
It also unifies the data into one place, making it easier for data engineering, data science and business teams to all get what they need out of big data.
Ground level definitions out of the way, what has Databricks been doing to add to unified utopia?
The company has this month announced support for Apache Spark 2.3.0, the open source cluster-computing framework, on Databricks’ Unified Analytics Platform. This means that the company is the first vendor to support Apache Spark 2.3 within a compute engine, Databricks Runtime 4.0, which is now generally available.
In addition to support for Spark 2.3, Databricks Runtime 4.0 introduces new features including Machine Learning Model Export to simplify production deployments, plus performance optimisations.
“The community continues to expand on Apache Spark’s role as a unified analytics engine for big data and AI. This is a major milestone to introduce the continuous processing mode of Structured Streaming with millisecond low-latency, as well as other features across the project,” said Matei Zaharia, creator of Apache Spark and chief technologist and co-founder of Databricks. “By making these innovations available in the newest version of the Databricks Runtime, Databricks is immediately offering customers a cloud-optimised environment to run Spark 2.3 applications with a complete suite of surrounding tools.”
The Databricks Runtime, built on top of Apache Spark, is the cloud-optimised core of the Databricks Unified Analytics Platform that focuses on making big data and artificial intelligence accessible.
In addition to introducing stream-to-stream joins and extending new functionality to SparkR, Python, MLlib and GraphX, the new release provides a millisecond-latency Continuous Processing mode for Structured Streaming.
Instead of micro-batch execution, new records are processed immediately upon arrival, reducing latencies to milliseconds and satisfying millisecond-level latency requirements.
This means that developers can elect either mode—continuous or micro-batching—depending on their latency requirements to build real-time streaming applications with fault-tolerance and reliability guarantees.
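The latency difference between the two modes can be seen with a toy model. The sketch below is not Spark itself – in Spark 2.3 the mode is selected via the streaming query's trigger – it simply simulates how long each record waits under micro-batching versus immediate, per-record processing.

```python
# Toy latency model (illustrative, not Spark's implementation):
# compare how long records wait under micro-batch vs continuous processing.

def micro_batch_latencies(arrivals, batch_interval):
    """Each record waits until the end of its batch window."""
    latencies = []
    for t in arrivals:
        # Batch boundaries fall at multiples of batch_interval.
        batch_end = ((t // batch_interval) + 1) * batch_interval
        latencies.append(batch_end - t)
    return latencies

def continuous_latencies(arrivals, per_record_cost):
    """Each record is handled on arrival; only processing cost remains."""
    return [per_record_cost for _ in arrivals]

arrivals = [0.1, 0.4, 1.2, 1.9]               # arrival times in seconds
print(micro_batch_latencies(arrivals, 1.0))   # waits of up to a full second
print(continuous_latencies(arrivals, 0.001))  # milliseconds per record
```

With a one-second micro-batch, a record arriving just after a batch boundary waits nearly the whole interval; continuous processing trades that wait for a small per-record cost, which is the millisecond-latency claim in practice.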
The new model export capability also enables data scientists to deploy machine learning models into real-time business processes.
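As a rough analogy for what model export enables – moving a trained model out of the training environment and into a lightweight serving process – here is a tiny sketch. The JSON format and function names are hypothetical, not Databricks' actual export format.

```python
import json

# Hypothetical export/import of a trained linear model's parameters,
# mimicking the idea of moving a model from training to real-time serving.

def export_model(weights, bias, path):
    """Serialise the trained parameters to a portable file."""
    with open(path, "w") as f:
        json.dump({"weights": weights, "bias": bias}, f)

def load_and_predict(path, features):
    """Reload the model elsewhere and score a new input."""
    with open(path) as f:
        model = json.load(f)
    return sum(w * x for w, x in zip(model["weights"], features)) + model["bias"]

export_model([2.0, -1.0], 0.5, "model.json")
print(load_and_predict("model.json", [3.0, 1.0]))  # 2*3 - 1*1 + 0.5 = 5.5
```

The serving side needs no training infrastructure at all – which is the point of exporting models into real-time business processes.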
There is, arguably, unification here of many types and at many levels.
Machine data analytics company Splunk has acquired security orchestration firm Phantom Cyber Corporation.
Phantom is more widely known as a SOAR player – SOAR being Security Orchestration, Automation and Response.
Splunk CEO Doug Merritt is on the record with the customary niceties designed to resonate with similar platitudes from Oliver Friedrichs in his capacity as founder and CEO of Phantom.
Both chiefs have suggested that Splunk plus Phantom is a positive for software engineers involved with security orchestration.
It is, in effect, big data plus SOAR.
SOAR, at machine-speed
SOAR platforms bid to improve the efficiency of security operations by automating tasks, orchestrating workflows, improving collaboration and enabling security software/data developers and their operations counterparts to respond to incidents ‘at machine speed’, as they say.
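The core SOAR pattern – an incident triggering an ordered playbook of automated response actions – can be sketched minimally as follows. These action names and the incident shape are hypothetical illustrations, not Phantom's API.

```python
# Minimal SOAR-style playbook sketch (hypothetical, not Phantom's API):
# an incoming incident triggers an ordered list of automated responses.

def block_ip(incident):
    return "blocked " + incident["src_ip"]

def quarantine_host(incident):
    return "quarantined " + incident["host"]

def notify_analyst(incident):
    return "ticket opened for " + incident["id"]

# The playbook is just an ordered sequence of actions.
PLAYBOOK = [block_ip, quarantine_host, notify_analyst]

def respond(incident):
    """Run every playbook action 'at machine speed', collecting results."""
    return [action(incident) for action in PLAYBOOK]

incident = {"id": "INC-42", "src_ip": "203.0.113.9", "host": "web-01"}
for result in respond(incident):
    print(result)
```

The value proposition is that the whole sequence runs without a human in the loop for routine incidents, freeing a small security team for the cases that genuinely need judgement.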
According to the magical box-loving analysts at Gartner, by year-end 2020, 15% of organisations with a security team larger than five people will be using SOAR tools for orchestration and automation reasons – and that’s up from less than 1% today in 2018.
According to a press statement, “Customers will be able to use Splunk technology for orchestration and automation as an integral part of their Security Operations Center (SOC) platform to accelerate incident response while addressing the skills shortage.”
Splunk now talks about using these automation capabilities to help solve challenges in a widening range of use cases, including Artificial Intelligence for IT Operations (AIOps).