Amazon Web Services (AWS) is aiming to provide software application development professionals with more tangible tools to build native cloud applications on the cloud, for the cloud.
The public cloud technology offering has been augmented this month with the release of an open source Software Development Kit (SDK) for the object-oriented C++ programming language.
“It’s surprising that it took this long for AWS to ship a C++ SDK, though. C++, a descendant of C that first appeared in the late 1970s, is very widely used by programmers,” writes Novet.
“We designed it to be fully functioning, with both low-level and high-level interfaces. However, we also wanted it to have as few dependencies as possible and to be platform-independent. At the moment, it includes support for Windows, OS X, Linux, and mobile platforms,” notes Barr.
This SDK has been specifically designed with game developers in mind, but AWS insists it has also worked hard to maintain an interface that will work for systems engineering tasks, as well as other projects that simply need the efficiency of native code.
It’s NodeConf EU time again — the third annual gathering, expected to bring together 400 of the top influencers in Node.js at Waterford Castle from September 6th to 9th.
Cheesy, but nice
For the duration of the event the private island surrounding the historic Waterford Castle will be renamed ‘Nodeland’.
The event features speakers from the Node Foundation for the first time since the merger of Node.js and io.js under the foundation last May.
Node Foundation members and community leaders will be speaking at the event, including Todd Moore, IBM director for Open Technology and Partnerships; Danese Cooper, head of open source for PayPal; Gianugo Rabellino, senior director of Open Source Communities at Microsoft; and Brian McCallister, CTO of Platform at Groupon. Mikeal Rogers, originator of NodeConf in the U.S., will also be present, along with a number of other top speakers.
The audience will be comprised of C-level executives, Node.js developers at all skill levels, tech managers with Node.js teams and technology thought-leaders and visionaries.
As in previous years NodeConf EU will be curated by nearForm, now the world’s largest Node.js consultancy, whose CEO Cian O’Maidin is the only European member of the Node Foundation.
Ireland’s oldest city
Waterford is Ireland’s oldest city, founded by Viking raiders in AD 914. It has played a pivotal role in the economic, political and cultural life of Ireland and is now developing into a Node.js centre of excellence.
According to the organisation, “NodeConf this year will be bigger and better than ever with delegates treated to their own Node.js powered robotic bartender that will prepare a cocktail in two minutes.”
Other features at the conference will be a lavish opening ceremony with a flagbearer on horseback, a Spiegeltent, live music, traditional whiskey tasting, duelling singing waiters, archery and falconry.
News this week sees Hortonworks finalise an agreement to acquire Onyara, Inc.
The rationale here: Hortonworks, an open enterprise Hadoop company, is scooping up the creator of and a key contributor to Apache NiFi.
NOTE: Apache NiFi supports scalable directed graphs of data routing, transformation and system mediation logic i.e. core ‘highway control’ for dataflow management inside IT systems.
Collect, conduct and curate
Hortonworks is hoping to make it easier to automate and secure data flows and to collect, conduct and curate real-time business insights and actions derived from data in motion.
As a result of the acquisition, Hortonworks is introducing Hortonworks DataFlow powered by Apache NiFi, which is complementary to its own open enterprise Hadoop platform, Hortonworks Data Platform (HDP).
What is the Internet of Anything?
This is the suggestion of a ‘new data paradigm’ that includes data from machines, sensors, geo-location devices, social feeds, clickstreams, server logs and more.
Many IoAT applications need two-way connections and security from the edge to the datacentre — this results in a ‘jagged edge’ that increases the need not only for security but also for data protection, governance and provenance.
These applications also need access to both data in-motion and data at-rest.
While the majority of today’s solutions are custom-built, loosely secured, difficult to manage and not integrated, Hortonworks DataFlow powered by Apache NiFi will simplify and accelerate the flow of data in motion into HDP for full fidelity analytics. Combined with HDP, these complementary offerings will give customers a holistic set of secure solutions to manage and find value in the increasing volume of streaming IoAT data.
What is NiFi?
Apache NiFi was made available through the NSA Technology Transfer Program in the fall of 2014.
In July 2015, NiFi became a Top-Level Project, signifying that its community and technology have been successfully governed under the Apache Software Foundation.
“Nearly a decade ago when IoAT began to emerge, we saw an opportunity to harness the massive new data types from people, places and things, and deliver it to businesses in a uniquely secure and simple way,” said Joe Witt, chief technology officer at Onyara. “We look forward to joining the Hortonworks team and continuing to work with the Apache community to advance NiFi.”
Cloud computing is a ‘decoupled’ thing.
To be clearer, this term decoupling arises time and time again in relation to the cloud computing model of service-based processing and storage power.
Two senses of mobile
Decoupling is a good emotive term that transcends previous pre-cloud notions of mere networking to provide us with a new notion of a computing layer where applications and their dependent resources can be set free for a more mobile (in the interchangeable sense AND in the smartphone sense) existence.
But this is superficial decoupling (actually it’s not, but we’re making a point here… so go with it for now), deeper decoupling occurs when we start to look down into the substrate.
Deeper decoupling involves disconnecting individual management layers, computing platforms and processing engines from their core algorithmic kin.
Apache Twill is an abstraction layer that sits over Apache Hadoop YARN (the clustering and resource manager) that reduces the complexity of developing distributed applications — it does this by decoupling Hadoop itself from the MapReduce algorithm.
This action is designed to allow developers to focus more on their application logic.
Hadoop, thus decoupled, is then able to run with other processing engines, such as Spark.
It’s like threads
The Apache Twill project team explains that this technology allows programmers to use YARN’s distributed capabilities with a programming model that is similar to running ‘threads’ i.e. separate streams of logic that can run on their own.
While YARN is extremely complex technology, Twill aims to make this easier to pick up programmatically.
According to the development team, Apache Twill dramatically simplifies and reduces development efforts, enabling you to quickly and easily develop and manage distributed applications through its simple abstraction layer on top of YARN.
It’s as if distributed is good, decoupled distributed is really good — but abstracted decoupled distributed is even better.
Software application development can be accelerated.
More specifically, software acceleration technologies exist to pump extra optimisation and performance out of existing tools — although in this category we often primarily talk about GPU-based acceleration, it has to be said.
The immodestly titled IncrediBuild is a provider of software development acceleration technology.
Its tool for Linux and Android is described as a suite of out-of-the-box acceleration solutions enabling developers to speed up their software development by up to 30x.
“IncrediBuild for Linux and Android also allows developers to visualise their build process with no vendor lock-in or need to change their development toolchain or workflow,” says the firm.
What it ACTUALLY does
Essentially this software is supposed to save time usually spent waiting for builds, testing, packaging (or other development processes) as part of the Continuous Integration and Continuous Delivery process.
This type of software is hoped to streamline development cycles by running development processes in a distributed fashion — that’s what the firm really means by software acceleration.
IncrediBuild uses a ‘Docker-like’ proprietary distributed container technology to enable fast processing of development tasks in parallel. It allows developers to turn their computer into a virtual supercomputer by harnessing idle cores from remote machines across the network and in the cloud, increasing performance, speeding build times and improving developer productivity.
IncrediBuild for Linux and Android supports the most popular Linux distributions, such as Ubuntu, Red Hat Enterprise Linux, and CentOS.
It can accelerate most build tools and various development tools without any integration, changes to source code or changes to the build environment.
The DevOps factor
“Being able to directly visually audit the build process to look for bottlenecks while reducing execution time with IncrediBuild significantly speeds up our ability to deliver innovative solutions to customers,” said Richard Trotter, DevOps support engineer at GeoTeric.
IncrediBuild allows developers to record and replay the entire build process execution and provides intuitive real-time monitoring of build execution in an easy-to-use graphical representation with statistics and reporting.
It allows a developer to drill into low-level data as well as monitor their build health, identify and detect key errors and detect build dependencies, anomalies, and inefficiencies. It is the only solution that provides a centralised area to inspect previous builds, replay and analyse them.
This helps enable and optimise Continuous Delivery as well as aid regulatory compliance. Until now, the only way Linux developers could analyse what was going on as they compiled their software was to rely on the command line and long textual output.
Microsoft Research was founded in 1991 and is the company’s division dedicated to conducting both basic and applied research in computer science and software engineering.
The ‘group’ has this month increased its development efforts directed towards F#.
F# (the ‘#’ denoting sharp, obviously) is an open source, cross-platform, functional-first programming language.
Where the F#?
F# runs on Linux, Mac OS X, Android, iOS, Windows, GPUs and browsers.
The language now stands to benefit from a second generation of tools specifically developed for software application development pros to use in conjunction with the Visual Studio IDE.
Visual F# Power Tools
“The goal of the extension is to complement the standard Visual Studio F# tooling by adding missing features, such as semantic highlighting, rename refactoring, find all references [capabilities], metadata-as-source [functionality] etc.,” said Anh-Dung Phan and Vasily Kirichenko — both of whom are F# community developers.
The pair also state that what’s particularly special about this project is that it’s a collective effort of the F# open source community.
They explain that they work alongside the Visual F# Team at Microsoft in order to provide a complete toolset for F# users in Visual Studio.
You can read more technical details on the .NET blog here.
IBM has introduced two Linux mainframe servers under the brand name LinuxONE.
The machines can perform 30 billion RESTful web interactions per day with Dockerized Node.js and MongoDB, driving over 470K database reads and writes per second.
The company says it will now also enable open source and industry tools and software including Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL, Chef and Docker on its z Systems.
SUSE (which provides a Linux distribution for the mainframe) will now support KVM, thereby providing a new hypervisor option.
Canonical and IBM also announced plans to create an Ubuntu distribution for LinuxONE and z Systems. The collaboration with Canonical brings Ubuntu’s scale-out and cloud expertise to the IBM z Systems platform.
“Fifteen years ago IBM surprised the industry by putting Linux on the mainframe, and today more than a third of IBM mainframe clients are running Linux,” said Tom Rosamilia, senior vice president, IBM Systems.
“We are deepening our commitment to the open source community by combining the best of the open world with the most advanced system in the world in order to help clients embrace new mobile and hybrid cloud workloads. Building on the success of Linux on the mainframe, we continue to push the limits beyond the capabilities of commodity servers that are not designed for security and performance at extreme scale.”
Fraud detection — in real time
The system is capable of analysing transactions in “real time” and can be used to help prevent fraud as it is occurring.
A key part of IBM’s latest mainframe code contributions are IT predictive analytics that constantly monitor for unusual system behaviour.
The code can be used by developers to build similar sense and respond resiliency capabilities on other systems.
The contributions will help fuel the new “Open Mainframe Project,” formed by the Linux Foundation.
These latest products from IBM can scale up to 8,000 virtual machines or hundreds of thousands of containers – currently the most of any single Linux system.
The humongous MongoDB factor
In line with this news, MongoDB says it has deepened its partnership with IBM, announcing plans to offer support for its own products on IBM z Systems mainframe.
“MongoDB has become the world’s fastest growing database by enabling organisations to effectively capitalise on the power of modern applications and data to gain a competitive advantage,” said Dev Ittycheria, president and CEO, MongoDB.
“For years, the world’s largest companies have run critical applications on IBM mainframes. Our move to support IBM z Systems is a testament to our commitment to our users and customers to make MongoDB available on all major platforms. With this announcement, organisations can now build and run modern, mission-critical applications on proven mainframe technologies.”
MongoDB confirms that it is working closely with IBM to engineer MongoDB Enterprise Server to be optimised for Linux on z Systems and the new LinuxONE Systems.
As part of the agreement, MongoDB’s global support and engineering organisation will continue to collaborate with IBM to ensure business continuity for our joint customers running MongoDB on IBM z Systems.
Researchers from IBM’s X-Force security division say they have discovered a number of high-severity vulnerabilities affecting more than 55% of Android devices.
These vulnerabilities, both in the Android platform itself and in third-party Android Software Development Kits (SDKs), can potentially be exploited by hackers to give a malicious app with no privileges the ability to gain unauthorised access to information and other functionalities on the device.
Ponemon — gotta survey ’em all
Those who give credence to Ponemon research studies may find some interest in suggestions from the organisation that firms spend an average of $34 million annually on mobile app development, but only 5.5% of this spend is dedicated to ‘in app’ security.
It is claimed that 50% of those companies devoted no budget at all to securing the apps they developed.
The vulnerabilities revealed by IBM centre on the Android platform’s OpenSSLX509Certificate class, which is one of many classes developers leverage to add functionality to apps such as network access and the phone’s camera – much like the news from last week’s Black Hat conference, which underlined webcams as highly vulnerable.
What can happen?
By introducing malware into the communication channel between the apps and phone functionalities, attackers are able to:
· Take over an application on a user’s device and perform actions on behalf of the victim. (i.e. take photos, share content, send messages, etc – depending on the app)
· Replace real apps with fake ones filled with malware that can collect personal information. (i.e. replace Facebook with a fake version that collects your information on the social network)
· Steal sensitive information from the attacked app. (i.e. steal confidential banking information from a banking app or login credentials for different accounts)
Google and the vulnerable SDKs have been patched; however, IBM Security recommends that all users make sure they have downloaded the latest version of Android and that developers have updated their SDKs.
Animation house Pixar will now open source its Universal Scene Description software.
The company behind Finding Nemo, Toy Story, Monsters, Inc., Cars and The Incredibles has made this move to embrace the more open methods by which animation data is processed in the current age.
Universal Scene Description (USD) software helps handle the creation and ongoing maintenance of extremely big graphics-intensive scenes.
Pixar, founded in 1983, has been working with this software technique for more than 20 years.
“One of the key aspects of Pixar’s pipeline is the ability for hundreds of artists to operate simultaneously on the same collections of assets in different contexts, using separate ‘layers’ of data that are composited together at various production stages,” commented Guido Quaroni, VP of software R&D at Pixar.
ACRONYM NOTE: DCC stands for Digital Content Creation.
“USD generalises these concepts in an attempt to make them available to any DCC application,” he added.
This is a guest post for the Computer Weekly Developer Network blog by Bob Wiederhold, CEO Couchbase.
Couchbase is a company known for its open source distributed NoSQL document-oriented database, which is optimised for interactive applications.
Wiederhold writes in light of recent NoSQL industry benchmarks comparing flagship products, which, it has to be said, have been met with contrasting opinions.
So how do we know what to believe?
What benchmarks should be like
Benchmark tests may raise questions; however, it’s essential that each report is open, reproducible and not over-engineered to favour one solution over another.
Under these circumstances, competitive benchmarks are designed to provide valuable information to developers and ops engineers who are evaluating various tools and solutions.
More NoSQL usage necessitates more testing
The release of an increasing number of benchmarks isn’t surprising. During early phases of NoSQL adoption, benchmarks were somewhat less important because most users were experimenting with NoSQL or using it on lightweight applications that operated at small scale.
Since 2013, we’ve entered a different phase of NoSQL adoption, where appetite has grown, and organisations are deploying NoSQL for mission-critical applications operating at significant scale.
The use of benchmarks is increasing because performance at scale is critical for most of these applications.
Developers and ops engineers need to know which products perform best for their specific use cases and workloads.
Different benchmarks: different use cases
It’s entirely legitimate for benchmarks to focus on use cases and workloads that align with the target market and ‘sweet spots’ of the vendor’s products.
(CWDN Ed — this is Wiederhold’s ‘money shot’ killer line playing for validation, isn’t it? The point is (relatively) impartially made and at least he is being candid enough to say it out loud. Doesn’t (quite) make it alright, but nearly. Let’s allow the gentleman to finish…)
That doesn’t make them invalid, it just points out the importance of highlighting what those use cases and workloads are so developers and ops engineers can assess whether the benchmark is applicable to their specific situations.
Keeping it fair
To be useful, however, benchmarks need to be fair, transparent and open. Otherwise, they’re of little value to anyone, let alone the developers and engineers who depend on them to make an informed decision.
Vendors may complain that a benchmark isn’t fair because it’s focused on a use case and workload that’s not a sweet spot for them.
Those aren’t valid complaints. On the other hand, benchmarks need to make every effort to achieve an apples-to-apples comparison and, for example, use the most recent software versions.
These comparisons can be difficult, because the architectures and operational setups of each product are so different, but significant effort should be made to achieve this. Using the right version of software should be very easy to achieve and should promptly be fixed when it isn’t.
Keeping it transparent
Transparency implies at least two things:
(1) Clearly communicating the use cases and workloads that are being measured, and
(2) making the benchmarks open so others can reproduce them, verify the results, and modify them to align more closely with the specific use cases they care about.
A sign of NoSQL growth and adoption?
Vendors will continue to sponsor and publish benchmarks, and they’ll continue to gear them toward the use cases the vendor supports best.
All of this is just another indicator of the rising importance of NoSQL, which is growing fast. According to a recent report from Allied Market Research, the global NoSQL market is expected to reach $4.2 billion by 2020 – an annual growth rate of 35.1% from 2014 through 2020. When done fairly and transparently, competitive benchmarks can help enterprises choose the right product for their particular set of requirements.
Couchbase is very focused on supporting enterprise-class mission-critical applications that operate at significant scale with mixed read/write workloads. As a result, our benchmarks run on clusters with many servers and reflect those workloads.
We have recently seen some benchmarks focused on supporting applications that operate at much smaller scale and therefore tested with a small amount of data running on a single server.
Both are valid, but for completely different situations and users.