Open Source Insider


November 10, 2019  7:58 PM

NearForm clocks in with hackable open source JavaScript AI smartwatch

Adrian Bridgwater

The Irish county town of Kilkenny is known for its medieval buildings and castle, its rich history of brewing, its distinctive black marble and as the home of White House architect James Hoban.

In more recent times, Kilkenny has become known as the home of the NodeConf EU conference, a coming together of Node.js specialists who all gravitate towards this open source cross-platform JavaScript runtime environment that executes JavaScript code outside of a browser.

This year’s event saw NearForm Research and Espruino surprise delegates by giving out something better than plain old lanyards and name tags — the two companies came together to offer an arguably rather more exciting Machine Learning (ML)-driven smartwatch to act as attendees’ conference badges.

Bangle.js is said to be the first open source JavaScript (JS) smartwatch to be powered by Machine Learning via Google’s TensorFlow Lite. The hope is that it marks a step towards the mainstream adoption of JS and ML in low-cost consumer electronics.

Developers will be able to create their own AI applications for the Bangle.js device.

It comes pre-loaded with features and apps including: GPS, compass, heart rate monitor, maps, games and gesture-control of PC applications over Bluetooth.

“Bangle.js is not just about a single device, codebase or company. I believe it has the potential to bootstrap a community-driven open health platform where anyone can build or use any compatible device and everyone owns their own data. Machine Learning is a critical aspect of health technology and we’re so pleased to be further involved in the TensorFlow open source project,” said Conor O’Neill, chief product officer for NearForm.

County Waterford headquartered NearForm is known for its professional technology consultancy work with both local Irish and international companies spanning a range of industries. “Everything we do emanates from open source,” insists the company.

Watch-makers

The team took a reasonably powerful off-the-shelf smartwatch and ported the Espruino code to the device so that all of its sensors were accessible to JavaScript programmers.

This first Bangle.js device can also be easily disassembled with just a screwdriver for ease of fixing and replacing its parts.

The teams also ported the Micro version of Google’s TensorFlow Lite to the watch to give it Machine Learning capabilities with input from Google’s TensorFlow community. They then designed an ML gesture detection algorithm which is built into every watch and enables the user to control applications, including PowerPoint, with hand gestures.
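For readers curious how such a gesture model comes about, the sketch below shows, in broad strokes, how a tiny accelerometer-window classifier might be trained in Python and converted to TensorFlow Lite for use with TFLite Micro. It is illustrative only — the window size, gesture labels and random training data are placeholders, not NearForm’s actual pipeline.

```python
# Illustrative only: a tiny gesture classifier trained on accelerometer
# windows and converted to a TensorFlow Lite flatbuffer, broadly the kind
# of model TFLite Micro can interpret on a watch-class microcontroller.
import numpy as np
import tensorflow as tf

WINDOW = 50        # accelerometer samples per gesture window (assumption)
CHANNELS = 3       # x, y, z axes
NUM_GESTURES = 4   # e.g. swipe-left, swipe-right, tap, none (hypothetical labels)

# Placeholder training data; a real pipeline would use recorded gestures.
x_train = np.random.rand(512, WINDOW, CHANNELS).astype("float32")
y_train = np.random.randint(0, NUM_GESTURES, size=(512,))

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(8, 5, activation="relu", input_shape=(WINDOW, CHANNELS)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(NUM_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=3, verbose=0)

# Convert to a .tflite flatbuffer, which TFLite Micro runs on-device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
open("gesture.tflite", "wb").write(converter.convert())
```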

The companies explain that even ‘lapsed’ and non-programmers can also interact with Bangle.js using Blockly or low-code Node-RED.

NodeConf EU 2019. Kilkenny, Ireland.

 

October 28, 2019  2:27 PM

Tibco adds extra sauce to open source

Adrian Bridgwater

Tibco is focused on open source and Agile this month.

The integration and analytics specialist has upped the toolset in a group of its products, with a key focus on agility for cloud-native deployments.

The company says it is putting AI (who isn’t?) into its enhancements to the TIBCO Connected Intelligence platform.

Matt Quinn, chief operating officer at Tibco, says that his firm’s vision is that customers should use Tibco as their ‘data foundation’.

In terms of cloud-native, Tibco’s API management software TIBCO Cloud Mashery is available in cloud-native deployments in public clouds, private clouds and on-premises. The company’s Mashery Local Developer Portal is now also available as a fully cloud-native deployment.

Quinn says that IT teams are faced with the increasing complexity of metadata governance — and the firm’s Cloud Metadata tool runs Tibco EBX master data management to address this.

NOTE: Metadata governance is used most often in relation to digital media, but older forms of metadata are catalogues, dictionaries and taxonomies.

Extra open source sauce

The company also continues to develop capabilities to support open source and is weaving more open offerings into its product mix.

The introduction of Tibco Messaging Manager 1.0.0, including an Apache Kafka Management Toolkit, provides a predictive and auto-completing command-line interface (CLI), which aims to simplify the setup and management of Apache Kafka. As readers will know, Kafka is used for building real-time data pipelines and high-throughput low-latency distributed streaming applications. Tibco Messaging components feature a common management plugin, use a common interface and allow for easier continuous integration and deployment. Tibco Messaging Manager extends the company’s support for Apache Kafka and enables the Tibco Connected Intelligence Cloud platform to take advantage of Kafka for integration, event processing and real-time messaging with historical context.
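Tibco’s Messaging Manager CLI is the company’s own tooling, but the underlying Kafka pipeline pattern it manages can be illustrated with a minimal producer/consumer sketch. The example below uses the community kafka-python client; the broker address and topic name are placeholders, and none of this is Tibco-specific code.

```python
# A minimal Kafka pipeline sketch using the kafka-python client; broker
# address and topic name are placeholders, illustrating the generic Kafka
# pattern rather than Tibco's own toolkit.
import json
from kafka import KafkaProducer, KafkaConsumer

TOPIC = "events"  # hypothetical topic

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"sensor": "unit-7", "reading": 21.4})
producer.flush()

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.value)   # real-time processing would happen here
    break
```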

“In addition, Tibco now offers support for IoT-based machine-to-machine communication via OPC Foundation Unified Architecture in Tibco Streaming software. In support of open-source Project Flogo, Tibco announces the Project Flogo Streaming User Interface. Integrating with Tibco’s existing solutions, the Project Flogo Streaming User Interface lets developers build resource-efficient, smarter real-time streaming processing apps at the edge or in the cloud, improving the productivity of expert IT resources,” noted the company, in a press statement.

Also here, Tibco’s AutoML extension for its Data Science software, delivered via Tibco LABS, facilitates the development and selection of AI workflows. In addition, new Process Mining capabilities via Tibco LABS enable users to discover, improve and predict process behaviour from data event logs produced by operational systems.

Lastly, to further strengthen Tibco’s contribution to the open source community, the company says it has introduced an open source specification in the shape of CatalystML to capture data transformations and consume machine learning artifacts in real time for high-throughput applications.

Image Source: Tibco

 


October 23, 2019  4:28 PM

Apache Tinkerpop rocks DataStax support for Gremlin

Adrian Bridgwater

DataStax offers a commercially supported ‘enterprise-robust’ database built on open source Apache Cassandra.

As such, DataStax has told Computer Weekly Open Source Insider that it is actively engaged with supporting a variety of live, working, growing open source projects.

Among those projects is Apache Tinkerpop… and inside Tinkerpop is Gremlin.

What is Tinkerpop?

Apache TinkerPop is a graph computing framework for both graph databases that work with OnLine Transactional Processing (OLTP) and graph analytic systems that work with OnLine Analytical Processing (OLAP).

For extra clarification, TinkerPop is an open source, vendor-agnostic graph computing framework distributed under the commercially friendly Apache 2.0 license.

According to Apache, “When a data system is TinkerPop-enabled, its users are able to model their domain as a graph and analyse that graph using the Gremlin graph traversal language. Furthermore, all TinkerPop-enabled systems integrate with one another allowing them to easily expand their offerings as well as allowing users to choose the appropriate graph technology for their application.”

TinkerPop supports in-memory graph databases through to distributed computing databases that can run in parallel across hundreds of nodes, so you can scale up as much as your data set requires you to.

What is Gremlin?

Gremlin is the most common query language used for graph databases – it is used across multiple graph technologies, so it provides a common framework for working with graph data.

In terms of use, Gremlin works for both OLTP-based graph databases as well as OLAP-based graph processors — and its automata (abstract machine) and functional language foundations enable Gremlin to naturally support imperative and declarative querying.

Gremlin is a functional open source graph traversal language and it works like Java in that it is composed of a virtual machine and an instruction set.
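To give a flavour of what a Gremlin traversal looks like in practice, here is a minimal sketch using the gremlinpython client against any TinkerPop-enabled Gremlin Server. The endpoint address and the ‘person’/‘knows’ schema are illustrative assumptions, not tied to any particular DataStax product.

```python
# A minimal Gremlin traversal sketch using the gremlinpython client; the
# server address and the 'person'/'knows' schema are placeholders.
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Connect to any TinkerPop-enabled graph exposing a Gremlin Server endpoint.
connection = DriverRemoteConnection("ws://localhost:8182/gremlin", "g")
g = traversal().withRemote(connection)

# "Who does Alice know?" expressed as a graph traversal.
names = (g.V()
          .has("person", "name", "Alice")
          .out("knows")
          .values("name")
          .toList())
print(names)

connection.close()
```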

DataStax on Gremlin

DataStax says that getting used to Gremlin can make it easier to understand how graphs work and how to query data.

According to an official company statement, “At DataStax, we support this project wholeheartedly – for example, the Gremlin project chair works at DataStax and the DataStax team contributes the vast majority of the commits. We will continue to support this project as it has organically grown to be the most widely adopted traversal framework for the whole community around graph.”

DataStax offers a free DataStax Academy course entitled Getting Started with TinkerPop and Gremlin at this link. The company also notes that in order to be familiar with Gremlin traversal syntax and techniques, developers need to understand how the language works… consequently, DataStax has provided a free Gremlin recipes series to offer some insight into Gremlin internals.


October 16, 2019  4:24 PM

Ripple invests in Swedish open source start-up Towo Labs

Adrian Bridgwater

Towo Labs, a Swedish startup aimed at simplifying ‘crypto self-custody’, has announced an investment from Xpring, Ripple’s developer initiative.

Xpring is described as an initiative by Ripple that will invest in, incubate, acquire and provide grants to companies and projects run by entrepreneurs.

Ripple Labs Inc. itself develops the Ripple payment protocol and exchange network.

According to Crypto Digest News, a [cryptocurrency] custodian holds and keeps assets safe — its goal is to minimise the risk of loss or theft, and it usually also provides additional services like account administration.

Towo Labs is now focused on the development of hardware wallet firmware with support for all XRP Ledger (XRPL) transaction types and a trustless, non-custodial web interface to the XRP Ledger.

NOTE: The XRP Ledger is a decentralised cryptographic ledger powered by a network of peer-to-peer servers to process XRP digital assets — Towo Labs founder Markus Alvila is the creator of the existing XRP Toolkit.

At the outset, Towo Labs will focus on developing hardware wallet firmware with full XRPL support for Ledger Nano S, Ledger Nano X and Trezor T with the aim of making it easier to securely sign transactions.

Open source contributions

All open source code contributions will be subject to the normal code and security reviews of the involved repository maintainers.

Today’s existing firmware only supports XRP payment transactions, a limitation that in some cases blocks further XRPL and Interledger innovation. The new firmware, however, will support the signing of cross-currency payments, trust lines, escrows, orders, payment channels, account settings and so forth.

“With full XRP support among leading hardware wallets, transactions can be prepared from untrusted devices and applications (for example, over the web) before being reviewed and securely signed inside a hardware wallet,” noted the company, in a press statement.
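To make that prepare-then-sign flow concrete, the sketch below assembles an unsigned XRPL payment as plain JSON on an untrusted device; review and signing would then happen inside the hardware wallet. The addresses, amount, fee and sequence values are placeholders, and this is not Towo Labs’ actual code.

```python
# A sketch of the prepare-then-sign flow: an unsigned XRPL payment is
# assembled as plain JSON on an untrusted device, then reviewed and signed
# inside a hardware wallet. All values below are placeholders.
import json

unsigned_tx = {
    "TransactionType": "Payment",
    "Account": "rSenderAddressXXXXXXXXXXXXXXXXXXXX",      # placeholder
    "Destination": "rReceiverAddressXXXXXXXXXXXXXXXXXX",  # placeholder
    # A cross-currency amount: deliver issued USD rather than plain XRP.
    "Amount": {"currency": "USD",
               "issuer": "rIssuerAddressXXXXXXXXXXXXXXXXXXX",
               "value": "25"},
    "Fee": "12",        # in drops
    "Sequence": 1234,   # placeholder account sequence
}

# This JSON blob is what gets handed to the hardware wallet for review and
# signing; the private key never leaves the device.
print(json.dumps(unsigned_tx, indent=2))
```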

This added support also enables new applications like trustless, non-custodial trading interfaces to the XRPL decentralized exchange, improved self-custody for DeFi applications and hybrid multi-signing schemes requiring signatures from both hardware and software wallets.

The coming updates to the XRP Toolkit seek to achieve a trustless, non-custodial XRP Ledger web interface, where you can prepare and submit any transaction type from any device, signing using the wallet of your choice or one generated with the XRP Toolkit.

In addition to the leading hardware wallets, XRPL Labs’ signing platform Xumm will also be integrated as a signing option.

 


October 12, 2019  12:02 PM

What to expect from Scylla Summit 2019

Adrian Bridgwater

The Computer Weekly Developer Network and Open Source Insider team are big fans of Greek classics, San Francisco clam chowder, shared-nothing architectures and open source-centric real-time big data databases.

Luckily then, we’re off to Scylla Summit 2019, staged in San Francisco on November 5 and 6.

Scylla (the company) takes its name directly from Scylla [pronounced: sill-la], a sea monster of Greek mythology whose mission was to haunt and torment the rocks of a narrow strait of water opposite the Charybdis whirlpool.

Outside of Greek mythology, Scylla is an open source distributed NoSQL data store that uses a sharded design on each node, meaning each CPU core handles a different subset of data.

TECHNICAL NOTE: Sharding is a type of database partitioning that separates very large databases into smaller, faster, more easily managed parts called data shards. Technically speaking, sharding is a synonym for horizontal partitioning.
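As a rough illustration of the idea, the sketch below routes keys to shards by hashing — the same principle whether the ‘shard’ is a node in a cluster or, as in Scylla’s design, an individual CPU core. The shard count and data are invented for the example.

```python
# A minimal sketch of hash-based sharding: each key is routed to one shard
# (a node, or a CPU core in a per-core design) by hashing the key.
import hashlib

NUM_SHARDS = 4
shards = {i: {} for i in range(NUM_SHARDS)}   # stand-ins for nodes/cores

def shard_for(key: str) -> int:
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def put(key: str, value: str) -> None:
    shards[shard_for(key)][key] = value

def get(key: str) -> str:
    return shards[shard_for(key)][key]

put("user:42", "alice")
put("user:99", "bob")
print(get("user:42"), "lives on shard", shard_for("user:42"))
```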

Scylla is fully compatible with Apache Cassandra and embraces a shared-nothing approach that increases throughput and storage capacity as much as 10X that of Cassandra itself.

Yay, for users

Scylla Summit is heavily focused on users and use cases. As such, the 2019 Scylla User Award Categories will look to recognise the most innovative use of Scylla, the biggest node reduction with Scylla and the best Scylla cloud use case.

Other commendations in the company’s awards will include: best real-time use case; biggest big data use case; Scylla community member of the year; best use of Scylla with Kafka; best use of Scylla with Spark; and best use of Scylla with a graph database.

“There’s nothing we enjoy more than seeing the creative and impressive things our users are doing with Scylla. With that in mind, we presented our Scylla User Awards at last week’s Scylla Summit, where we brought the winners up on stage for a big round of applause and bestowed them with commemorative trophies,” wrote Scylla’s Bob Dever in his capacity as VP of marketing.

According to the company’s official event statement, Scylla Summit features three days of customer use cases, training sessions, product demonstrations and news from ScyllaDB.

Best & brightest

Developers share best practices, product managers learn how to reduce the cost and complexity of their infrastructure and entrepreneurs connect with the best and brightest in the community.

Scylla CEO Dor Laor insists that this year’s Scylla Summit is shaping up well and he notes that the company will roll out first-of-their-kind features.

“We will announce our lightweight transactions and CDC capabilities, dig into our new Scylla Alternator API for DynamoDB users and hear from some of the world’s most innovative companies about how they’re putting Scylla to work. We’ll also unveil results of a major new performance test. If you thought it was something when we hit one million OPS per node — you haven’t seen anything yet. As always, Scylla Summit will be a great place to get inspired, learn from your colleagues and keep up to date with the latest advances in big data and NoSQL,” said Laor.

TECHNICAL NOTE: Laor mentions CDC in the above quote… so, in the world of databases, Change Data Capture (CDC) is a set of software design patterns used to determine (and track) which data has changed, so that action can be taken on the changed data.
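A toy illustration of the CDC idea: diff two snapshots of a table and emit insert/update/delete events for downstream consumers to act on. Real CDC implementations (Scylla’s included) read the database’s own write path or log rather than comparing snapshots; the data here is invented.

```python
# A minimal sketch of change data capture: compare two snapshots and emit
# insert/update/delete events. Real systems read the database's own log.
def capture_changes(before: dict, after: dict):
    events = []
    for key, value in after.items():
        if key not in before:
            events.append(("insert", key, value))
        elif before[key] != value:
            events.append(("update", key, value))
    for key in before:
        if key not in after:
            events.append(("delete", key, None))
    return events

snapshot_t0 = {"order-1": "pending", "order-2": "shipped"}
snapshot_t1 = {"order-1": "paid", "order-3": "pending"}

for event in capture_changes(snapshot_t0, snapshot_t1):
    print(event)   # e.g. ('update', 'order-1', 'paid')
```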

“At past events, people have told us that the conversations they have at Scylla Summit have helped shape their big data strategies,” said Dor Laor, CEO of ScyllaDB. “This is a chance to share ideas with the smartest, most plugged-in members of the NoSQL and big data communities. It’s completely relaxed, massively useful and lots of fun.”

The company is also likely to announce a number of new customers and spell out its wider roadmap.

Scylla tweets at @ScyllaDB, the event hashtag is #ScyllaSummit and a list of speakers can be found here.


October 10, 2019  12:58 PM

From Russia with OLAP: Percona uses ClickHouse analytics

Adrian Bridgwater

In our voyages from software conference to software conference, we technology journalists often find stories that are developing and worth sharing.

At Percona Live Europe last week, one such example came up around the open source scene that is developing in Russia and how one of its projects is now starting to open up to international use.

Think about Russia typically… and you may not automatically think about open source software. However, the country has a strong software developer community that is looking to expand the number of projects that are used internationally.

An example of this is ClickHouse, an open source data warehouse project found on GitHub here. The technology was originally developed at Yandex, the Russian equivalent of Google.

As defined on TechTarget: a data warehouse is a ‘federated repository’ for all the data collected by an enterprise’s various operational systems – and the practice of data warehousing itself puts emphasis on the ‘capture’ of data from different sources for access and analysis.

ClickHouse’s performance is claimed to exceed that of comparable column-oriented database management systems (DBMS) currently available. As such, it processes hundreds of millions (to more than a billion) of rows — and tens of gigabytes of data — per single server, per second.

Cluster luck

According to its development team, ClickHouse allows users to add servers to their clusters when necessary without investing time or money into any additional DBMS modification.

According to the development team notes, “ClickHouse processes typical analytical queries two to three orders of magnitude faster than traditional row-oriented systems with the same available I/O throughput. The system’s columnar storage format allows fitting more hot data in RAM, which leads to shorter response times. ClickHouse is CPU efficient because of its vectorised query execution involving relevant processor instructions and runtime code generation.”
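To make that concrete, here is a sketch of the kind of analytical query involved, issued through the community clickhouse-driver Python client. The host, table and column names are placeholders.

```python
# A sketch of a typical analytical query against ClickHouse using the
# community clickhouse-driver package; host, table and column names are
# placeholders.
from clickhouse_driver import Client

client = Client(host="localhost")

# Columnar storage means only the columns the query touches are scanned:
# here 'event_date' and 'bytes_sent', nothing else.
rows = client.execute(
    """
    SELECT event_date, sum(bytes_sent) AS total_bytes
    FROM access_log
    WHERE event_date >= today() - 7
    GROUP BY event_date
    ORDER BY event_date
    """
)
for event_date, total_bytes in rows:
    print(event_date, total_bytes)
```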

The central go-to-market proposition here is that by minimising data transfers for most types of queries, ClickHouse enables companies to manage their data and create reports without using specialised networks that are aimed at high-performance computing.

Speedy OLAP

The technology, which is essentially aligned for Online Analytical Processing (OLAP), uses all available hardware to process each query as fast as possible, which amounts to a speed of more than 2 terabytes per second.

The project is starting to expand and get more adopters. As part of its monitoring product launch at the event, database monitoring and management company Percona announced that it will use ClickHouse for load testing and to monitor accessibility and other performance KPIs. Percona’s leadership team originally hails from Russia, so there are a lot of relationships there as well.

Alongside Percona, Altinity is also looking to expand use of ClickHouse over time. Robert Hodges, CEO at Altinity, describes the company as a provider of the highest level of ClickHouse expertise on the market for deploying and running demanding analytic applications. The company also provides software to manage ClickHouse in Kubernetes, cloud and bare-metal environments.

Explaining how his firm has developed alongside the core ClickHouse technology proposition, Hodges says that the enterprise version of ClickHouse can run on a laptop, yet be ready to scale up for significant enterprise workloads.

“ClickHouse is very efficient at processing and handling time-series data… and it has SQL features which are great at monitoring specific cloud issues such as ‘last point query’ [a way of looking at the last thing that happened in a cloud application]. Say, for example, you had a bunch of Virtual Machines (VMs) running in the cloud and you wanted to know the CPU load on them, ClickHouse is good at getting that measure to you. It’s good to know the ‘current state’ of VMs, because from that point you can then drill in and look at the load over (for example) the last two weeks and so get a sharper idea of performance status,” said Altinity’s Hodges.

Hodges also explains that Percona is interested in ClickHouse because it is so similar to MySQL – it has ‘surface similarities’ and has good abilities to load data into it and pull data down from it.

This project is an example of how open source communities can expand with new approaches to existing problems.

Altinity is a service and software provider for ClickHouse.

 

 


October 8, 2019  1:01 PM

IFS drives ‘open’ into app suite, delivers new native API model

Adrian Bridgwater

Enterprise applications company IFS has used its annual user conference to detail work carried out on its major application suite.

The company wants to put the ‘open’ in service management, enterprise resource management (ERP) and enterprise asset management (EAM).

The company says it has ‘evolved’ its technology foundation with 15,000+ native APIs to open paths to extensibility, integration and flexibility… because that’s what APIs do, obviously.

OpenAPI Initiative

Of some significance here, IFS has noted that it is a new member of the OpenAPI Initiative (OAI), a consortium of experts who champion standardization on how REST APIs are described.

According to OAI official statements, the group has an open governance structure under the Linux Foundation – and the OAI is focused on creating, evolving and promoting a vendor-neutral description format.

IFS insists that it is promoting open applications to allow freedom to develop and connect data sources to drive value in a way that is ‘meaningful’ to enterprises.

“By prioritising open applications, IFS is upping the ante in terms of innovation and customer-centricity while decisively turning away from platform coercion and lock-in,” noted the company, in a press statement.

IFS offers native OData-based RESTful APIs across its entire suite of ERP, EAM and service management products, to make connecting, extending or integrating into the IFS core quicker and easier.

OData-based RESTful APIs are defined on Stack Overflow as, “A special kind of REST where we can query data uniformly from a URL. REST stands for REpresentational State Transfer which is a resource-based architectural style. OData is a web based protocol that defines a set of best practices for building and consuming RESTful web services.”
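As a rough sketch of what querying an OData-style REST API looks like, the snippet below uses standard OData query options ($filter, $select, $top) against a hypothetical service root — the URL, entity set and field names are assumptions, not IFS’s actual API surface.

```python
# A sketch of querying an OData-style REST endpoint; the base URL, entity
# set and field names are placeholders, not IFS's actual API.
import requests

BASE_URL = "https://example.com/odata/v1"   # hypothetical service root

# OData conventions: $filter, $select and $top are passed as query options.
response = requests.get(
    f"{BASE_URL}/WorkOrders",
    params={
        "$filter": "Status eq 'Open'",
        "$select": "OrderNo,Description,DueDate",
        "$top": 10,
    },
    headers={"Accept": "application/json"},
)
response.raise_for_status()

# OData responses wrap the result set in a 'value' array.
for work_order in response.json().get("value", []):
    print(work_order["OrderNo"], work_order["Description"])
```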

The APIs at IFS have been engineered in tandem with IFS’s new ‘Aurena’ user experience brand, which is now available across the full breadth of IFS applications.

“With this approach, IFS is giving its customers 15,000 new ways to flex,” IFS CEO Darren Roos said. “It goes without saying that, as excited as we are about reaching this milestone, the driving force behind our deliveries is our unwavering commitment to offer choice and value to our customers. Providing ‘open’ solutions is a critical factor in making good on this promise. The quality, pace, and focus of our product development speaks to a business that is outperforming the legacy vendors in the enterprise software space.”

Aurena user experience

The IFS Aurena user experience offering has now been extended across the entire IFS Applications suite for Service Management, ERP and EAM. It uses the same set of APIs, which are now generally available, and provides a browser-based user experience optimised for each role and user type, with a focus on employee engagement and productivity.

“IFS Aurena provides customers with a truly responsive design, allowing the entire suite to automatically adapt to different form factors as well as capabilities to design and build truly native applications targeted across iOS, Android and Windows, with support for offline scenarios and device-specific capabilities such as GPS and camera,” noted IFS chief product officer Christian Pedersen.

Among the more significant industry updates here is support for International Traffic in Arms Regulations (ITAR) compliance initiatives in the cloud.

Customers who have ITAR obligations, such as those operating in or trading with the U.S. aerospace, defence or government sectors, can deploy and use IFS software to support their ITAR compliant business needs, in an independently validated environment hosted in the Microsoft Azure Government Cloud, fully managed by IFS.


October 6, 2019  3:31 AM

DataStax offers bidirectional data dexterity for Apache Kafka

Adrian Bridgwater

DataStax has opened up ‘early access’ to its DataStax Change Data Capture (CDC) Connector for Apache Kafka, the open source stream-processing software platform (stream processing being the handling of data continuously as it arrives, rather than in batches).

As a company, DataStax offers a commercially supported ‘enterprise-robust’ database built on open source Apache Cassandra.

Stream processing is all about speed and cadence, so the DataStax CDC Connector for Apache Kafka gives developers ‘bidirectional data movement’ between DataStax, Cassandra and Kafka clusters.

In live deployment, CDC is designed to capture and forward insert, update and delete activity applied to tables (column families).

Bidirectional dexterity

So what does bidirectional data dexterity bring forward? It makes it easier for developers to build globally synchronised microservices.

Apache Cassandra already offers a core cross-datacentre replication capability for data movement between microservices. This then, is an augmentation of that replication power.

The connector enables bidirectional data flow between Cassandra and Kafka, ensuring that data committed to a developer’s chosen system of record database can be forwarded to microservices through Kafka.
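A sketch of the system-of-record side of that flow is shown below: a microservice writes to Cassandra using the Python driver, and a CDC connector (configured separately) would forward the mutation onto a Kafka topic for other services to consume. The keyspace, table and contact point are placeholders, and this is not the DataStax connector itself.

```python
# A sketch of the system-of-record side of the flow: a microservice writes
# to Cassandra; a CDC connector (configured separately) would forward the
# mutation to a Kafka topic. Keyspace, table and contact point are
# placeholders, and this is not the DataStax connector itself.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])          # placeholder contact point
session = cluster.connect("orders_ks")    # hypothetical keyspace

session.execute(
    """
    INSERT INTO orders (order_id, customer, status)
    VALUES (%s, %s, %s)
    """,
    ("o-1001", "alice", "PAID"),
)
# With CDC enabled on the 'orders' table, this insert would be captured and
# published to Kafka, where downstream microservices consume it.
cluster.shutdown()
```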

Developers are invited to use the early release of the CDC source connector in DataStax Labs in conjunction with any Kafka offering and provide feedback before the product is finalised.

Kathryn Erickson, senior director of strategic partnerships at DataStax, says that her firm surveys its customers every year and, currently, more than 60% of respondents are using Kafka with DataStax built on Cassandra.

“Any time you see a modern architecture design promoted, whether it be SMACK, Lambda, or general microservices, you see Cassandra and Kafka. This CDC connector is an important technical achievement for our customers. They can now work with the utmost confidence that they’re getting high quality and proven high performance with Cassandra and Kafka,” said Erickson.

Gold plated connectors

As an additional note, the DataStax Apache Kafka Connector recently earned Verified Gold status in Confluent’s Verified Integrations Programme.

This distinction is supposed to assure that connectors meet the technical and functional requirements of the Apache Kafka Connect API – an open source component of Kafka that provides the framework for connecting Kafka with external systems.

According to DataStax, “By adhering to the Connect API, customers can expect a better user experience, scalability and integration with the Confluent Platform. The initial DataStax Apache Kafka Connector enables developers to capture data from Kafka and store it in DataStax and Cassandra for further processing and management, offering customers high-throughput rates.”

Why it all matters

The questions you should perhaps now be asking are: does any of this matter, has DataStax done a good thing… and is stream processing a hugely important part of the still-emerging world of Complex Event Processing (CEP) and the allied worlds of streaming analytics and modern fast-moving cloud native infrastructures?

The answer is: look at your smartphone, laptop, nearest kiosk computer, IoT sensor or (if you happen to be sat in a datacentre) mainframe or other machine — the reality of modern IT is a world with a perpetual stream of data events all happening in real time. If we sit back and store data through aggregation and batch processes, then we will miss the opportunity to spot trends, act and apply forward-thinking AI & ML based processes upon our data workloads.

DataStax obviously knows this and the company appears to be pushing Kafka advancements and augmentations forward to serve the needs of the always-on data workloads that we’re now working with.

Developers are able to test the early release of the DataStax CDC Connector for Apache Kafka immediately and can access the connector here.

 


October 1, 2019  4:05 PM

Percona details ‘state’ of open source data management

Adrian Bridgwater

Open source database management and monitoring services company Percona has published its state of open source data management software survey for 2019.

Surveys are surveys: they are generally custom-constructed to be self-serving in one sense or another, conveying a message in their ‘findings’ that the commissioning body (or, in this case, company) wants to table to media, customers, partners and other related bodies.

This central truth being so, should we give any credence to Percona’s latest market assessment?

Percona is an open source database software and services specialist that now offers the latest version of its Percona Monitoring and Management 2 product (aka PMM2) – this tool provides ‘query analytics’ for database administrators and sysadmins to identify and solve performance bottlenecks, thus preventing scaling issues and potential outages.

Given its position of ‘overseeing database’ operations and provision of database monitoring and management services, it is perhaps permissible to allow Percona to survey the open source data management market.

The company reminds us that there is no ‘one size fits all’ in terms of database needs… but why is this so?

There is no one size for all (says Percona) because although software vendors have worked hard to add features, users gravitate towards the best database (and best tool) for the job, using the right database for the right application in database environments that can exist either on-premises, in the cloud, or a hybrid of the two.

So therefore, we end up with multi-database environments running on multi-cloud backbones.

Our survey (respondents) said

So who responded?

The US represented the biggest base of respondents happy to share their views on database management challenges, at 26%. However, the remaining 74% are spread widely across the globe, giving us an arguably quite broad mix of global replies.

We might also argue that this breadth demonstrates the diversity and reach of the open source community.

In total, 836 techies responded from 85 countries. The larger the company, the more likely it is to run multiple databases: adoption of a multi-database environment jumps 10-15% at larger companies compared with small ones.

As a quirk in part of the results, Percona notes that it was somewhat surprising how many people use both MySQL and PostgreSQL. The overlap of these two databases in a single environment is much higher than that of MySQL and MongoDB.

According to Percona, “Most survey respondents are well-informed about using open source technology in the cloud and do so. Interestingly, however, these passionate open source evangelists championing cost-effectiveness, flexibility and freedom from vendor lock-in often find themselves tied to cloud vendors with a single solution and large monthly costs.”

As company size grows, it is much more likely that companies are hosting databases in multiple clouds. The larger the organisation, the more complex the hosting environment. Larger organisations have a 10-15% swing in hybrid cloud, private cloud and on-premises.

Big company, complex hosting

Also, we see that the larger the organisation, the more complex the hosting environment… which one might argue is good news for Percona, as it will have more of an opening to attempt to sell its management services.

Percona also notes that AWS continues to dominate the public cloud provider market, with over 50% of respondents using its cloud platform. Google Cloud and Microsoft Azure show similar numbers of respondents using their technologies and obviously offer alternatives to companies resistant to using Amazon, or won’t use Amazon due to a competitive clash.

According to Percona, “The multi-cloud usage for databases is about a third of our respondents, with 41% of larger companies using multi-cloud deployments (close to 10% over smaller companies). Smaller companies are more likely to use Google than Microsoft, but larger companies prefer Microsoft to Google (which could have something to do with startup businesses need for cost-effectiveness and agility).”

Over 25% of respondents are using containers, but not necessarily to run databases. This could be due to some early bias against running databases in containers. Many respondents aren’t aware if they use containers for their databases or not.

Adoption of open source

The top two responses to the question of why organisations adopt open source are the same ones that continue to dominate discussion of its benefits: cost savings (79.4%) and avoiding vendor lock-in (62%). The benefit of having a community also scored highly (over 50%).

There are some interesting differences between management and non-management answers to these questions.

There is an 8% uptick in responses from management on avoiding vendor lock-in. It looks like larger enterprise management and non-management are on the same page about vendor lock-in. In small to medium companies, however, there seems to be a disconnect.

There is a 6% uptick in those looking for additional security.

Those who list vendor lock-in as a critical reason to adopt OSS are on average 10% less likely to buy support from a vendor (potentially viewing support as another form of lock-in). Note that in medium-sized companies, the number jumps up 19% for management being more likely to pay for support versus non-management. In large companies, management “support” for ‘support’ drops 16% and is 4% below non-management.

Overall… there is an industry-wide shift happening in open source. Well, OF COURSE there is… now that Microsoft loves Linux, IBM buys Red Hat and all the other major developments we’ve seen with GitHub, the .Net framework and more besides. What happens next is a bigger shift in terms of the way the open source world is moving and the how commercial open source stands up against its hobbyist programmer roots.

Readers can find the rest of the survey results here.

 


September 18, 2019  7:00 PM

Code One showcases Oracle developer tools & platform

Adrian Bridgwater

Oracle used the Code One section of its Oracle Open World 2019 conference to detail some of the more developer-centric elements of its total platform.

Code One derives its name from JavaOne, the developer conference staged every year by Sun Microsystems until its acquisition by Oracle in early 2010.

Key among the announcements was Oracle Cloud Free Tier, an offering of ‘always-free’ services for data developers to test-run the company’s self-driving Autonomous Database and Oracle Cloud Infrastructure for an unlimited time.

The promise to developers, students, hobbyists (and indeed others) is an option to explore the full functionality of the database and cloud infrastructure layer including Compute Virtual Machines (VMs), use of both block storage and object storage… as well as access to Oracle Load Balancer – a combination that is said to offer all the essentials for developers to build complete applications on Oracle Cloud.

The technology on offer here remains free for as long as it is used.

“We are thrilled to offer Always Free Oracle Autonomous Database and Oracle Cloud Infrastructure,” said Andrew Mendelsohn, executive vice president, Database Server Technologies, Oracle. “This enables the next generation of developers, analysts, and data scientists to learn the latest database and machine learning technologies for developing powerful data-driven applications and analytics on the cloud.”

This initiative enables developers to build applications using any language and framework — they can get started quickly, without waiting for IT to provision resources, and learn new technologies such as artificial intelligence and machine learning.

Keynote zones

In the Oracle Code One keynotes zone, the company detailed what it likes to call ‘intelligent applications’ that use machine learning and data from multiple sources to make predictions and suggestions.

Developers and data scientists were told how they can build cloud native, serverless apps using machine learning algorithms in a transactional-analytical database and cloud infrastructure.

We also heard the story of Oracle’s Todd Sharp, who took his 13-year-old daughter to the hospital for a test, only to find out that she had Type 1 diabetes. He has worked with Oracle on technology to improve her life.

“My 13-year-old daughter was recently diagnosed with diabetes. She is one of 400 million people worldwide who has to live with this disease for the rest of her life — and I am determined to be with her every step of the way. Thanks to technology, I can help her to easily calculate her carbohydrate consumption to help her determine her next insulin dose. This is my attempt as a father and a developer to address a real-world problem with technology, and hopefully inspire others to do the same.”

According to a Forbes write up of his story, Sharp tackled his technology for health challenge using Simple Oracle Data Access (SODA), Oracle REST Data Services (ORDS), Micronaut Data and Helidon (for coding microservices), Oracle Functions to help perform serverless calls and the Oracle Autonomous Database as the core data store.

Also of note was the data science presentation – this session highlighted the essential role of data as the source of inputs, insights, and intelligent actions for numerous digital technologies, including IoT/IIoT, 4D printing, AR/VR, digital twins, blockchain, autonomous data-driven application systems, and learning systems (cities, farms, healthcare).

According to Oracle, “Data science is the methodology and process by which a business culture of experimentation empowers developers to transform data into value. Transforming data-to-value includes insights discovery, informed decision support, and intelligent next-best action.”

Oracle says it wants developers to connect with the company as they go about ‘building the value chain’ that connects data (inputs) to emerging AI technologies (outcomes).

Oracle API Gateway

Aside from the keynote, Oracle developers detailed the company’s work to build Oracle API Gateway.

This is a platform for managing, delivering and securing web Application Programming Interfaces (APIs). It provides integration, acceleration, governance and security for API and SOA-based systems and is available on Windows, Linux and Solaris.

Oracle API Gateway includes features dedicated to looking after identity management, scalability, REST APIs, data enrichment, governance, reporting and the fabulously named act of ‘traffic throttling’ to smooth out spikes.
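Traffic throttling is usually some variant of the token-bucket idea, sketched below: requests are admitted while tokens remain, and tokens refill at a steady rate, which smooths out spikes. The rates here are illustrative and nothing in the snippet reflects Oracle API Gateway’s actual configuration.

```python
# A sketch of the token-bucket idea behind 'traffic throttling': requests are
# admitted only while tokens remain, and tokens refill at a steady rate.
# Rates are illustrative, not Oracle API Gateway settings.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=5, capacity=10)
for i in range(15):
    print(i, "allowed" if bucket.allow() else "throttled")
```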

We know that every company in every vertical now has to reinvent itself as a technology company… and, in a world where even your grandmother knows what an app is, every company has to be a developer-centric software-driven cloud-first company… and that’s pretty much exactly where Oracle has positioned itself. Larry might still love his database, but even he knows that the code is king.

