Open Source Insider


October 12, 2019  12:02 PM

What to expect from Scylla Summit 2019

Adrian Bridgwater

The Computer Weekly Developer Network and Open Source Insider team are big fans of Greek classics, San Francisco clam chowder, shared-nothing architectures and open source-centric real-time big data databases.

Luckily then, we’re off to Scylla Summit 2019, staged in San Francisco on November 5 and 6.

Scylla (the company) takes its name directly from Scylla [pronounced: sill-la], a sea monster of Greek myth that haunted the rocks of a narrow strait of water opposite the Charybdis whirlpool.

Outside of Greek mythology, Scylla is an open source distributed NoSQL data store that uses a sharded design on each node, meaning each CPU core handles a different subset of data.

TECHNICAL NOTE: Sharding is a type of database partitioning that separates very large databases into smaller, faster, more easily managed parts called data shards. Technically speaking, sharding is a synonym for horizontal partitioning.
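
To make that concrete, here is a minimal sketch of hash-based shard routing, with hypothetical names throughout — it illustrates horizontal partitioning in general, not Scylla's actual hashing:

    // Illustrative sketch of hash-based shard routing; class and method
    // names are hypothetical and this is not Scylla's real algorithm.
    import java.util.List;

    public class ShardRouter {
        private final int shardCount; // e.g. one shard per CPU core

        public ShardRouter(int shardCount) {
            this.shardCount = shardCount;
        }

        // Map a partition key to a shard index in [0, shardCount).
        public int shardFor(String partitionKey) {
            // floorMod avoids negative indices from negative hash codes.
            return Math.floorMod(partitionKey.hashCode(), shardCount);
        }

        public static void main(String[] args) {
            ShardRouter router = new ShardRouter(8);
            for (String key : List.of("user:42", "user:43", "order:7")) {
                System.out.printf("%s -> shard %d%n", key, router.shardFor(key));
            }
        }
    }

Every key lands deterministically on one shard, so each core owns a disjoint subset of the data.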

Scylla is fully compatible with Apache Cassandra and embraces a shared-nothing approach that increases throughput and storage capacity to as much as 10X that of Cassandra itself.

Yay, for users

Scylla Summit is heavily focused on users and use cases. As such, the 2019 Scylla User Award Categories will look to recognise the most innovative use of Scylla, the biggest node reduction with Scylla and the best Scylla cloud use case.

Other commendations in the company’s awards will include: best real-time use case; biggest big data use case; Scylla community member of the year; best use of Scylla with Kafka; best use of Scylla with Spark; and best use of Scylla with a graph database.

“There’s nothing we enjoy more than seeing the creative and impressive things our users are doing with Scylla. With that in mind, we presented our Scylla User Awards at last week’s Scylla Summit, where we brought the winners up on stage for a big round of applause and bestowed them with commemorative trophies,” wrote Scylla VP of marketing Bob Dever, describing a previous year’s ceremony.

According to the company’s official event statement, Scylla Summit features three days of customer use cases, training sessions, product demonstrations and news from ScyllaDB.

Best & brightest

Developers share best practices, product managers learn how to reduce the cost and complexity of their infrastructure and entrepreneurs connect with the best and brightest in the community.

Scylla CEO Dor Laor insists that this year’s Scylla Summit is shaping up well and he notes that the company will roll out first-of-their-kind features.

“We will announce our lightweight transactions and CDC capabilities, dig into our new Scylla Alternator API for DynamoDB users and hear from some of the world’s most innovative companies about how they’re putting Scylla to work. We’ll also unveil results of a major new performance test. If you thought it was something when we hit one million OPS per node — you haven’t seen anything yet. As always, Scylla Summit will be a great place to get inspired, learn from your colleagues and keep up to date with the latest advances in big data and NoSQL,” said Laor.

TECHNICAL NOTE: Laor mentions CDC in the above quote… so, in the world of databases, Change Data Capture (CDC) is a set of software design patterns used to determine (and track) the data that has changed so that action can be taken using the changed data.
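
As a rough sketch of the pattern, a consumer receives a stream of change events and reacts to each one; the event shape below is hypothetical rather than any particular CDC product's schema:

    // A minimal sketch of the CDC pattern: change events describe what
    // happened to which table, and a consumer acts on each one. The event
    // shape here is hypothetical, not a real CDC product's schema.
    import java.util.List;
    import java.util.Map;

    public class CdcSketch {
        enum Op { INSERT, UPDATE, DELETE }

        record ChangeEvent(Op op, String table, Map<String, Object> row) {}

        // A real consumer might refresh a cache, index or downstream store.
        static void apply(ChangeEvent e) {
            switch (e.op()) {
                case INSERT, UPDATE -> System.out.println("upsert " + e.table() + " " + e.row());
                case DELETE -> System.out.println("delete " + e.table() + " " + e.row());
            }
        }

        public static void main(String[] args) {
            List<ChangeEvent> feed = List.of(
                new ChangeEvent(Op.INSERT, "users", Map.of("id", 1, "name", "Ada")),
                new ChangeEvent(Op.DELETE, "users", Map.of("id", 1)));
            feed.forEach(CdcSketch::apply);
        }
    }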

“At past events, people have told us that the conversations they have at Scylla Summit have helped shape their big data strategies,” said Laor. “This is a chance to share ideas with the smartest, most plugged-in members of the NoSQL and big data communities. It’s completely relaxed, massively useful and lots of fun.”

The company is also likely to announce a clutch of new customers and spell out its wider roadmap.

Scylla tweets at @ScyllaDB, the event hashtag is #ScyllaSummit and a note of speakers can be found here.

October 10, 2019  12:58 PM

From Russia with OLAP: Percona uses ClickHouse analytics

Adrian Bridgwater

In our voyages from software conference to software conference, we technology journalists often find stories that are developing and worth sharing.

At Percona Live Europe last week, one such example came up around the open source scene that is developing in Russia and one project from that scene that is now starting to open up to international use.

Think about Russia typically… and you may not automatically think about open source software. However, the country has a strong software developer community that is looking to expand the number of projects that are used internationally.

An example of this is ClickHouse, an open source data warehouse project found on GitHub here. The technology was originally developed at Yandex, the Russian equivalent of Google.

As defined on TechTarget: a data warehouse is a ‘federated repository’ for all the data collected by an enterprise’s various operational systems – and the practice of data warehousing itself puts emphasis on the ‘capture’ of data from different sources for access and analysis.

ClickHouse’s performance is claimed to exceed that of comparable column-oriented database management systems (DBMS) currently available. The project claims to process hundreds of millions (to more than a billion) of rows, and tens of gigabytes of data, per single server, per second.

Cluster luck

According to its development team, ClickHouse allows users to add servers to their clusters when necessary without investing time or money into any additional DBMS modification.

According to the development team notes, “ClickHouse processes typical analytical queries two to three orders of magnitude faster than traditional row-oriented systems with the same available I/O throughput. The system’s columnar storage format allows fitting more hot data in RAM, which leads to shorter response times. ClickHouse is CPU efficient because of its vectorised query execution involving relevant processor instructions and runtime code generation.”

The central go-to-market proposition here is that by minimising data transfers for most types of queries, ClickHouse enables companies to manage their data and create reports without using specialised networks that are aimed at high-performance computing.

Speedy OLAP

The technology, which is essentially aligned for Online Analytical Processing (OLAP), uses all available hardware to process each query as fast as possible, which amounts to a claimed speed of more than two terabytes per second.

The project is starting to expand and get more adopters. As part of its monitoring product launch at the event, database monitoring and management company Percona announced that it will use ClickHouse for load testing and to monitor accessibility and other performance KPIs. Percona’s leadership team originally hails from Russia, so there are a lot of relationships there as well.

Alongside Percona, Altinity is also looking to expand use of ClickHouse over time. Robert Hodges, CEO at Altinity, describes the company as a provider of the deepest ClickHouse expertise on the market for deploying and running demanding analytic applications. The company also provides software to manage ClickHouse in Kubernetes, cloud and bare-metal environments.

Explaining how his firm has developed alongside the core ClickHouse technology proposition, Hodges says that the enterprise version of ClickHouse can run on a laptop, yet be ready to scale up for significant enterprise workloads.

“ClickHouse is very efficient at processing and handling time-series data… and it has SQL features which are great at monitoring specific cloud issues such as ‘last point query’ [a way of looking at the last thing that happened in a cloud application]. Say for example you had a bunch of Virtual Machines (VMs) running in the cloud and you wanted to know the CPU load on them, ClickHouse is good at getting that measure to you. It’s good to know the ‘current state’ of VMs, because from that point you can then drill in and look at the load over (for example) the last two weeks and so get a sharper idea of performance status,” said Altinity’s Hodges.
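
To illustrate the ‘last point query’ idea, here is a hedged sketch that issues such a query over ClickHouse’s standard HTTP interface (port 8123); the vm_metrics table and its columns are invented for the example, while argMax is ClickHouse’s built-in aggregate for picking the value paired with the latest timestamp:

    // Sketch: fetch the latest CPU load per VM from ClickHouse over its
    // HTTP interface. The vm_metrics table and its columns are invented
    // for illustration; argMax(cpu_load, ts) yields the cpu_load value
    // belonging to the newest timestamp in each group.
    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class LastPointQuery {
        public static void main(String[] args) throws Exception {
            String sql = "SELECT vm_id, argMax(cpu_load, ts) AS current_load "
                       + "FROM vm_metrics GROUP BY vm_id FORMAT TSV";
            URI uri = URI.create("http://localhost:8123/?query="
                    + URLEncoder.encode(sql, StandardCharsets.UTF_8));
            HttpResponse<String> resp = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(uri).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            System.out.println(resp.body()); // one row per VM: id and latest load
        }
    }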

Hodges also explains that Percona is interested in ClickHouse because it is so similar to MySQL – it has ‘surface similarities’ and good facilities for loading data in and pulling data out.

This project is an example of how open source communities can expand with new approaches to existing problems.

Altinity is a service and software provider for ClickHouse.

 

 


October 8, 2019  1:01 PM

IFS drives ‘open’ into app suite, delivers new native API model

Adrian Bridgwater

Enterprise applications company IFS has used its annual user conference to detail work carried out on its major application suite.

The company wants to put the ‘open’ in service management, enterprise resource planning (ERP) and enterprise asset management (EAM).

The company says it has ‘evolved’ its technology foundation with 15,000+ native APIs to open paths to extensibility, integration and flexibility… because that’s what APIs do, obviously.

OpenAPI Initiative

Of some significance here, IFS has noted that it is a new member of the OpenAPI Initiative (OAI), a consortium of experts who champion standardization on how REST APIs are described.

According to OAI official statements, the group has an open governance structure under the Linux Foundation – and the OAI is focused on creating, evolving and promoting a vendor-neutral description format.

IFS insists that it is promoting open applications to allow freedom to develop and connect data sources to drive value in a way that is ‘meaningful’ to enterprises.

“By prioritising open applications, IFS is upping the ante in terms of innovation and customer-centricity while decisively turning away from platform coercion and lock-in,” noted the company, in a press statement.

IFS offers native OData-based RESTful APIs across its entire suite of ERP, EAM and service management products, to make connecting, extending or integrating into the IFS core quicker and easier.

OData-based RESTful APIs are defined on Stack Overflow as, “A special kind of REST where we can query data uniformly from a URL. REST stands for REpresentational State Transfer which is a resource-based architectural style. OData is a web based protocol that defines a set of best practices for building and consuming RESTful web services.”
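
As a sketch of what that looks like from a client’s point of view, the request below targets a hypothetical endpoint and entity set, but $select, $filter and $top are standard OData query options:

    // Query an OData-style REST endpoint. The host and entity set are
    // placeholders, while the $-prefixed query options are standard OData.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ODataQuery {
        public static void main(String[] args) throws Exception {
            URI uri = URI.create("https://ifs.example.com/odata/CustomerOrders"
                    + "?$select=OrderNo,State&$filter=State%20eq%20'Released'&$top=10");
            HttpRequest req = HttpRequest.newBuilder(uri)
                    .header("Accept", "application/json")
                    .GET().build();
            HttpResponse<String> resp = HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println(resp.body()); // JSON for the first ten matching orders
        }
    }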

The APIs at IFS have been engineered in tandem with IFS’s new ‘Aurena’ user experience brand, which is now available across the full breadth of IFS applications.

“With this approach, IFS is giving its customers 15,000 new ways to flex,” IFS CEO Darren Roos said. “It goes without saying that, as excited as we are about reaching this milestone, the driving force behind our deliveries is our unwavering commitment to offer choice and value to our customers. Providing ‘open’ solutions is a critical factor in making good on this promise. The quality, pace, and focus of our product development speaks to a business that is outperforming the legacy vendors in the enterprise software space.”

Aurena user experience

The IFS Aurena user experience offering has now been extended across the entire IFS Applications suite for Service Management, ERP and EAM. It uses the same set of APIs, which are now generally available, and provides a browser-based user experience optimised for each role and user type, with a focus on employee engagement and productivity.

“IFS Aurena provides customers with a truly responsive design, allowing the entire suite to automatically adapt to different form factors as well as capabilities to design and build truly native applications targeted across iOS, Android and Windows, with support for offline scenarios and device-specific capabilities such as GPS and camera,” noted IFS chief product officer Christian Pedersen.

Among the more significant industry updates here is support for International Traffic in Arms Regulations (ITAR) compliance initiatives in the cloud.

Customers who have ITAR obligations, such as those operating in or trading with the U.S. aerospace, defence or government sectors, can deploy and use IFS software to support their ITAR-compliant business needs, in an independently validated environment hosted in the Microsoft Azure Government Cloud, fully managed by IFS.


October 6, 2019  3:31 AM

DataStax offers bidirectional data dexterity for Apache Kafka

Adrian Bridgwater

DataStax has opened up ‘early access’ to its DataStax Change Data Capture (CDC) Connector for Apache Kafka, the open source stream-processing software platform (stream processing works on data continuously, as it arrives, rather than on stored batches).

As a company, DataStax offers a commercially supported ‘enterprise-robust’ database built on open source Apache Cassandra.

Stream processing is all about speed and cadence, so, the DataStax CDC Connector for Apache Kafka gives developers ‘bidirectional data movement’ between DataStax, Cassandra and Kafka clusters.

In live deployment, CDC is designed to capture and forward insert, update and delete activity applied to tables (column families).

Bidirectional dexterity

So what does bidirectional data dexterity bring forward? It makes it easier for developers to build globally synchronised microservices.

Apache Cassandra already offers a core cross-datacentre replication capability for data movement between microservices. This, then, is an augmentation of that replication power.

The connector enables bidirectional data flow between Cassandra and Kafka, ensuring that data committed to a developer’s chosen system of record database can be forwarded to microservices through Kafka.
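
On the receiving end, a microservice picks those changes up from Kafka like any other topic. A minimal sketch using the standard Apache Kafka Java client follows; the topic and group names are hypothetical:

    // Consume forwarded change events from a (hypothetical) CDC topic
    // using the standard Kafka Java client (org.apache.kafka:kafka-clients).
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ChangeFeedConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "orders-service");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("orders-cdc")); // hypothetical CDC topic
                while (true) {
                    for (ConsumerRecord<String, String> rec :
                            consumer.poll(Duration.ofSeconds(1))) {
                        System.out.printf("key=%s change=%s%n", rec.key(), rec.value());
                    }
                }
            }
        }
    }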

Developers are invited to use the early release of the CDC source connector in DataStax Labs in conjunction with any Kafka offering and provide feedback before the product is finalised.

Kathryn Erickson, senior director of strategic partnerships at DataStax, says that her firm surveys its customers every year and, currently, more than 60% of respondents are using Kafka with DataStax built on Cassandra.

“Any time you see a modern architecture design promoted, whether it be SMACK, Lambda, or general microservices, you see Cassandra and Kafka. This CDC connector is an important technical achievement for our customers. They can now work with the utmost confidence that they’re getting high quality and proven high performance with Cassandra and Kafka,” said Erickson.

Gold plated connectors

As an additional note, the DataStax Apache Kafka Connector recently earned Verified Gold status in Confluent’s Verified Integrations Programme.

This distinction is supposed to assure that connectors meet the technical and functional requirements of the Apache Kafka Connect API – an open source component of Kafka, which provides the framework for connecting Kafka with external systems.

According to DataStax, “By adhering to the Connect API, customers can expect a better user experience, scalability and integration with the Confluent Platform. The initial DataStax Apache Kafka Connector enables developers to capture data from Kafka and store it in DataStax and Cassandra for further processing and management, offering customers high-throughput rates.”
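
Connectors built on the Connect API are configured declaratively and registered through Kafka Connect’s REST interface (default port 8083). The sketch below shows the general mechanism, with the connector class and its settings as placeholders rather than the DataStax connector’s real property names:

    // Register a sink connector via the Kafka Connect REST API. The
    // connector class and its settings below are placeholders; consult
    // the DataStax connector documentation for the real property names.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RegisterConnector {
        public static void main(String[] args) throws Exception {
            String config = """
                {
                  "name": "orders-sink",
                  "config": {
                    "connector.class": "com.example.CassandraSinkConnector",
                    "topics": "orders-cdc",
                    "contact.points": "cassandra-host"
                  }
                }
                """;
            HttpRequest req = HttpRequest
                    .newBuilder(URI.create("http://localhost:8083/connectors"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(config))
                    .build();
            HttpResponse<String> resp = HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println(resp.statusCode() + " " + resp.body());
        }
    }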

Why it all matters

The questions you should perhaps now be asking are: does any of this matter, has DataStax done a good thing… and is stream processing a hugely important part of the still-emerging world of Complex Event Processing (CEP) and the allied worlds of streaming analytics and modern fast-moving cloud native infrastructures?

The answer is: look at your smartphone, laptop, nearest kiosk computer, IoT sensor or (if you happen to be sat in a datacentre) mainframe or other machine — the reality of modern IT is a world with a perpetual stream of data events all happening in real time. If we sit back and store data through aggregation and batch processes, then we will miss the opportunity to spot trends, act and apply forward-thinking AI & ML based processes upon our data workloads.

DataStax obviously knows this and the company appears to be pushing Kafka advancements and augmentations forward to serve the needs of the always-on data workloads that we’re now working with.

Developers are able to test the early release of the DataStax CDC Connector for Apache Kafka immediately and can access the connector here.

 


October 1, 2019  4:05 PM

Percona details ‘state’ of open source data management

Adrian Bridgwater

Open source database management and monitoring services company Percona has published its state of open source data management software survey for 2019.

Surveys are surveys: they are generally custom-constructed to be self-serving in one sense or another, conveying a message in their ‘findings’ that the commissioning body (or, in this case, company) wants to table to media, customers, partners and other related parties.

This central truth being so, should we give any credence to Percona’s latest market assessment?

Percona is an open source database software and services specialist that now offers the latest version of its Percona Monitoring and Management 2 product (aka PMM2) – this tool provides ‘query analytics’ for database administrators and sysadmins to identify and solve performance bottlenecks, thus preventing scaling issues and potential outages.

Given its position overseeing database operations and providing database monitoring and management services, it is perhaps permissible to allow Percona to survey the open source data management market.

The company reminds us that there is no ‘one size fits all’ in terms of database needs… but why is this so?

There is no one size for all (says Percona) because although software vendors have worked hard to add features, users gravitate towards the best database (and best tool) for the job, using the right database for the right application in database environments that can exist either on-premises, in the cloud, or a hybrid of the two.

So we end up with multi-database environments running on multi-cloud backbones.

Our survey (respondents) said

So who responded?

The US represented the biggest base of respondents happy to share their views on database management challenges, at 26%. However, the remaining 74% are spread widely across the globe, giving an arguably quite broad mix of global replies.

We might also argue that this breadth demonstrates the diversity and reach of the open source community.

In total, 836 techies responded from 85 countries. The larger the company, the more likely it is to have multiple databases: adoption of a multi-database environment among larger companies jumps 10-15% over small companies.

As a quirk in part of the results, Percona notes that it was somewhat surprising how many people use both MySQL and PostgreSQL. The overlap of these two databases in a single environment is much higher than that of MySQL and MongoDB.

According to Percona, “Most survey respondents are well-informed about using open source technology in the cloud and do so. Interestingly, however, these passionate open source evangelists championing cost-effectiveness, flexibility and freedom from vendor lock-in often find themselves tied to cloud vendors with a single solution and large monthly costs.”

As company size grows, it is much more likely that companies are hosting databases in multiple clouds. The larger the organisation, the more complex the hosting environment, with larger organisations showing a 10-15% swing towards hybrid cloud, private cloud and on-premises hosting.

Big company, complex hosting

Also, we see that the larger the organisation, the more complex the hosting environment… which one might argue is good news for Percona, as it will have more of an opening to sell its management services.

Percona also notes that AWS continues to dominate the public cloud provider market, with over 50% of respondents using its cloud platform. Google Cloud and Microsoft Azure show similar numbers of respondents using their technologies and obviously offer alternatives to companies that are resistant to using Amazon, or that won’t use Amazon due to a competitive clash.

According to Percona, “The multi-cloud usage for databases is about a third of our respondents, with 41% of larger companies using multi-cloud deployments (close to 10% over smaller companies). Smaller companies are more likely to use Google than Microsoft, but larger companies prefer Microsoft to Google (which could have something to do with startup businesses’ need for cost-effectiveness and agility).”

Over 25% of respondents are using containers, but not necessarily to run databases. This could be due to some early bias against running databases in containers. Many respondents aren’t aware if they use containers for their databases or not.

Adoption of open source

The top two responses on reasons for adopting open source are the same ones that continue to dominate discussion of its benefits: cost savings (79.4%) and avoiding vendor lock-in (62%). The benefit of having a community also scored highly (over 50%).

There are some interesting differences between management and non-management answers to these questions.

There is an 8% uptick in responses from management on avoiding vendor lock-in. It looks like larger enterprise management and non-management are on the same page about vendor lock-in. In small to medium companies, however, there seems to be a disconnect.

There is a 6% uptick in those looking for additional security.

Those who list vendor lock-in as a critical reason to adopt OSS are on average 10% less likely to buy support from a vendor (potentially viewing support as another form of lock-in). Note that in medium-sized companies, the number jumps up 19% for management being more likely to pay for support versus non-management. In large companies, management “support” for ‘support’ drops 16% and is 4% below non-management.

Overall… there is an industry-wide shift happening in open source. Well, OF COURSE there is… now that Microsoft loves Linux, IBM has bought Red Hat and we have seen all the other major developments around GitHub, the .Net framework and more besides. What happens next is a bigger shift in the way the open source world is moving and in how commercial open source stands up against its hobbyist programmer roots.

Readers can find the rest of the survey results here.

 


September 18, 2019  7:00 PM

Code One showcases Oracle developer tools & platform

Adrian Bridgwater

Oracle used the Code One section of its Oracle Open World 2019 conference to detail some of the more developer-centric elements of its total platform.

Code One derives its name from JavaOne, the developer conference staged every year by Sun Microsystems until the company’s acquisition by Oracle in early 2010; Oracle continued to run JavaOne before folding it into Code One in 2018.

Key among the announcements was Oracle Cloud Free Tier, an offering of ‘always-free’ services for data-developers to test run the company’s self-driving Autonomous Database and Oracle Cloud Infrastructure for an unlimited time.

The promise to developers, students, hobbyists (and indeed others) is an option to explore the full functionality of the database and cloud infrastructure layer including Compute Virtual Machines (VMs), use of both block storage and object storage… as well as access to Oracle Load Balancer – a combination that is said to offer all the essentials for developers to build complete applications on Oracle Cloud.

The technology on offer here remains free for as long as it is used.

“We are thrilled to offer Always Free Oracle Autonomous Database and Oracle Cloud Infrastructure,” said Andrew Mendelsohn, executive vice president, Database Server Technologies, Oracle. “This enables the next generation of developers, analysts, and data scientists to learn the latest database and machine learning technologies for developing powerful data-driven applications and analytics on the cloud.”

This initiative enables developers to build applications using any language and framework — they can get started quickly, without waiting for IT to provision resources, and learn new technologies such as artificial intelligence and machine learning.

Keynote zones

In the Oracle Code One keynotes zone, the company detailed what it likes to call ‘intelligent applications’ that use machine learning and data from multiple sources to make predictions and suggestions.

Developers and data scientists were told how they can build cloud native, serverless apps using machine learning algorithms in a transactional-analytical database and cloud infrastructure.

We also heard the story of Oracle’s Todd Sharp, who took his 13-year-old daughter to the hospital for a test, only to find out that she had Type 1 diabetes. He has worked with Oracle on technology to improve her life.

“My 13-year-old daughter was recently diagnosed with diabetes. She is one of 400 million people worldwide who has to live with this disease for the rest of her life — and I am determined to be with her every step of the way. Thanks to technology, I can help her to easily calculate her carbohydrate consumption to help her determine her next insulin dose. This is my attempt as a father and a developer to address a real-world problem with technology, and hopefully inspire others to do the same,” said Sharp.

According to a Forbes write up of his story, Sharp tackled his technology for health challenge using Simple Oracle Data Access (SODA), Oracle REST Data Services (ORDS), Micronaut Data and Helidon (for coding microservices), Oracle Functions to help perform serverless calls and the Oracle Autonomous Database as the core data store.

Also of note was the data science presentation – this session highlighted the essential role of data as the source of inputs, insights, and intelligent actions for numerous digital technologies, including IoT/IIoT, 4D printing, AR/VR, digital twins, blockchain, autonomous data-driven application systems, and learning systems (cities, farms, healthcare).

According to Oracle, “Data science is the methodology and process by which a business culture of experimentation empowers developers to transform data into value. Transforming data-to-value includes insights discovery, informed decision support, and intelligent next-best action.”

Oracle says it wants developers to connect with the company as they ‘build the value chain’ that connects data (inputs) to emerging AI technologies (outcomes).

Oracle API Gateway

Aside from the keynote, Oracle developers detailed the company’s work to build Oracle API Gateway.

This is a platform for managing, delivering and securing web Application Programming Interfaces (APIs). It provides integration, acceleration, governance and security for API and SOA-based systems and is available on Windows, Linux and Solaris.

Oracle API Gateway includes features dedicated to looking after identity management, scalability, REST APIs, data enrichment, governance, reporting and the fabulously named act of ‘traffic throttling’ to smooth out spikes.
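
Throttling of this kind is commonly implemented as a token bucket, which admits a steady request rate while absorbing short bursts. A minimal sketch of the idiom (illustrative only, not Oracle API Gateway code):

    // 'Traffic throttling' in one idiom: a token bucket that admits a
    // steady rate of requests and absorbs short spikes.
    public class TokenBucket {
        private final long capacity;      // max burst size
        private final double refillPerMs; // steady-state rate
        private double tokens;
        private long lastRefill;

        public TokenBucket(long capacity, double tokensPerSecond) {
            this.capacity = capacity;
            this.refillPerMs = tokensPerSecond / 1000.0;
            this.tokens = capacity;
            this.lastRefill = System.currentTimeMillis();
        }

        public synchronized boolean tryAcquire() {
            long now = System.currentTimeMillis();
            tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerMs);
            lastRefill = now;
            if (tokens >= 1.0) {
                tokens -= 1.0;
                return true;  // admit the request
            }
            return false;     // over the limit: reject or queue
        }
    }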

We know that every company in every vertical now has to reinvent itself as a technology company… and, in a world where even your grandmother knows what an app is, every company has to be a developer-centric software-driven cloud-first company… and that’s pretty much exactly where Oracle has positioned itself. Larry might still love his database, but even he knows that the code is king.


September 17, 2019  2:04 PM

Oracle Java SE 13, lucky for faster cadence feature preview fans

Adrian Bridgwater

Oracle has announced the availability of Java SE 13 (JDK 13) during the company’s Oracle Open World conference and exhibition in San Francisco.

Java Platform Standard Edition (Java SE) is used by software application developers to develop code for desktop and server environments – Java purists laud the language’s user interface, performance, versatility and, as always, its portability.

In a world where many have decried Oracle’s commitment to truly open, community-centric open source technologies, plenty have worried about the health and wealth of the Java platform and language.

Oracle, in response, continues to insist that its commitment is real.

Release cadence

The company says that its drive to fuel the developer community is validated by its steadfast resolve to deliver a predictable release of enhancements as part of the comparatively new (faster) six-month feature release cadence.

The six-month release cadence has driven five releases since its adoption in September of 2017.

The latest release also includes two preview features: Switch Expressions, which extends ‘switch’ (see below) so it can be used as either a statement or an expression (JEP 354), and the addition of text blocks to the Java language (JEP 355).

As noted on TutorialsPoint, “A switch statement allows a variable to be tested for equality against a list of values — each value is called a case and the variable being switched on is checked for each case.”
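
A short example shows both preview features together: the switch expression yields a value directly with no fall-through, and the text block holds a multi-line string without escape clutter. On JDK 13 both required the --enable-preview flag; they became standard in later releases.

    public class Jdk13Preview {
        public static void main(String[] args) {
            String day = "SATURDAY";

            // JEP 354: switch as an expression, each arm yields a value.
            int letters = switch (day) {
                case "MONDAY", "FRIDAY", "SUNDAY" -> 6;
                case "SATURDAY" -> 8;
                default -> day.length();
            };
            System.out.println(day + " has " + letters + " letters");

            // JEP 355: a text block replaces a chain of escaped strings.
            String html = """
                    <html>
                        <body>Hello, Java 13</body>
                    </html>
                    """;
            System.out.println(html);
        }
    }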

“Java continues to be an important technology for Siemens as many legacy applications are based on Java and also new developments are done with Java. Therefore we need to always receive the latest patches in order to improve security,” said Hans-Martin Schulze, IT strategist at Siemens Information Technology.

Preview proof points

Oracle insists that ‘preview features’ are an important part of the new release model and allow for greater community input prior to reaching a final design for new features. These also improve quality and performance when the features become GA.

Oracle JDK 13 now supersedes Oracle JDK 12 and, given the incremental nature of this latest release, offers a smooth transition. Oracle plans to deliver a minimum of two updates to this release per the Oracle Critical Patch Update (CPU) schedule before it is followed by Oracle JDK 14, planned for March 2020.

“The JDK 13 release is the result of industry-wide development involving open review, weekly builds and extensive collaboration between Oracle engineers and members of the worldwide Java developer community via the OpenJDK Community and the JCP,” said Georges Saab, vice president of development, Java Platform at Oracle. “The goal is always to make the latest innovation in the Java SE Platform and the JDK easily accessible to developers globally. We invite the community to share their experience with Java SE 13 and continue to contribute and help make Java even better in future releases.”

Oracle says it also continues to offer Oracle Java SE Subscriptions, with flexible options for customers to receive Java SE licences and support for the systems they need, for as long as they need it.

Oracle points developers to its Java Platform Group Product Management Blog for more technical information on the latest release.


September 13, 2019  5:23 PM

ScyllaDB powers up Alternator: an open Amazon DynamoDB API

Adrian Bridgwater

Companies normally keep things pretty quiet in the run up to their annual user conferences, so they can pepper the press with a bag of announcements designed to show how much market momentum and traction they have going.

Not so with ScyllaDB: the company has been dropping updates in advance of its Scylla Summit event in what is perhaps an unusually vocal kind of way.

The company has just announced Alternator, an open source software project designed to enable application-level and API-level compatibility between Scylla and Amazon’s NoSQL cloud database, Amazon DynamoDB.

Scylla itself is a real-time big data database that is fully compatible with Apache Cassandra and is known for its ‘shared-nothing’ approach (a distributed-computing architecture in which each update request is satisfied by a single node, that is, a single processor/memory/storage unit, to increase throughput and storage capacity).

Scylla’s DynamoDB-compatible API will be available for use with Scylla Open Source, supporting the majority of DynamoDB use cases and features.

Alternator allows DynamoDB users to migrate to an open source database that runs anywhere i.e. on any cloud platform, on-premises, on bare-metal, virtual machines or Kubernetes.
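
In practice, that compatibility means a standard DynamoDB client only needs its endpoint re-pointed. A hedged sketch using the AWS SDK for Java v2, with host, port and credentials as placeholders:

    // Point a standard AWS SDK v2 DynamoDB client at a Scylla cluster by
    // overriding the endpoint. Host, port and credentials are placeholders.
    import java.net.URI;
    import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
    import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

    public class AlternatorClient {
        public static void main(String[] args) {
            DynamoDbClient ddb = DynamoDbClient.builder()
                    .endpointOverride(URI.create("http://scylla-host:8000")) // Alternator endpoint
                    .region(Region.US_EAST_1) // the SDK requires a region; not an AWS region here
                    .credentialsProvider(StaticCredentialsProvider.create(
                            AwsBasicCredentials.create("none", "none")))
                    .build();

            // The same calls a DynamoDB application would make against AWS.
            ddb.listTables().tableNames().forEach(System.out::println);
        }
    }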

Reversing a trend

“Cloud vendors routinely commercialise open source software,” said Dor Laor, CEO and co-founder, ScyllaDB. “With Alternator, we’re reversing that trend by creating open source options for a commercial cloud product. Open source software is all about disrupting the existing model and creating new opportunities for users. True to our roots, we’ve first released the Alternator source upstream for feedback and exploration; later this year we’ll incorporate it in our free open source distribution, followed by our enterprise and hosted products.”

Both Scylla and DynamoDB have their roots in the Dynamo paper, which described a NoSQL database with ‘tunable’ consistency.

Scylla’s close-to-the-hardware design claims to improve on DynamoDB’s price/performance ratio, which is meant to democratise access to real-time big data.

Alternator gives developers greater control over large-scale, real-time big data deployments, starting with costs. A typical Scylla cluster will cost 10%-20% the expense of the equivalent DynamoDB table.

Cluster luck

Alternator also frees developers to access their data with fewer limits by eliminating payment per operation — they can run as many operations as their clusters support.

Let’s also note that Alternator gives developers the ability to control the number of replicas and the balance of cost vs. redundancy to suit their applications. They can set and change the replica number per datacentre, the number of zones and the consistency level on a per-query basis.
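
Scylla’s native CQL interface exposes the same tunable consistency, where a sketch with the DataStax Java driver looks like this (contact point, datacentre, keyspace and table are placeholders):

    // Set the consistency level per query with the DataStax Java driver v4.
    // Contact point, datacentre, keyspace and table are placeholders.
    import java.net.InetSocketAddress;
    import com.datastax.oss.driver.api.core.ConsistencyLevel;
    import com.datastax.oss.driver.api.core.CqlSession;
    import com.datastax.oss.driver.api.core.cql.SimpleStatement;

    public class PerQueryConsistency {
        public static void main(String[] args) {
            try (CqlSession session = CqlSession.builder()
                    .addContactPoint(new InetSocketAddress("scylla-host", 9042))
                    .withLocalDatacenter("dc1")
                    .build()) {
                SimpleStatement stmt = SimpleStatement
                        .newInstance("SELECT * FROM shop.orders WHERE id = 7")
                        .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
                session.execute(stmt)
                        .forEach(row -> System.out.println(row.getFormattedContents()));
            }
        }
    }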

 

 

 

 


September 11, 2019  9:17 PM

Sumo Logic eyes open source disruption (in 4 of 6 stack levels)

Adrian Bridgwater

Software is eating the world… and open source software is creating a new set of recipes, chewing it up and sticking it all into a completely different kind of sandwich with a whole new set of condiments and relishes.

Sumo Logic says that there is open source disruption in as many as four of the six key levels of the traditional IT stack.

The machine-generated data logs & metrics management company made the statement at its annual Illuminate developer conference held in California this September.

The company states that open source has disrupted the modern application stack: today, four of the six tiers that make up the modern application stack have been disrupted by open source, with open source solutions for containers, orchestration, infrastructure and application services leading this transformation.

The above statement comes from Sumo Logic’s report entitled Continuous Intelligence: The State of Modern Applications and DevSecOps in the Cloud, which aims to pinpoint the key directions for growth across modern software application stacks.

This is perhaps a good point to stand back and ask what those six levels might be according to the Sumo Logic view of the world.

Six pack stack

  1. DevSecOps management
  2. Application services
  3. Custom application code
  4. Application runtime infrastructure
  5. Database and storage services
  6. Infrastructure, container and orchestration

Staying in open source, the company says that as customers adopt multi-cloud, Kubernetes adoption significantly rises. Enterprises are betting on Kubernetes to drive their multi-cloud strategies.

“Multi-cloud and open source technologies, specifically Kubernetes, are hand-in-hand dramatically reshaping the future of the modern application stack,” said Kalyan Ramanathan, vice president of product marketing for Sumo Logic. “For companies, the increased adoption of services to enable and secure a multi-cloud strategy is adding more complexity and noise, which current legacy analytics solutions can’t handle. To address this complexity, companies will need a continuous intelligence strategy that consolidates all of their data into a single pane of glass to close the intelligence gap.”

The logical question we must ask, then, is… how long will it be before open source exerts massive disruption across every level of the six-layer stack?

The two areas least affected (as per Sumo Logic’s yardstick at least) are application services and, slightly quirkily, the infrastructure layer… even though this layer is essentially born of open source technologies in the first place, the point is that the disruption factor here is lower.


September 4, 2019  2:04 PM

What is confidential computing?

Adrian Bridgwater

The recent Open Source Summit was held in the balmy climes of San Diego and, among the news emanating from the event itself, the Computer Weekly Open Source Insider team were made aware of announcements made by The Linux Foundation itself.

The foundation announced its intent to form the non-profit Confidential Computing Consortium.

Companies committed to this work include Alibaba, Arm, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom and Tencent.

But what is confidential computing anyway?

First let’s start with a home truth.

Across industries, computing is moving to span multiple environments, from on premises to public cloud to edge. As companies move these workloads to different environments, they need protection controls for sensitive IP and workload data and are increasingly seeking greater assurances and more transparency of these controls.

Current approaches in cloud computing address data at rest and in transit — but encrypting data-in-use is considered the third and possibly most challenging step to providing a fully encrypted lifecycle for sensitive data.

What is confidential computing?

Confidential computing will enable encrypted data to be processed in memory without exposing it to the rest of the system, reducing exposure for sensitive data and so, it is claimed, providing greater control and transparency for users.

The first project to be contributed to the Consortium is the Open Enclave SDK, an open source framework that allows developers to build Trusted Execution Environment (TEE) applications using a single enclaving abstraction. Developers can build applications once that run across multiple TEE architectures.

The Confidential Computing Consortium will bring together hardware vendors, cloud providers, developers, open source experts and academics to accelerate the confidential computing market; influence technical and regulatory standards; and build open source tools that provide the right environment for TEE development. The organisation will also anchor industry outreach and education initiatives.

“Confidential computing provides new capabilities for cloud customers to reduce trusted computing base in cloud environments and protect their data during runtime. Alibaba launched Alibaba Encrypted Computing technology powered by Intel SGX in Sep 2017 and has provided commercial cloud servers with SGX capability to our customers since April 2018. We are very excited to join CCC and work with the community to build a better confidential computing ecosystem,” said Xiaoning Li, chief security architect, Alibaba Cloud.

Google VP of security Royal Hansen added to this story by noting that for users to make the best choice in terms of how to protect their workloads, they need to be met with a common language and understanding around confidential computing.

“As the open source community introduces new projects like Asylo and OpenEnclave SDK, and hardware vendors introduce new CPU features that change how we think about protecting programs, operating systems, and virtual machines, groups like the Confidential Computing Consortium will help companies and users understand its benefits and apply these new security capabilities to their needs,” said Hansen.

The proposed structure for the Consortium includes a Governing Board, a Technical Advisory Council and separate technical oversight for each technical project.

 

 

