Open Source Insider


May 6, 2016  11:36 AM

Basho: why the IoT needs a time series database

Adrian Bridgwater

NoSQL database company Basho is in update mode for its Riak TS product, which is now freely available on an open source basis.


Basho: you don’t like-a my time series data for IoT applications — I puncha your face ok?

Basho Riak TS (time series) version 1.3 arrives claiming to be the only enterprise-grade NoSQL database optimised for the Internet of Things (IoT).

The open source version enables developers to download the software for free and use it in production as well as make contributions to the code and develop applications around Riak TS.

NOTE: A time series database (TSDB) is a software system that is optimised for handling time series data, arrays of numbers indexed by time (a datetime or a datetime range).
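To make that definition concrete, here is a minimal, illustrative sketch (plain Python, not Basho code) of time series data and the canonical TSDB operation, the time-range query:

```python
from datetime import datetime, timedelta

# Time series data: rows of measurements indexed by timestamp,
# e.g. a sensor reporting a temperature every 30 seconds.
readings = [
    (datetime(2016, 5, 6, 11, 0) + timedelta(seconds=30 * i), 20.0 + 0.1 * i)
    for i in range(10)
]

# The canonical time series operation is a range query over the time index.
start = datetime(2016, 5, 6, 11, 2)
end = datetime(2016, 5, 6, 11, 4)
in_range = [(t, v) for (t, v) in readings if start <= t < end]
print(in_range)  # only the readings inside the two-minute window
```

A purpose-built TSDB exists to make exactly this kind of write and range query cheap at very large scale.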

What makes an IoT-optimised database?

What makes a database optimised for the Internet of Things? Part of the answer there lies in a) the time series capabilities of Riak and b) the fact that version 1.3 expands support for SQL commands, enhances API support and provides support for shell commands.

Riak TS Enterprise 1.3 also includes Multi-Cluster Replication.

Increasingly, IoT is generating time series data — data marked with a timestamp — from a variety of sources, such as sensors.

The sheer volume of time series data these sources generate requires databases that can store and query it efficiently and reliably.

Basho claims that Riak TS delivers on this need and allows both open source and commercial customers to perform the associated queries and transactions on time series data, while maintaining high availability and scale.
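To give a flavour of those queries from code, below is a minimal sketch using the time series support in the official riak Python client (added around version 2.5). The table name, schema, timestamps and connection details are illustrative assumptions, not taken from Basho's announcement, and exact call names may vary by client version:

```python
from riak import RiakClient

# Connect to a local Riak TS node (placeholder host and port).
client = RiakClient(host='127.0.0.1', pb_port=8087)

# Riak TS queries are SQL SELECTs; they must bound the time range
# (times are milliseconds since the epoch) plus any key fields so the
# cluster can route the query to the right partitions.
query = """
SELECT * FROM sensor_readings
WHERE time > 1462528800000 AND time < 1462532400000
  AND sensor_id = 'probe-7'
"""

result = client.ts_query('sensor_readings', query)
for row in result.rows:
    print(row)
```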

May 5, 2016  12:40 PM

Hortonworks certifies EnterpriseDB Advanced Server

Adrian Bridgwater

Postgres database company EnterpriseDB has confirmed that its mission-critical special edition release of EDB Postgres Advanced Server (an enhanced version of PostgreSQL) has been certified for the Hortonworks Data Platform (HDP).

Simplifies integration of traditional enterprise systems with Hadoop clusters to support a seamless data center fabric for analytics

The play here is as follows: combining EDB Postgres with HDP provides capabilities to integrate existing data management systems with data in Hadoop clusters.

Why bother? For data analytics, obviously… this is why we do these things.

EDB’s Hadoop Data Adapter for EDB Postgres Advanced Server brings the ability to combine data in traditional corporate systems with big data from multiple data sources stored in Hadoop installations.

What is driving Hadoop’s multiple data sources?

  • Customer sentiment data from social media,
  • Clickstream data from online properties,
  • Machine and sensor data,
  • Global logistics data commonly stored in Hadoop clusters.

These sources generate volumes too great for traditional relational systems, and the data is collected in an unstructured fashion because analysis tends to follow collection and storage.

“Providing a certified integration with Hortonworks Data Platform removes complexity and risk from the task of integrating the strengths of Hadoop for analysis of Big Data with the system of record in EDB Postgres Advanced Server,” said Lenley Hensarling, senior VP for strategy and product management, EDB.

Combining this data with corporate transactional information for analysis, using the EDB Postgres Data Adapter (based on Postgres Foreign Data Wrapper technology), is an opportunity for finding new efficiencies in operations or customer-driven revenue programmes.
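As a sketch of what that might look like in practice, the following Python snippet (using psycopg2) registers EnterpriseDB's hdfs_fdw foreign data wrapper and joins Hadoop-resident clickstream data against a local transactional table. The connection details, option names and table definitions here are illustrative assumptions rather than EDB's published example:

```python
import psycopg2

# Placeholder connection to a local EDB Postgres instance.
conn = psycopg2.connect("dbname=edb user=enterprisedb")
cur = conn.cursor()

# Register the Hadoop foreign data wrapper and point it at a Hive endpoint.
cur.execute("CREATE EXTENSION IF NOT EXISTS hdfs_fdw;")
cur.execute("""CREATE SERVER hdfs_server FOREIGN DATA WRAPPER hdfs_fdw
               OPTIONS (host 'hive-host', port '10000');""")
cur.execute("""CREATE USER MAPPING FOR CURRENT_USER SERVER hdfs_server
               OPTIONS (username 'hive', password 'hive');""")

# Expose a Hadoop-resident table as if it were a local Postgres table.
cur.execute("""CREATE FOREIGN TABLE clickstream (user_id int, url text, ts timestamp)
               SERVER hdfs_server OPTIONS (dbname 'weblogs', table_name 'clicks');""")

# The payoff: one SQL query joining big data with the system of record.
cur.execute("""SELECT c.name, count(*) AS clicks
               FROM customers c JOIN clickstream k ON k.user_id = c.id
               GROUP BY c.name ORDER BY clicks DESC LIMIT 10;""")
print(cur.fetchall())
conn.commit()
```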


April 25, 2016  8:18 AM

Google open sources Chromium browser bug tracker

Adrian Bridgwater
API, bugs, Chrome, Chromium, Google, Open source

Google has moved Monorail — the bug tracker used by the Chromium open source browser — to a newly open sourced status.


NOTE: Chrome is a proprietary software product… and Chromium is open source. Google draws the source code for Chrome from the Chromium project once it is happy with the stability and functionality of features in production.

Monorail is the issue/bug tracker software used inside Chromium — its predecessor was called crbug.com, features from which have subsequently been migrated to Monorail.

Developer website I Programmer reports that, “Issues logged with crbug.com have been migrated with full fidelity to Monorail, which has been designed as a nearly identical drop-in replacement.”

Alex Denham also notes that the Chromium wiki content has been moved into the repository.

According to Google’s initial Monorail blog posting, “We believe in APIs and we will launch with support for a very limited API that should allow the small number of existing code.google.com API clients to transition to Monorail. We will be designing and implementing a new comprehensive API in the future.”



April 22, 2016  10:08 AM

Snap happy? Ubuntu 16.04 is easy, not difficult

Adrian Bridgwater
Ubuntu

Canonical released Ubuntu 16.04 LTS on 21st April, but is it all good news?


One press headline reads… have you downloaded @ubuntu 16.04 LTS yet? — and one well-known tech writer replied no!

He then went on to cite how hard it can be (allegedly) to know where all your files are and how much ‘tinkering’ around there has to be when using Ubuntu.

But how can this be?

This is (and we quote) the latest version of the world’s most widely used Linux platform across desktop, IoT and cloud computing — or so Canonical would have us believe.

Robot love

“The leading cloud-based operations and the most advanced robotics run largely on Ubuntu,” claims Mark Shuttleworth, founder of Canonical.

An Ubuntu Long Term Support (LTS) release is supported and maintained by Canonical for five years — this is the 6th such LTS release for Ubuntu.

Ubuntu 16.04 LTS introduces a new application format, the ‘snap’, which can be installed alongside traditional deb packages. These two packaging formats live comfortably next to one another and enable Ubuntu to maintain its existing processes for development and updates.

“The addition of ‘snaps’ for faster and simpler updates and the LXD container hypervisor for ultra-fast and ultra-dense cloud computing, demonstrate a commitment to customer needs for scale,” said Dustin Kirkland who leads platform strategy at Canonical.

Creating snaps is also supposed to be simplified for developers with the introduction of a new tool called “snapcraft” to build and package applications from source and existing deb packages.

Snap happy

Snaps are supposed to enable developers to deliver much newer versions of apps to Ubuntu 16.04 LTS over the life of the platform, solving a long-standing challenge with free platforms and enabling users to stay on a stable base for longer while enjoying newer applications.

The security mechanisms in snap packages allow for much faster iteration across all versions of Ubuntu and Ubuntu derivatives, as snap applications are isolated from the rest of the system.

Users can install a snap without having to worry whether it will have an impact on their other apps or their system. Similarly, developers have a much better handle (says Canonical) on the update cycle as they can decide to bundle specific versions of a library with their app.

Wider press reports have shown much love for the new Ubuntu; the general media reception is positive.


April 21, 2016  9:00 AM

Forking impressive, devs go nuts for Hazelcast

Adrian Bridgwater
Hazelcast, IMDG

Operational in-memory computing company Hazelcast — known for its open source In-Memory Data Grid (IMDG) — has shared community growth numbers from its GitHub repository.


Forking fanatics

Hazelcast has documented new contributors and an 80% increase year-over-year in the number of forks in the first quarter of 2016.

In all, says the firm, more developers than ever are contributing to Hazelcast developments in the GitHub repository, increasing the rate of new feature launches.

Maven downloads

The increase in GitHub activity has resulted in a spike in Hazelcast usage, evidenced by Maven downloads, which increased by 72% year-over-year in Q1 2016.

The Hazelcast community has built critical integrations to extend the platform to new programming languages with community-developed clients, as well as community-developed integrations — paving the way (they promise us) for Hazelcast cloud deployment readiness.

Clojure, Scala, Python, PHP, Node.js and Golang

Specifically, the community has added useful functionality to new clients for Clojure, Scala, Python, PHP, Node.js and Golang. In addition, Hazelcast community members have driven cloud discovery integrations for Apache jclouds, Azure, Consul, etcd, Kubernetes and ZooKeeper.
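As an indication of how lightweight those community clients are to use, here is a minimal sketch with the community-developed hazelcast-python-client. The API shown reflects later versions of that client (exact call names may differ by release), and the cluster address and map name are illustrative:

```python
import hazelcast  # community-developed Python client

# Connects to a cluster member on localhost:5701 by default.
client = hazelcast.HazelcastClient()

# A distributed map is the IMDG's basic building block: puts and gets
# hit memory across the cluster rather than a local dict.
sessions = client.get_map("sessions").blocking()
sessions.put("user-42", "cart:sku-1,sku-2")
print(sessions.get("user-42"))

client.shutdown()
```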

Christoph Engelbert, manager of developer relations at Hazelcast, has said that for an open source project to succeed you have to positively engage with an active community, which is what the team has strived to do over the last eight years.

“Things have really taken off with the cloud and programming language integrations in the latest 3.6 release. We rely on ideas, contributions and help from the community and always make ourselves available for any requests or queries that come in,” said Engelbert.

We the cats shall HEP

The increase in community adoption runs in parallel with the Hazelcast Enhancement Proposal (HEP), a process which was created to enable more community participation. Using HEP, Hazelcast drives community-based development of features and extensions. Members can submit new ideas or join other HEPs to create new features and help define how projects evolve.


April 15, 2016  8:02 AM

Microsoft aims for ‘lightweight richness’ in Visual Studio Code 1.0

Adrian Bridgwater
Code, Developer, Microsoft, Visual Studio

Microsoft doesn’t let a week (sometimes not much more than a day) go by without pushing some new code morsels down the feeding pipe.

This week is no different as the firm comes forward with Visual Studio Code 1.0, its lightweight code editor for Mac OS X, Linux and (of course) Windows.

Redmond says that this tool, which has been around in alpha status since 2013 before this formal 1.0 version, now has over two million users.

What it is, man

Visual Studio Code 1.0 inherits many features from Microsoft Visual Studio, including IntelliSense context-aware code completion.

The development team behind this product have blogged to say that what started as an experiment to build a production quality editor using modern web technologies has blossomed into a new kind of cross-platform development tool, one that focuses on core developer productivity by centring the product on rich code editing and debugging experiences.

“Visual Studio Code brings the industry-leading experiences of Visual Studio to a streamlined development workflow, that can be a core part of the tool set of every developer, building any kind of application,” writes the team.

IDC’s Al Hilwa has said that this release from Microsoft is a response to the rising developer interest in all things lightweight.

“Modern development requires a lot of in-and-out in various languages, environments and platforms and so there has been a long-running shift toward basic, though feature-rich, editors,” said Hilwa.

Microsoft asserts that from the beginning, it has striven to be as open as possible in the roadmap and vision for VS Code — a pledge that last November saw the firm open source VS Code and add the ability for anyone to attempt to make it better through submitting issues and feedback, making pull requests, or creating extensions.



April 14, 2016  9:42 AM

Hitachi Group’s Pentaho: ‘metadata injection’ kills big data complexity

Adrian Bridgwater
Analytics, Big Data, Pentaho

Open source data analytics player Pentaho has upped its metadata injection feature set.

Metadata injection?


Yes, metadata injection — the diversity of data and the sheer number of different data sources out there give us a problem in terms of knowing what data means what — so metadata injection is a means of putting more “information about information into the information”, if you will.

With metadata injection, transformation logic (the T in ETL) is machine-generated rather than hand-coded by developers.
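The idea is easier to see in miniature. Below is a plain-Python sketch (not Pentaho's API) of a single generic transformation template whose behaviour (field selection, renames and type casts) is injected as metadata at run time instead of being hand-coded per source:

```python
# Per-source specifics live in metadata, not in code; the same
# transform() template serves every data source. All names here are
# illustrative, not Pentaho identifiers.
SENSOR_FEED_METADATA = {
    "source_fields": ["ts", "sensor_id", "temp_f"],
    "renames": {"temp_f": "temperature"},
    "casts": {"ts": float, "temperature": float},
}

def transform(row, meta):
    """Apply a metadata-driven select/rename/cast step to one input row."""
    out = {meta["renames"].get(k, k): v
           for k, v in row.items() if k in meta["source_fields"]}
    for field, cast in meta["casts"].items():
        if field in out:
            out[field] = cast(out[field])
    return out

print(transform({"ts": "1460624520", "sensor_id": "a1", "temp_f": "68.4"},
                SENSOR_FEED_METADATA))
# {'ts': 1460624520.0, 'sensor_id': 'a1', 'temperature': 68.4}
```

Onboarding a new source then means writing a new metadata block, not a new transformation.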

Pentaho suggests that the complexity of this process is what has been holding back banks (and other firms) from being able to integrate and analyse diverse and high numbers of data sources (especially unstructured data).

Data onboarding

The firm’s metadata injection feature set is meant to streamline so-called “data onboarding”, i.e. the process through which we get data into databases for analysis and then, logically, into the big data analytics pipeline.

According to Pentaho, “Modern big data onboarding is more than just data loading or movement. It includes managing a changing array of data sources, capturing metadata, making processes repeatable at scale and ensuring control and governance. These challenges are compounded in big data environments like Hadoop.”

In Pentaho 6.1, data-centric developers and others now have a wider array of options for dynamically passing metadata to Pentaho Data Integration at run time to control complex transformation logic.

Data ingestion & preparation

Teams can now drive hundreds of data ingestion and preparation processes through a few transformations, so accelerating time to delivery of governed, analytics-ready data sets.

NOTE: Typically, data onboarding is a highly repetitive, manual and risk-prone process that creates a bottleneck in the data pipeline.

In addition to the new features in 6.1, Pentaho has also introduced a new self-service data onboarding blueprint. This architected process is meant to allow business users to onboard a variety of data themselves — without IT assistance — streamlining the data ingestion process.

“In this latest release, Pentaho streamlines the hand-offs between the different stages of the analytic data pipeline, including onboarding, engineering, preparing, and analysing data,” claims Donna Prlich, senior vice president of product marketing and solutions, Pentaho, a Hitachi Group Company.

Pentaho says that 6.1 also adds enhancements to its data integration and analytics platform to help data pipelines accommodate greater volume, variety and complexity of data.


April 11, 2016  9:16 AM

Chatty Puppets on Atlassian HipChat

Adrian Bridgwater
Atlassian, Puppet

Automation-centric open source configuration management tool company Puppet Labs is integrating with Atlassian HipChat.


HipChat is a team communications platform that provides ‘persistent’ one-on-one chat, group chat, video chat, file sharing and integrations.

Atlassian has tried to coin the term ‘ChatOps’ and is emphasising it based upon its collaboration model and the HipChat tool.

The spin is as follows:


“ChatOps is a collaboration model that connects people, tools, process and automation into a transparent workflow. This flow connects the work needed, the work happening and the work done in a persistent location staffed by the people, bots and related tools. The transparency tightens the feedback loop, improves information sharing, and enhances team collaboration. Not to mention team culture and cross-training.”

“The hallmarks of DevOps and modern software delivery are automation, culture and collaboration,” claims Nigel Kersten, CIO of Puppet.

This new integration makes it possible to direct change using the Puppet Orchestrator, see change as it occurs, then discuss changes in real time as a team.
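In miniature, the ChatOps pattern reduces to a bot that parses chat messages and dispatches recognised commands to automation. This Python sketch is purely illustrative: run_puppet_job is a stand-in for a real Puppet Orchestrator call, and a real bot would post the result back into the HipChat room:

```python
def run_puppet_job(environment: str) -> str:
    # Stand-in for triggering the Puppet Orchestrator.
    return "deployment to %s queued" % environment

COMMANDS = {"deploy": run_puppet_job}

def on_chat_message(message: str) -> str:
    """Dispatch '@bot <command> <arg>' chat messages to automation."""
    parts = message.split()
    if len(parts) >= 3 and parts[0] == "@bot" and parts[1] in COMMANDS:
        return COMMANDS[parts[1]](parts[2])
    return "unrecognised command"

print(on_chat_message("@bot deploy production"))  # deployment to production queued
```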

“Through HipChat Connect, customers can benefit from Puppet’s new integration, which includes the latest direct-change orchestration capabilities to continuously monitor all deployments directly within HipChat,” said Joe Lopez, head of HipChat engineering.

A preview of the Puppet Enterprise integration with HipChat will be available in the coming weeks.


April 8, 2016  12:58 PM

Software ‘developer fatigue’, it’s now a thing

Adrian Bridgwater
Uncategorized

This is a guest post for the Computer Weekly Open Source Insider blog written by Wayne Citrin in his role as CTO of JNBridge.

JNBridge’s core message is as follows: it’s not Java OR Microsoft .NET, it’s Java AND .NET — working on this premise, JNBridge has become a supplier of Java/.NET interoperability tools for software developers.

Citrin writes as follows…

Developer fatigue


Software developers have historically embraced diversity in programming languages, databases, operating systems and other technologies.

After all, increased complexity is no reason to ignore the best tool for a job and developers have always had great appetites for new technologies.

But this comes at a price: developer fatigue, a by-product of having to learn and adjust to new languages and technologies, while also trying to remain productive.

Not a new idea

Developer fatigue is nothing new, but it’s getting much worse for several reasons. First, full-stack developers — developers with specialised knowledge in all stages of software development — are naturally in great demand.

As you can imagine, achieving and maintaining this distinction is quite a mental feat, given the level of proficiency he or she must attain in any number of areas.

Additionally, developers must contend with new projects created by open source and new hardware platforms (like mobile) that host new software. With all these emerging technologies, it’s no wonder developers are struggling.

How to combat the problem

One way to alleviate developer fatigue is to make the most of the languages and technologies that developers already know and then gradually introduce new ones.

Many new languages, platforms and software technologies are rooted in .NET or in Java. If both the familiar and the new technologies are based on the same platform, you’re already ahead of the game: integration is likely straightforward, as it’s already designed into the technology. But if not, there’s still hope.

Spin and sell the sizzle

Citrin understandably (almost allowably) goes for the jugular and attempts to validate the reason for his firm talking about this subject by saying, “Interoperability tools like JNBridgePro can bridge Java and .NET technologies, providing welcome relief from developer fatigue.”


April 7, 2016  9:02 AM

US Federal Source Code Policy: embrace more open source to save taxpayer dollars

Adrian Bridgwater
Uncategorized

The United States White House and the federal government have already been widely reported to have adopted a degree of open source software, tools and platforms — but this trend is now officially set to increase.


A newly released paper entitled “Federal Source Code Policy — Achieving Efficiency, Transparency and Innovation through Reusable and Open Source Software” has called for US governmental agencies to take a more proactive and positive approach to open sourcing and sharing their code from this summer.

Chiding words

The paper chides current policy and actions by saying that even when [government] agencies are in a position to make their code available on a government-wide basis, they do not routinely make their source code discoverable and usable to other agencies in a consistent manner.

“These shortcomings can result in duplicative acquisitions for the same code and inefficient spending of taxpayer dollars. This policy seeks to address these challenges by laying out steps to help ensure that new custom-developed federal source code be made broadly available for reuse across the Federal Government,” reads the paper.

The US administration says it will launch Project Open Source, an online repository of tools, best practices and schemas to help implement this guidance.


The efforts here will manifest themselves on the project-open-source.cio.gov/ portal when it goes live.

The complete paper is viewable at this link.



