The challenges of designing software that supports international business are many, manifold, multiplex and… crucially, multilingual.
NetSuite is aiming to facilitate ‘multilingual’ multi-channel ecommerce with a new version of its OneWorld business management suite.
The software aims to power “any” business model in a world where any means B2B, B2C and B2Anything.
The new version of OneWorld supports tax reporting in 100 countries.
There is configurable tax compliance, support for 20 languages and 190 currencies, and multi-subsidiary management.
The firm says that this software has capabilities for global businesses which have enabled NetSuite customers to transact in more than 200 countries and dependent territories around the world.
“Businesses seeking to enter new markets, manage mergers, acquisitions and divestitures and fast-growing companies looking to expand globally often find themselves held back by software systems siloed by department, geography or legal entity structures,” said the company, in a press statement.
This software is intended to bridge the gap between traditional ERP and external tax engines — as such, it provides access to the key data required to satisfy those tax authorities that are increasingly adopting standards-based eGovernment and audit methodologies.
According to a news report, “NetSuite OneWorld’s global tax is a key capability for supporting omni-channel, omni-country business operations, taking into account where businesses are shipping from and where they are shipping to in order to calculate the correct tax for the correct jurisdiction for a given transaction.”
Clearly there is a challenge here when we look at the complexity faced by developers when assembling data for international ecommerce apps — and understanding of regulations, integration to national tax systems APIs, understanding local accounting conventions etc.
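To make the ship-from/ship-to idea concrete, here is a minimal Python sketch of jurisdiction-based tax calculation. The rate table, region codes and function name are all hypothetical assumptions for illustration; a real tax engine resolves far finer-grained jurisdictions, product categories and rule effective dates.

```python
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical flat rate table keyed by (ship_from, ship_to).
# Real engines resolve much finer jurisdictions (state, county, city).
RATES = {
    ("DE", "DE"): Decimal("0.19"),     # German domestic VAT
    ("DE", "FR"): Decimal("0.20"),     # destination-country VAT
    ("US-CA", "US-CA"): Decimal("0.0725"),
}

def calculate_tax(ship_from: str, ship_to: str, net_amount: Decimal) -> Decimal:
    """Look up the rate for the (origin, destination) pair and round to cents."""
    rate = RATES.get((ship_from, ship_to))
    if rate is None:
        raise KeyError(f"no rate configured for {ship_from} -> {ship_to}")
    return (net_amount * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(calculate_tax("DE", "FR", Decimal("100.00")))  # 20.00
```

Even in this toy form, the point stands: the correct rate depends on the transaction's origin and destination pair, not on either endpoint alone.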
Okay so omni-lingual is not really a term as such, but it kind of works in this context.
Editorial disclosure: Adrian Bridgwater has worked on NetSuite blogs at its annual SuiteWorld conference and exhibition.
When we say Aruba, we now say Aruba, a Hewlett Packard Enterprise company. The networking solutions firm’s corporate status may have changed, but the brand appears to have made it through intact and the firm has now started to be more vocal about its product and service releases.
In the product aisle, there’s Aruba Mobile Engagement, which includes the introduction of the ‘industry’s first’ cloud-based beacon management solution designed for multivendor Wi-Fi networks and beacon analytics.
In the ‘initiative’ division — Aruba says it has expanded its app developer partner programme for the Meridian Mobile App Platform.
But what kind of development is this? — this is software engineering to accelerate development around location-based mobile apps.
On the road to Disneyworld
If you have passed through Orlando International Airport, you may have been a user of Aruba Mobile Engagement powered by Aruba Beacons and the Meridian Mobile App platform.
It’s all about interacting with users via their mobile devices based on the customers’ in-venue location and their personalised preferences.
According to a press statement, “As the Aruba Mobile Engagement solution grows in popularity, deployments are growing larger in scale, increasing IT management complexity and challenges. The new Aruba Sensor is designed to dramatically reduce this IT overhead, making it easy to manage all beacons from a single location. Aruba estimates approximately 48 hours of time savings in a 1,000 beacon deployment during a single maintenance window.”
38 million Mickey & Donald fans
Orlando International Airport (MCO) which hosts nearly 38 million travellers annually, implemented Aruba’s Mobile Engagement solution in late 2014 and has since seen over 26,000 downloads of its MCO mobile app.
“The 1200+ Aruba Beacons deployed throughout our terminals have allowed us to provide travellers with indoor navigation to airline check-in, gates, baggage claim and hundreds of other locations including elevators and restrooms,” said John Newsome, director of information technology for Greater Orlando Airport Authority.
“Our mobile app not only provides navigation and important airport and flight-related information, it also helps drive sales for our concessionaires and retailers by providing both their location as well as links to their own websites for more in-depth information on their offerings such as menus for our restaurants.”
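As a rough illustration of how a beacon-driven app resolves in-venue location, here is a toy Python sketch: the app maps the strongest beacon signal it can hear to a named zone. The beacon identifiers, zone names and RSSI readings are invented; Aruba’s Meridian platform works at a much higher level than this.

```python
# Toy sketch of beacon-based indoor positioning: each beacon advertises an
# identifier, and the app estimates the user's zone from the strongest signal.
# All names and signal values below are made up for illustration.
BEACONS = {
    "gate-12": "Concourse B",
    "baggage-3": "Baggage Claim",
    "checkin-A": "Airline Check-in",
}

def locate(readings: dict) -> str:
    """Return the zone of the beacon with the strongest (least negative) RSSI."""
    nearest = max(readings, key=readings.get)
    return BEACONS.get(nearest, "Unknown zone")

print(locate({"gate-12": -80, "baggage-3": -55, "checkin-A": -90}))  # Baggage Claim
```

Indoor navigation then reduces to repeatedly re-running this kind of lookup as the user walks and the strongest beacon changes.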
The firm says that Aruba Meridian is designed to power an unlimited number of location-based applications — and so the only barriers to entry are the creativity of mobile app development partners and a strong partnership with IT.
To attempt to remove these obstacles and accelerate the development of new mobile apps, Aruba’s partner program is intended to allow both Independent Software Vendors (ISV) and Custom App Development Agencies (CADA) to use the Meridian Mobile App Platform quickly.
Nutanix has been described as lots of things.
The firm has been called an upstart hyper-converged infrastructure vendor that gets its kicks taking a swing at virtualisation giant VMware with its own hypervisor offering.
The firm has been called a storage vendor that produces a hyper-converged storage system and is now focused on its Acropolis Hypervisor (AHV), which it claims to be the ‘next generation’ of hypervisor.
The firm has also occasionally been mistaken for a new form of health food supplement (‘keep your gut bacteria happy with new Nutanix’) — but that’s just a misguided minority, so let’s move on fast.
What does Nutanix do?
As we have stated before on the CWDN blog, the firm has developed a hyperconverged solution intended to simplify the creation of enterprise datacentre infrastructures by integrating server and storage resources into a turnkey platform.
As independent trainer and consultant Sander Van Vugt put it, “Nutanix with Acropolis offers a nice option to deliver applications in an easy way, through a complete and open stack.”
But Van Vugt argues that Nutanix may be more appealing to firms that don’t have any virtualisation capabilities or software-defined networking approach.
His beef is that, “Most companies already have an infrastructure in place and are looking for something that easily integrates with their existing setup — not something that’s going to offer just an alternative.”
But that view may be outdated… let’s remember that the firm used its .NEXT event in Miami this summer to launch the free-of-charge hypervisor, named Acropolis, a move quite definitely aimed at taking a swing at virtualisation environments from VMware and Microsoft.
Nutanix today states that Acropolis reflects the intersection of web-scale engineering and consumer-grade design, i.e. every Nutanix node includes the Nutanix Prism management interface.
We’re ‘so’ over legacy hypervisors
So here’s the thing… virtualisation has moved so fast that we’re now talking about “legacy hypervisors”… already.
Legacy hypervisors were designed for a world of monolithic non VM-aware storage arrays and switch fabrics and were built to accommodate thousands of combinations of servers, NICs and drivers.
According to Nutanix, those clunky old legacy hypervisors require multi-pathing policies and complex designs to mitigate issues such as storage congestion and application resource contention.
Acceptable performance in the legacy hypervisors world often requires silos such as segregating VDI from server workloads — I mean, can you just imagine?
The firm boldly asserts, “Nutanix’s Acropolis Hypervisor (AHV) was built from the ground up to provide a much simpler and more scalable hypervisor and associated management platform by leveraging the software intelligence of the hyperconverged architecture. AHV changes the core building block of the virtualised datacenter from hypervisor to application and liberates virtualisation from the domain of specialists – making it simple and easily manageable by anyone from DevOps teams to DBAs.”
7 reasons listicle
By way of an early Chrimble pressie to those who want to gorge more deeply on why Nutanix thinks legacy hypervisors are so last season, the firm has produced a ‘listicle’ (list-article, get it?) to cover off why it thinks Acropolis is so progressive.
4. Analytics (data driven management)
5. Support for the whole virtualisation stack
7. Economic benefits
So now you know, don’t get caught wearing last season’s legacy hypervisor out in public.
Wipro’s Kumudha Sridharan dropped the Computer Weekly Developer Network blog a line this week and insisted she had a point to make.
You say Bengaluru…
The Bengaluru (we used to say Bangalore) based technologist works for India’s biggest services consultancy and IT outsourcing company.
Sridharan argues that while machine learning and analytics have taken major strides, both have received relatively little attention in the QA (Quality Assurance) function, yet the two have the ability to inject intelligence dynamically.
“The case for using cognitive computing in QA is rock solid. QA can fix the data (sources, types, extraction, sample size, labels etc.) and cognitive systems can continue to use the data to train the system and continuously improve quality levels,” she said.
Why is this useful?
Because, says Sridharan, it can be used to pro-actively monitor the health of an application.
“Using cognitive computing, the health of an application can be pro-actively monitored by a variety of bots. The bots observe patterns in the data, check on trends and then use algorithms and models to predict the impact of an application on related infrastructure, along with the allied risks and the vulnerabilities,” she said.
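A heavily simplified Python sketch of that pattern-watching idea: flag latency samples that deviate sharply from their recent trailing window. The window size, threshold and sample data are illustrative assumptions of mine, not anything Wipro has described.

```python
from statistics import mean, stdev

def flag_anomalies(latencies_ms: list, window: int = 5, z: float = 3.0) -> list:
    """Flag indices whose latency deviates more than z standard deviations
    from the trailing window; a crude stand-in for the pattern-spotting
    'bots' described above."""
    flagged = []
    for i in range(window, len(latencies_ms)):
        history = latencies_ms[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma and abs(latencies_ms[i] - mu) > z * sigma:
            flagged.append(i)
    return flagged

samples = [100, 102, 99, 101, 100, 103, 480, 101]
print(flag_anomalies(samples))  # [6] -- the 480 ms spike
```

The cognitive-computing versions Sridharan describes would learn the model from data rather than hard-coding a threshold, but the monitoring loop is the same shape: observe, compare against learned patterns, raise a flag.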
We can apply some of the same ideas here when we look deeper at software application testing.
According to Sridharan, today’s statistical techniques that are applied to optimise testing, by reducing the number of test cases and eliminating redundancies, tend to become inadequate, especially when changes to applications are frequent.
“Manual intervention becomes problematic and poses a major challenge to QA. Cognitive Computing, that uses continuous learning systems, can be applied to dynamic, risk-based testing, solving the problem,” she said.
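One hedged sketch of what ‘dynamic, risk-based testing’ could look like in Python: rank test cases by historical failure rate, boosted when the module they cover has just changed. The test names, history data and weighting scheme are all invented for illustration.

```python
# Sketch of learning-informed, risk-based test selection: run the riskiest
# tests first. Names, history and weights below are hypothetical.
def prioritise(tests: dict, changed_modules: set) -> list:
    def risk(name: str) -> float:
        t = tests[name]
        fail_rate = t["failures"] / max(t["runs"], 1)
        # Double the weight of tests covering freshly changed code.
        change_boost = 2.0 if t["module"] in changed_modules else 1.0
        return fail_rate * change_boost
    return sorted(tests, key=risk, reverse=True)

history = {
    "test_login":    {"runs": 50, "failures": 10, "module": "auth"},
    "test_checkout": {"runs": 50, "failures": 8,  "module": "payments"},
    "test_search":   {"runs": 50, "failures": 1,  "module": "search"},
}
print(prioritise(history, changed_modules={"payments"}))
```

A learning system would re-estimate these risk scores after every run, which is exactly the continuous-improvement loop Sridharan describes; static, hand-tuned rules are what she argues become inadequate.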
Sridharan sums up
The following commentary is attributed directly to Wipro’s Kumudha Sridharan.
“Defect management is time-consuming. It takes enormous effort in terms of daily calls, meetings, communication and an exchange of updates between teams to accurately identify, isolate and fix defects — typical ones being tickets raised by users for IT applications. Applying learning-based systems, which look for patterns in the past and leverage them, is the equivalent of making ticketing systems intelligent.
“Enable self-healing: Cognitive Computing can be used to identify situations where self-healing processes can be developed and applied. This would eliminate a huge amount of effort that current QA systems necessarily entail.”
PayPal and Braintree have today announced the £65,593.13 winner of their 2015 BattleHack Series.
… some Essex connection?
Ah yes, sorry — Braintree provides “payment processing” options for devices.
Braintree’s full-stack payment platform provides businesses with the ability to accept payments online or within their mobile application.
Essentially, it replaces the traditional model of sourcing a payment gateway and merchant account from different providers.
BattleHack World Finals
The 24-hour BattleHack World Finals took place at PayPal HQ in Silicon Valley and hosted 14 teams of developers from across the globe, all winners of their regional BattleHack competitions.
Competitors were tasked with building an application that incorporates the PayPal, Braintree or Venmo APIs, encouraging hacks that include an element of social good.
The US$100,000 prize (see exchange rate above) was awarded to the team from Venice, Italy.
Team Venice’s winning hack, called ifCar, tapped both hardware and software to make cars smarter using a combination of sensors, environmental and contextual data and user preferences.
“We’re thrilled with what Team Venice built. Their technology has the potential to democratise access to the technical features of high-end cars, making it possible to turn any car into a smart car,” said John Lunn, senior director of developer and startup relations at Braintree.
Picture credit: TopSpeed
“Informatica launches industry’s first integrated platform for big data management” said the press release headline in what is, arguably, something of an overstatement all round.
There are of course several types, layers, breeds, sizes and species of big data management platforms out there — and almost all of them are fairly ‘integrated’ in one form or another.
Where Informatica is going with this spin is its own-brand ‘Big Data Management’ offering — a set of software intelligence intended to offer big data integration, big data quality and governance and big data security in a single integrated solution.
Which, as a combined set of big data ‘things’ is, arguably, more rounded than some.
The new product claims to reduce the need for hand-coding and for big data skill sets that are expensive and hard to come by.
“Data is the lifeblood of business, and only Informatica does end-to-end data management for big data,” said Anil Chakravarthy, acting chief executive officer, Informatica.
“Big data represents the next frontier of competitive differentiation, superior customer experiences and business innovation. From driving rapid project implementations to ensuring confidence in the data and the safety of sensitive information, Informatica Big Data Management empowers business and IT leadership with unparalleled automation, pre-built tools and optimised capabilities. This allows for quick experimentation and seamless, mission-critical production deployments that deliver maximum business value from big data.”
As reported here on Computer Weekly, just a few days after AWS set out its plans to open a UK datacentre, Microsoft announced a move to support the delivery of its commercial cloud services at its Future Decoded event in London on 10 November 2015.
What does this mean for developers?
In simple terms this does at least mean that there are more on-the-ground, cloud-developer related resources for UK-based (or perhaps even UK-centric) programmers to connect with.
Microsoft CEO Satya Nadella presented part of the keynote at this year’s Future Decoded event with the following points of note:
• Nadella emphasised that this latest datacentre expansion effort centres around Microsoft’s desire to build and provide the breadth of cloud infrastructure and resources needed for programmers in the UK to drive truly cloud-native applications.
• Nadella used his time on stage to present actual hands-on demo materials.
• Nadella talked about how (with that whole cloud centricity factor in mind) Windows 10 will now represent the first stage of the firm moving to provide what we could call an Operating System-as-a-Service.
• Nadella wore a suit – well, we had to mention it.
“At Microsoft, our mission is to empower every person and organisation on the planet to achieve more,” said Satya Nadella, chief executive officer of Microsoft.
“By expanding our data centre regions in the UK, Netherlands and Ireland we aim to give local businesses and organisations of all sizes the transformative technology they need to seize new global growth,” he added.
As one of the largest cloud operators in the world, Microsoft insists that it has invested more than $15 billion in building a resilient cloud infrastructure and cloud services that deliver high availability and security while lowering overall costs.
There are now 24 Azure regions around the world.
Where there’s cloud, there’s IoT
Where there’s cloud, there’s always the Internet of Things — so Microsoft showcased some of the ‘coolest’ IoT technology around with its robot bartenders. The machines mixed attendee cocktails to order with very little spillage.
Shaken, not stirred, but definitely ‘positively disrupted’ as they would say.
Rackspace has announced the free beta offering of ‘Carina by Rackspace’ — an instant-on native container environment.
Did you say Ai No Corrida?
No… it’s Carina.
This is a technology offering that focuses on portability i.e. allowing a customer to create and deploy a cluster for their containerised applications faster (claims Rackspace) than they could do it themselves.
Carina is a container cluster service that ‘leverages’ bare-metal performance, Docker Engine, native container tooling and Docker Swarm for orchestration to make container clusters accessible to everyone.
Why is portability so important to developers?
As recently explained here, “Containers are means of transporting software (in a reliable state) from one computing environment to another.”
The developer factor
In practice this could mean software that has to move from a development team’s server into a test environment or from a staging environment into ‘live production’ i.e. full deployment — containers could also be used in the journey from a physical machine to a virtual machine in a private or public cloud.
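That ‘transport’ idea is easiest to see in a minimal, illustrative Dockerfile: the image carries the runtime, dependencies and code together, so the same artifact runs unchanged on a laptop, a test server or a cloud VM. The base image choice and file names here are assumptions, not anything Rackspace prescribes.

```dockerfile
# Minimal illustrative Dockerfile: everything the app needs travels with it.
FROM python:3.11-slim
WORKDIR /app
# Install dependencies first so this layer is cached between code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Then copy the application code itself.
COPY . .
CMD ["python", "app.py"]
```

Build the image once, and the dev-to-test-to-production journey described above becomes a matter of moving one immutable artifact rather than reinstalling an environment at each stage.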
Rackspace reminds us that container technology consumes a fraction of the compute resources of typical virtual machines, allowing for near-instant availability, application scaling and increased application density, allowing customers to save time and money.
“At Rackspace, our mission is to give customers industry-leading service and expertise on the world’s leading technologies. Carina extends this mission as part of our strategy to support OpenStack’s position as a leading choice for enterprise clouds,” said Scott Crenshaw, SVP, strategy and product at Rackspace.
“Carina design makes containers fast, simple and accessible to developers using native container interfaces, while leveraging the infrastructure capabilities of OpenStack,” added Crenshaw.
With Carina, developers get a ‘zero infrastructure’ container environment where Rackspace manages the infrastructure and Docker environment for customers.
IBM tells us it likes developers, but don’t be fooled — everyone has been saying that since a certain bald-headed CEO started bouncing around the stage screaming the word.
But down at the guts level, we know IBM’s intentions are pure enough i.e. the firm has spent years now validating its work with the Rational brand and has a solid developer stream running through almost every perceptible aspect of its entire stack from its Z-systems hardware beasts upwards to its Watson cognitive computing ‘decision engine’ platform.
Stu-Stu-Studio Line (not from L’oreal)
The firm has this week announced further expansion of the ‘IBM Studios’ across Europe with the opening of new facilities in Dublin and Hursley (UK).
By the end of 2015, new IBM Studios in Europe will also open in Warsaw, Prague, Hamburg and Paris, adding to IBM’s more than 20 Studios around the world.
But what are IBM Studios?
These places are meant to play host to what IBM calls ‘multi-disciplinary teams’ and that means:
- coders and
- other industry experts.
… all of whom come together to develop products and digital marketing services around cloud, analytics, Watson and collaboration.
Experts from IBM Design and IBM Interactive Experience will work together at these places.
“People’s expectations of enterprise tech have changed because of innovative design they see in devices and apps used at work and play,” said Phil Gilbert, general manager, IBM Design. “These studios will join a global network that is transforming how tech is created with user experience at its core.”
Focus, fire a design-gun
Each new IBM Studio will have a core focus…
… so what this means is that Dublin and Hursley will focus on designing IBM products and user experiences around Watson, security, collaboration and Internet of Things.
The Hamburg and Paris Studios will focus on mobile and web application and digital transformation projects.
“This crop of new Studios in Europe reinforce IBM’s continuing commitment to great design and innovation,” said Matt Candy, Vice President & European Leader, IBM Interactive Experience. “IBM has been at the forefront of design-led thinking for decades and is now busy building the biggest design team in the world. With these six new openings (and more to come next year) we’ll continue to break old models and create a new way to work.”
IBM Interactive Experience’s 9,700 designers, developers and consultants work with IBM clients to create data-driven design for everything from virtual showrooms and immersive customer experiences to business apps, content and more.
DevOps (as the coming together of both the ‘developer’ and IT ‘operations’ functions) has been unfortunately propelled upwards by the force of the technology trigger and driven onwards towards the peak of inflated expectations (to coin a phrase from Gartner).
As Gartner would now warn us… the so-called trough of disillusionment is the next logical stage.
So is DevOps heading for a fall?
The problem with DevOps is that tangential ancillary IT vendors have sought to nail their worthy-in-their-own-right technologies to the DevOps mast as a tactic to:
• Hype the PR cycle for their own brand
• Follow current tech trends
• Get ‘developer-centric’, coz that’s always good
• Some other less than valid spin-related reason
This level of insubstantial peddling has led us to a natural level of apprehension when we hear about DevOps today.
JFrog is a firm that (by most people’s yardstick) does what we can arguably call real DevOps i.e. its Mission Control product exists to accelerate software delivery with monitoring and management functions over all binary artifact repositories.
Binary artifact repositories
NOTE: By way of definition — binary artifact repositories (and binary artifact repository managers) are elements of software designed to manage, look after the version control of and ultimately store binary ‘artifacts’ i.e. those parts of software (models, use cases, diagrams) that describe and denote the functions (and architectural form) of software.
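To pin the definition down, here is a toy Python model of what a binary artifact repository manager does at its core: store artifacts under (name, version) coordinates with a checksum so builds can fetch a reproducible, verifiable binary. The class and artifact names are invented; real products such as Artifactory add access control, replication and per-format layouts on top.

```python
import hashlib

class ArtifactRepository:
    """Toy artifact store: coordinates map to (payload, checksum) pairs."""

    def __init__(self):
        self._store = {}

    def publish(self, name: str, version: str, payload: bytes) -> str:
        """Store an immutable artifact and return its SHA-256 digest."""
        digest = hashlib.sha256(payload).hexdigest()
        self._store[(name, version)] = (payload, digest)
        return digest

    def fetch(self, name: str, version: str) -> bytes:
        """Return the artifact, verifying integrity before handing it to a build."""
        payload, digest = self._store[(name, version)]
        assert hashlib.sha256(payload).hexdigest() == digest
        return payload

repo = ArtifactRepository()
repo.publish("libpayments", "1.2.0", b"...binary bytes...")
print(len(repo.fetch("libpayments", "1.2.0")))
```

Scale this up to thousands of binaries across several datacentres and the monitoring and replication headaches Mission Control targets start to become visible.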
Tough road to cloud scale
JFrog claims that it has “discovered” a common set of issues that tends to bog down software development and DevOps teams as they scale up to thousands of developers and engineers in multiple teams leveraging multiple datacentres around the world.
These issues include:
• maintaining a clear real-time inventory of binary artifact repositories;
• managing binary artifact workflows among multiple global teams;
• locking down security, user entitlement, permissions and provisioning policies;
• and ensuring highly reliable storage of and access to artifacts.
With the thousands of binaries that often go into a software release and the explosion in binary artifact types, monitoring and managing each binary repository separately has become a huge challenge.
JFrog says that Mission Control saves time and effort with a unified dashboard-view and centralised control of binary artifact repositories.
JFrog CEO Shlomi Ben Haim says that, “JFrog’s vision is to fill a critical need by providing an executive dashboard providing transparency into their global software development organisations based on the reality of builds, releases, distribution and consumption of software packages.”
JFrog Mission Control is a downloadable product, offered free of charge for users of JFrog Artifactory — a universal artifact repository that manages all binary artifacts regardless of the programming language or technology used to create them.
“It combines high availability, a secure Docker registry, npm repository and support for Maven, Gradle, Nuget, Yum, PyPI and other technologies,” said Ben Haim.
He continues, “JFrog’s Bintray gives developers and organisations full control over how they store, publish, download, promote and distribute software with advanced features that fully automate the software distribution process.”
A DevOps spin test
Next time you look at DevOps news, look for functions like quantifiable tasks metrics, call stack analysis technology or binary artifact repository control… the rest of it might be trying to spin you round.