Mitaka is not only the latest release of the OpenStack cloud infrastructure service; it’s also a city in Japan.
In a webinar Thursday detailing the 13th version of the open source platform, Brad Topol, IBM engineer and a member of OpenStack’s worldwide project team, explained that OpenStack release names are typically related to the cities the planning conferences are held in. Releases come out every six months, ahead of the semiannual meetups.
The conference before the current release was in Tokyo, in October. Mitaka is in the Japanese capital’s metro area, 8.6 miles from the city proper. Fans of Hayao Miyazaki, director of Spirited Away and other animation fantasies, may know the city as home to the Ghibli Museum, which displays the work of film company Studio Ghibli.
“It’s always a fun exercise trying to get everyone excited and look up the places and things that are nearby,” said Davanum Srinivas, a software engineer at Mirantis, which develops and supports OpenStack. In the webinar, Srinivas ticked off technical details of the Mitaka release, which has a wealth of new features designed to make the cloud software easier to install and manage.
Names are suggested by OpenStack community members, who help plan and design new versions of the cloud infrastructure, then voted on and given legal clearance, Topol said.
OpenStack release names are alphabetical, so they started with Austin, named for the location of the first conference, in Texas’ capital, in 2010. They went on in 2011 to Bexar, the county where the second conference city, San Antonio, is situated, and in fall 2015 made it all the way to Mitaka. The next release will be called Newton. That’s the name of a historical home in Austin, where the OpenStack gathering was held a second time, in late April.
The OpenStack folks may have done better to name the next version, due in October, Navasota. The town, 115 miles from Austin, has an exotic-sounding name — and it’s home to the star of the übermensch Internet meme, actor Chuck Norris.
I hope he’s not offended.
The newest release of the OpenStack cloud infrastructure is designed to be easier to install, easier to use and easier to manage.
That could be big news for CIOs. The cloud platform is delivering flexibility and processing power at lower cost to big-name companies such as AT&T and eBay. But because it calls for heavy installation, maintenance and development support, OpenStack has come to be known almost as much for its DIY-style complexity as for its innovative potential.
OpenStack Mitaka, the 13th release from the OpenStack Foundation, came out in April. Brad Topol, an engineer at IBM and “core contributor” to the free, open source software, gave an overview of new features in Mitaka in a Cloud Standards Customer Council webinar Thursday.
OpenStack “controls a large pool of compute, storage and networking resources throughout a data center,” Topol said. Everything is managed through a dashboard, eliminating the need to separately order up an application server and a database and configure networking to build a Web application — a process that could take more than half a year.
It’s essentially an open source version of public cloud offerings such as Microsoft Azure and Amazon Web Services, so “no vendor lock-in” is a big part of the pitch. Besides its public cloud option, OpenStack is also designed to let organizations with the right resources build their own private clouds.
New in OpenStack Mitaka
Releases come out every six months, each building on the last with tweaks from a worldwide circle of OpenStack members and developers. OpenStack Mitaka comes with “lots of growth, lots of intensity,” Topol said.
The OpenStack Client is the centerpiece of the latest version. It’s a tool that lets users manage all of the operating system’s components — not just core computation, networking and storage but also “advanced services” such as data processing and workflow management. Subprojects — the term OpenStack uses for all its services — were difficult to manage, especially when the software first came out, back in 2010.
“Each subproject had its own little command-line tool, which all worked slightly different, all used slightly different syntax, would drive our operators nuts,” Topol said.
Services will also be easier to set up in this release, he said, including Nova, the computational engine, and Keystone, the identity management service. Neutron, OpenStack’s software-defined-networking function, lets users build a network, attach a server and assign an IP address — all in one step.
(Get used to the catchy names for the components — Sahara, Tempest, Cinder — they run up and down OpenStack.)
The release also aims to improve scalability, Topol said. That means big, complex applications are easier to launch using Heat, the orchestration component, which is also designed to quickly maintain and update the resources apps need, such as database and Web servers, networks and attached storage.
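Heat drives those launches from a declarative template. As a minimal sketch — the image, flavor and network names here are placeholders, not values shipped with the release — a Heat Orchestration Template (HOT) that spins up a single server looks something like this:

```yaml
heat_template_version: 2016-04-08   # the HOT version introduced with Mitaka

description: Launch one Web server on an existing network.

resources:
  web_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-14.04          # placeholder image name
      flavor: m1.small             # placeholder flavor
      networks:
        - network: private-net     # placeholder network
```

Handing a file like this to Heat (for example, `openstack stack create -t server.yaml mystack` with the unified client) creates the server, and a later change to the template updates the running stack in place.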
Jacked-up computational and security components also perform better in big applications in Mitaka, Topol said.
A turning point?
The infusion of user-friendliness happens at a key time in OpenStack’s young life. The cloud operating system grew from a joint initiative by NASA and cloud computing company Rackspace into an open source project that has spanned the globe, with nearly 180 countries, 589 companies — Cisco, Dell and VMware among them — and tens of thousands of people contributing to its planning and design, Topol said.
But people aren’t just adding to it; they’re also using it. European research lab CERN, for example, smashes particles together to learn about the universe and in the process generates reams and reams of data that’s then fed into OpenStack, Topol said. Retailers such as Walmart are using it for e-commerce — expanding on its elastic infrastructure in times of intense demand — say, Black Friday.
And telecoms AT&T, Swisscom and South Korea’s SK Telecom have hooked up to OpenStack for its network function virtualization, which takes network services away from proprietary hardware and puts them on virtual machines.
At the most recent OpenStack Summit, held in Austin, Texas, Donna Scott, an analyst at market research outfit Gartner who was once less than enthusiastic about OpenStack, recommended the platform for businesses with cloud data applications. And Forrester Research in a September brief called OpenStack “a credible platform on which to grow.”
In a video, the OpenStack Foundation’s Chris Hoge said the new Mitaka release was influenced by the need for simplicity, consistency and transparency in interface design. Those same principles have put Apple at the top of the tech heap and catapulted apps like Box.com and Dropbox into the mainstream. Make it easy. What an idea.
The Problem: The National Blood Authority is a statutory agency that provides blood products to healthcare facilities in Australia. Australia’s geography makes blood delivery challenging: The country is comparable to the continental U.S. in size and has remote areas hundreds of miles from the coastal population centers. Maintaining adequate blood supplies is a life saver when it can take a couple of days to transport blood to some regions. The authority’s staff, however, wasn’t able to access blood data when working remotely. As a result, personnel would spend days or weeks preparing data before leaving the office. The data was often out of date by the time it was used.
The Technology: The authority decided to upgrade its IT, deploying virtual desktop infrastructure (VDI) on hyper-converged appliances from Nutanix. The VDI environment, which replaces a storage-area network, runs blood management and patient registry systems. VDI provides secure remote access, so authority staff can log into the agency’s systems when they are working outside the office at remote clinics or other locales. Staff members can obtain up-to-date data “on the spot, in real time,” noted Peter O’Halloran, the authority’s CIO. That real-time access means agency personnel are better prepared to help healthcare facilities optimize blood inventory levels.
The Results: More timely data in the field has helped the authority avoid blood wastage costs to the tune of about $10 million per year. The revamped infrastructure, meanwhile, also saves 34 minutes per week on log-in times and reduces the time spent on pre-trip data and document preparation. The savings contributed to a payback period within the first five months of installation. “The availability of real-time, remote access to the information — as enabled by VDI technology — and the productivity improvements delivered by VDI” are the primary reasons “we delivered the wastage reduction and enhanced efficiencies,” O’Halloran said.
Last week I wrote about the 13% fall in Apple revenue after 13 years of growth, surveying opinions on whether the news says something about Apple — and the product taking the blame for the slide, the iPhone — or about the market as a whole.
John-David Lovelock, analyst for market researcher Gartner, said the market for smartphones is saturated. People have their devices, whether Apple or Android, and for now they’re holding off on buying replacements. When they do decide to replace them, Apple will return to revenue growth.
There are real signs of a smartphone market slowdown: Market researcher IDC declared sales largely flat year on year, while a study by another outfit, Strategy Analytics, shows shipments have fallen 3%, from 345 million units to 335 million.
Tech sales lag
But the Apple revenue slump also plays into the sluggish-technology-market narrative of 2016, Lovelock said. Gartner predicted in early April that IT spending would contract 0.5% from 2015 to $3.49 trillion. That owes partly to the trend toward digital business models. Organizations are going into “cost-cutting mode” to fund them, putting money toward cloud-based services, which have lower upfront costs.
“And of course discretionary spend on things like mobile phones, PCs, tablets, storage arrays are the things we’re seeing suffering first,” Lovelock said.
CIOs are, of course, already hip to this. They’ve been moving away from supplying phones for employees to the model known as bring your own device, or BYOD, for some time. That’s good in this market, Lovelock said.
“This is a great opportunity for CIOs to continue that move — cost optimization means that they’re going to push BYOD and extending lifecycles more.”
Calls for bigger, better, newer
Yet the Internet rings with expectations for more innovation from Apple.
“Apple has had a fine, long run, but changes are constant, competition is everywhere and consumers are fickle,” wrote reader Norman C. Burns, who goes by ncberns on TechTarget’s community forum IT Knowledge Exchange, where this post can be found.
“Everyone has already bought their phone. Since evolution is far less interesting than revolution, Apple needs the next game-changer.”
One of the questions Stuart Madnick will ask of a panel of CIOs at the upcoming MIT Sloan CIO Symposium is whom the company’s CISO should report to. Madnick, a professor of information technologies at MIT Sloan, is interested in the organizational and managerial factors that give rise to cyber break-ins, including the role CISOs and CIOs play in security.
MIT Sloan research shows that while CISO reporting structures “are all over the place,” with security officers reporting to CIOs, CFOs, chief risk officers and directly to the CEO, one trend seems firmly fixed: more board interest in cybersecurity.
“I’ll give you a quote I had from a CISO recently. He said that in the previous 10 years, he had met with his company’s board of directors once. In the past year, he’s had three briefings with the board,” Madnick said. “We’re actually seeing in a few cases where the CISO reports directly to the board.”
MIT Sloan research: TJX Cos.
The fact that boards are focusing on cybersecurity roles and relationships is a positive sign. Madnick, who is also the director of the MIT Interdisciplinary Consortium for Improving Critical Infrastructure Cybersecurity (IC)3, believes that companies — and federal government security programs — pay too little attention to the organizational structures and incentives that make companies vulnerable to cyber attacks.
“I’ll give you just one quick example,” Madnick said. “We did a detailed analysis of the TJX break-in, which in 2005 was the largest credit card break-in on record.” His group compared its analysis with analysis coming out of the FTC and other investigations and “found all kinds of issues in the organization that had not been covered.”
“There was an email from the CIO of TJX to his staff. And the email said something to the effect that, ‘We are currently not PCI [Payment Card Industry Data Security Standard] compliant. It will take quite a bit of effort and cost to do so. This is now November. We’re entering into our Christmas rush. This has been a tough year financially. Don’t you all think it would be fine if we deferred becoming PCI-compliant until next year?'” Madnick recounted, referring to an email sent by then-CIO Paul Butka in 2005.
“This is called an email where the answer is embedded in the question. It may shock you to realize that almost no one on the staff saw any problem with doing that,” Madnick said.
Disclaimer: The information in this blog post is for general-information purposes only. Any reliance you place on such information is strictly at your own risk.
Did you just wish I were wherever you are so you could sock me? Or perhaps you covered your ears and yelled, “Nah-nah-nah-nah-nah!”
I can’t blame you. Legal disclaimers aren’t fun to read: They’re typically solid bricks of gray text, and the sentences are stuffed with so many legal abstractions that it’s hard to connect subject and predicate.
Foley, an attorney, was at the recent Fusion 2016 CEO-CIO Symposium in Madison, Wis., to talk to business and technology leaders about the legal questions raised by the network of connected devices known as the Internet of Things: Who owns and controls the data? Who’s responsible for the security of customer information? What happens if the code in a device hooked up to the Internet is defective and harms someone?
Organizations don’t want to go to court to find out the answers, so they have a lot to think about before plugging into this emerging technology, including the use of time-tested tools.
Disclaimers set boundaries around the rights that parties, specifically your customers, can exercise to take you to court. Lawyers, of course, know how to use them. At Foley’s talk, an audience member said his company has a disclaimer on a map application for mobile devices. He wanted to know how effective disclaimers are. Foley said, “Can I begin with a disclaimer? I’m not your lawyer.”
The audience chuckled, and then listened for the real answer. Legal disclaimers are “important from a legal perspective to protect yourself,” Foley said. But — and it’s a big but — they have little effect on their main audience: customers.
“Because they don’t, or they don’t care to, absorb it, or they don’t understand it, or they’ve seen it so many times that it goes right past. It’s unconscious to them now,” he said.
Ironclad? No. Necessary as businesses increasingly turn to digital business models? Yes.
Perhaps echoing the legal uncertainty in an uncharted technology terrain like the Internet of Things, Foley put an open question to the audience.
“Has anyone successfully sued an apps services company — Google or iPhone — for driving somebody off a cliff?”
The answer that came back to him was, “I haven’t seen anybody succeed.”
Not yet, anyway.
Who says working in an IT department can’t be like vacationing on a cruise ship?
Along with ridding the office of seven-foot-high cubicles and assigned desks, one of the experimental policies Michael McKiernan, vice president of business technology at Citrix Systems Inc., introduced during a workplace redesign was beach toweling.
“It’s similar to a policy you see at a hotel or on a cruise line,” he said at the Fusion CEO-CIO Symposium in March. But it’s not exactly a vacation policy you’re likely to write home about. On most cruise ships, guests who leave towels or books behind in an attempt to reserve a deck chair are given a time limit to return before those items are removed and the chair is made available to another guest.
The same goes for Citrix employees who work in offices where the beach-toweling policy is in effect: If employees leave a desk unoccupied for more than two hours, they are to take everything with them. Otherwise, “you’re taking that resource out of the common pool so that it can’t be leveraged by others,” McKiernan said.
Beach-toweling police: 120-minute egg timers
As with the major cruise lines, enforcement measures also needed to be introduced for the policy to work. On a cruise ship, reserved deck chairs are sometimes tagged by a cruise ship employee; if a guest doesn’t come back within the allotted time, the items are removed. At Citrix, McKiernan introduced 120-minute egg timers. Employees can grab one, wind it up and place it on a desk to signal when someone’s not following the beach-toweling rule.
“It’s not punitive in terms of [we’re going to] take your stuff and throw it in the garbage,” McKiernan said. “But it’s a carrot and stick. We use a little bit of shame with people.” Plus, it’s a way of introducing beach toweling to workers who aren’t steeped in the Citrix culture, such as third-party contractors.
Will beach toweling stick? Only time will tell. At Citrix, McKiernan has taken an almost Agile approach to introducing new workplace redesign measures, so that a policy like beach toweling is often referred to as a prototype and not a finished product. That leaves the door open to tweak and change the policy to reflect the office culture. “We’ve had many different failures,” he said. But learning from those failures, admitting when policies don’t work and changing them so that they do is an important part of the redesign process, he said.
Plus, McKiernan said, what works in California may not work in, say, France or Germany. An iterative approach allows for workplace redesign policies to remain flexible.
The CIO-CFO relationship, as noted here over the years, has a built-in tension. As the senior executives responsible for company finances, CFOs must keep a close eye on expenses, especially those that are large and promise no short-term payback, as is often the case with IT investments. For CIOs charged with using IT as a strategic force, the CFO’s focus on cost and ROI can seem shortsighted, or worse, like a brake on the company’s ability to compete. In this guest post, Mike Sheldon, president and CEO of Curvature, an IT infrastructure and services provider headquartered in Santa Barbara, Calif., offers his perspective on why a strong CIO-CFO partnership is so important now and lays out five ways to build a working relationship that will serve the business well.
Five tips for forging better CIO-CFO partnerships
by Mike Sheldon
CFOs are teaming more with CIOs, according to a recent EY survey on the CIO-CFO relationship. More than 60% of the nearly 700 financial leaders surveyed said they’ve been collaborating more with their CIOs in the last three years, while more than 70% also reported having greater involvement in the IT agenda. As companies continue to transform their businesses to meet an ever-changing digital economy, it’s crucial to nurture strong CIO-CFO relationships. Here are five tips for how CIOs can forge more mutually beneficial IT-finance partnerships.
1. Speak the same language.
Typically, CIOs don’t understand finance while CFOs don’t understand technology. Sure, that’s painting the CIO-CFO portrait in broad brushstrokes, but this is one of the biggest barriers to getting CIOs and CFOs on the same page. Both IT and finance need to develop a greater understanding and deeper appreciation for the pressures they face individually and collectively. Technology is changing and growing faster than ever, so it’s nearly impossible for CIOs to know every nuance and tech breakthrough. Likewise, CFOs face more intense scrutiny than ever to forecast wisely and budget judiciously. Sharing challenges — and gaining insight into each other’s worlds — is a great way to form a meaningful collaboration.
2. Use the tools of the trade.
Traditional CIO-CFO relationships are based on the CFO coming up with a budgetary number for the technology spend and CIOs then doing the best/most with what they are given. But what if IT took a page from finance and built a five-year technology roadmap using the tools CFOs use in their financial planning & analysis (FP&A)? Finance has the methodology and FP&A tools to bring substantial insight and discipline to strategic technology planning. This goes beyond equipment refresh and upgrade plans to development of full lifecycle management strategies that can lead to major savings for IT and finance.
3. Capex and Opex decisions should be “we” not “me” issues.
Determining capital expenses (Capex) and operating expenses (Opex) is probably where the CIO gets closest to the CFO. There are plenty of strategies for dealing with these expenditures, and these are best addressed from a “we” — and not “me” — perspective. Some companies want to effectively eliminate Capex altogether by embracing managed services and infrastructure-as-a-service options. In other organizations the exact opposite is true, because they have cash for technology investments but are striving to reduce ongoing operating expenses. The best answer may lie in between, and it’s up to the CIO and CFO together to craft the most appropriate strategy.
4. Learn how to negotiate like a finance pro.
In most organizations, finance handles leases, capital purchases and all procurements except IT. That can be unfortunate for the tech team, because no one typically negotiates a better deal than a finance person. IT staffers rarely have a background in real estate or negotiations and so underestimate the negotiating it takes to cut costs dramatically. CIOs should turn to their finance counterparts to learn the tricks of the trade. The partnership may also shine a light on current IT buying practices that could be hurting your ability to get competitive pricing, such as relying on a single vendor or value-added reseller, or refreshing your technology infrastructure on OEM timetables rather than your own. There are many opportunities to reevaluate IT options and negotiate a better deal for your business.
5. Get more creative about saving money.
Too often, the CIO is focused on spending every penny of the IT budget instead of looking to cut costs. IT should be encouraged, compensated and rewarded for devising creative solutions to its procurement challenges. There tends to be little incentive for the CIO to bring opportunities to the table to save money or defer spending in keeping with changing priorities. CIOs play a critical role in explaining which technology investments will help the business survive and thrive while CFOs are invaluable in identifying opportunities to reduce spending. My favorite quote of all time goes something like this: “People go crazy together, but they get sane one by one.” The sanity will come to companies one at a time, as CIOs and CFOs team up and start asking — and answering — the tough questions together.
About the author:
Mike Sheldon is president and CEO of Curvature, an IT infrastructure and services provider based in Santa Barbara, Calif. He joined the company in 2001 as vice president of sales and was named CEO in 2006. Under his leadership, Curvature continues to post record revenues and now employs more than 650 people worldwide. Sheldon attended MIT, where he studied philosophy and game theory.
Microsoft’s announcement of a partnership last week with a group of big banks that includes Citigroup and Wells Fargo to do experiments on blockchain technology must have bewildered at least a few people.
Some may have wondered what on earth blockchain is. Others may have puzzled over why banks that normally compete for business are working together on anything.
They’re not unreasonable things to think. Let’s start with the emerging technology that forms the basis of the digital currency bitcoin. A blockchain database is distributed among a network of computers instead of being centralized on a server cluster. Built on top of it is a shared ledger; every change to the ledger is cryptographically secured and replicated to all the computers that are part of the blockchain.
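The hash-linked structure behind that security guarantee is easy to sketch. The toy below is purely illustrative — not any bank’s implementation — but it shows the core idea: each block stores the hash of the block before it, so tampering with any entry in the ledger breaks the chain from that point forward.

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    """Append a block that commits to the previous block's hash."""
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain),
                  "prev_hash": prev_hash,
                  "transactions": transactions})

def is_valid(chain):
    """A tampered block breaks every later prev_hash link."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, [])                                 # genesis block
add_block(chain, [{"from": "A", "to": "B", "amount": 10}])
add_block(chain, [{"from": "B", "to": "C", "amount": 4}])
print(is_valid(chain))                               # True

chain[1]["transactions"][0]["amount"] = 1000         # tamper with the ledger
print(is_valid(chain))                               # False
```

Because every node holds a copy of the chain and can rerun this check, no central clearinghouse is needed to catch the altered transaction.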
That’s useful in industries like financial services, which often rely on a central clearinghouse to verify transactions. A blockchain would eliminate that middleman, slashing administration time and costs. Banks are looking at the technology to see what else it can be used for — eyeing new ways of handling stocks, derivatives and loans. But they can’t do that kind of testing alone — hence partnerships like the one between Microsoft and startup R3, which leads a consortium of more than 40 banks.
Together, said Martha Bennett, an analyst at Forrester Research, they can work through a range of issues, including industry standards. To join a blockchain network, organizations need to agree on the technology stack and protocol to put to use.
“There is no competitive advantage for a bank or anybody else in trying to do blockchain on your own — unless you can somehow convince everybody to adopt your standards, and how likely is that?” Bennett said. “So it does require industry collaboration.”
It also helps to pull together resources to try out a complex new technology and see what works and how that might scale outside of the research-and-development labs.
Microsoft’s end of the bargain is to lend tools and its cloud service Azure for the banks to do their testing. In return, it hopes to run their in-production blockchains when they’re done.
The R3 group isn’t the only gang looking to develop uses and standards for blockchain database technology. There is also the open source Hyperledger Project, led by the Linux Foundation. It is looking at ways of doing other kinds of business transactions, too, not just financial ones, so its ranks include multinationals like IBM and Hitachi alongside banks BNY Mellon and State Street. That group overlaps with a growing number of blockchain startups and even with R3.
That’s a good thing, Bennett said, because the contributions each of the players brings with it can feed into the larger development of the technology.
“It is very much an ecosystem play,” she said.
Should a blade of grass move when we nudge it? If it doesn’t, should we assume we’re dreaming? Or in some alternate reality? “I would think I might be in The Matrix,” said Michael Facemire in a recent webinar presentation on the importance of mobile performance.
Facemire, principal analyst for application development and delivery professionals at Forrester Research Inc., believes mobile devices, by virtue of their touchability, have fundamentally changed customer expectations about technology performance. Just as when we touch a blade of grass we expect something to happen immediately, so too with apps and websites accessed through mobile devices. If these digital artifacts don’t respond immediately, we flee. Facemire cited stats from Google and others showing a majority of smartphone users will abandon a “touch activity” after just 2 seconds of inaction.
In a marketplace where transactions are increasingly digital and executed via smartphone, Facemire argues that building high-performance mobile experiences (the title of his Akamai Technologies-sponsored webinar) is paramount to keeping customers and promoting brand loyalty.
The problem is that many companies — and IT organizations, in particular — have not adapted their software development processes to this new reality, said Facemire, a developer and computer scientist by training.
Mobile performance low on developer totem pole
“Speaking on behalf of a lot of developers, when it comes to performance, this is generally not the first thing we think of when presented with a problem,” he said. The challenges that keep development teams up at night are figuring out how to build the software, what components and tools are needed, and what the user interface (UI) should look like, he said.
“Performance is one of those things that you just check a day or two before you ship code.” Indeed, in the traditional waterfall development method, performance review was one of the last stages, he recalled, “right up there with making sure that the legal paperwork had been signed.”
Yet, as Google’s and other companies’ data show, “performance is as important as, if not more important than, the user interface” in ensuring a great user experience, he said.
So what do IT organizations need to do to ensure high-quality mobile performance?
High-performance mobile experiences: ‘Full-stack game’
The first step is to stop making the UI the scapegoat for low-quality mobile experiences, Facemire said. Performance is a “full-stack game,” with the delivery layer, API layer and network connections all playing a part, he said.

Content being delivered from a back-end content management system to the front end has to be transformed so that it fits appropriately on the device screen. The API layer has to do its bit: During peak mobile access times — Black Friday for retailers and end-of-quarter for travel and expense companies are two instances — it’s essential that database administrators are not in the middle of some task (for example, indexing the database) that will compromise users’ access to the data they want (a retailer’s product list, an employee’s expense account).

Network performance is context-dependent: A 4G connection for customers at a football stadium with 59,000 fans can’t be counted on for high-quality mobile performance. So, ideally, data should be cached as close as possible to the device. But, unlike caching for the Web, caching for mobile is “an area as an industry that we are still trying to figure out,” Facemire said.
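The caching idea can be sketched with a toy time-to-live (TTL) cache — a deliberate simplification of the edge-caching layers the industry is still working out, with made-up names throughout: a value fetched from the back end is reused until it expires, sparing the device a slow round trip on every tap.

```python
import time

class TTLCache:
    """Minimal time-to-live cache: entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}   # key -> (value, expiry timestamp)

    def get(self, key, fetch):
        """Return a cached value, calling fetch() only on a miss or expiry."""
        value, expires = self._store.get(key, (None, 0.0))
        now = time.monotonic()
        if now < expires:
            return value                     # cache hit: no back-end round trip
        value = fetch()                      # miss: go back to the origin
        self._store[key] = (value, now + self.ttl)
        return value

# Hypothetical back end: count how often we actually hit it.
calls = 0
def load_product_list():
    global calls
    calls += 1
    return ["widget", "gadget"]

cache = TTLCache(ttl_seconds=60)
cache.get("products", load_product_list)
cache.get("products", load_product_list)     # served from cache
print(calls)                                 # 1
```

The same trade-off applies at any layer of the stack: a longer TTL means fewer trips to an overloaded origin during a Black Friday spike, at the cost of slightly staler data.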
Speed and performance
Adding to the problem of delivering high-quality mobile performance is the tremendous pressure developers are under to deliver new material. A development process that once upon a time took 12 to 18 months now happens in two to four months and is rapidly becoming a “zero-day event,” Facemire said.
The good news is that developers are catching up to demand. When Forrester recently asked enterprise developers how fast their teams released applications, nearly a third (32%) said they’re releasing applications monthly or faster. The bad news is that to meet that timetable, teams take shortcuts.
“Unfortunately, a lot of folks simply cut off the back-end part of it,” Facemire said. Only 23% of developers and professionals surveyed by Forrester said they incorporate performance or load testing tools in their software development lifecycle, and 15% of them use these tools less than monthly. That’s asking for mobile performance issues — and customer dissatisfaction.
“Quality has to be a part of everything you do from Day 1 — not at the end,” Facemire said. “We need to have testing and we need to ensure our mobile experiences have the enterprise quality customers have come to expect — but to do it more quickly.”