Scouring the Internet reveals a number of heated debates about devices like the Pebble smartwatch. Is it useful? Useless? Will wearables become the way of the future – or an embarrassing fad of the past?
“Alpha WatchBench allows all developers – not just iOS developers – to experiment with how Apple Watch can fit into their business processes and determine its best use cases for improved productivity,” said Bricklin. “This is a powerful tool that will give companies a head start on exploiting opportunities opened by Apple Watch.”
But why wearables? Will devices like smartwatches actually lend themselves to more efficient business? The sentiment is certainly there, it seems.
“Apple Watch creates an entirely new paradigm and category of hands-free, ‘glance-able’ apps that will create huge opportunities for enterprises to increase efficiencies,” said Ben Bajarin, Principal Industry Analyst, Creative Strategies, Inc.
“As organizations look to quickly roll out Apple Watch apps to their employees, developers will seek fast solutions for prototyping this new breed of app.”
Bricklin says he believes in this new paradigm and that these devices are plainly useful for business, giving organizations across a wide range of industries the chance to ramp up productivity.
“People are so enamored by the fact that it’s stylish that they forget it’s useful,” he says. Bricklin believes that in industries where the flow of information is increasingly critical, the ability to access information on something you wear has the potential to help businesses save a significant amount on their operational costs.
“Every extra minute a plane is on the ground costs money,” he says, citing an aviation-based example. The point is that any amount of time that can be saved – even if it’s just the time it takes an employee to reach into their pocket and pull out their smartphone – can have a significant impact financially.
So how has the use of WatchBench played out in the real world? In a few places, actually – including the NHL. Developer David Kates used WatchBench to create a simple app – NHL Stats – which pulls JSON data about the Stanley Cup Finals from Amazon S3, delivering real-time scores and other game information to fans on their wearable devices. And while this might seem like a big undertaking, it took Kates less than two hours to build the application – with absolutely no prior experience creating a smartwatch app.
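The core of an app like NHL Stats is small: fetch a JSON document from a public S3 URL and reduce it to a glanceable line of text. A minimal sketch in Python, with a hypothetical URL and field names (the actual WatchBench app and its data layout are not published here):

```python
import json
from urllib.request import urlopen

# Hypothetical public S3 object holding the latest game data as JSON.
SCORES_URL = "https://example-bucket.s3.amazonaws.com/stanley-cup/latest.json"

def summarize_game(game):
    """Reduce a game record to a one-line, glanceable summary."""
    return "{away} {away_score} @ {home} {home_score} ({period})".format(**game)

def fetch_latest_summary(url=SCORES_URL):
    """Pull the JSON document and summarize it (network access required)."""
    with urlopen(url) as resp:
        return summarize_game(json.load(resp))

# Offline demonstration with a sample record:
sample = {"away": "CHI", "away_score": 2,
          "home": "TB", "home_score": 1, "period": "3rd"}
print(summarize_game(sample))  # CHI 2 @ TB 1 (3rd)
```

The watch-side code only has to render the returned string, which is what makes a two-hour build plausible.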
And as far as Bricklin is concerned, he is happy just to have the platform available to developers. In a field that he feels suffers from consumer-level fickleness, he hopes this platform can help spur a new wave of usefulness in the arena of wearable technology.
“People are so enamored by the fact that it’s stylish that they forget it’s useful,” he says, insisting that the value is there, no matter how vertical the application may be. And he is happy to have provided a tool developers can use to prototype this new generation of useful devices.
“Prototyping is something I’ve done all my life,” says Bricklin. “Whatever developers do with it is gravy.”
Perhaps Bricklin has saved us from the fiery, seemingly never-ending debate over the significance – or lack thereof – of wearables.
Last week, the creators of Alpha Anywhere hosted a user meetup to talk about mobile application development – specifically, tablet applications for use in the field and workplace. One of the major pain points discussed was the issue of moving data from user to user across systems in real time in order to achieve the fastest business results.
While some may refer to this as the “latency” problem, Britt Whitaker, implementation consultant at Manufacturers ERP Services LLC, notes that it’s worthwhile to distinguish between technology-based latency issues and “workflow-related” ones. He claims that within his own organization, even when APIs were reconstructed to optimize the transfer of data, “it doesn’t really solve the problem: the data got in, but it wasn’t reacted to in real time.”
According to Whitaker, this issue is directly tied to how one has constructed their “workflow link” – the method by which applications alert users that new data has arrived on their tablet or that data they’ve sent has been received. Not only does IT have to worry about how long it takes that data to reach the next user, but also how quickly that data is processed by the next user in line.
So how did Whitaker and his team tackle the issue? Ultimately, they decided not to build workflow directly into an interactive tablet application, but instead to construct unique workflow links for individual types of data and applications. To illustrate the point, he offered an example: if one specific user – let’s call her “Britta” – is not in the office, another user, “Alan,” acts as her proxy.
“Those are things that, frankly, you don’t want to code into the application itself because people and their roles change – which is, again, a workflow issue.”
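The proxy rule above can be sketched as a workflow link that lives in plain data outside the application, so a personnel change is a configuration edit rather than a code change. This is an illustrative sketch, not Whitaker's actual implementation; the names and data shapes are hypothetical:

```python
# Routing rules live in data, outside the tablet app itself, so changing
# who covers for whom never requires redeploying the application.

proxies = {"Britta": "Alan"}     # who covers when someone is out
out_of_office = {"Britta"}       # users currently unavailable

def resolve_recipient(user, proxies, out_of_office):
    """Follow the proxy chain until reaching someone who is available."""
    seen = set()
    while user in out_of_office and user in proxies and user not in seen:
        seen.add(user)           # guard against circular proxy rules
        user = proxies[user]
    return user

print(resolve_recipient("Britta", proxies, out_of_office))  # Alan
print(resolve_recipient("Alan", proxies, out_of_office))    # Alan
```

The design choice is the point Whitaker makes: the routing logic is generic and stable, while the people-specific rules stay editable outside the code.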
The point is, while your APIs may be designed to streamline the flow of data, IT will always have to consider the “people” factor. The solutions to these issues may not always be obvious, especially when people-based processes and roles often change at a rate much faster than IT can appropriately amend their “hardwired” applications.
“Companies should develop tablet apps to eliminate the delay in collecting field data compared to using paper forms, but that isn’t a complete solution,” says Adam Green of 140dev. “We have to change the systems – the ‘people systems.’ Just getting the data there is not enough.”
Have you struggled with latency issues related to “people systems” within your own organization? If so, how did you deal with it, or how do you plan to deal with it? Let us know with your comments.
Something old, business process management, has been made new again by the addition of a lowercase i. This three-part guide shows how and why adding cloud services and analytics to BPM will make it intelligent BPM, a must-have technology for many businesses. Why? iBPM promises to smooth business process workflow in hybrid computing settings where business logic and data run across many platforms.
Veteran IT reporters Christine Parizo and George Lawton explore the evolution of iBPM and its many uses for businesses. Parizo explains why iBPM is poised for widespread adoption and describes how it’s already being used. Lawton looks into iBPM’s backstory and many vendors’ plans for iBPM products.
Intelligent BPM adds data analytics and workflow collaboration capabilities to BPM’s traditional process workflow management system, say IT veterans in Parizo’s article on what is driving iBPM adoption. By bringing together various forms of intelligence capabilities, iBPM facilitates easier business process design, monitoring, analysis, execution and optimization.
As iBPM, BPM rises from the hype ashes, where it resided with SOA for a spell, say industry watchers in George Lawton’s article on the emergence of new cloud iBPM products. As recently as last year, IT pundits proclaimed that both SOA and BPM would be killed off by cloud. Instead, both technologies are viable today because they leverage services in business, on-premises, cloud and hybrid cloud environments to support business needs.
Finally, i buyers beware. That lowercase i is a popular way to hype a technology enhancement, but the i can stand for many things. The fad kicked into warp drive with Apple’s iMac, or Internet Mac, and has continued to thrive even though Internet connectivity is ubiquitous across devices today. The little i stands for intelligent in iBPM, for integration in iPaaS or IBM’s iSeries, for invisible in iMail and so on. The fad won’t stop, even though iWant it to.
IoT is the latest thing in embedded systems technology, and it’s not a fad. IT research firm Gartner Inc. projects installation of 26 billion IoT units by 2020, resulting in $1.8 trillion in revenue. IoT is not a consumer-only technology, either. Enterprise architects should plan now for the entrance of connected device technologies into their IT environments, according to experts quoted in SearchSOA’s feature, “The enterprise IoT wave rolls in: How to prepare”. Those who hesitate will soon be behind the curve. “It’s not a matter of where IoT is entering but where isn’t IoT going to push into the enterprise,” said Mike Walker, an analyst at Gartner, in this article.
In recent articles, SearchSOA contributors have provided a wealth of security tips and tricks for IoT development. This post links to those pieces, putting experts’ and users’ advice at your fingertips.
Developing IoT applications will require rethinking quality standards for embedded software. For example, a developer told me recently that enterprise executives knowingly release flawed software daily, weighing the cost of defects against the cost of delaying release. That luxury isn’t an option for embedded and IoT releases, where failures and security breaches can put user safety at risk.
Failure is much less acceptable when it comes to embedded software. Unfortunately, people expect software and networks to fail or get hacked. Their expectations for mechanical devices are much higher, however. “You shouldn’t have to worry about a blue screen of death on your toaster oven,” said Carnegie Mellon University’s Philip Koopman, in the lead story of SearchSOA’s three-article handbook, “Embedded software, IOT development demand careful scrutiny”.
Likewise, people shouldn’t have to worry about a hacker remotely shutting off their car’s ignition system. Billions of connected devices present a big target, so we’ve gathered advice on creating hack-resistant IoT approaches in this article, “How to cook up the right IoT security strategy for your enterprise”. A huge challenge in beating hackers is that IoT devices bypass firewalls and create ongoing connections to third-party services, reported senior IT security consultant Mark Stanislav of Rapid7. In this tip, he and other experts give advice on how to create outside-the-firewall security strategies.
Our sister site, SearchCloudApps, digs into the cloud side of IoT development. Asked how IoT will alter developers’ application strategies, resident expert Chris Moyer pointed to integration as a challenge. A sample of Moyer’s advice here is: “If you create APIs and integrate with popular IoT integration services like IFTTT, you’ll be better able to take advantage of all the devices in a user’s life.”
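Integrating with a service like IFTTT typically means exposing a small API endpoint that accepts a JSON payload and turns it into an action on a device. A minimal, hypothetical sketch of such a handler (the payload fields are invented for illustration; real integration services define their own formats):

```python
import json

def handle_trigger(raw_body):
    """Turn an incoming webhook payload into a command for a device."""
    event = json.loads(raw_body)
    device = event.get("device", "unknown")
    action = event.get("action", "noop")
    # In a real service this would be queued for the device to pick up.
    return {"device": device, "command": action, "status": "queued"}

body = '{"device": "thermostat", "action": "set_away"}'
print(handle_trigger(body))
# {'device': 'thermostat', 'command': 'set_away', 'status': 'queued'}
```

Wrapping a handler like this behind an HTTP framework is what lets third-party automation services reach "all the devices in a user's life," as Moyer puts it.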
Watch SearchCloudApps for updates on cloud services and tools for IoT development and deployment, too. A recent report describes a new mobile and IoT application development toolset from Embarcadero, called RAD Studio XE8.
What do you need to know about IoT? Let us know. Our resident experts will address your questions.
Has DevOps adoption hit the mainstream? Well, IT analyst firm Gartner Inc. predicts that 66% of enterprises will be using DevOps tools and practices in the cloud – as will 25% of Global 2000 companies – by 2016. Gartner also expects sales of DevOps tools to reach $2.3 billion this year. That doesn’t look like a niche market.
First posited in 2009, the concept entails merging organizations’ software development, QA and IT operations groups. Cloud computing is driving the need for DevOps adoption by enabling faster development and deployment of applications and complicating what was once a simple housebound IT environment, according to Gartner’s and other reports such as 451 Research’s 2014 study.
Need drives the decision to find a solution, but evaluation leads to the purchase. To help IT and Ops managers make DevOps decisions, TechTarget’s app dev sites have published a slew of expert advice articles on that subject. Here are just a few:
- Cloud consultant Tom Nolle explores how cloud management tools are both influencing and incorporating traditional DevOps features in “The evolution of DevOps in the cloud”. In this advice article, he surveys some current cloud DevOps tools, such as Puppet, and explains the benefits of using frameworks and specifications. In the latter category, he recommends evaluating TOSCA (Topology and Orchestration Specification for Cloud Applications), an open standard which facilitates describing complex application structures.
- Let’s not forget mobile development. The SearchSOA article “Integration tools that bridge the mobile DevOps gap” examines how enterprises are using mobile DevOps integration tools and services, such as PagerDuty, BigPanda and VictorOps. Here, you’ll find out how eHarmony used PagerDuty to streamline IaaS alerts.
- SearchAWS offers a handbook on DevOps called “Survive and thrive in cloud DevOps”. Even though DevOps adoption is on the rise, managers of siloed IT, operations and business departments may be hard to sell on the concept. This collection includes:
- Advice on pitching DevOps to department managers in the article “Gaining acceptance for DevOps in the cloud”.
- Various approaches for using AWS OpsWorks, a DevOps automation and management toolset, to enhance application security in “Using OpsWorks’ configuration automation”. Here, expert Dan Sullivan examines the toolset’s uses for improving app security and reducing app policy and procedure missteps. The key is using OpsWorks’ standardized configuration options. “Automating configuration and update operations with OpsWorks can eliminate inconsistencies in application policies and procedures,” he writes.
- An explanation of why and how to implement security operations management in AWS by contributor George Lawton. The SecOps approach calls for continuous threat testing and monitoring in a secure software development lifecycle practice. The article describes several tools that automate those functions in AWS environments and shares steps for removing vulnerabilities during the application design phase.
Stay tuned – there are more DevOps tips to come! Meanwhile, let us know if there are DevOps challenges you’d like addressed by SearchSOA’s resident experts.
A photograph from a Mars Rover may be breathtaking, but it will not deliver the complex data space scientists seek. Scientists like Washington University-St. Louis computer systems manager Thomas Stein need broader sets of data in formats that work with modern data analysis software. Stein helped create Analyst’s Notebook, a tool that documents geological findings from space missions and organizes that data in an online offering accessible to scientists and the public.
Look at the data coming from just one instrument on, say, the Mars Rover Opportunity. Some scientists are focused on a certain type of data from that one instrument, some on others. Meanwhile, said Stein, the general science community may want broader data from that instrument to do research in other disciplines. In addition, many scientists are doing cross-instrument and cross-mission searches and correlations to study a variety of topics.
“Today’s scientists cannot simply convert an image to a .JPG and use it, because you lose so much of the science quality of the data,” said Stein, who works in the University’s Department of Earth and Planetary Sciences. Analyst’s Notebook helped enable replay and archiving mission images and data, but that information must still be archived in formats accessible to scientists using many different software applications and devices.
Stein’s group works with NASA (National Aeronautics and Space Administration) to archive planetary data for the long term – as in the next 50 to 100 years. “We wanted to develop a value-added tool that helps scientists bind this data in a meaningful way,” said Stein. “By giving them data previews, we’d help them understand what they’re getting before they actually hit the download button.”
Developing software for geological studies of space rocks wasn’t Stein’s intention when he got an after-college job in the Smithsonian Institution’s Mineral Sciences Department. Yet, it was there that he was asked to develop software for a traveling exhibit on volcanoes. The success of the three applications he delivered led to more projects for the Smithsonian.
After these geological software projects succeeded, Washington University contacted Stein about programming software for scientists studying “space rock” data from the Giant Magellan Telescope. The immediate problem Stein addressed was a flaw in the way scientists were doing field-testing. “Nobody was taking notes about the decision-making process,” he said. “After a week of field-tests, they realized, ‘Hey, we don’t even remember why we decided to look at this rock instead of that rock.’”
Of the many challenges for building scientific applications, two in particular really perplexed Stein and the NASA team: unpredictability of data from Rovers and feature glut.
For an orbital mission, an obvious objective is to map the planet systematically, but the Rovers don’t make this process easy, because they, well, rove. “Scientists often don’t know where the rover will drive and what it’s going to find,” said Stein. Another goal is determining the characteristics of natural objects, such as rocks. The scientists need to know where and in what context, which is hard to tell from an image. To deal with this problem, the development team used Microsoft Image Composite Editor, which was built on Microsoft SQL Server. The Editor can be used to create images that aggregate the surroundings of a finding into a contextual mosaic image.
The feature glut issue comes from the length of today’s space missions. “Keeping up with what our users need over 10-15 years is unbelievably hard,” Stein said. “Think of how different the expectations of software users were 15 years ago – nobody asked for one-click ordering online.”
The development team, focused on Opportunity and other NASA Rovers, sought an automated development platform that set up the back end so they could devote more time to building value-added tools specific to planetary data coming from Rovers. “We shouldn’t be building basic code, laboring over documentation and doing cross-platform testing,” he said. Telerik Platform, a cross-platform development suite, was chosen to help the software teams focus on high-level challenges and bypass earlier phases of software development.
A web-based application running on the Microsoft ASP.NET platform, Telerik Platform provides a user interface (UI) that NASA uses for framework controls. In addition, Telerik’s automated test and quality assurance tools reduce the time needed to build a feature. An example is a documentation feature Stein’s team built that enables rapid online searches. “Documentation becomes very difficult when doing rapid application development and dealing with such huge sets of data,” he said. Telerik’s toolset helped him build a feature that enables a user looking for images of a certain target to find them quickly online “at the push of a button, instead of the user having to do literature searches.”
Being able to react quickly to user needs is a necessity today, one that automated test and development platforms make possible. “In reality, I’m still not a computer scientist, I’m a geologist,” he said. “A foundation development tool really helps me not worry so much about the computer science side and focus on the science side.”
Any organization NOT doing an application modernization project this year is in the minority. Over 70% of businesses worldwide are modernizing their application environments to handle mobile, cloud and other emerging digital platforms in 2015, according to a recent survey.
Not only are most businesses engaging in application modernization (also called legacy application modernization) efforts, at least 61% consider it a strategic asset to drive business forward and a tool to help maintain market position and even survival. This is according to a recent survey of CIOs by app modernization vendor CSC.
Application modernization projects will be a higher priority for businesses in Europe (80%) and Asia (73%) in 2015, largely because the technology refresh cycles have been slower in those areas than in North America, where 55% will invest this year.
A challenge in tallying application modernization projects is that they can be a part of larger projects, such as cloud modernization and portfolio modernization, both of which focus largely on app modernization. In some minds, the latter sounds old school, identified only with mainframe-to-server migrations, which are not irrelevant — as Moorcroft Debt Recovery Group’s recent project affirms.
Of course, the rapid rise in business use of cloud services is driving many businesses worldwide to modernize their app portfolios; that and the fact that their competitors are doing the same. Likewise, the rise in workers’ use of enterprise apps on mobile devices is exceeding IT’s ability to support either.
This cloud rush encourages a “fools rush in” modernization mentality. Sober but savvy enterprise architects approach modernization from more than the cloud-apps angle. By examining their app portfolios and on-premises IT infrastructure, they consider which applications should be re-platformed to hybrid or public cloud models. Which applications should be replaced by software as a service? Which strategic applications should be rewritten to take advantage of microservices architecture and platform as a service?
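The triage questions above can be captured in a simple decision table. This is a toy sketch with made-up attributes and rules, not a real assessment methodology, but it shows why a systematic pass over the portfolio beats app-by-app improvisation:

```python
# Toy portfolio-triage sketch; attributes and rules are illustrative only.

def triage(app):
    """Suggest a modernization path from a few coarse attributes."""
    if app.get("commodity"):                      # undifferentiated capability
        return "replace with SaaS"
    if app.get("strategic") and app.get("needs_rewrite"):
        return "rewrite on PaaS / microservices"
    if app.get("cloud_ready"):
        return "re-platform to hybrid or public cloud"
    return "retain on-premises for now"

portfolio = [
    {"name": "payroll", "commodity": True},
    {"name": "pricing-engine", "strategic": True, "needs_rewrite": True},
    {"name": "reporting", "cloud_ready": True},
]
for app in portfolio:
    print(app["name"], "->", triage(app))
```

A real assessment weighs far more factors (cost, risk, compliance, dependencies), but encoding the rules explicitly is what makes the process repeatable across hundreds of applications.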
The complexity of modernizing applications, and the importance and risk of these projects to business success, is mind-boggling. From my observations, the most successful projects started with intense application portfolio assessments. Their teams chose app modernization products and services designed to be used systematically and that are flexible – the latter because every business has different needs, app portfolios and infrastructures.
There are many approaches to modernizing software, more than I can cover in this post. Fortunately, we’ve put together a library of articles on app modernization. Enjoy!
With 2014 coming to an end, we close the door on an eventful year. Technologies such as Hadoop, Docker and microservices made their way into our everyday lexicon. Some tried and true predictions held on for another year.
For example, in an interview at the end of 2013, Forrester principal analyst Brian Hopkins asserted that mobile applications would make waves in 2014. “Everybody is suddenly realizing that the platform is not the differentiator, it’s the apps,” he said.
In many ways, Hopkins’ prediction was correct. Over the course of the year, mobile application development proved to be important, especially in corporate environments. Several enterprise architects and developers shared with SearchSOA.com how they selected tools to help them gain an edge in the mobile sphere.
Indeed, it appears that mobile technology, big data and the Internet of Things, took the spotlight in 2014 – a position they will likely hold for quite some time. Some experts assert that SOA should have been right up there in the limelight too, but ended up in the wings. In an interview with Christine Parizo, 451 Research’s Carl Lehman said, “SOA is a wallflower … It was brought to the dance, but it’s not on the dance floor.”
Poor SOA. Will 2015 be the year SOA is crowned king?
Given what some industry insiders recently said, it doesn’t sound like SOA per se is going to be voted most popular — yet again. It appears that flashy mobile technology and the cloud will be most thought of, even though SOA may be the true underpinning of how everything is synched together.
Although the term microservices may be relatively new, some experts, like Gartner vice president and senior analyst Anne Thomas, believe it’s going to become increasingly prevalent. “I think that a small number of people, maybe 10% of organizations, will start trying to play with microservices and bounded context during the next year,” she said.
In fact, Thomas said she feels that microservices really is just SOA, but under a different name. So maybe SOA really will edge upwards in popularity, just under a different guise.
What are your SOA trend predictions for 2015? What do you think should have happened in 2014 that didn’t?
Integrating applications deployed in traditional enterprises or data centers with those in the cloud is a common headache enterprise architects face. Red Hat recently released OpenShift Enterprise 2.2 and new cloud services, JBoss Fuse for xPaaS (integration) and JBoss A-MQ for xPaaS (messaging) to make it easier for developers to update applications and integration platforms.
The cloud-based messaging tools aim to speed up application development, particularly in organizations with a hybrid IT architecture. With these tools, enterprise customers can use PaaS for applications running in their own data centers and private clouds, according to Joe Fernandes, OpenShift director of product management.
We’ve all heard the terms iPaaS, IaaS, and SaaS, but what the heck is xPaaS? In short, xPaaS is a term Red Hat coined for uniting various integration and application-centric tools under one offering. “A lot of traditional middleware solutions are becoming available as a service,” noted Fernandes.
The new OpenShift Enterprise 2.2 release adds a private iPaaS to help organizations build with future development in mind. “It’s not traditional 2005 architecture,” said Pierre Fricke, director of product marketing for Red Hat JBoss Middleware. “It’s a 2015 type of architecture for microservices with a centerpiece around Apache Camel.”
Microservices are small, highly distributed applications composed of logic and services that have to be connected and wired together, said Fernandes. “In many ways it’s the new SOA.”
Microservices were a hot topic at JavaOne 2014. During that event, Java Champion and consultant Jeff Genender and developer Rob Terpilowski said that microservices offered a streamlined means of integrating cloud services.
Apache Camel brings standardized integration to the xPaaS offerings. “Camel actually implements the book that everyone uses [the Enterprise Integration Patterns book], and that makes it the closest thing to a standard for integration,” said Fricke. “It’s closer to a de facto emerging standard for integration than anything else.”
Bridging the gap between development and operations to support applications is really where xPaaS comes into play, according to Fernandes. “As you get into enterprise applications, they inherently tend to be more complex than some of the applications you see on the consumer side running in the public cloud today,” he said.
Some Fortune 1000 companies, such as analytics tools provider Fair Isaac Corporation (FICO), have already leveraged this Red Hat technology, Fernandes said. He noted, however, that xPaaS can also be used by SMBs that need to reduce the time it takes to develop and deploy applications.
It’s not uncommon for IT and business leaders to want employees to collaborate with one another, but the same enthusiasm isn’t always there when it comes to sharing technology resources. Those decision-makers need an attitude adjustment about shared workloads, according to Susan Eustis, president, CEO and co-founder of WinterGreen Research.
It may seem like common sense, but not everyone seems to get it. “It’s far more efficient to share a resource than it is to build and not use it all the time,” Eustis said. “It’s a message people don’t want to hear, but in fact, people who invested in shared workloads are the leaders in their industry segment.” Such organizations include Wal-Mart and Travelers Insurance, she noted.
In an era where organizations are trying to stay afloat, or simply get off the ground, looking towards the cloud can seem like a logical move. By sharing a workload in the cloud, costs can edge downwards. This can lead to a competitive advantage because organizations that adopt such a model can afford to offer products and services at a lower price point.
In a traditional set-up, every department within an organization would have its own set of servers and pay for them individually. That, however, is starting to change. Now, some companies pay only for the portion of server capacity that is in use. “The virtualization workload moves on and off the cloud in the way it hasn’t happened before,” Eustis said.
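The economics behind Eustis's argument come down to simple arithmetic: siloed departments each provision for their own peak load, while a shared pool only has to cover the combined peak, which is smaller because peaks rarely coincide. A back-of-the-envelope illustration with entirely made-up numbers:

```python
# Illustrative numbers only: four departments provisioning individually
# for their own peaks vs. a shared pool sized for the combined peak.

dept_peaks = [10, 8, 6, 4]     # servers each department would buy alone
combined_peak = 16             # peaks don't coincide, so the pool is smaller
cost_per_server = 5000         # dollars per year, illustrative

siloed_cost = sum(dept_peaks) * cost_per_server
shared_cost = combined_peak * cost_per_server
print(siloed_cost, shared_cost)  # 140000 80000
```

The gap widens as utilization drops, which is why Eustis calls building capacity and "not using it all the time" the inefficient path.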
The cloud isn’t the only area cost savings can be found. IT leaders might be surprised to learn that mainframes may not be the money-draining resource they have a reputation for being. “I’ve done a lot of work over the years and I’m showing the mainframe is 10 times cheaper than the servers,” Eustis said.
The message from Eustis is clear – archaic thinking isn’t going to get an organization ahead. “People have to stop being afraid of losing their job and start looking at what the reality is,” she said.
Have you seen stubborn thinking and practices hinder an organization’s ability to succeed? What are some common pitfalls you’ve seen leaders take when it comes to making IT decisions?