While collecting and working with data is a burning passion for some, for many it is simply another critical business requirement that is only getting more complicated. In this session, data guru Evelina Gabasova showed participants how the F# language can help make the task of working with data a little more manageable.
Gabasova is definitely a member of the passionate group of data workers. She is a postdoctoral researcher at the MRC Cancer Unit and works with a lot of data. But she is just as interested as anyone else in making the task of working with that data easier.
“I study the genome…and that can get very complicated,” she said. Gabasova relies heavily on F# to work with the massive amounts of data she encounters in her research, particularly because of its strong support for parsing scripts with active patterns, which let coders partition input data into named cases that can be used in a pattern matching expression.
The features are strong with F#
To demonstrate how effective F# is, Gabasova showed how, using publicly available copies of the Star Wars scripts and an API called SWAPI – the “Star Wars API” – she was able to determine exactly who the most important character in the Star Wars universe is. SWAPI, which calls itself the “world’s first quantified and programmatically-accessible data source for all the data from the Star Wars canon universe,” was able to provide Gabasova with detailed information like characters’ heights, birth dates and other intimate details.
One of the biggest reasons Gabasova advocates F# is the availability of built-in type providers that can automatically supply the types, properties and methods you need to work directly with tables in, say, a SQL database. In this way, those working with diverse information sources can avoid manually writing repetitive lines of code or adding on files from a code generator. She demonstrated how this worked by showing us how easy it was for her to determine the average height of a stormtrooper (5′ 9″, I believe?) and verify that Luke actually was a little short for a stormtrooper.
“F# makes it easy to specify certain attributes,” Gabasova said. “Type providers are amazing!”
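Python has no direct equivalent of F# type providers, which do their work at compile time, but a rough runtime sketch conveys the convenience: field names come from the data itself rather than from hand-written accessor code. The sample record below is hypothetical, though SWAPI returns JSON of a similar shape:

```python
# Runtime sketch of the convenience a type provider gives: a typed
# record is derived from the data, so fields become attributes
# instead of string-keyed dictionary lookups.
# (Hypothetical sample record; SWAPI returns similar JSON.)

from collections import namedtuple

sample = {"name": "Luke Skywalker", "height": "172", "mass": "77"}

Person = namedtuple("Person", sample.keys())  # the "provided" type
luke = Person(**sample)

print(luke.name)         # attribute access, no magic strings
print(int(luke.height))  # height in cm, as served by the API
```

An F# type provider goes further than this sketch, of course: it infers the types at compile time, so typos in field names fail before the program ever runs.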
May the visualizations be with you
One of Gabasova’s major points was the importance of visualizations. Without visualizations, she said, it is not possible to glean the insights you may want from your data. She then went on to reveal the in-depth visualizations she put together outlining the key “social network analysis” factors that determine the importance of a character, such as centrality and density of network.
“Whenever you do data analysis, always visualize it,” Gabasova said. “Always.”
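As a rough illustration (not Gabasova's actual code), the two network measures she named can be computed over a toy interaction graph in a few lines of Python; the edge list here is invented for the example:

```python
# Toy sketch of two social-network-analysis measures from the talk:
# degree centrality and network density. Edges are hypothetical
# character interactions, not data from the actual scripts.

edges = [
    ("Vader", "Luke"), ("Vader", "Tarkin"), ("Vader", "Leia"),
    ("Vader", "Han"), ("Luke", "Leia"), ("Luke", "Han"),
    ("Han", "Leia"),
]

nodes = {n for edge in edges for n in edge}

# Degree centrality: fraction of the other nodes each character touches.
degree = {n: 0 for n in nodes}
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
centrality = {n: d / (len(nodes) - 1) for n, d in degree.items()}

# Density: actual edges over possible edges in an undirected graph.
n = len(nodes)
density = 2 * len(edges) / (n * (n - 1))

print(max(centrality, key=centrality.get))  # most connected character
print(round(density, 2))
```

On this invented graph, Vader comes out on top, which at least agrees with the talk's conclusion.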
…who is the most important character in Star Wars according to Gabasova’s research? In her words, “Darth Vader still rules the universe”.
Gabasova then went on to explain how these same F# techniques can be used within organizations to analyze the network connections that exist within their own companies. By taking data from things like Slack communications, email and other social platforms, it may be a lot easier to garner serious insights into the social structure of your company with the help of F#’s unique features. However, she insists that F# can really be used for all types of data and projects.
“I would just encourage you to play with the data you have,” she said.
Sameer C. Thiruthikad, a software developer at the Qatar Foundation and an attendee of Gabasova’s talk, said he enjoyed the talk and the way Gabasova chose to present it.
“It was good,” he said. “It was really interesting because they used Star Wars to tell the story.”
Thiruthikad said his team currently makes use of C# and faces the very common challenge of organizing and visualizing large sets of data. But he said the session encouraged him to try F# going forward as a solution.
“I will surely get into F#,” he said. “I’ll just test the waters and see if there’s anything interesting there.”
When it comes to testing software, many of today’s organizations rely heavily on comprehensive testing, especially unit testing, to minimize the risk of outages. But in this session, Michalis Zervos of Microsoft talked to audience members about what some consider the “next generation” of creating software resiliency: taking those anticipated faults and forcing them to occur in your software.
“Fault injection,” as Zervos refers to it, can be performed on everything from virtual machines to custom applications to hardware. It is a practice Zervos’ team at Microsoft actively uses and promotes in order to see not just how particular services are affected by certain unwanted events, but also how the dependent services and software are affected.
“We create ‘storms in the cloud’ to see how it performs under pressure and failure and use that to create resiliency,” he said. And according to Zervos, fault injection can be used for more than just testing resiliency. It can also be used for things like testing new features, training and verifying staged deployments.
Zervos covered the numerous faults that teams could consider injecting, including creating a kernel panic, “hooking” and disrupting critical service code, crashing critical processes and even pulling the power plug on your data center. He also suggested a few publicly available tools that development teams can use to make the process easier, such as Consume.exe, Sysinternals tools and “managed code fault injection” through TestApi, a library of test and utility APIs.
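As a toy illustration of the managed-code variety (the real tools Zervos named hook into running services; the wrapper and function names here are invented), fault injection can be as simple as wrapping a dependency so it fails on demand, then checking that the caller degrades gracefully:

```python
# Hypothetical sketch of managed-code fault injection: wrap a
# dependency so it raises a fault at a configured rate, then verify
# that the calling code falls back gracefully instead of crashing.

import random

class FaultInjector:
    """Wraps a callable and raises a fault at a configured rate."""
    def __init__(self, func, failure_rate, fault=ConnectionError):
        self.func = func
        self.failure_rate = failure_rate
        self.fault = fault

    def __call__(self, *args, **kwargs):
        if random.random() < self.failure_rate:
            raise self.fault("injected fault")
        return self.func(*args, **kwargs)

def fetch_profile(user_id):
    """The 'real' dependency being disrupted."""
    return {"id": user_id, "name": "example"}

def get_display_name(fetch, user_id):
    """The caller under test: must survive a failing dependency."""
    try:
        return fetch(user_id)["name"]
    except ConnectionError:
        return "guest"  # graceful fallback instead of an outage

flaky_fetch = FaultInjector(fetch_profile, failure_rate=1.0)
print(get_display_name(flaky_fetch, 42))  # always faults, prints "guest"
```

Dialing `failure_rate` between 0 and 1 turns the same harness into the kind of probabilistic “storm” Zervos described, exercising both the happy path and the fallback.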
Zervos did warn audience members that fault injection cannot be performed without certain precautions and considerations, both to achieve accurate results and to avoid creating more problems. He cautioned that teams still need to follow fundamental security principles such as the least-privilege principle, make extensive use of code signing, create a “safety net” for the automatic removal of faults should they get out of a tester’s control and have a “kill switch” available, which he said can save developers and testers “a lot of grief.”
Zervos also stressed the importance of extensive verification and reporting when it comes to fault injection, and advised audience members that it is useful to manage fault injection from a centralized location.
“If you are not able to verify what happened, you don’t get the most out of your system,” he said.
One of Zervos’ final points was that it is not enough to simply perform fault injection every now and again. He stressed that teams need to integrate fault injection as a continuous part of the production cycle and find creative ways to encourage teams to adopt the practice. One suggestion he made was the idea of “recovery games,” in which one team member simulates an attack on a particular system and another team member, often a trainee, must record what occurs and take the proper steps to mitigate the risks of an outage. By implementing these types of programs, Zervos said his organization was able to increase adoption of fault injection and also garner helpful insights about team members’ behaviors, such as the tendency of some to spend too much time debugging and not enough time actually mitigating the problem.
“It needs to be part of the engineering process and part of the culture of the company,” Zervos said.
John Billings, technical lead on one of the infrastructure teams at Yelp and an attendee of Zervos’ talk, said he thoroughly enjoyed the session and believes that fault injection is “the next step in actually testing resiliency of production systems.”
Billings, who also held a talk at QCon on the “human side of microservices,” said he particularly liked the fact that Zervos spent his time discussing the general principles of fault injection rather than talking about specific technologies. And while his company does already make use of fault injection techniques, he is hoping to push the adoption of this strategy even further within his company and hopes that others will as well.
“Tests can only cover so much that you’ve thought about beforehand,” he said. “If you actually have fault injection happening all the time in production, you get that additional level of reliability that otherwise would be very difficult to achieve.”
Billings also said he liked the idea of introducing “fault injection games” as an approach to encouraging the adoption of this strategy, but believes that these adoption strategies must align with a company’s individual culture. For instance, he noted hearing about the idea of a “badge-based system” that awards teams particular badges for completing and adopting certain testing and production techniques.
“You have to experiment and just see what works for your particular culture and your company,” he said.
“Incident response – what makes it so terribly difficult?” – John Allspaw at QCon New York
“Anomaly response does not happen the way we might imagine it does,” John Allspaw, CTO at Etsy, said in his opening keynote presentation at QCon New York, “Incident Response: Trade-offs Under Pressure.”
Can we trust tools?
One of the first points Allspaw made is that organizations cannot simply rely on tools to make it easier to understand how and why incidents are occurring. Instead, teams need to rely on processes and reasoning in order to truly respond to anomalies. And they cannot, he said, treat these outages as a mystery that is constantly developing over time.
“An outage is not a detective story,” Allspaw said. “It’s static, and it’s there.”
A model of reasoning
In order to properly deal with outage-causing anomalies, Allspaw recommended that organizations implement a “model of reasoning” that does not “distinguish between diagnosis and therapy.”
Avoiding “cognitive fixation”
Listeners were also warned not to fall into the traps of “thematic vagabonding” and “cognitive fixation” – meaning that those debugging the code can become so wrapped up in simply fixing bugs and symptoms that they fail to dig deeper and discover the actual root cause of the issue.
“As one thread of diagnosis comes in, you start running to more,” Allspaw said. He said that avoiding this requires developers and testers to communicate about what they are seeing and not get stuck alone on a path of just fixing bug after bug.
In fact, he provided a list of “prompts” that teams can use to frame particular questions, dividing the questions into four “stages” of incident response: observations, hypotheses, coordination and suggesting actions. By asking these questions, team members may be able to avoid “cognitive fixation” and get to the root of the problem.
Allspaw also talked about the importance of linking anomalies to any known, recent changes in the code or application and, more so, of having peers review your hypotheses.
“Validate the hypothesis that most easily comes to mind,” he said, while also adding that anyone who begins to build confidence about discovering a certain cause of an outage should always check that confidence with a peer review.
Day two is over, and now it’s time to get yourself ready for the third and final day of the QCon New York 2016 (#qconnewyork) tracks. But with so many sessions to choose from again, where do you go? Assuming you don’t have a schedule set in mind, here are my session picks for QCon NY, day three.
Be sure to view the full schedule and explore all your options, but these are the sessions that really stick out to me as either potentially very educational or simply just interesting.
10:35am, Dumbo/Navy Yard
Want to get fired up for your third day of QCon? I think this talk might be a way to do it.
Cory House, software architect at VinSolutions, will be talking about how to transform from an “average” developer into an “outlier” developer. I’m not sure exactly what that means, but part of it is about increasing your paycheck. That’s a good enough reason to go, right?
“So many of us don’t think about the fact that we have to be deliberate about self-promotion and that it’s not necessarily selfish,” House said in an interview with QCon reps. “If no one knows what we were good at, what are the chances that anyone else is going to get to benefit from our skill set?”
11:50am, Salon D
There’s a lot of hype out there when it comes to microservices. So it’s time for a solid, down-to-earth discussion about what you can expect when you’re expecting the introduction of microservices into your environment.
Daniel Rolnick is the CTO at Yodle, and he warns that microservices is a “buzzword,” and that developers need to be careful about jumping on the bandwagon without considering the consequences.
“When you start going the microservices route, people don’t always realize that there are other things that have to happen, necessarily will happen and it can quickly spiral out of control,” Rolnick said in an interview. “Everything is built on trade-offs and you have to be willing to evolve as your systems evolve.”
But at the end of the day, he will still tell you it’s worth it.
1:40pm, Dumbo/Navy Yard
Did you ever think a Myers-Briggs Type Indicator (MBTI) assessment could help you build a better development team? Well, in this session, you’ll find out how it can.
Heather Fleming, VP of product and program management at GILT, will share the framework they use to build productive teams — something they call a “team ingredients” framework — and how empathy with other team members plays a vital role in cultivating a “psychologically safe environment.”
If years of coding have hardened developers’ hearts, maybe Heather can help soften them.
2:55pm, Salon D
This is a true “monolith to microservices” story. Emily Reinhold, a software engineer on Uber’s Money team, is going to share with audience members the lessons they learned breaking up their huge, Python-based monolith into new microservices. This includes not only what they did right, but what they could have done better — including aligning with consumers.
While I’ve traditionally produced content that advises against large migrations to microservices and encourages small iterations, it’s still fun to watch David(s) take on Goliath.
4:10pm, Salon D
I know, I’m overdoing it on the microservices here…oh well.
Assuming Daniel Rolnick didn’t convince you to turn away from microservices, you need to understand what not to do when building those microservices. Here you’re going to learn about “some of the nastiest antipatterns in microservices” and how to take those antipatterns down before they ruin your project.
I have to give a shout out to the speaker Daniel Bryant on this one. I’ve had the chance to sit in on sessions of his before and chat with him one-on-one, and I can say that you always learn something new talking to Daniel.
5:25pm, Salon A/B
Dan will also explain how this project led to the creation of Nyanpollo, show us the practical difficulties of interfacing with hardware and share videos of and data captured from the mission.
And now you can go to the reception, mingle and watch the screening of Blade Runner.
So you may have your schedule for day one of QCon 2016 down, but are you ready for day number two? For today, I’m going into the weeds on Java, but also taking a good look at containers, dabbling with microservices, viewing some coding competition and learning how developers can be instrumental in social change.
Be sure to view the full schedule and explore all your options, but these are the sessions that really stick out to me as either potentially very educational or simply just interesting.
10:35am, Salon A/B
Hey, I started day one with Netflix, why not day two?
What was behind the 42 billion hours of content streamed to customers from Netflix last year? Containers. In this talk, Netflix engineers Andrew Spyker and Sharma Podila will talk about why they decided to make the jump to AWS EC2 as well as details about their aptly named “Titus” project.
Come ready to learn about implementing container scheduling systems, building and operating a container cloud and what you need to know before building a cluster management system. Time to Netflix and build.
11:50am, Salon C
With all this .NET talk going on, it might be time for some Java – Java 8 specifically.
In this one, Trisha Gee, a Java expert and developer advocate at JetBrains, will show attendees what they may be missing out on in Java 8, particularly things like Lambda Expressions and the Streams API. She’ll also dive into a talk about refactoring code, including how you can automate your refactoring and when you should actually put the brakes on your refactoring efforts.
It’s geared towards intermediate developers, but beginners are encouraged to come, too. Just be prepared to learn.
1:40pm, Salon D
My third session pick for day two is focused on assuring the availability of services through fault-injection. Michalis Zervos is a software engineer at Microsoft, and he will explain how his team uses fault injection to test and break services, identifying key potential failure points.
Even if you’re not a tester, the increasing movement toward DevOps and knowledge of the entire application lifecycle seems to make this a valuable session for anyone who has a hand in producing their company’s software. Its difficulty level is slated at intermediate.
“It’s in the best interest of IT companies and their customers for engineers and managers to embrace the failure testing culture and understand the importance of it,” Zervos said in an interview with QCon representatives. “With that in mind, we want to share our learnings on the subject and try to promote fault injection as a crucial tool for achieving high-availability.”
2:55pm, Salon C
We’ve talked about Java 8 today, so let’s talk about Java 9.
Rossen Stoyanchev is a Spring Framework committer at Pivotal, and he’s going to show us how to approach reactive programming in Java. Specifically, he’s going to talk about the JDK 9 java.util.concurrent.Flow class, which implements features from Reactive Streams.
Intermediate discussion; recommended for anyone who wants to improve their Java knowledge or anyone who wants to learn about reactive programming principles.
4:10pm, Salon C
Am I obsessed with Java today? Probably. But I’m also choosing this session because I’m a sucker for learning about microservices.
Here attendees will get a chance to learn about the main differentiators between microservices and monolith architectures — a topic that never seems to get old — and how the JVM can help developers manage large-scale microservice deployments. The speaker, Peter Lawrey, CEO of Higher Frequency Trading Ltd, will also show us how to make asynchronous messaging simpler and how to handle failure in large-scale, low-latency implementations of microservices.
This will be a pretty tech-heavy talk, and is recommended for advanced developers.
5:25pm, Salon A/B
Ok, let’s break away from the Java. On to containers — and how to use them appropriately.
Michael Venezia, the principal architect of engineering at Viacom, will explain what people need to know before they decide to start “bringing containers home,” as Venezia puts it. His big thing? You need to have a plan.
Got your QCon session schedule ready yet? If not, it’s worth making a plan to make sure you’re getting the most out of this conference that you possibly can.
Of course I recommend going to both of the Keynote talks: Incident Response: Trade-Offs Under Pressure and Engineering the Red Planet. But there are a lot of tracks and sessions to choose from in the meantime.
Here’s a sample schedule that takes you from the day’s first keynote to the last one. Be sure to view the full schedule and explore all your options, but these are the sessions that really stick out to me as either potentially very educational or simply just interesting.
10:35am, Salon D
Even the magicians at Netflix struggle with API management.
Katharina Probst is presenting this one — she’s the engineering manager at Netflix who leads their API team. She’ll be talking about how they’re taking their Groovy scripts and helping them pack up to live on their own in isolated containers while still communicating with their API via a data platform called Falcor.
While most of us don’t have to worry about streaming millions of people their favorite movies and shows, hopefully this will provide developers and architects some ideas about how they can solve some of the many challenges associated with building a truly cohesive API architecture.
11:50am, Dumbo/Navy Yard
Joe Duffy, a director of engineering at Microsoft and former architect for the Midori OS, is leading this talk, and will dive into how he built an entire OS in a C# dialect. He also promises to go over things like garbage collection, low-level code quality and dealing with errors and concurrency robustly.
“I want to talk about the problems and how to solve them in C#,” Duffy said in an interview with QCon representatives. “A lot of it does come down to practices. And the framework, it turns out, is as important as the language.”
This one’s recommended for only those already working with C# or some other .NET language. Java coders: These are not the tips you’re looking for. Move along.
1:40pm, Dumbo/Navy Yard
Like Star Wars? Like code? I think you’ll like this.
Evelina Gabasova, a machine learning researcher working in bioinformatics at Cambridge, is giving this talk. She will attempt to prove how easy it is to process large amounts of data using F# and R by using these languages to glean insights about the Star Wars franchise from public datasets.
“I want to make people aware that F# is a nice language for working with data,” Gabasova said in an interview with QCon representatives. “If you are doing any data science or machine learning, 95% of your time is spent getting data into a usable shape and I think F# is a great language for that.”
This talk seems like it’s definitely for those both working with and interested in working with F#. No word yet on whether Gabasova will be appearing in person or via hologram.
2:55pm, Salon C
This session will be presented by Swati Vauthrin, director of engineering at BuzzFeed, and she will discuss how focusing on creating a diverse software engineering team not only makes sense from a social perspective, but also from a business one.
“Not everybody fits the same mold, and I think that for us that’s been really powerful,” Vauthrin said in an official QCon interview. “It allowed us to think differently, to think about how we are engineering products differently.”
I picked out this session because it’s easy to get wrapped up in the technology and forget that, at the end of the day, people are what matter. It may also be a healthy change of gears from the day’s more tech-heavy conversations.
4:10pm, Dumbo/Navy Yard
You can’t have a truly complete day at a software development conference without at least a little dose of microservices.
Rachel Reese is a senior software engineer at Jet, a spunky startup set on competing with Amazon. She’s going to show us what the team at Jet has done with microservices in the .NET space (F# and Azure). But don’t let the .NET focus keep you from attending this session — it’s really for anyone interested at all in microservices.
“The most important takeaway is, if you already have microservices, consider going home and fix or change what you do into something better,” Reese said in a QCon interview. “For the folks who aren’t there yet, it’s encouraging you to be more aware of what you are getting into.”
5:25pm, Salon E
If you don’t use Spotify, I’m willing to bet you know someone who does. But did you know they nearly reached their limit when it came to data streaming? To handle their growing data rate of 60 billion events per day, their engineers had to figure out how they were going to successfully scale the event delivery system with Spotify’s growth before disaster struck.
Neville Li is a software engineer at Spotify who works mainly on data infrastructure for machine learning and advanced analytics. He’s going to explain how they leveraged a new event delivery system on Google Cloud Pub/Sub and Google Cloud Dataflow to meet their scaling needs. Li will explain the lessons they learned from dealing with this data streaming issue, including why the cloud matters. He’ll also talk about Scio, a high-level Scala API for the Dataflow SDK that made it easier to use.
This talk is a little technology heavy, but any engineers struggling with data streaming issues and trying to simplify things and reduce operational burdens should get something out of it.
Now it’s time for a day two schedule.
There’s an idea consistently being trumpeted on both sides of the election fence this year: shaking up the old established ways of politics and making a change. Whether or not you think that idea is scary, and unless poll numbers lie, you have to admit that people like it. So here’s an idea, developers: What happens if that same ideology is brought to the enterprise?
Today there are so many free tools and platforms available for building enterprise-grade apps that a small pharmaceutical supply chain management company called AntTail was able to create a working, fully mobile business application prototype in days without spending a dime. The provider they leveraged, Mendix, is just one of the companies offering free sandboxing with their application platform-as-a-service (aPaaS). OutSystems and IBM’s Simplicite are some others that let developers try their aPaaS tools without incurring any costs.
So for any company running, say, an old ESB or utilizing an expensive EDI-based VAN, what’s stopping a developer from throwing up their hands, saying “stop!” and presenting their boss with a working proof-of-concept application that does everything their old, monolithic systems do just as effectively – if not better? And maybe – just maybe – that change could be enough to break the established, legacy ways of doing things within an organization and inject a new, more efficient approach to software management.
This does require the developer to take a little bit of a leap of faith when confronting their respective boss with their prototype software, and there certainly can be a lot at stake. What if it works in a small environment, but doesn’t translate so well to a large one? What if the service fails after implementation? What if a major security vulnerability is created?
This may be the thought process that keeps developers from kicking up too much dust, according to Mark Roemers, co-founder of AntTail, who admits that there is certainly a risk factor that should always be taken into consideration.
“People want to make sure they don’t make the wrong decision instead of being ambitious and going for goals,” said Roemers. “They’re risk averse, and I don’t blame them — it takes a certain spirit to be able to do that.”
However, there may not be much to lose by just making your own prototypes with a free aPaaS sandbox anyway. At the very least, you have the blueprints ready when the opportunity to present your idea arises or if you want to jump ship and start your own company. At best, you become the change your enterprise really needs.
Low-code tools may be a developer’s best friend, according to experienced coders-turned-entrepreneurs in the field. Mark Roemers, co-founder of the pharmaceutical-centric SCM applications and hardware provider AntTail, successfully used new, low-code tools to help him get his company off the ground, and he has some crucial advice for enterprise-based developers.
AntTail’s job is to produce and maintain both the software and sensors that help users keep track of important pharmaceutical deliveries and make sure that they arrive both at the right time and at the right temperature. By leveraging the support of a low-code solution provider called Mendix, a Boston-based PaaS provider that is now available on IBM Bluemix, Roemers said he is able to rapidly develop apps and portals that target the precise needs of his customers and integrate with the sensors AntTail produces.
And even though Roemers is an experienced programmer, he still was able to find value in the capabilities provided in free, low-code tools. He admitted that he had not programmed for close to 20 years, but despite that fact — and his initial skepticism — it did not take him long to learn how to use the tool and produce a quality portal. Now he says he can produce a portal in three to four weeks, all without the typical backend support and overhead that is usually associated with enterprise portals.
Does it really work?
Roemers said that there is a steep learning curve, even for those experienced with languages like Java, but that it took only a few weeks to become extremely productive with Mendix’s services. He advised that other developers look into these low-code tools as a way to create valuable proofs of concept without reaching into either their company’s or their personal budget.
“You can get yourself up to speed and knowledgeable … without incurring any costs,” he said. “As a company, you can build a complete application [and] demo it as a portal or an application on a tablet,” adding that his company did not even pay for licensing until they had their first paying customer.
Roemers said that the time spent developing is brought to a minimum with the tool. His entire “team” consists solely of him and one other partner, and they do not even spend the majority of their time writing code or building the apps.
“I spend about maybe 10% of my time programming, and it’s basically adding little stuff that people ask,” he said. “So we do a release about every four weeks.”
Moving out of the dark ages
Roemers strongly believes that better productivity can only come from utilizing better technology, likening old, large-team development processes to the use of dikes for water as opposed to the use of modern water-management technology.
“That’s a bit how we make code. If you’re coding Java or Angular by hand, let’s say, it’s a bit like shoveling a dike,” he said. “Now we use pumps and sand and water and large shovels, large equipment … it’s more efficient.”
Time for a new image?
The other aspect of development that Roemers stresses is the inclusion of coders in business meetings. He believes it’s important that the coder be able to sit in on a meeting, have someone lay out the business needs and have the coder put together a working prototype as the needs are being laid out. Roemers does, however, recognize the fact that this is a change from the usual “image” developers have maintained.
“It also asks for a different kind of programmer,” Roemers said. “He needs to wear a tie, use some deodorant and sit in a business meeting to understand the logic behind the coding that goes on.”
Roemers also stresses the fact that developers need to be able to show some kind of tangible business value that can be achieved with their creation. Without it, he says that developers are almost making an empty promise. A database that can hold millions of records may be impressive, Roemers said, but unless it can serve a specific business function, it can’t be considered very useful on its own.
Maybe developers don’t need to start wearing a suit and tie just yet, but it may be a good time for today’s coders to think about how much they are communicating with those dictating the business needs. Or, in some scenarios, maybe it’s worth considering the option to “jump ship” and dictate your own business needs.
Portugal Telecom’s implementation of their latest iBPMS tool, SHOP BOX, may be a shining example of how to manage enterprise software change management correctly.
The telecommunications company, which is the largest in Portugal, was featured in our April 2016 Editor’s Choice for the implementation of PNMSoft’s Sequence iBPMS platform. One aspect of the story truly stuck out to me: they were able to push a successful rollout with a technology team that included just four application managers, two coders and one technical support person.
So how does a company successfully roll out a new piece of software to hundreds of locations that serve tens of thousands of customers? With gradual, steady change management.
Determining a plan of attack
According to Gonçalo Mendes, head of retail development and optimization at Portugal Telecom, the team began the project by sorting through the 370 “workflows” associated with shop management and narrowing them down to the most critical. These 65 “artifacts,” as they call them, became the focus of the Sequence implementation plan.
“They had more than 30 different applications, and more than 400 different processes,” Vasileios Kospanos, marketing manager at PNMSoft said. “That caused a lot of confusion, and they wanted to simplify that by converging all their processes and their systems in something they created [called] ‘SHOP BOX.'”
Keeping it simple…relatively
According to Steve Weissman, analyst and founder of the Holly Group, Portugal Telecom’s success with the SHOP BOX project rests on the fact that the company rolled out the implementation piecemeal. Portugal Telecom, whose customer service representatives operate out of 250 “shops” that deal directly with customers, began by migrating about 6,000 customers into the new BPM system. The company, which currently has about 49 of the 250 shops outfitted with the SHOP BOX tool, gradually increased that number over the year, eventually reaching 20,000 customers by April 2016.
“They’re not trying to eat the whole elephant in one bite,” Weissman said. “The fact that they’re taking a longer term view of success dramatically increases the chance that they’ll achieve that success.”
A tangled web of an infrastructure…
Kospanos said that the implementation was not a simple one. Portugal Telecom’s existing architecture made the implementation a challenge, especially since it required coordination between the different vendors and stakeholders that make up Portugal Telecom’s stack.
“The implementation was challenging, and that is also down to a lot of parties being involved,” said Kospanos. “You have Microsoft on one side, Accenture, us with the Sequence [technology] and also Portugal Telecom themselves.”
Yet despite these challenges, the implementation of SHOP BOX has been a success. And Kospanos believes the team succeeded not because it was small, but because it made the effort to find technology that enabled it to work efficiently.
“You can buy Agile technology, you can buy the latest and greatest,” Kospanos said. “But if you don’t work in an Agile way or in a way that will [at least] make success a possibility, it won’t be successful.”
Enterprises are stuck in the EDI mud, hanging onto old B2B integration technologies. Legacy EDI methods such as value-added networks (VANs) are costing many companies huge amounts of money in service subscriptions when cheaper, cloud- and API-based alternatives are readily available.
That’s the finding of Ovum’s new study, titled “Developing an Agile and holistic B2B integration strategy for digital business success,” which indicated that over half of enterprises interviewed are using in-house or legacy EDI solutions. And more than a third of respondents are using more than three separate solutions for EDI management.
The costs of B2B integration
As I’ve discussed before in a February 2016 article on the use of APIs versus traditional B2B integration solutions, many companies find themselves locked into expensive VAN services whose inefficient data management practices drive up costs, according to Eric Rempel, CIO at the integrated logistics provider Redwood Logistics. He pointed out that the EDI VAN providers their clients use will sometimes translate data into EDI format only to have it translated back to the original XML format once it arrives in Redwood Logistics’ hands. Rempel said the company finally abandoned the EDI VAN and began securely sending XML data to Redwood Logistics directly via API-based methods, but only after he showed them how much the practice was running up costs.
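The round trip Rempel describes can be sketched in a few lines of Python. Everything here is hypothetical and for illustration only — the element names, the `REF*`-style segment format and the helper functions are not from any real VAN or EDI standard — but it shows why the detour adds cost without adding information: the receiver ends up reconstructing exactly the XML the sender started with.

```python
# Hypothetical illustration of the XML -> EDI -> XML round trip:
# the sender's XML is flattened into EDI-style segments by a VAN
# (a billed translation), then the receiver translates it straight
# back into the original XML (a second billed translation).
import xml.etree.ElementTree as ET


def xml_to_edi(xml_str: str) -> str:
    """Flatten a simple <shipment> XML document into EDI-style segments."""
    root = ET.fromstring(xml_str)
    # One segment per child element, e.g. "REF*order_id*12345",
    # segments joined with "~" (a made-up, EDI-flavored convention).
    return "~".join(f"REF*{child.tag}*{child.text}" for child in root)


def edi_to_xml(edi_str: str) -> str:
    """Rebuild the original XML from the EDI-style segments."""
    root = ET.Element("shipment")
    for segment in edi_str.split("~"):
        _, tag, text = segment.split("*")
        ET.SubElement(root, tag).text = text
    return ET.tostring(root, encoding="unicode")


original = "<shipment><order_id>12345</order_id><carrier>ACME</carrier></shipment>"
edi = xml_to_edi(original)        # what the VAN produces (and bills for)
round_tripped = edi_to_xml(edi)   # what the receiver reconstructs
print(round_tripped == original)  # True: the detour recovered nothing new
```

Sending the XML directly over an HTTPS API, as Redwood Logistics ultimately did, simply skips both conversions.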
The study, which was sponsored by the digital business platform provider Axway, also revealed that even companies that attempt to manage all their B2B integration in-house, using legacy hardware and systems, may not be doing themselves any favors. Ovum points out that “resource-related costs can account for up to 60% of the total cost of ownership for a legacy EDI solution,” and that turning to cloud- and API-based solutions could potentially save these companies immense amounts of money by eliminating certain resource requirements.
Reading the statistics
The study also found some other interesting facts:
- Today’s enterprise takes an average of 23 days to onboard a new trading partner.
- Over 25% of respondent enterprises admitted that onboarding can often take more than 36 days.
- About 10% of respondent enterprises currently utilize cloud-based B2B integration methods.
- About 18% of respondent enterprises are inclined to use cloud-based B2B integration services under a managed services model.
- A little less than half of enterprises surveyed currently have a digital business initiative that requires use of APIs, and another 6% plan to implement an API program within the next year.
Two of those points I find particularly striking. If so many enterprises are willing to admit that their B2B integration efforts are severely inefficient, why is it that so many companies still cling to these legacy services and only 10% have moved to the cloud? Furthermore, if so many companies are already pursuing cloud- and API-based business initiatives, why would they not apply those initiatives to such a critical — and expensive — aspect of their business processes?
Where is this going?
“People are testing the waters,” Ken Yagen, vice president of products at the API provider MuleSoft, said. “I think you’ll see the growth. You won’t see the EDI transactions diminish, but you’ll see a growth in API transactions rather than traditional B2B EDI transactions.”
As is often the case in the enterprise world, I’m sure this will simply take time to catch on. But hopefully cloud- and API-based B2B integration solution providers, and we tech journalists, can make the effort to show these companies exactly how much money they are throwing out the window on these legacy services and methods.