Cloud and mobile computing are driving demand for a never-ending supply of new applications as enterprises move toward digital transformation, which is driving demand for more developers.
Coding boot camps have entered the equation, providing an influx of new developers. In fact, Coding Dojo, a popular coding bootcamp, claims to graduate more developers annually than any four-year computer science program in the United States.
Top producers of computer science grads
According to the National Center for Education Statistics (NCES) 2016 numbers, the University of California at San Diego had the most undergraduate computer science graduates in the country, with 465 graduates. During that same period, Coding Dojo bootcamp had 811 graduates from its campuses across the country and another 176 graduates from its online program for a total of 987 graduates.
NCES data showed that the rest of the top 10 computer science programs in 2016 broke down as follows:
- University of California-San Diego – 465 graduates
- University of California-Berkeley – 380 graduates
- University of Illinois at Urbana-Champaign – 330 graduates
- University of Minnesota-Twin Cities – 314 graduates
- Oregon State University – 313 graduates
- Massachusetts Institute of Technology – 295 graduates
- University of North Carolina at Charlotte – 281 graduates
- University of California-Irvine – 275 graduates
- Stanford University – 259 graduates
- University of California-Santa Cruz – 257 graduates
In 2017, Coding Dojo projects that it will have 1,178 in-person graduates and 474 online graduates for a total of 1,652.
Graduates positioned to build cloud apps
Dan Oostra, lead instructor at Coding Dojo, told TechTarget that he believes Coding Dojo bootcamp students are ideally positioned to address the growing need for cloud apps, as nearly every stack taught at Coding Dojo today is focused on web development.
“In other words, we are developing the cloud app developers of tomorrow,” Oostra said. “Students are taught from the beginning to understand the nature and inner workings of cloud development and to create applications that will leverage their data and accessibility.”
Moreover, “The skills that we teach Coding Dojo students are curated based on the needs of the large cloud-based organizations like Amazon that are dominating the industry and forcing old native franchises like Microsoft Office and Adobe Creative Cloud to serve their products nearly entirely from the cloud,” he said. “Coding Dojo is developing the developers that are guiding the transition companies make into a cloud-based world.”
And companies have begun to view Coding Dojo bootcamp as a viable source for competent software development talent. Both traditional enterprises and cloud-native companies have tapped Coding Dojo for coding talent, including Amazon, Apple, Disney, Expedia, JP Morgan Chase and Uber.
Quantity versus quality?
Charles King, principal analyst at Pund-IT, said it’s one thing for Coding Dojo to compare the number of students it graduates to formal computer science programs, but it’s something else entirely to suggest it equals four-year schools.
“First and foremost, four-year programs are designed to broaden students’ experiences and how they perceive the world,” King said. “That’s something that technically-focused programs like Coding Dojo don’t spend much time addressing. In addition, universities offer students chances to study in related, complementary curricula, such as business and management programs.”
Moreover, universities in tech-savvy locations offer students opportunities to interact with alumni and local businesses whose interests mirror their own, King noted. Stanford, for example, has long provided a hiring pool for the best and brightest Silicon Valley companies.
“This isn’t meant to knock Coding Dojo but if students hope to transform their love of coding and computer science into a broader range of career opportunities, they should closely consider four-year programs,” King argued.
Coding Dojo teaches students how to build practical applications using the most in-demand programming languages and software frameworks available today, said Jay Patel, head of operations and finance at Coding Dojo.
“We’ve had many of our students go on to successful careers at major companies like Microsoft, Expedia and JP Morgan Chase as well as startups like Alumnify and Sazze,” Patel said.
About 94 percent of Coding Dojo’s onsite boot camp graduates get a technology-related job within 180 days of graduation, he said. And, on average, Coding Dojo students earn $26,000 more in their new jobs following graduation compared to their previous employment, he added.
Target global, adapt local
While the popularity of coding boot camps overall seemed to slide in 2017, Coding Dojo officials said the organization has endured because it adapts its program to fit the changing needs of tech employers. For instance, earlier this year, Coding Dojo dropped Ruby on Rails and added a full-stack Java course.
Overall, Coding Dojo, which has campuses in Chicago, Dallas, Los Angeles, Seattle, Silicon Valley, Tulsa, Okla., and Washington D.C., tries to focus on identifying, fostering and adapting to local technology hiring requirements near its various locations. It then tries to teach its students micro skills that local employers are looking for. For instance, in Seattle, Coding Dojo and Amazon launched workshops to train developers on Amazon Alexa.
Rollbar, which provides a real-time error monitoring SaaS offering, advanced its cause this week, securing $6 million in Series A funding to further build out its engineering and sales operations, among other things.
Bain Capital led the funding round with participation from Cota Capital. Rollbar co-founder and CEO Brian Rue told TechTarget the funding will help the company continue to innovate, to add new integrations and to increase its base of cloud-native customers.
Those customers include the likes of Twilio, Salesforce.com, Blue Apron, Dell, Kayak, One Medical, Instacart and Zendesk. They use the Rollbar error monitoring system to build better software, faster.
Makes building software easier
Describing Rollbar as simply “a tool to help make building software easier,” Rue said the software sits alongside enterprise engineering teams’ CI/CD workflows and works with other logging and monitoring tools. It monitors errors in real-time as new software is deployed and notifies IT staff of any problems it catches.
The software’s real-time capability means that it often catches errors before customers can see them, let alone report them, Rue said.
San Francisco-based Rollbar also provides error grouping and aggregation, prioritization of the most critical errors, telemetry, and support for the AWS Lambda serverless computing system.
“Our telemetry feature gathers all the data on what’s happening in the program before the error occurs,” providing a timeline that is helpful with debugging, he said.
Rue said many of Rollbar’s customers started out using Splunk or Sumo Logic, which both offer log management, analytics and “operational intelligence” features, but they soon run into limitations with those platforms.
Rollbar for regulated industries
Earlier this year, Rollbar delivered a release of its software for users in regulated markets. In April, the company released a version of Rollbar that is compliant with regulations and standards like the Health Insurance Portability and Accountability Act (HIPAA) and ISO 27001. That means any errors containing protected health information or other sensitive data will be handled in compliance with these regulations.
Features supporting the compliant version include data encryption at rest, using a different encryption key per customer, Security Assertion Markup Language (SAML) based single sign-on and a suite of audit controls.
The origins of Rollbar
The idea for Rollbar resulted from Rue needing to address an engineering problem he encountered while working as CTO of Lolapps, a social gaming company he co-founded that later was acquired. The problem was that error monitoring began to falter as the system grew.
“The engineering problem back then was we very quickly reached the issue of scale,” he said. Beyond that, “the [error monitoring] tools that were available didn’t really work very well,” he added.
So, Rue and his team set out to build their own tool, perfecting it along the way as he saw an opportunity to help other developers. That effort led to Rollbar as a commercial product and to the new round of VC funding to help advance the technology further.
Salil Deshpande, managing director at Bain Capital Ventures, said he believes Rollbar is positioned to become a key error-management platform for engineering teams.
“We saw that many companies developing software rapidly, yet reliably, had turned to Rollbar to provide the visibility they need to remediate issues that inevitably occur with such a rapid pace,” he said in a statement.
Stack Overflow, a popular online community for developers, tapped into the power of Microsoft’s Azure cloud and artificial intelligence technologies to create a new chatbot to help developers in a pinch.
As an online community for developers to learn, share their programming knowledge, and build their careers, Stack Overflow represents a ready reference for developers to go to for how-to information when they encounter programming dilemmas.
“Developers want to be in the zone. Anything that gets you answers without taking you out of that zone is powerful,” said Matt Sherman, engineering manager at Stack Overflow. “The promise of Stack Overflow’s bot, built with Microsoft AI, is to keep developers in the zone while they’re working on their code.”
Microsoft’s AI platform
Microsoft’s AI platform features tools like the Bot Framework, Cognitive Services, Cognitive Toolkit, Azure Machine Learning and other tools. These tools help developers infuse AI into existing applications and to quickly build new ones. The company’s goal is to bring these capabilities to the masses of developers. The Microsoft AI technology also is key to the company’s digital transformation strategy.
“In a market sense, Microsoft is working to democratize use of the technology by simplifying its use,” said Rhett Dillingham, senior analyst for cloud services at Moor Insights & Strategy. “In a competitive sense, Microsoft sees AI as a top opportunity to drive developer and data scientist preference for Azure tools that could influence enterprise I&O [infrastructure and operations] leader infrastructure decision-making towards Azure over AWS and Google.”
Moreover, in a blog post, David Fullerton, CTO of Stack Overflow, noted that the knowledge shared on Stack Overflow includes an ever-growing pool of information. This includes information on AI and related topics such as machine learning, natural language processing and deep learning. Stack Overflow hopes to make the Microsoft AI technology more readily available to developers.
“So, when Microsoft showed us how they were bringing AI to every developer through their platforms and tools, and asked if they could partner with us to create an AI driven experience for developers to use and learn with, we of course said yes,” he said.
Microsoft’s Stack Overflow Bot is the first step in the partnership.
Right information at the right time
Alex Miller, general manager of enterprise at Stack Overflow, added that “Bringing in tools like Microsoft’s advanced AI technologies and cognitive services and making them so accessible to developers through the Azure platform, helps every developer out there — making sure you have the right information at the right time.”
Also, Sherman indicated that the new AI capabilities can help with Stack Overflow’s talent business in terms of matching developers with the right opportunities – which the company has been working on for about seven years now.
“With tools like these advanced AI capabilities, we can take our matching to the next level,” he said.
Moreover, David Robinson, a data scientist at Stack Overflow, said the company already uses machine learning to help figure out what people are looking for so Stack Overflow can help to answer developers’ questions.
However, “With this bot, you can integrate that into the development environment so that developers can immediately get the answer to their question,” he said. “By making these tools readily available through Azure Machine Learning, I think that Microsoft is doing the data science community a great service.”
It’s not often that I read the showbiz bible Variety or the Hollywood Reporter when doing background research for a piece on cloud computing. Yet here we are — again — pondering the pervasive perniciousness of breaking into entertainment-industry data assets.
This time around it’s HBO. A late July break-in by allegedly coordinated forces targeted, according to the Hollywood Reporter, “specific content and data housed in different locations.” If the haul actually amounts to the reported 1.5 terabytes, it would be nearly seven times the size of the infamous Sony Studios hack of 2014. According to multiple reports, episodes of Game of Thrones and other significant broadcast content assets were downloaded.
It’s not easy to steal 1.5 terabytes of data. Downloading that much, even to multiple destinations, takes time. Should alarm bells have gone off? Did they? For now, I’ll stay away from speculating about the woes of others.
Yahoo had a billion user accounts hacked in 2013. The Sony hack of 2014 stole not just broadcast and theatrical content, but e-mail messages that embarrassed many and led to the ouster of co-chairman Amy Pascal and others in her wake.
CNN reported in June 2017 that government websites in four states, New York, Maryland, Ohio, and, most recently, Washington, were hacked to the extent of having anti-American messages displayed.
Let’s face the reality: Security is little more than wishful thinking. If you believe an application, system, data store, or infrastructure to be secure, you are asking for a world of trouble.
Banks. Software companies, including Adobe. Government agencies. Media titans. Retail giants, including Target and TJX. Even security company RSA itself was breached in 2011. Windows XP, launched in October 2001 and retired in April 2014, is still the object of security patches from Microsoft. Hospitals have had their data held for ransom. So has a guest check-in system at a hotel in Europe.
The problem with security is that no matter how many onion-like layers we pile on, no matter how pervasive and sophisticated two-factor or even biometric authentication becomes, it can never be enough. All it takes is one click on an innocent-looking e-mail message by a well-meaning employee to circumvent years of efforts and millions of dollars invested. Perhaps we’re seeing the rise of a new mini-industry: HaaS, hacking as a service.
As application developers, there’s only so much we can do. Test APIs to ensure they are up to the latest standards and versions. Log activity into journal files. Work closely with business executives and various IT groups — QA, testing, operations. Ask obnoxiously intrusive questions about planning for app security before a line of code is written. Breaches, after all, are themselves obnoxiously intrusive.
If there was an answer to these major security problems, it’s reasonable to think the combination of big brains and deep pockets would have figured it out by now. Alas, no one has. It’s possible no one ever will.
No one is closer to the bits and bytes, the very lifeblood that flows through the arteries of cloud-based IT systems, than application developers. What is your organization doing to step up security? What plans are in place to deal with a breach after it occurs? What’s your role? There’s lots to talk about. Share your thoughts; we’d like to hear from you.
Microservices, containers, and APIs, oh my. They are the holy trinity that anoints cloud and mobile computing with unfathomable power and limitless scalability. APIs are the glue that holds the others together and makes possible a universe of interactions with services, applications, analytics, and data from, well, anywhere and everywhere.
It makes sense, therefore, to want an API to do as much as possible each time it is called. Do more with fewer calls and you maximize efficiency, right? Maybe, maybe not. Ok, let’s try the converse, an API that does one tiny, highly focused task when called upon. Do less with each call and you boost speed. Again, maybe, maybe not.
Just like two children of differing weights seeking that one golden spot on each side of a see-saw fulcrum that brings the overall system into magnificent equilibrium, APIs need balance. Find that magical point between too many small calls and too few big ones, and you’ve built a work of art. It’s the Goldilocks effect brought into the cloud age.
One of the world’s top API experts calls it API granularity. Manfred Bortenschlager, Red Hat’s director of business development for API-based integration solutions and API management, says developers need to get better at it.
“One thing that’s difficult to get right is API design — in particular, the right granularity. An API could give you a lot of data back; a lot of values, which are potentially unnecessary; or just too much data,” Bortenschlager says. “If you are serving a mobile app, this could be too much payload. On the other end of the scale, you could have an API that gives you too little back. This means that an API consumer would have to issue many API calls. Getting this balance right is tricky.”
APIs don’t exist in a vacuum. An application can easily encompass several dozen. Though each performs one task, all of them, when taken together, must be designed for optimal system performance, resulting in a speedy and enjoyable user experience. Many small API calls can get hung up on network latency that drives users crazy. Fewer, bigger calls require fewer round trips, but may return data or metadata that’s simply not needed, slowing down an application. On mobile devices, these small delays can be deadly, leading to session and user abandonment. Not too big. Not too small. Goldilocks.
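The granularity trade-off can be sketched in a few lines. This is a minimal, hypothetical in-memory "user profile" service (all field names and the sparse-fieldsets idea here are illustrative assumptions, not any particular vendor's API): one coarse endpoint that can return everything, plus an optional field list so a mobile client avoids both chatty calls and oversized payloads.

```python
# Hypothetical profile record; the order_history entry stands in for a
# heavy payload a phone rarely needs.
FULL_PROFILE = {
    "id": 42,
    "name": "Ada",
    "email": "ada@example.com",
    "avatar_url": "https://example.com/a.png",
    "preferences": {"theme": "dark", "locale": "en"},
    "order_history": [{"order": 1001}, {"order": 1002}],
}

def get_profile(fields=None):
    """Coarse endpoint with optional sparse fieldsets.

    With no `fields` argument it behaves like a 'too big' API and returns
    everything; with a field list, one call returns exactly what the
    client renders, so there is no need for many tiny follow-up calls.
    """
    if fields is None:
        return dict(FULL_PROFILE)
    return {k: FULL_PROFILE[k] for k in fields if k in FULL_PROFILE}

# A mobile client asks only for what it will draw on screen:
slim = get_profile(fields=["name", "avatar_url"])
print(slim)
```

The same idea appears in real APIs as sparse fieldsets or query languages such as GraphQL; the point is that one tunable endpoint lets consumers pick their own spot on the see-saw.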
A separate issue, but no less important, is that businesses publish APIs that allow their customers to gain access to data or services. It’s a fact of life that software gets updated and that you’ll never, ever get all users of an API to be on the same version at the same time. (How many are still running Windows XP?) Some may upgrade today, others not for months, still others not at all.
That means maintaining tight control over versions is essential. And it’s not easy, according to Bortenschlager. “It’s impossible to have an API with changes that never break. It’s impossible. That’s just the nature of it,” he told me. “What’s important is to communicate changes very well and far in advance.”
Through API management, Bortenschlager says, it’s possible to know exactly who is using each version of an API. “If you know that you are going to change a subset of your API to a new version, you can target the communication to those developers in advance,” he says. Good advice.
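That targeted-communication idea reduces to bookkeeping: record which consumer key calls which API version, then look up exactly who needs a deprecation notice. A minimal sketch (consumer names and version labels are hypothetical, and a real API gateway would do this from access logs):

```python
from collections import defaultdict

# version -> set of consumer API keys seen calling it
usage = defaultdict(set)

def record_call(api_key, version):
    """Called by the gateway on every request."""
    usage[version].add(api_key)

def consumers_to_notify(deprecated_version):
    """Who should hear, in advance, about a breaking change?"""
    return sorted(usage.get(deprecated_version, set()))

record_call("team-a", "v1")
record_call("team-b", "v2")
record_call("team-c", "v1")

print(consumers_to_notify("v1"))  # ['team-a', 'team-c']
```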
How does your company manage its API assets to ensure that those used in an application are efficient and compact? And how do you deal with the headache of having multiple versions of an API in use simultaneously? Share your thoughts; we’d like to hear from you.
The CloudExpo conference in New York is always a good take for developers, architects, and managers who want to understand where the technology of cloud computing is headed next. Serverless computing appears to be that destination.
As session presenter Doug Vanderweide from the Linux Academy — as entertaining a speaker as you’ll ever run into at a technology conference — puts it, the first thing you need to know about serverless computing is that, yes, there are servers. They’re just not yours.
Let’s back up a step and note that today’s cloud computing boils down to microservices and containers. Each offers profound benefits, though neither is perfect.
Containers are hot, and it is microservices that make them great, Vanderweide says. Microservices break work into small steps with APIs to handle them. You can manage functionality independently, streamline development, and save time with reusable code. Microservices work best when running in small virtualized environments, namely, containers, which are quickly deployed, inexpensive to run, easily scaled and orchestrated, and offer version control.
But, beware, Vanderweide says. Containers exist in a cloud technology ecosystem that’s changing daily. They’re prone to sprawl, can suffer from broken dependencies, and they are at the mercy of networking woes. Serverless to the rescue.
As Vanderweide explains it, serverless computing is anonymous, generalized virtual machine instances that are managed by the cloud provider. They’re provisioned when needed and de-provisioned when you’re done. They’re billed based on executions and resource consumption, not at an hourly rate. With a focus on triggers, inputs, and outputs, along with high availability and superb scalability, serverless is a great match for microservices.
The base operating system (Linux or Windows) is a general configuration that supports multiple languages (Node.js, Python, .NET Core, Java, etc.). The key to this is the provider can quickly provision instances because they are all the same no matter the corporate user.
The real allure may be in the stellar TCO (total cost of ownership) that serverless delivers. When you look at VM vs. function-based pricing for 2 million executions per month, consuming 4 GB-seconds (4 GB of memory used for one second) per execution, the differences are clear. Vanderweide says that works out to $279.74 on AWS and $220.97 on Azure, but, in a serverless ecosystem, a paltry $121.80 for Azure Functions and $129.86 for AWS Lambda. Pretty impressive stuff.
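As a sanity check on figures like those, the GB-second math fits in a few lines of Python. The rates below are illustrative placeholders, not current AWS or Azure prices, and free-tier allowances are ignored:

```python
def monthly_function_cost(executions, gb_seconds_each, rate_per_gb_s,
                          rate_per_million_calls):
    """Back-of-envelope serverless bill: compute charge + request charge.

    Rates are hypothetical inputs; plug in your provider's published
    prices and subtract any free tier for a real estimate.
    """
    compute = executions * gb_seconds_each * rate_per_gb_s
    requests = executions / 1_000_000 * rate_per_million_calls
    return compute + requests

# 2 million executions/month at 4 GB-seconds each, as in the example above,
# at an assumed $0.0000160 per GB-second and $0.20 per million requests:
cost = monthly_function_cost(2_000_000, 4, 0.0000160, 0.20)
print(round(cost, 2))  # 128.4
```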
Vanderweide calls this “the long tail of serverless.” For the cloud provider, the sameness of configurations for everyone greatly reduces the expense of providing them. That means each new instance is, well, instantly profitable.
Compared with containers, similar function workloads cost less to run and you never pay for capacity that’s sitting idle. Beyond that, automation, abstraction, and cloud vendor services can eliminate DevOps tasks (and possibly DevOps payroll, too). Infrastructure costs drop, the systems development lifecycle is simpler, server management is no longer your problem, and deployments are faster.
It’s not perfect, of course. Serverless computing can suffer from laggy startups of cold code. And it’s an immature technology that may leave you wedded to a specific cloud platform provider, at least for now.
Vanderweide sums up the advantages of serverless computing with a quotation from Greg DeMichillie, head of developer platform and infrastructure at Adobe: “In five years, every modern business will have a substantial portion of their systems running in the cloud. But that’s only the first step.” DeMichillie goes on to say, “The next step comes when you free your developers from the tedious work of configuring and deploying even virtual cloud-based servers.”
What’s your take on serverless computing? Are you still trying to catch your breath with (and I hesitate to use this word) “traditional” cloud computing? Too much too soon? Or are you ready to get out in front of the next wave and carve out a new career path — again? Share your thoughts about serverless computing; we’d like to hear from you.
Designed by engineers, comprehensible only by engineers. You’ve no doubt heard some variation of that old maxim. Let an engineer design a software or hardware product, and the average person will have a tough time figuring out how to use it, because the user interface is arcane, convoluted, circuitous, dense, indescribable, inexplicable — or worse.
What made me think of this is a new paper, published online today by Adobe, called “12 Tips for Mobile Design.” It’s a good read, and I suggest that developers, architects, and anyone else who touches the mobile app universe in any way invest some time. Building a beautiful-looking app that is a joy to use is a vastly different exercise than building efficient, error-free code. After all, as we move from DevOps into BizDevOps, which brings developers deeper into the business side — and closer to customers — than ever before, understanding design concepts (or at least being able to talk a good game) is useful.
Adobe says we need mobile apps that are not just “useful,” but “intuitive” as well.
And there’s the rub. Developers (we used to call them programmers) are good at developing. Good at thinking serially. In loops. In if-this-then-that (IFTTT) case structures. In writing tight, API-driven, containerized-as-microservices code. In stark contrast, interface designers — and it truly is a special discipline combining art and psychology — are good at UI/UX, designing the user interface and user experience, neither of which are logic structures. They can’t write a lick of code. I’m simply suggesting that a little cross-pollination is a good thing for everyone.
What are the 12 tips, you ask? Here’s the list. It’s up to you to read the paper and dive into each one.
- De-clutter the user interface
- Design for interruption
- Make navigation self-evident
- Make a great first impression
- Align with device conventions
- Design finger-friendly tap-targets
- Design controls based on hand position
- Create a seamless experience
- Use subtle animation and micro-interactions
- Focus on readability
- Don’t interrupt users
- Refine the design based on testing
None of these have anything to do with platforms, infrastructures, or anything else “as a service.” It’s not about AWS vs. Azure vs. Google vs. Bluemix. It’s about you. Sure, you’re a great code jockey, but what about your interface, navigation, experience, color-palette, and typography skills? Where do you fit in? Share your thoughts; we’d like to hear from you.
NASA. Remember NASA? It’s the once-glorious government agency that put men on the moon, the agency whose Voyager I space probe left our solar system in 2013 for parts unknown, the agency that, in the immortal words of John F. Kennedy, did things not because they were easy, but because they were hard.
FORTRAN. Remember FORTRAN? Well, of course you don’t. And that’s precisely why, in mid-2017, maintaining ancient programs written in it isn’t easy. It’s hard. Really hard. It’s so hard, in fact, NASA is holding a contest featuring a prize purse of up to $55,000. It’s the sort of app dev challenge that would look good on any résumé — if you know FORTRAN, that is. And computational fluid dynamics, too.
According to NASA, all you need to do is “manipulate the agency’s FUN3D design software so it runs ten to 10,000 times faster on the Pleiades supercomputer without any decrease in accuracy.” It’s called the High Performance Fast Computing Challenge (HPFCC).
If you’re a U.S. citizen at least 18 years old, all you need do, NASA says, is download the FUN3D code, analyze the performance bottlenecks, and identify possible modifications that might lead to reducing overall computational time. “Examples of modifications would be simplifying a single subroutine so that it runs a few milliseconds faster. If this subroutine is called millions of times, this one change could dramatically speed up the entire program’s runtime.”
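The arithmetic behind that claim is worth spelling out: tiny per-call savings multiply by the call count. The figures below are hypothetical, chosen only to show the scale:

```python
def total_savings_seconds(calls, ms_saved_per_call):
    """How a few milliseconds shaved off a hot subroutine add up
    over an entire run."""
    return calls * ms_saved_per_call / 1000.0

# Assume a subroutine called 5 million times per run, made 3 ms faster:
saved = total_savings_seconds(5_000_000, 3)
print(saved, "seconds saved, i.e.", saved / 3600, "hours per run")
```

Five million calls at 3 ms each works out to 15,000 seconds, more than four hours per run, which is why profiling for the hottest subroutines is the first step NASA suggests.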
If you’ve ever asked what you can do for your country, this may be it.
It’s your chance to go far beyond mere cloud computing, your chance to do outer-space computing — perhaps to infinity and beyond.
Ok, let’s get serious… FORTRAN has suffered mightily from the same ignominious fate as assembler language and COBOL (the language that paid my bills for many years). No one cares about FORTRAN, no one wants to learn it, few institutions bother to teach it, and many who were expert in it are long-since deceased.
Physicist Daniel Elton, in a July 2015 personal blog entry, suggests that FORTRAN remains viable (at least among physicists) because of the enormous amount of legacy code still in production, its superior array-handling capabilities, little need to worry about pointers and memory allocation, and its ability to catch errors at compile time rather than run time. In a March 2015 post in the Intel Developer Zone, Intel’s Steve Lionel (self-anointed “Dr. FORTRAN” and now recently retired) said a poll of FORTRAN users conducted at the November 2014 supercomputing conference indicated 100% of respondents would still be using the language five years later.
With good reason, we live in a world dominated by the likes of Java, C, C++, C#, Python, PHP, Ruby, Swift, R, Scala and scads of others. Visual Basic, Pascal, PL/I, Ada and APL, along with COBOL and FORTRAN, have seen their day. The problem is that, to paraphrase Gen. Douglas MacArthur, old programming code never dies — and it doesn’t fade away, either.
How much ancient code from legacy languages do you come across in dealing with enterprise IT? Are you afraid to tinker with it? Does anyone know what those programs actually do? Has the documentation been lost to the ravages of time? Does the source code still exist? Tell us how you deal with it; we’d like to hear from you.
Cloud deployments of software often pose the most ticklish error detection and repair problems. Customers are constantly using a cloud app developer’s products, at all times of day and night and across geographies. Meanwhile, it’s a safe bet that something in those releases will be breaking, and error fixes will be needed, said Brian Rue, CEO of Rollbar, which provides real-time error monitoring services for developers. The trick is detecting errors quickly, rather than waiting for customers to report them.
“You’re releasing improvements, releasing bug fixes, and that constant state of change means that you need to have a constant state of monitoring,” said Rue. “If something is broken, and you don’t find out about it until a customer writes in days later, it could easily be days or weeks before you find a way to repeat the problem. The development team gets caught up in a constant state of firefighting.”
Rue shares some best practices for error handling and making code error fixes in this article. Rue co-founded Rollbar after experiencing the problems of error handling when developing gaming apps, at first on a kitchen table in a garage with three colleagues.
The vicious circle
“Imagine a circle starting from deployment,” explained Rue. From deployment, the next thing that typically happens is an error. Your team needs to discover whether it’s a new error or a repeat error. A new one calls for alerting and prioritization. Once an error is prioritized, the developers can explore the data for the error. They can discover which users the error affects, the values of the variables and other information about the cause of the error.
“Usually, that’s enough data to enable writing and deploying error fixes,” said Rue. Then it’s on to the next problem. “That wheel of release, error monitoring and error fixes is constantly spinning,” he said.
Structured data is good data
The better the data structure, the more the developer can discover about each code error. “Data really should be structured in terms of keys and values, as opposed to just raw strings,” said Rue. For example, say there’s an error message that reads: “This user tried to log in and it failed.” That might be something the cloud developer wants to log. It should be logged as, say, “User login failed,” with the user ID as metadata. That way it’s easier to group, since there is just one message saying “Login failed,” and the cloud developer can see all those occurrences together, a step closer to making error fixes.
“Once you have that structure, you can easily query data forward to see which logins failed. You can figure out how that correlates against other problems, and so on,” Rue said.
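The contrast Rue draws can be shown in a few lines of Python. With key-value records, "which logins failed" is a simple filter rather than a regex over raw strings; the event and field names below are assumptions for illustration.

```python
# Structured log events: keys and values instead of raw strings.
# The same message ("user_login_failed") groups all occurrences,
# while the metadata (user_id, reason) varies per occurrence.
events = [
    {"event": "user_login_failed", "user_id": 42, "reason": "bad_password"},
    {"event": "user_login_failed", "user_id": 7,  "reason": "bad_password"},
    {"event": "checkout_error",    "user_id": 42, "reason": "timeout"},
]

# Because every record shares the same keys, querying is a filter, not string parsing.
failed_logins = [e for e in events if e["event"] == "user_login_failed"]
affected_users = {e["user_id"] for e in failed_logins}
```

From here, correlating against other problems is another filter or join on the same keys, which is exactly what raw strings like "User 42 tried to log in and it failed" make painful.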
Add instrumentation to apps
The core of error monitoring is tracking the application from the perspective of the application, according to Rue. To use it, the cloud app developer needs to add the instrumentation to the application. Typically, that’s as simple as installing a Ruby gem, installing a package from npm or installing a piece of Java middleware; all steps most development teams have performed before. “But, at a high level, this requires buy-in from the developers to identify what there is, and then make sure that each component is instrumented,” Rue said.
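In Python, that kind of instrumentation often boils down to installing a global uncaught-exception hook. The sketch below shows the shape of it; `report_error` is a hypothetical stand-in that prints locally, where a real SDK such as Rollbar's would serialize the same payload and POST it to the monitoring service.

```python
import sys
import traceback

def report_error(exc_type, exc_value, tb):
    """Hypothetical stand-in for an error-monitoring SDK call.

    A real agent would send this payload over the network; here we
    just print it so the sketch is self-contained.
    """
    payload = {
        "type": exc_type.__name__,
        "message": str(exc_value),
        "stack": traceback.format_tb(tb),
    }
    print("reporting:", payload["type"], payload["message"])

def instrument():
    """Install the reporter as the process-wide uncaught-exception hook."""
    sys.excepthook = report_error
```

This is the developer buy-in Rue mentions: someone has to call `instrument()` (or the SDK's equivalent) in every component, or errors in the uninstrumented pieces never reach the monitoring service.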
The Red Hat Summit in Boston this week drew more than 5,000 developers, according to Paul Cormier, president of Red Hat’s products and technologies business. That’s impressive for a major software company that literally started out as a flea-market operation.
“It’s so much fun to watch this all roll out,” Cormier says. “I’ve been at Red Hat for 16 years and was employee #120.” And how did Red Hat get its start? Cormier says company founder Robert Young began by “downloading Linux off the ‘net, burning it to CDs, and selling it out of the trunk of his car at flea markets.” This is an outfit that’s come a long way, with a pervasiveness that extends into almost every home. There were seemingly dozens of free, open source Linux distros in the early days, but Red Hat is the one company that created the tools, platforms and enterprise-class support to become the premier survivor.
Two key announcements made at the Red Hat Summit were OpenShift.io, a complete development environment accessed through the browser, and the Red Hat Container Health Index, a method for scoring containers for several factors, including version currency and security. Other announcements were a tightening of Red Hat’s relationship with Amazon Web Services and an on-premises containerized API management platform, which I reported on last week.
OpenShift.io is a new, comprehensive, end-to-end development, test, and deploy environment in a browser. There’s nothing to install on developers’ local desktops, on-premises, or in a business’s private cloud. Everything needed to design, build, and deploy is available through the browser.
“I’ve said this until I was blue in the face — a container is Linux, it’s just Linux carved up in a different way.”
— Paul Cormier, president, Red Hat products and technologies
“Now that we’ve finally put Dev and Ops together, we’re making the tooling more intelligent and more intuitive for developers to be even more productive,” Cormier says. “The OpenShift.io stuff uses artificial intelligence from all the things we’ve learned over the last 15 years to guide developers through building their application and recommend what might be a better path to go than the path they’re on.” With nothing to install, Cormier says developers can begin building from day one, avoiding the weeks and months it can sometimes take to procure and spin up development resources and infrastructures.
Another major announcement was the Red Hat Container Health Index, a service that grades the containerization performance and security of Red Hat’s own products and the products of certified ISVs. It’s not a one-time examination of containers, but rather a way to track ongoing container health volatility, letting you know that a container considered fully secure a month ago, when it earned an “A” rating, is now vulnerable and has dropped to a grade of D or F.
“I’ve said this until I was blue in the face — a container is Linux, it’s just Linux carved up in a different way,” Cormier says. “Container tools help you package just the pieces of the user space OS that you need with the application.” When people were merely experimenting with containers and not yet betting their business on them, they pulled containers from everywhere. Now, customers want a commercial-grade system.
“What we’ve done is containerize all of our products into a RHEL (Red Hat Enterprise Linux) container. We can scan the pieces of the OS that are included and tell if there are known security vulnerabilities, bugs, or if there’s a new version available. We’ve built that into our back-end systems that we use to build all our products,” Cormier says.
Red Hat will now make those tools available to ISV partners to test their own containers. All results will be available through a portal. “If you’re going to be a container provider in the commercial world, this is what you have to do.”
Do you use Red Hat development tools and platforms? What do you think of the company’s announcements this week and how do you plan to leverage these technologies in your upcoming projects? Share your thoughts with us; we’d like to hear from you.