CW Developer Network


January 29, 2018  9:55 AM

Sumo Logic fills crack in cloud DevSecOps ring

Adrian Bridgwater

Describing itself as a machine data analytics company, Sumo Logic is aiming to put a new capital-C ‘Continuous’ into a technology market already used to Continuous Integration, Testing, Deployment and Delivery.

Sumo Logic’s Continuous is Continuous Intelligence, applied to cloud-native application deployment scenarios.

The company has now scooped up FactorChain, an early stage ‘security threat investigation software’ company.

As a piece of business strategy, Sumo Logic is aiming to create a technology proposition that offers a more converged take on IT ops and security for modern application delivery in the cloud.

Filling the crack

Company CEO Ramin Sayar suggests that there is a crack in the current models governing the adoption of cloud technologies and that, as a result, customers are struggling to adapt traditional security models to cloud applications.

To his mind, this acquisition is a route to solving persistent challenges that exist inside investigation workflows at cloud scale.

Triangulation tribulations

The company says that fundamental challenges are preventing the natural extension of traditional methods to the cloud: understanding application and cloud data with existing tools and skill sets, resolving IT vs. security symptoms and root causes, and quickly triangulating across cloud-scale data sets to resolve threats.

“Further, DevOps models require security to align traditional centralised, backlog approaches to threat investigation to new rapid response, distributed and democratised models. Along with scope of workflow and insight, fundamental breakthroughs are needed in data search, navigation and human-machine collaboration to enable the velocity demanded by these new models,” said the company, in a press statement.

Combining the tools

FactorChain’s investigation platform will now integrate into Sumo Logic’s SaaS Machine Data Analytics Platform, providing retained learning of threat investigation workflows across IT and security.

“Cloud and modern application deployments demand a fundamentally new approach to security threat investigation – workflows must span both the application and infrastructure layers, integrate across both security and IT ops and enable resolution in minutes,” said Dave Frampton, founder and CEO of FactorChain.

Integrated data, analytics and workflow will enable analysts to resolve complex investigations, while quickly identifying infection spread and applying what the company calls ‘accumulated learning’ across IT and security teams.

Free image: Wikipedia

January 26, 2018  2:50 PM

The five enemies of continuous software (and what to do about them)

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network written by Eran Kinsbruner in his role as lead technical evangelist for Perfecto.

Perfecto is known for its ‘continuous’ (as in delivery, integration, testing and deployment) automated, cloud-based mobile application testing tool.

Kinsbruner points out that early in any given year, development teams everywhere often find their pipelines filled with bottlenecks, which can put the brakes on release cycles – and ultimately, their bottom lines.

There are five specific, often overlooked challenges these teams need to overcome to ensure things move smoothly throughout the software delivery lifecycle.

Kinsbruner writes from this point forward:

#1 Tighter release schedules 

To move faster with test automation in the face of tighter release schedules, teams can either implement faster SDLC methods, e.g. those based on BDD or ATDD, and/or shift to faster and more stable test frameworks like Espresso and XCUITest (mobile specific).

In addition, consolidating the CI (Continuous Integration) process so that a single CI pipeline serves all teams will result in products being tested faster and more reliably.
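
As an illustration of the BDD/ATDD style Kinsbruner mentions, here is a minimal sketch using the open source Python ‘behave’ framework; the feature wording, step definitions and the in-line discount stub are all invented for illustration and are not Perfecto tooling.

```python
# Matching feature file (features/discount.feature), shown here as a comment:
#   Scenario: ten percent off
#     Given a basket totalling 1000 pence
#     When the shopper applies the code "SAVE10"
#     Then the basket total is 900 pence

# features/steps/discount_steps.py
from behave import given, when, then

def apply_discount(total, code):
    """Stub standing in for the production function under test."""
    return total * 0.9 if code == "SAVE10" else total

@given("a basket totalling {total:d} pence")
def step_basket(context, total):
    context.total = total

@when('the shopper applies the code "{code}"')
def step_apply(context, code):
    context.result = apply_discount(context.total, code)

@then("the basket total is {expected:d} pence")
def step_check(context, expected):
    assert context.result == expected
```

The acceptance criteria live in the plain-language feature file, so product owners can read and amend them directly; that shared, executable specification is what makes BDD/ATDD pipelines quicker to agree on and automate.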

#2 Test automation lacks best practice

Best practice around test automation stability and reliability is lacking, but we do know that test stability can be attributed to two major factors.

  1. One is the lab and test environment in which the test is running.
  2. The second is the test code itself.

Assuring a reliable test lab that can guarantee device and browser availability and connectivity is a critical precondition of the overall execution.

Following test development practices ensures that the tests will be less flaky and generate accurate results, continuously. Test code is code — and therefore requires maintenance and ongoing refactoring as products and features change or are retired.
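
To make the ‘test code is code’ point concrete, here is a minimal sketch of one of the most common flakiness fixes, replacing a fixed sleep with an explicit wait, using the open source Selenium Python bindings; the URL and element id are invented for illustration.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical page under test

# Instead of time.sleep(5), wait only as long as the element actually needs,
# up to a ten-second ceiling; this keeps the test stable without padding
# every run with a worst-case delay.
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "submit"))  # hypothetical element id
)
button.click()
driver.quit()
```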

#3 Test execution’s flaky management

Intelligent test execution management isn’t optimised. There, we’ve said it.

With little time between releases, teams should optimise tests executed within the DevOps  pipeline as much as possible.

To do so, teams need to be able to focus on the right tests and the right target platforms on which these tests will run.

Today there are a few ways teams achieve this goal. One is through analytics and test report analysis that helps identify the most valuable tests and the most problematic platforms. Another – a growing trend in the industry – is the use of machine learning tools that run through the entire test data set and guide decision makers, or tools that can more easily generate robust test code.

These techniques help increase product release velocity.
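
A minimal sketch of the analytics-driven selection described above might rank tests by historical failure rate and run the riskiest first; the run history here is invented for illustration, where real inputs would come from CI test reports.

```python
from collections import Counter

# (test_name, passed) pairs harvested from past CI runs; illustrative data
history = [
    ("test_login", False), ("test_login", True),
    ("test_checkout", False), ("test_checkout", False),
    ("test_search", True), ("test_search", True),
]

runs, failures = Counter(), Counter()
for name, passed in history:
    runs[name] += 1
    if not passed:
        failures[name] += 1

# Highest historical failure rate first: the tests most worth the
# pipeline's limited execution time.
ranked = sorted(runs, key=lambda t: failures[t] / runs[t], reverse=True)
print(ranked)  # ['test_checkout', 'test_login', 'test_search']
```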

#4 A framework frame of mind

Evolving and maintaining test sets will maximise productivity, but do we have this insight at our disposal?

This challenge can be overcome through a smarter selection of test frameworks.

Having a test framework that manages the entire object repository, the execution at scale in parallel and provides proper insights and reports is the key to maximising test team productivity.

Teams often choose more than one framework to address their objectives: some will be open source tools and others commercial. However, success lies in having freedom of choice and the ability to manage the entire set of test artefacts.

#5 Synchronisation is out of sync

There’s a lack of synchronisation between the tool stack and organisational capabilities.

Similar to the previous point, having the freedom to choose a test framework – whether you are a developer or test engineer – is critical to success. But, if there is no efficient orchestration of the entire testing activities and the dev and test teams are out of sync, wrong decisions can be made and blind spots will appear in the overall quality processes.

In an age dominated by fast and frequent software and app releases (iOS 11 is a good example), the DevOps pipeline cannot run efficiently without testing continuously throughout the software development lifecycle.  

Eran Kinsbruner’s recent ebook ‘Best Practices For Expanding Quality In The Build Cycle’ is linked here.


January 24, 2018  10:33 AM

Low-code platforms in the RAD vs. Agile speed wars

Adrian Bridgwater

How do developers feel about so-called ‘low code’ platforms?

In truth, we don’t really know yet as their very existence is a comparatively new phenomenon on the clock that tracks spacetime in the total software application development universe.

One player in this space is OutSystems. The company has a low code platform to allow programmers to visually develop an application, integrate with existing systems and add their own code when needed.

In this age of code automation, intelligent provisioning, containerised application structures and packaged services… could low code come out of its ‘only used by incompetent and lazy programmers’ reputation and finally feature as a useful means of production in high intensity software shops?

VP for UK and Ireland at OutSystems Nick Pike (obviously) thinks that low code will be popularised, and he takes time here to clarify its place in a world where high-velocity programming shops must very clearly understand and appreciate the difference between Rapid Application Development (RAD) and Agile development.

Code fast, always

Both RAD and Agile are broadly thought of as a means of coding fast, to put it simplistically, but what is the difference?

“RAD and Agile both emphasise early and continuous software delivery and welcome changing requirements even in late development. However, Agile prescribes its methods and ideal working environments. RAD is far more flexible, emphasising quality outcomes over the exact way and timeframe in which they are delivered,” said Pike.

RAD methodology

Pike says that the essence of RAD is its flexibility to adapt to changing customer vision throughout the development cycle. It starts by defining a loose set of requirements, so developers get an idea of what the product needs to achieve — and this couldn’t be further away from those detailed spec sheets of the past.

“Developers then create a prototype that satisfies all or some of the requirements. This prototype may cut corners to reach a working state and that’s acceptable because any technical debt accrued will be paid down at a later stage. This prototype is presented to the client and feedback collected. At this point, clients may change their mind or discover that something that seemed right on paper makes no sense in practice. This kind of revision is an accepted part of the RAD approach and developers return to step two to revise the product,” said OutSystems’ Pike.

If client feedback is entirely positive then developers can move to the ultimate step of finalising the product and it can be handed to the client with confidence that it meets their requirements.        

RAD benefits

Pike and the OutSystems team detail the following potential benefits of RAD:

  • Speed: Thanks to the feedback from the prototype phase, there is a far greater likelihood that the product delivered will be acceptable to the client the first time.
  • Cost: Developers build the exact systems the client requires and nothing more instead of building out complex features that may not make the final cut. The prototype stage of RAD eliminates this costly exercise.
  • Developer Satisfaction: The client is there every step of the way as developers present their work frequently. Not only does the client become more confident, but the development team no longer feels divorced from those who will be using their software.

Is 2018 the year of low code?

Well definitely somewhat maybe… but in a world of faster development, knowing where we stand on RAD vs. Agile is fundamental if we are to speed up.


January 16, 2018  9:17 AM

Sage CTO: Rise of the ‘application cloud’ & other stories

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network written by Klaus-Michael Vogelberg in his capacity as chief technology officer at financially-focused Enterprise Resource Planning (ERP) company Sage.

Vogelberg is perhaps somewhat unimpressed by the gadgets and gizmos being displayed at the Consumer Electronics Show (CES) in Las Vegas this month.

He insists that it is vitally important that we aren’t fooled into thinking that technology innovation in 2018 is all connected cars, talking robots and face-recognition smartphones.

Instead, Vogelberg is rather more impressed by the code that goes into these shiny new devices.

He insists that the ‘constant evolution’ will be in the software developed to connect those devices and the new forms of software engineering that will underpin the devices launched in 2019, 2020, 2021 and beyond.

So with that in mind, here’s a snapshot of Vogelberg’s software trends to watch out for in 2018 — Vogelberg writes from this point forward.

The rise of the application cloud

Cloud computing has been a game changer for consumers and businesses across the globe over the past decade. However, this year we will see the market for cloud platforms compete on customer benefits rather than technology capability.

Few cloud platforms are pure technology platforms; most could be more accurately described as application clouds delivering app-centric user experiences. The Apple iPhone pioneered this concept of an application cloud and Salesforce adopted it for business with its Lightning Platform (aka Force.com) and AppExchange. Microsoft is taking Office 365 and elements of Azure in a similar direction, while Facebook and Google remain customer experience platform providers to watch.

The implication of this shift in 2018 is that platform choices in the cloud will only be partly driven by technology considerations; application clouds provide access to customers and markets. This becomes just as important a consideration for an ISV as the technological merits any given platform may provide.

End of the architecture monoliths

Software used to be designed for a given technology platform, leveraging and extending the architectural features of the ‘stack’, resulting quite literally in monolithic software products.

The shift to customer-centric ‘experience’ platforms makes the monolithic approach less attractive as it creates a dilemma.

Implement an application on a particular platform and it will only ever work in that environment; design in a monolithic fashion, and it will lack the architectural flexibility to embrace customer-experience platforms.

Expect to see this debate played-out in software labs across the world this year.

Serverless event-driven programming

Microservices require infrastructure to operate in a layer typically referred to as ‘platform as a service’ (PaaS). 2018 will see a generational shift in PaaS to ‘serverless’ environments, a technology in which the cloud provider dynamically manages the allocation of machine resources.

Serverless applications do not require the provisioning, scaling and management of any servers, and pricing is based on the actual processing consumed, not on capacity provisioned. AWS Lambda and Microsoft Azure Functions are two leading examples of this technology. Serverless, event-driven programming models are set to revolutionise software architecture; they are the secret sauce behind many of the headline-grabbing technology exhibits in Vegas.
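
As a sketch of the programming model Vogelberg describes, here is a minimal event-driven function written to the AWS Lambda Python handler convention; the event shape and greeting logic are invented for illustration.

```python
import json

def handler(event, context):
    """Invoked once per event; the cloud provider provisions and scales
    the underlying compute, so the developer manages no servers."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Billing then tracks invocations and execution time rather than any pre-provisioned capacity, which is the pricing shift described above.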

Yet this move is not without controversy; one market observer noted that “serverless is one of the worst forms of proprietary lock-in that we’ve ever seen in the history of humanity”, which is perhaps rather strong language, but it illustrates the force of change sweeping through the software world.

All of this helps make technology smarter, more connected and of greater value to the users.

When we talk about invisible accounting that takes advantage of artificial intelligence, machine learning and natural language processing, it is the innovation happening in software architecture and application programming that makes it all possible.


January 15, 2018  8:42 AM

Chicken & waffles for data scientists, hello DataOps

Adrian Bridgwater

It had to happen.

DevOps consolidated the developer and operations functions into one ‘workplace culture’ and so came together the previously separately plated chicken & waffles of the software engineering world into a new (apparently harmonious) union.

Some even talk of DevSecOps, with security as the special sauce… such has been the apparent industry acceptance of the DevOps term.

But this is the age of data analytics, big data, data governance, data compliance, data deduplication, log data, machine data (Ed – we get the point) and so on… shouldn’t data itself get a departmentally unifying portmanteau all of its own?

The answer is yes, it should… and when you add operational excellence to data you (obviously) get DataOps.

DataOps is here

Data convergence player MapR has come forward with its DataOps Governance Framework.

The ‘framework’ (well, company initiative with custom-tuned software services relating to the core MapR stack) will integrate the MapR Converged Data Platform with selected partner technologies.

Its aim is to help companies meet compliance requirements for data governance beyond traditional big data environments such as Apache Hadoop.

According to a MapR press statement, this technology is tailored for organisational data transformation and data lineage requirements — further, it focuses on data quality and integrity to help meet obligatory compliance, including data privacy requirements.  

“By providing a comprehensive open approach to data governance, organisations can operate a DataOps-first methodology where teams of data scientists, developers and other data-focused roles can train machine learning models and deploy them to production. DataOps development environments foster agile, cross-functional collaboration and fast time-to-value that benefits the entire enterprise,” said Mitesh Shah, senior technologist at MapR.  

Shah finishes by claiming that the MapR DataOps Governance Framework lets organisations extend required data lineage and other governance needs across clouds, on-premises and to the edge with all data in any application – even those considered outside of the big data realm.


January 10, 2018  10:48 AM

Chef: The DevOps tools arms race is over, time for application-centrism

Adrian Bridgwater

The Computer Weekly Developer Network asked, is 2018 the year of the DevOps backlash?

Well, it was only meant to be an open question, a hypothesising supposition, a point of informed speculation that might lead to an informal pub discussion at best.

But, as is often the way with these things, the industry has taken it as a clarion call for commentary and deeper analysis… and who are we to turn down the opportunity for deeper inspection of the DevOps state of the nation?

It’s time to hear from Chef, the DevOps-focused infrastructure automation specialist.

Mainstream monetisation

Technical community manager for EMEA at Chef Software Mandi Walls thinks that DevOps itself is approaching the point of mainstream monetisation. So much is this the case that she suggests we are at the point of seeing the monetisation of [DevOps] buzzwords and certifications, which happens when a fashionable technology or culture gains speed.

But the road ahead is not just DevOps simples.

“There are still industries which are dependent on technology yet haven’t embraced it as a primary strategy for growth and improvement, such as sectors heavily reliant on outsourcing for software development, who may consider it to be a back office function. These kinds of businesses are not yet perceiving or applying many of the most valuable changes possible through DevOps,” argues Walls.

DevOps tools arms race

Walls describes the current state of the ‘DevOps tools arms race’ and says that (in purer, cleaner, arguably better times) we were more focused on technological tooling and workflow – to deliver customer and user benefits, as a way to boost the bottom line.

She says that while this new era helps companies deploy a wider range of features and fixes for customers, it doesn’t account for what tools and methods the development and operations teams use. So, improving the staff experience is a massive benefit they’re missing out on.

“Overall, we’ve seen technology take an increasingly important role in everything from banking and healthcare to education and construction, while lagging industries include insurance and utilities – partly because they’re heavily regulated environments where the constraints have bred an ecosystem of specialised practitioners. These industries may eventually move en masse once their baseline concerns have been satisfied by specific, custom solutions,” said Chef’s Walls.

Also keen to throw opinions into this discussion is Chef VP of marketing Mark Holmes.

Infrastructure to app-centrism

Holmes contends that the current growth in containerisation is creating a new era of what he calls ‘application-centrism’.

“Backed by the general shift to the cloud and the increasing distribution and composition of applications, [the current growth in containerisation] means that there is a gradual shift from ‘infrastructure-centrism’ — where the unit of value is a server and the unit of work is a configuration — to ‘application-centrism’, where the unit of value is a service and the unit of work is a deployment. This modal shift also requires automation at scale, though with different jobs to be done,” said Holmes.

Chef Software’s wider position states that DevOps tools need to match these new modes, so that we can do the new things the right way, rather than trying to force the old way.

The company says that this current change is constant (and so will be long term) and that the rise of serverless, or ‘service-centrism’ will again adjust the mode and requirements for automation.


January 8, 2018  9:31 AM

Qualys: How to be cool with DevOps

Adrian Bridgwater

The Computer Weekly Developer Network asked, is 2018 the year of the DevOps backlash?

Well, it was only meant to be an open question, a hypothesising supposition, a point of informed speculation that might lead to an informal pub discussion at best.

But, as is often the way with these things, the industry has taken it as a clarion call for commentary and deeper analysis… and who are we to turn down the opportunity for deeper inspection of the DevOps state of the nation?

First among a small group of spokespeople invited to deconstruct modern DevOps is Chris Carlson in his role as vice president of product management at cloud security, compliance and information services company Qualys.

Carlson writes as follows…

All processes and technologies have a hype cycle, growing pains and detractors even as they become mainstream and commonplace – DevOps is no exception.

DevOps drives much more value than software development methodologies like Agile, because DevOps extends beyond and spans much more than just the development function in any given organisation.

DevOps extends very much into operations, of course it does [that’s why it’s called DevOps], but also and just as importantly, DevOps extends its influence into business strategy, competitive strategy, financial strategy, product management, release planning, customer service, even employee recruitment and retention.

While manual actions can initially be used to add new capabilities into a DevOps process (like performing vulnerability assessments during the development phase so that insecure applications are not released into production), over time those manual actions tend to be performed less or ignored entirely. Only by automating the manual action and integrating it into the end-to-end DevOps pipeline can it be made sustainable.
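
As an illustration of Carlson’s point about automating such gates, here is a minimal sketch of a pipeline step that fails the build when a scanner reports high-severity findings; the scan_results.json file and its schema are invented for illustration and do not represent any specific Qualys output.

```python
import json
import sys

# Illustrative scanner output, e.g. [{"id": "CVE-2018-0001", "severity": "high"}]
with open("scan_results.json") as f:
    findings = json.load(f)

high = [item for item in findings if item.get("severity") == "high"]
if high:
    # A non-zero exit code fails the CI stage, so insecure builds never
    # reach production without anyone having to remember a manual review.
    print(f"Security gate failed: {len(high)} high-severity findings")
    sys.exit(1)

print("Security gate passed")
```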

As with any (new) process utilising (new) technologies in a different fashion, the ‘cultural transformation’ is as important as the tool or process transformation.

Being cool with DevOps

All stakeholders, constituents and consumers need to have bought into the approach, process, tool usage, metrics, monitoring, feedback and continual improvement. Doing DevOps because it’s new or cool will create more failure cases, which don’t necessarily prove that DevOps is not successful or valuable.

Successful DevOps – and DevSecOps – implementations aren’t only driven top-down by executives; they can also be driven bottom-up by practitioners to create incremental and continual improvements in existing development and operational processes. Even top-down initiatives need a cultural transformation for DevOps/DevSecOps projects to succeed.

While an organisation can have a successful initial DevOps project without cultural transformation, the likelihood that the success and benefits will be sustained becomes much lower. There is a lot of excitement to try new processes that get buy-in at the beginning, but without continual organisational support (or automation), it’s natural for people to fall back to the old ways of doing things.

Tactics, strategy, objectives

The metrics of successful DevOps projects are more tangible if they are driven by [intelligent tactics that lead to well composed] business strategy and objectives.

Development teams becoming more efficient in a vacuum might save some costs within that one department, but successful business initiatives fulfilled by a successful DevOps implementation drive revenue and market share increases for the organisation as a whole.

Cyber issues (information risk management) become even more important in DevOps than improvements in isolated standalone development methodologies. If a development organisation goes Agile but still uses waterfall methods to build, package and release its business applications, there are still checkpoints for IT security to assess, evaluate and approve new code prior to deployment to production.

DevOps accelerates the release of applications into production that can completely bypass IT security assessments.

This is where DevSecOps becomes even more important – not as a way to slow down DevOps to force in or bolt on security, but rather as a way to seamlessly build security into the fabric of the DevOps people, processes and tools.

Dude, be cool, this is DevOps. Image: Qualys


January 3, 2018  7:14 PM

Blockchain for developers, where do we start?

Adrian Bridgwater

The Computer Weekly Developer Network wants to know what’s next for software engineers, that much you already know.

In that regard, we want to know how developer-programmers should be thinking about blockchain technologies in relation to the builds they are currently working on and the projects they are about to embark on in the near future.

Why? Because blockchain is widely lauded as one of the key driving factors influencing tech in 2018 and beyond.

For those that need a reminder, blockchain is a type of distributed ledger for maintaining a permanent and tamper-proof (we often use the term ‘immutable’) record of transactional data.

A blockchain functions as a decentralised database that is managed by computers belonging to a peer-to-peer (P2P) network. Each of the computers in the distributed network maintains a copy of the ledger to prevent a single point of failure (SPOF) and all copies are updated and validated simultaneously.
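
A minimal sketch in Python shows why such a chained ledger is tamper-evident; the block structure is deliberately simplified for illustration, with none of the consensus, signing or networking a real blockchain adds.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    """Append a block that records the hash of its predecessor."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def is_valid(chain):
    """Every stored prev_hash must still match the block before it."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, "alice pays bob 5")
add_block(chain, "bob pays carol 2")
print(is_valid(chain))                   # True
chain[0]["data"] = "alice pays bob 500"  # rewrite history...
print(is_valid(chain))                   # False: the tampering is detected
```

Because each node in the P2P network holds its own copy of the chain and can run the same validation, no single machine can quietly rewrite the record, which is the single-point-of-failure argument made above.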

Blockchain for developers

With countries like the United Arab Emirates working to migrate their entire public sector records bases to blockchain over the next decade (or much sooner), what should software application development professionals be cognisant of in relation to this fast-growing technology standard?

Sandy Carielli is security technologies director at Entrust Datacard, a company specialising in trusted identity and secure transaction technologies.

Carielli points out that today, in 2018, we are at the stage where dozens of possible blockchain applications are being thrown against the wall… and we’re not yet sure what will stick.

“When you’re building a blockchain application, the first step is to ensure there is a clear understanding of how blockchain adds value to the application. Like PKI 25 years ago, blockchain is a hyped technology that investors and technologists apply to almost every problem that they see,” said Carielli.

She insists that it’s important for developers to take a step back and ask themselves how solving their problem with blockchain makes things better.

The big blockchain developer question

The question to ask is: does ‘solving the problem with blockchain’ actually introduce new complications that didn’t exist before?

“Additionally, developers must consider how to address some of the risks and limitations that blockchain introduces. It’s well understood that blockchain has a scalability problem: when every node has a replica of the entire blockchain, it starts to get unwieldy at a higher scale. Scalability is a focus of many researchers and start-ups, but until they solve the problem, developers still have an application to build. In order to have a useful and cost-effective app, they must assess the amount of scalability their application requires, and reconcile it with blockchain’s current scalability limitations,” said Carielli.
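
A back-of-envelope sketch makes the replication cost Carielli describes concrete; every number below is an invented assumption, not a measurement of any particular network.

```python
block_size_mb = 1      # assumed average block size
blocks_per_day = 144   # assumed rate: roughly one block every ten minutes
nodes = 10_000         # assumed number of fully replicating nodes

# Each node stores the whole chain, so per-node growth and network-wide
# growth scale together.
per_node_gb_per_year = block_size_mb * blocks_per_day * 365 / 1024
network_tb_per_year = per_node_gb_per_year * nodes / 1024

print(f"each node adds ~{per_node_gb_per_year:.0f} GB per year")      # ~51 GB
print(f"network-wide that is ~{network_tb_per_year:.0f} TB per year") # ~501 TB
```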

She adds another consideration for developers — they should have a disaster recovery plan if things go wrong.

Carielli reminds us that blockchain is notoriously inflexible, so developers must be able to make tough decisions when problems arise.

“For example, back in November, Parity wallet owners found themselves locked out of their ETH wallets due to a code flaw that was accidentally triggered. Parity now finds itself in discussions with its user community on the possibility of a hard fork to recover those funds. If developers’ applications include the use of blockchain for cryptocurrency (or anything else of value), they need to consider all worst-case scenarios up front and develop a policy for how to handle them should the need arise,” said Carielli.

The guiding comment to go away with from Carielli and the Entrust Datacard team is that developers building blockchain applications should think about sharing their contingency plans with the user community and user base — this way everyone (in theory) knows what they are signing up for.

Image: Entrust Datacard


January 2, 2018  9:21 AM

Is 2018 the year of the DevOps backlash?

Adrian Bridgwater

The DevOps honeymoon is over – well, it could be… and here’s one reason why.

As we know by now, DevOps is a portmanteau term used to describe the notion of a more connected, cyclical, integrated and holistically aware way of working between Developers (the Dev in DevOps, obviously) and the Operations team, which could encompass sysadmins, DBAs, configuration specialists, testers and other key supporting operational staff.

DevOps origins

The term itself [arguably] arose not because developers decided they needed to get more friendly with operations (that would never really happen anyway), but because the software industry saw a tier into which it could feed new tools that would attempt to connect the Dev function to the Ops function and produce a more polished, more cost effective, more functional, more robust, less flaky end product.

But DevOps (as a term) has been around for a decade now… popular science seems to agree that the term was coined in 2008, so what happens next?

Sources are murmuring on this topic and some suggest that a DevOps backlash is imminent – but why?

Backlash clouds form

The reason DevOps itself could implode is because of DevOps, that is – in order to embrace DevOps, developers need to use DevOps… but hang on, that’s not quite as tautological as it sounds.

In order to benefit from the Continuous Delivery (CD) dream that DevOps promises, software application development professionals need to use a) their core development platform and environment of choice and b) DevOps tools.

That’s development tools, plus also DevOps tools, just in case you weren’t counting.

What developers would like to use is a more singular integrated toolset that removes the frustrations they feel when they have to change spanners several times to complete an entire development life cycle.

Nails in the DevOps coffin

Could this multi-tooling issue be one of the signs that signals the death of DevOps?

Some of the so-called ‘digital transformation’ [yawn!] projects that we heard so much about in 2017 and before will now logically start to fail — they have to, not everything can work — so will that add another nail in the DevOps coffin? Will we hear people say that not even DevOps can carry you into digitally transformed bliss?

Perhaps a new breed of more competent ‘full stack’ developers will rise up that can handle operations functions and this too will dampen the DevOps furore?

There was DevOps before DevOps anyway i.e. elements of IBM Rational tooling were tackling the issue of ‘code being thrown over the wall’ before the turn of the millennium.

The honeymoon might be over, or, at least, some serious marriage guidance counselling might be needed this year.

Free image: Wikipedia


December 19, 2017  6:56 AM

Okta and others host ‘Iterate’ developer event

Adrian Bridgwater

Has the role of ‘identity developer’ now been formalised?

Perhaps not; identity and user authentication controls will probably fall to specialist security developers and systems architects seeking to place a higher level of lockdown on software applications in production.

Regardless of this truism or suggestion, cloud identity and authentication specialist Okta is hosting a dedicated developer conference named Iterate to augment its core Okta Oktane 2018 event, which will again be held in Las Vegas.

Over and above Okta’s core expertise in identity, the event itself will focus on a wide spectrum of interrelated technologies.

Scheduled as a one-day event for February 27, 2018, Iterate backs up Okta’s core identity developer story, which the Computer Weekly Developer Network covered earlier this year here.

Iterate is a joint effort between Okta, Twilio, the JS Foundation, Atlassian and Algolia.

“Our new developer conference is named Iterate. [The event] is split across two tracks: Build and Evolve. In Build, we’ll explore the ever-changing field of technical best practices (backend, security, front-end, etc.) and in Evolve we’ll talk developer culture: how to automate and improve your tooling, improve your productivity, stay passionate, etc,” said Okta developer Randall Degges – @rdegges.

Okta insists that Iterate is not a vendor conference – that is, there will be no vendor talks and Iterate isn’t about promoting the work being carried out at Okta.


