CW Developer Network


January 15, 2020  10:22 AM

What to expect from OurCrowd Summit Israel 2020

Adrian Bridgwater

The Computer Weekly Developer Network team is looking for new innovation, always.

Away from the well-trodden conference halls of Barcelona, London, San Francisco and Las Vegas, we now have an opportunity to focus on a different zone where a new breed of software and technology services companies are increasingly coming to the fore.

That space is Israel… and, if truth be told, the interest being generated in this country’s technology space is both local and global in nature.

In the past year, the total value of Israeli startup acquisitions and IPOs was $9.9 billion, reflecting 80 deals with an average deal size of $124 million. Further, 30 of the 500+ startups with a valuation over $1 billion were founded by Israeli entrepreneurs.

The OurCrowd Global Investor Summit Israel 2020 is staged in Jerusalem from February 13th to 14th… and our editorial team will be there in force.

The event’s organisers remind us that OurCrowd is behind some $1.3 billion in committed funds for around 170 startups and over a dozen venture funds since its inception in 2013.

The brainchild of CEO Jon Medved, OurCrowd’s annual Summit last year saw 18,000 people register to attend from 189 countries (that’s out of 195 countries on the planet). The organisers say it is the fastest-growing tech event and the largest equity crowdfunding event in the world.

Although parts of Summit Week are open to the public, the OurCrowd Summit itself gravitates towards three main days of invite-only presentations and meetups (February 11-13). As well as the more corporate-level gloss, we can expect VC forums, insider access to accelerators and labs, touring opportunities and some local Israeli hospitality.

Startups: Going Beyond

The theme for the 2020 OurCrowd Global Investor Summit is “Startups: Going Beyond”… a tagline perhaps meant to convey the potential for tech startups to plug into the power and breadth of the cloud and go ‘webscale’, i.e. as wide as the web, as big as the data lake needs to be and as broad as the compute engines driving innovation in this space can be pushed.

“From AR-assisted brain surgery to AI that warns of natural disasters to a brain-computer interface that treats spinal cord injuries, startups are creating astonishing solutions to old problems, overturning industries and changing people’s lives for the better,” noted the OurCrowd events team, in a pre-event statement.

The organisers promise us a ‘veritable multitude’ of tech demos, so some highlights to look forward to include:

  • “Top 10 Tech Trends for 2020 and Beyond” – a look at what will be hot, disruptive and actionable in the coming year. With the decade drawing to a close, this session will also identify which of these trends will shape the tech landscape for the next 10 years.
  • “Feeding the Planet Without Killing It” – exploring breakthrough advances in AgTech and FoodTech.
  • “Power to the People: The FinTech Revolution” – focusing on startups providing tools for personal finance.

OurCrowd insists that it is the most active venture investor in Israel today and it vets and selects companies, invests its capital and provides its global network with access to co-invest and contribute connections, talent and deal flow.

OurCrowd founder & CEO Jon Medved said, “The OurCrowd Global Investor Summit is the premier showcase of Israeli technology and a golden opportunity for the entire ecosystem to meet and get business done. We have seen long-term strategic partnerships and hundreds of millions of dollars of investment emerge from the meetings and events at the summit. Many of our participants [72% of applicants for the 2020 summit] are repeat attendees.”

The organisation says it builds value for its portfolio companies throughout their lifecycles, providing mentorship, recruiting industry advisors, navigating follow-on rounds and creating growth opportunities through its network of multinational partnerships.

Other aspects of the Jerusalem show itself include unfunded startups pitching live throughout the day at the ‘Open Mic for Entrepreneurs’ slot, a gathering described as a Hyde Park Speaker’s Corner for global tech dreamers.

Most important, the organisers stress, the Summit can provide a preview of future startup success. Thirteen startups that appeared onstage at the past four Summits had a notable acquisition or IPO within a year. On the mainstage alone, six startups had major exits within three months of the Summit. For instance, last year Beyond Meat was featured on the main stage and two months later had the biggest IPO in a decade. Two years ago, JUMP presented and was acquired by Uber two months later. Three years ago, Intel acquired Mobileye two months after it appeared on stage.

Some of the startups exhibiting or demoing are:

  • Sight Diagnostics: the “anti-Theranos” – a finger-prick blood tester that was recently FDA cleared.
  • AlphaTau: in clinical trials, its treatment destroyed 80% of the solid cancer tumours treated, within days.
  • Climacell: building a network of advanced climate data centers that will prevent deaths from weather-related disasters.
  • Beyond Meat: biggest IPO in a decade.
  • Hailo: world’s fastest AI accelerator chip for edge and IoT devices.
  • RideVision: saving lives with a Mobileye-like solution for motorcycles.

Social selection pack

As is customary these days, OurCrowd has the full selection pack of social streams supporting its event. The event hashtag is #OurCrowdSummit and the Twitter stream is @OurCrowd, with CEO Jon Medved also tweeting in a personal capacity.


January 13, 2020  3:52 PM

What to expect from Dynatrace Perform 2020

Adrian Bridgwater

Kicking off 2020’s conference season as regular as clockwork is Dynatrace with its Perform event from February 3 to 6 in Las Vegas.

The Computer Weekly Developer Network team is once again bound for the keynotes, plenary sessions, breakouts, birds of a feather hangouts and (Ed – we get it, there’s a smorgasbord of show content) all the other conference and exhibition essentials.

Dynatrace, for those who would like a reminder, calls itself a software intelligence company — its roots are in Application Performance Monitoring (APM).

The company’s application monitoring and testing tools are available as cloud-based SaaS services, or as on-premises software.

Sixth-year running

So 2020 marks the sixth consecutive year that Dynatrace has staged this show — and this year we can expect some 47 speakers delivering more than 60 sessions over the four days… audience numbers are thought to be approaching the 3000 mark.

Dynatrace hinges its core technology proposition around AI-fuelled automation designed to provide illustrative answers that developers can use to assess the state, wealth and health of the applications they choose to create. 

This is APM for developers, yes… but it is also APM with a view to the effect that apps (and their functional demands from data storage/retrieval to the number of calls they make to analytics engines or other cloud services and so on) are having on underlying infrastructure and, ultimately, on the experience of users.

CEO John Van Siclen will lead the show kickoff before (as is customary at these things) handing over to the company’s Steve Tack in his capacity as SVP of product management. The central message from both men will resonate with what the company has been saying for a while i.e. Dynatrace is focused on automating cloud operations and accelerating the migration of workloads to the cloud.

AIOps re-defined

Last year Dynatrace spent time talking about how it is working on AIOps re-defined, a notion of AI-enriched operations where ‘open ingestion’ and integrations allow Application Performance Monitoring to get that much better. 

Recent news from the company (that we can expect to hear more about at the show) includes Dynatrace’s announcement of Keptn, an open source pluggable control plane to advance the industry movement toward autonomous clouds. Keptn is said to provide the automation and orchestration of the processes and tools needed for continuous delivery and automated operations for cloud-native environments.

The company has also recently detailed its Autonomous Cloud Enablement (ACE) Practice to accelerate DevOps’ movement to autonomous cloud operations. 

ACE promises to provide best practices, hands-on expertise and automation services on the journey to autonomous NoOps cloud operations. Initial practice focuses will be on unbreakable CI/CD pipelines and self-healing production operations for cloud native environments.

“This year at Perform Las Vegas 2020, we’re ramping up our Dynatrace University offerings because we know this is one of [attendees’] favorite parts of attending Perform,” blogged Melissa Boehling, program manager and team lead for Dynatrace University.

Attendees apparently told the company that they wanted more hands-on training (HOT) Days. Starting this year, attendees can now register and attend up to four HOT sessions and spend twice as much time with Dynatrace experts to expand their knowledge and skills.

Example sessions and presentations include: ServiceNow and Dynatrace integration best practices – put your IT operations on auto-pilot; Democratising data: monitoring-as-a-self-service for biz, dev and ops; How to improve every user’s mobile experience; Advanced observability in cloud-native microservices and service meshes; How to transform into a NoOps organization; and Dynatrace Digital Experience Management overview.

All in all, Dynatrace has been in the news more throughout 2019 than at any other time in its 15-or-so-year history. The company was once part of Compuware: private equity firm Thoma Bravo took Compuware private in 2014 and later separated out the Compuware APM group, which was renamed Dynatrace. So now we’re six years in with the company in its current form, hence this is Perform number six too.

No signs of a 7-year itch in any part of the firm, so let the show go on — the event hashtag is #Perform2020.


January 13, 2020  9:15 AM

Women in code series: Lucy McGrother

Adrian Bridgwater

The Computer Weekly Developer Network and Open Source Insider team want to talk code and coding.

But more than that, we want to talk about coding across the diversity spectrum… so let’s get the tough part out of the way and talk about the problem.

If all were fair and good in the world, it wouldn’t be an issue of needing to promote the interests of women who code – instead, it should be a question of promoting the interests of people who code, some of whom are women.

However, as we stand two decades after the millennium, there is still a gender imbalance in terms of people already working as software engineers and in terms of those going into the profession. So then, we’re going to talk about it and interview a selection of women who are driving forward in the industry.

Lucy McGrother, lead technical support, Fujitsu UK & Ireland.

CW: What inspired you to get into software development in the first place?

Lucy McGrother: During my degree in Business and Management Studies at Bradford, I studied relational databases and had to create one from scratch. Although I had mainly specialised in production and operations management, I really enjoyed the opportunity to create a relational database. As a result, when I started work I looked for project/production planning jobs.

Then, in my next role, which was ‘down South’, I was immediately seconded into a team preparing a new project management system for a rollout. After spending some time in this role, I came to realise that I actually liked the IT side of the role more than the project planning role.

[Fast forwarding through a few other job moves, eventually, in the jobs that I took] I was given such a vast amount of experience, all the way through from the first-line helpdesk, on-site support, training and back-office support, through to server builds and migrations, to name a few. Since then, I have only ever taken jobs that I love; I’ve never regretted it and I’m still doing jobs I love over 22 years later.

CW: When did you realise that this was going to be a full-blown career choice for you?

Lucy McGrother: It was my first IT job that made me want to make a career in IT – especially as I’d been given such a wide breadth of experience and have used that as a springboard for other roles. However, it’s only been in the past six years that I’ve really used scripting in my work. Approximately 15 years ago I started work in an enterprise management role and three years ago I moved into a platform role for the SOC, where a good portion of my work has involved scripts one way or another.

CW: What languages, platforms and tools have you gravitated towards and why?

Lucy McGrother: The tools I have used have largely been dictated by who has worked on something previously, what their preference was and what kind of work we’re doing.

When I first started working at ICL (later Fujitsu) the first scripting language I used was Perl and we used this to automate morning checks on the customer account, which I worked on. I didn’t write the scripts but I did troubleshoot them and update them as needed, which is probably one of the hardest things to do – update or troubleshoot someone else’s scripts.

I used VBScript and PowerShell mostly in our EM environments because the vast majority of machines were running Windows. However, these days I’m using a bit of Ansible because I inherited a lab when I moved to a platforms role and I’m also doing a fair amount of Python. Because my role is in a security team, a lot of third-party products use or support Python and a major piece of my work has been on a SOAR (Security Orchestration, Automation and Response) platform which is based on Python and JavaScript.

CW: How important do you think it is for us to have diversity (not just gender, but all forms) in software teams in terms of cultivating a collective mindset that is capable of solving diversified problems?

Lucy McGrother: I believe that organisations that put diversity at their core are those that can provide collaborative environments where different ideas, perspectives and styles of thinking all come together.

It has been reported on several occasions that diverse organisations with inclusive cultures have a financial advantage — and that it is as a result of greater innovation, enhanced agility, productivity and decision-making.

CW: What has been your greatest software application development challenge and how have you overcome it?

Lucy McGrother: I’ve not had a development challenge in that way, but I have suffered badly with Imposter Syndrome. This has crippled me throughout my career. Four years ago I was lucky enough to be invited to a Fujitsu internal networking event celebrating Ada Lovelace Day, which was specifically for women in the company.

The lessons I learned from that day and the network I have grown since then have been incredible. It forced me to stop working in my comfort zone and push myself in a way I’d never bothered to before. I’m so glad that happened because I wouldn’t be where I am now without it.

CW: Are we on the road to a 50:50 gender balance in software engineering, or will there always be a mismatch?

Lucy McGrother: Although we are making great progress in achieving a 50:50 balance, there is still an ocean of gender inequality to conquer, particularly in IT. Will there always be a difference? No – well, at least there doesn’t have to be.

It is important, however, to recognise that gender equality in the workplace is not something that can be fixed overnight. To deliver real change, all must commit to fighting unacceptable pay gaps, male-dominated boardrooms and unequal growth opportunities. Businesses that succeed in the long run are those that foster a culture of inclusivity. Only when we can do that effectively will we see more women entering the software community.

CW: What role can men take in terms of helping to promote women’s interests in the industry?

Lucy McGrother: I’ve learned that real change can only come from taking every single person – women and men – on the journey. It’s important when pushing for diversity that everyone gets involved. I know at Fujitsu we have male allies who work with organisations to help drive forward, plan and set goals for a more diverse working environment. This approach surely helps to encourage, attract and support females, both new employees and those already within the business.

To really achieve equality, we must recognise the role that men have to play in this. If we are truly to realise gender balance in business, D&I must help both men and women. Equality is not for the benefit of one group at the expense of another.

CW: If you could give your 21-year old self one piece of advice for success, what would it be?

Lucy McGrother: Stop doubting how good you are – the only failure is failing to try.

McGrother: Push yourself out of the comfort zone, but never doubt who you really are and what makes you great.



January 10, 2020  11:27 AM

When is a software platform a ‘true’ platform?

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network written by Tomer Weingarten in his position as CEO of endpoint security and ‘threat lifecycle’ specialist SentinelOne.

Weingarten argues that when you (any of us) have been in the tech industry for any period of time, you quickly grow tired of the word ‘platform’.

As we know, the tech industry consists of many organisations who all claim to have the same capabilities… and unfortunately platforms (and the claims that surround them) are often no different.

As the developer community in particular knows, not all “platforms” are equal. In fact, many of those vendors who say they have a platform may indeed have something that is platform-esque… but it is not a true platform.

It is very common to find that companies which have purchased and installed products from several of these vendors still have several critical capability gaps. In most cases the promise of a platform turns out to be very limited, or overstated at best.

Weingarten writes as follows…

Platform truths

Many organisations use the word platform in their marketing simply because it sounds better, when what they actually have is a product.

In fact, many “products” are, in reality, not even fully capable products; they are what I call product features. They exist as stand-alone products because their creators were able to get funding and create a company. The problem is that you begin to have dozens of products installed, each one handling its own specific use case without greater integrated capabilities or benefits.

A platform eliminates this problem because you have a single product with mature, robust capabilities. In the security space the result is less management overhead with better security efficacy.

A true platform is open and has easy integration options. Some vendors out there are still touting an old product (sorry, platform!) developed many years ago – meaning their ‘platform’ is not open at all and can’t be integrated with anything easily.

If you don’t have a true, actionable platform, then you can’t have things like software development kits (SDKs) and open APIs. The importance of an API is significant because it affects security workflow by opening the path to integration with the ecosystem. A robust API also opens the door to greater automated workflows.

It’s nice to be niche

The lack of an SDK and full API can also affect security coverage. While many vendors support the major operating systems, such as Linux, Microsoft Windows and Mac OS X, they do not have a good solution for niche operating systems such as an old IBM Unix system or a NetApp file server. In those cases a platform that offers an SDK dramatically increases the customisation of security operations or strategies.

A platform can be extended and integrated into the environment itself, not just the workflows.

For developers, without APIs and SDKs, products can be a bit of a dead end. In this new age of multi-vendor environments, you often find tens if not hundreds of different vendors in one organisation, meaning integration is a must. Our management platform has over 300 APIs. Those APIs allow us, and our customers, to integrate, interoperate and automate with other security solutions, but also with other types of systems.

APIs also enable you to build your own customised reports.

You can also query using the API in a flexible way based on your organisation’s needs and security policies. For example, you could ask for a monthly report on the admin users that have been created on a CEO’s machine to check it for anomalies.

For larger customers, you can use open APIs to stream data to your private cloud data lake.
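To make that flexible-query idea concrete, here is a minimal sketch in Python of the monthly admin-user report described above. It is illustrative only: the endpoint path, parameter names and response fields are assumptions made for the example, not SentinelOne’s documented API.

    # Hypothetical sketch: pull a monthly report of admin users created on a
    # specific machine via a management platform's REST API. The endpoint,
    # parameters and fields are assumptions, not a documented vendor API.
    import requests
    from datetime import datetime, timedelta, timezone

    BASE_URL = "https://console.example.com/api/v2"  # hypothetical console URL
    API_TOKEN = "..."                                # issued by the platform admin

    one_month_ago = (datetime.now(timezone.utc) - timedelta(days=30)).isoformat()

    resp = requests.get(
        f"{BASE_URL}/users",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params={
            "role": "admin",                # admin accounts only
            "createdAfter": one_month_ago,  # limit to the past month
            "machine": "ceo-laptop",        # the endpoint under scrutiny
        },
        timeout=30,
    )
    resp.raise_for_status()

    # Review the list for anomalies: any account here warrants a closer look.
    for user in resp.json().get("users", []):
        print(user["name"], user["createdAt"])

The same request-and-filter pattern extends naturally to scheduled jobs that stream query results into a private data lake.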

Third-party hostage situation

Many companies sell software which they have built by relying on third-party software libraries, obtained either as open source or via OEM agreements. For something to truly be a platform, it needs to be your own intellectual property. Companies should not run the risk of being held hostage by excessive third-party software which they cannot control or influence.

But what does this mean in real terms?

Well, if you don’t actually own the platform you’re working on, you don’t have 100% control of it. So you could say that those vendors out there who are putting the time and effort in to painstakingly create their own platforms from the ground up are inherently more secure – because they’re in control.

They have also gained flexibility and agility. Features can be enhanced or created and bugs can be fixed at the drop of a hat because there is no need to wait for your third-party developer to get up to speed. You can develop your product at a much, much quicker rate when you are independent in this way. The predictability and performance will be greater.

So next time you’re considering adopting a new technology, ask yourself this: is it something that your developers can talk to, interact with and harness information from, or is it a ‘platform’ in name alone, outdated and likely unfit for purpose?

SentinelOne CEO Weingarten: A man who knows his products from his platforms.


January 6, 2020  10:48 AM

Bundling up a new ‘form factor’ for developer testing services

Adrian Bridgwater

Software application developers need testing services, this much we know already.

But what form factor should those code testing services come in?

Should testing come in a packaged as-a-Service cloud offering? Should testing come as specific, custom-aligned tools for specific jobs? Should testing come as one massive platform-play chunk of services that a developer can go and pick-and-mix from? Or should testing be a modular thing that can be turned on and off?

Amsterdam-based software testing and cybersecurity services company spriteCloud B.V. thinks it’s the latter, i.e. software testing should come as a mixable bundle.

The company is now offering its tools in the form of a custom testing service bundle that meets different specific software testing needs. 

Test Stack

Called a Test Stack, this modular service package consists of a blend of functional testing, test automation, performance & load testing and cybersecurity testing.

Service subscribers assemble their Test Stack by determining which testing services they require, the number of days of work they want for each testing service, and the length of the subscription period. 

spriteCloud explains that its Software Testing Subscription service is for organisations that generate significant portions of their revenue from mobile, desktop, or web applications. 

The Test Stack can be adjusted throughout the duration of the subscription (consultants are on hand to help with this) to fit the changing test requirements of the project and organisation as necessary. 

Lean & Large

spriteCloud CEO Andy McDowell suggests his firm is taking this approach based on its experience of working with companies from lean start-ups to large multinational enterprises.

McDowell says that spriteCloud’s proprietary SaaS product, Calliope.pro, plays a central role in the provision of the Software Testing Subscription service. 

“A centralised, cloud-based reporting and monitoring tool for test results data, Calliope.pro enables development teams to stay up to date on the current health of the codebase as well as compare test results (past and present) to easily identify regressions,” said McDowell.

Calliope.pro is a DevOps tool for test results monitoring — test results are reported on a central dashboard, allowing stakeholders to share, compare and analyse them. 
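As a rough illustration of that reporting pattern (the URL, auth header and form fields below are assumptions made for the example, not Calliope.pro’s documented API), a CI job might push each run’s results file to the central dashboard like this:

    # Illustrative sketch: a CI step uploads a JUnit-style results file to a
    # central test-results dashboard so runs can be compared over time.
    import requests

    DASHBOARD_URL = "https://app.calliope.pro/api/v2"  # hypothetical base URL
    API_TOKEN = "..."                                  # per-account token

    with open("results/junit.xml", "rb") as report:
        resp = requests.post(
            f"{DASHBOARD_URL}/profile/123/import",     # hypothetical route
            headers={"X-API-Key": API_TOKEN},
            files={"file": report},
            data={"build": "nightly-2020-01-06"},      # tag the run for later comparison
        )
    resp.raise_for_status()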



January 3, 2020  10:36 AM

Moroccan Roll: from donkeys to debugging

Adrian Bridgwater

I first visited Morocco in 2002… a time when things often went from A to B on the back of a wooden cart, usually pulled by a donkey.

Many aspects of Maroc life are still done the old way as we head into 2020, but the impact of electronics has not gone unnoticed in the Maghreb region.

When we set out on an Inspect-a-Gadget trip and made a few inquiries into Moroccan technology through the normal press channels, one smart cookie replied, “Have a great time, I get better 4G on the edge of the Sahara than I do in South London.”

So Morocco has moved on, technologically speaking, for the most part.

The corrupt airport taxi drivers who refuse to adhere to the (always much lower) metered fare that their cabs suggest might be payable are still there, but largely, there has been a lot of progress.

Uber and Google Maps are yet to arrive in any fully-fledged tangible capacity, but technology is having an impact.

Internet embryo

An embryonic Internet had already been in place since first being introduced by the now-defunct ONPT (Office National des Postes et Télécommunications) back in 1995 – but you wouldn’t have known it if you travelled in the country at the time.

Public Internet services in this part of North Africa didn’t really start to ‘flourish’ until well into the millennium.

That almost biblical feel to a disconnected place still persists when you visit the country, but a quick trip to Marrakech before Christmas 2019 felt like the perfect time to reflect on how a country like this can progress from technical obscurity to being a place where access to the world wide web has become commonplace for everyone from businesspeople to the traders and hawkers in the souq.

Sources note that between 2013 and 2014, the Moroccan Internet population grew by 1 million and that by 2015, 94.1% of Moroccan users were using mobile devices to access the Internet.

There are currently around 10 Internet Service Providers in Morocco — and the two most visible of these are Maroc Telecom and Orange Morocco… Inwi also operates.

Baba Bikes

Also present are ‘Boris Bike’ type bicycles known as Medina Bikes (medina meaning ‘city’ in Arabic), although we didn’t see anybody technologically connected (or indeed brave enough) to unlock one and get going amidst Marrakech’s mad traffic.

Perhaps they should have called them Bilal Bikes, or just Baba Bikes?

One final nice touch (picture below): we noticed that Marrakech’s cellular base station mobile phone masts are massive 100-foot-high fake palm trees made out of concrete, with fronds that mostly cover the unsightly antenna blocks.

The donkeys are still there in the backstreets of Marrakech’s labyrinthine souqs, but the owners are often now on their phone working their way through arguments and trade deals to exchange goods… thankfully the donkeys remain blissfully unaware and navigate using their own age-old onboard GPS system that never needs a web connection or a patch update.

All pictures below by Adrian Bridgwater.



December 10, 2019  8:25 AM

CI/CD series – Confluent: Events provide an ‘in-built primitive’ for continuous coding

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network in our Continuous Integration (CI) & Continuous Delivery (CD) series.

This contribution is written by Neil Avery, lead technologist and a member of the office of the CTO (OCTO) at Confluent — the company is known for its event streaming platform powered by Apache Kafka that helps companies harness high-volume, real-time data streams.

Avery writes…

Continuous deployment sets the goalposts for the style of application architecture. 

It means the system should never be turned off and there is no such thing as a big-bang release; instead, new functionality is incrementally developed and released while old functionality is removed when no longer needed.

The application architecture is decoupled and evolvable. 

Event-driven architectures provide both of these qualities. To access new functionality, events are routed to new microservices. This routing of events also helps support CI/CD functionality such as A/B testing or Blue/Green deployments (roll forward, rollback) and the use of feature flags, as sketched below.

Many organisations get started with CI/CD by focusing on decoupling and event-driven microservices. The prolific adoption of Kafka not only makes it a good platform for eventing but also means there is a wealth of industry expertise for building this style of application.
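As a minimal sketch of that routing idea, written against the confluent-kafka Python client rather than any code from this article, here is a router that sends a slice of traffic to a new service’s topic, the event-level equivalent of a feature flag or A/B split. The topic names, the 10% share and the broker address are illustrative assumptions.

    # Sketch: route events between an old and a new microservice's topics,
    # sending a small share of traffic to the canary. Names are assumptions.
    import random
    from confluent_kafka import Consumer, Producer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "order-router",
        "auto.offset.reset": "earliest",
    })
    producer = Producer({"bootstrap.servers": "localhost:9092"})
    consumer.subscribe(["orders"])

    CANARY_SHARE = 0.10  # send 10% of events to the new service

    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None or msg.error():
                continue
            # Most events go to the stable service; a slice goes to the canary.
            target = "orders-v2" if random.random() < CANARY_SHARE else "orders-v1"
            producer.produce(target, key=msg.key(), value=msg.value())
            producer.poll(0)  # serve delivery callbacks
    finally:
        consumer.close()
        producer.flush()

Rolling forward means raising the share; rolling back means setting it to zero, all without redeploying either service.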

Event storage, replay & schematisation

This style of architecture relies on event storage, event replay and event schematisation. In production, Kafka stores all events and becomes the source of truth for understanding system behaviour. 

You might say it acts like a black-box recorder for events that can be used to replay incidents or scenarios at a later time. For test scenario purposes, events can be copied from production and made available in the CI environment (once desensitised). It also affords a level of regression testing difficult to achieve with non-event-driven systems. 

So events provide an in-built primitive that, by its nature, makes it easier for organisations to get started with CI and CD. The inputs and outputs of different components are automatically recorded.
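A sketch of that black-box recorder idea in practice, again using the confluent-kafka Python client: rewind a consumer to the offset matching an incident’s start time and replay the stored events. The topic name, partition and incident window are illustrative assumptions.

    # Sketch: replay production events from a given time window into a
    # consumer, e.g. to reproduce an incident in a test environment.
    from datetime import datetime, timezone
    from confluent_kafka import Consumer, TopicPartition

    INCIDENT_START = datetime(2019, 12, 1, 9, 0, tzinfo=timezone.utc)
    start_ms = int(INCIDENT_START.timestamp() * 1000)

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "incident-replay",
        "enable.auto.commit": False,  # replay only; leave group offsets alone
    })

    # Ask the broker which offset corresponds to the incident start time,
    # then assign the consumer to exactly that point in the log.
    query = [TopicPartition("payments", 0, start_ms)]  # offset field holds the timestamp
    consumer.assign(consumer.offsets_for_times(query, timeout=10))

    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            break  # no more stored events within the poll window
        if msg.error():
            continue
        # Feed each recorded event back through the component under test here.
        print(msg.timestamp(), msg.value())

    consumer.close()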

The decision to build an event-driven system is significant. There are various pitfalls we commonly see, especially when developers are new to this approach: 

  • Slow builds
    A key challenge of the CI build is that test cycles take progressively longer as a project develops. Slow builds affect team velocity and weigh against fast release cycles. To overcome this, build pipelines should be staged and parallelised.
  • Limited resources for CI pipeline
    As teams grow in scale, the resources required to support the CI pipeline will also grow. The ideal solution is to use a cloud-based CI environment that scales according to demand. Recommended tools include Jenkins in the cloud, AWS CodeBuild/CodeDeploy or CloudBees.
  • Inability to reproduce production incidents
    Event-driven systems provide a unique advantage in that production events can be copied to non-production environments for reproduction. It is simple to build tooling to not only reproduce but also inspect incidents and characteristics that occur within specific time intervals.
  • Manual testing of business functionality
    It is common to see manual testing stages used to certify business requirements. However, manual testing must be replaced with automation, and as such APIs should be designed to support API-based automation tooling. Recommended tooling includes Apigee, JMeter or REST-Assured.
  • Insufficient regression testing
    It’s important that regression testing strategies are in place. Regression tests should be signed off by the business as new functionality is introduced.
  • Lack of knowledge about test tooling for event-driven systems
    There are many tools available for testing event-driven systems; we have compiled a list at the end of this article. 

Generally speaking, the ideal go-live system is based on a ‘straw-man’ architecture: one which contains all of the touchpoints mentioned above and provides an end-to-end system, from dev to release. It becomes very difficult (and costly) to ignore fundamentals and retrospectively change them, so it’s better to get it right from the outset. 

Go-live considerations

From a deployment perspective, the go-live application should have a signature that meets infrastructure requirements, i.e. hybrid-cloud, multi-dc awareness, SaaS tooling (managed Kafka – Confluent Cloud). All SaaS and PaaS infrastructure should be configured, integrated and operational costs understood.

The go-live system is not just the production application, but the entire pipeline that supports the software development lifecycle all the way to continuous deployment; it’s the build pipeline that runs integration, scale, operational testing, and automation. Finally, it supports a runtime with the use of feature flags, auditing and security. 

Every application will have a unique set of constraints that can dictate infrastructure.

For event streaming applications delivered using CI/CD, the recommended tools and infrastructure would include:

  • Language runtime: Java via Knative + GraalVM
  • Kafka Clients: Java Client and Kafka Streams via Quarkus
  • Confluent Cloud: a managed Kafka service in the cloud (AWS, GCP, Azure) including Schema Registry and KSQL
  • Datadog: SaaS monitoring
  • GitHub/GitLab: SaaS source repo
  • CI environment: SaaS-based build pipeline – one that supports cloud autoscaling (Jenkins cloud, CloudBees, AWS code commit/build/pipeline/deploy)

Event-driven applications also have particular requirements that require special tools for testing and automation. 

Confluent’s Avery: Kafka events are the black box recorder route to CI/CD wins.


December 9, 2019  10:00 AM

Why we need an Internet of Blockchains

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network written by Jack Zampolin in his capacity as director of product at Tendermint Inc.

Tendermint is the company behind the eponymously named consensus protocol Tendermint and the interoperability platform Cosmos.

Zampolin writes…

The first computers were individual machines.

Then, in a quest for more computing power, bigger and bigger machines were built, initially by linking these individual machines together, until, eventually, supercomputers emerged.

These huge entities took an army of people to maintain and after a while, the benefits of adding power to these large computers dwindled and the returns fell off.

Then, the Earth cooled… and we decided on a different approach.

We made a huge number of smaller computers and ended up scaling them; this is what we now call the Internet. So essentially, we went from computer, to supercomputer, to networked computers.

As with machines, so it is with blockchain

The development cycle has been roughly the same in the blockchain world.

We started out with Bitcoin — the first decentralised ledger technology, the first computer in this case.

Then Ethereum came along, like those original supercomputers.

Now, on Ethereum, it was possible to run a lot of different programmes, but it was still relatively slow compared to the third approach.

This third way allows us to build an interconnected network of computers, in contrast to Ethereum’s supercomputer. When blockchains connect to each other and network with each other, we get a development that will enable the true global scaling and growth of this technology.

Perhaps the greatest challenge for blockchains in enterprise – particularly public blockchains like Bitcoin and Ethereum – has been the ability to scale and to transact with different blockchain networks.

Crossing the blockchain chasm

If blockchains are to matter at all beyond cryptocurrencies… and if they are to be used for applications such as maintaining self-sovereign identities, delivering decentralised social media and in a variety of use cases throughout the supply chain, then they would profit greatly from being able to interact with one another.

A lack of interoperability leads to individual chain maximalism and tribalism, so for many the conflict is unavoidable. Sometimes this is even helpful, because it compels developers to enhance their projects’ code so that their blockchain might rise above all others.

More often than not though, because the teams behind these disconnected chains think they will need to cover all use cases (and build a kind of Swiss Army knife blockchain), they end up lacking specialisation… and so are not fit for many uses.

There’s quite a chasm between these views, one that comes down to how each approaches the question of trust. We should make these chains talk to each other in a permissionless, frictionless way.

Operable interoperability

Instead of participating in divisions between crypto factions, we need a way to offer a network of interoperable blockchains. Blockchains have traditionally been siloed and unable to communicate with each other. They have always been hard to build and could only handle a small number of transactions per second.

The industry is now looking for tools that allow any engineer or developer to build a brand-new, custom-designed, independent sovereign blockchain which can interoperate with an arbitrary number of others. Each individual chain should be able to choose and run its own independent governance, adding further flexibility and allowing developers to select the mechanisms which best suit their particular use cases.

Zampolin: A lack of interoperability leads to individual chain maximalism and tribalism.


December 6, 2019  10:38 AM

Sumo Logic cracks on with new bands of intelligence strapping

Adrian Bridgwater

Self-styled continuous intelligence company Sumo Logic has reached beta on two new analytics services that extend its Cloud Flex offering (a credit-based licensing strategy).

Interactive Intelligence Service and Archiving Intelligence Service work to provide monitoring, troubleshooting and threat detection for business applications.

This software collects and analyses all types of data for operational, security, business intelligence, IoT and various other use cases.

Crucially, it is now packaged at varying price points to suit diverse use cases and cost needs. 

Sumo Logic tells us that today’s legacy and siloed monitoring and analytics vendor licensing models force customers to make a trade-off as their machine data grows: either pay runaway license costs or be forced to discard data to control costs, creating blind spots. 

The proposition here is that organisations should be able to dynamically segment their data and tailor the analytics accordingly for the right level of insights, frequent or infrequent interactive searching or troubleshooting and full data archiving. 

The new Sumo Logic Interactive Intelligence Service enables customers to ingest any log or machine data they desire.

No re-preparation, re-ingestion, re-hydration

The data is securely stored in the Sumo Logic service and is available on-demand for interactive analysis without any additional data re-preparation, re-ingestion, or re-hydration. 

This service was designed for, and is ideal for, use cases where users need to quickly and/or periodically investigate issues, troubleshoot code or configuration problems, or address customer support cases, which often rely upon searching over high volumes of data for specific insights. 

“Today’s data analytics pricing and licensing models are broken and simply don’t reflect the rapidly changing ways customers are using data,” said Suku Krishnaraj, chief marketing officer, Sumo Logic. “By introducing our new Interactive Intelligence Service and Archiving Intelligence Service, we are shifting the conversation from a volume-based, one-size-fits-all approach, to a flexible value based licensing model enabling customers to gain limitless value from their analytics solution at a price that makes sense for their varied use cases.” 

The Sumo Logic Archiving Intelligence Service is designed for use cases such as operational data stores, cloud data warehousing, or to potentially search during an unplanned security incident or business event. This new service will allow customers to send unlimited log or other machine data for free, without incurring any additional costs for using the platform to send data to their own AWS S3 bucket or cloud provider of their choice. 


December 5, 2019  9:55 AM

Sophos to developers: who you gonna call? (for cyberthreat APIs)

Adrian Bridgwater

Every company is now a software company, this much we already know.

But now, every company wants to be a developer company as well… that’s the emerging message from some of the industry’s newer and more established players.

Not content with being a ‘next-generation cybersecurity solutions firm’ (its words, not ours), Sophos is making a play for developer credibility.

SophosLabs Intelix is a cloud-based threat intelligence tool for software application development professionals to use when building applications. Secure ones, obviously. 

The product allows developers to make API calls to it for what is described as ‘turnkey cyberthreat expertise’ — which, one assumes, means that Sophos has compartmentalised chunks of functional code capable of performing security-related analysis tasks.

Those tasks include the ability to assess the risk of software artifacts such as files, URLs and IP addresses. 

According to Joe Levy, CTO, Sophos, the platform continuously updates and collates petabytes of real-time and historical intelligence, including: telemetry from Sophos’ endpoint, network and mobile security solutions; data from honeypots and spam traps; 30 years of threat research; predictive insights from machine and deep learning models etc.

NOTE: A honeypot is a network-attached system set up as a decoy to lure cyberattackers and to detect, deflect or study attempts to gain unauthorised access to information systems.

Using RESTful APIs, developers can use this technology with file submissions for static and dynamic analysis, queries on file hashes, URLs, IP addresses and Android applications (APKs) to answer questions like:

  • Is this file safe? 
  • What happens if I open or execute it?
  • Is this link safe? 
  • What happens if I call this URL?

SophosLabs Intelix is available through the AWS Marketplace and includes several free tier options.

Sophos CTO Levy describes this technology’s three key service features:

Real-time Lookups enable classification of artifacts with access to the SophosLabs intelligence by querying file hashes, URLs, IPs, or Android application thumbprints. Reputation scores identify known bad and known good files, as well as those in the grey area.

Static File Analysis uses multiple machine learning models, global reputation data and deep file scanning to classify files in real time, without needing to execute them.

Dynamic File Analysis provides dynamic file analysis and classification capabilities through execution and instrumentation of submitted files in sandboxes, utilising the latest runtime detection techniques to reveal ‘true’ behaviours of potential threats.

Who you gonna call? Source: Wiki Commons.


