CW Developer Network


July 12, 2019  8:42 AM

BlackBerry juices up threat hunting software

Adrian Bridgwater

Things changed at BlackBerry, more than once, to be fair.

The company that used to be known as Research in Motion (RIM) decided to drop the somewhat incongruous name and some bright spark in marketing decided to rename the company after the device brand that we once all knew and loved.

BlackBerry marketing hasn’t always been too sharp of course; the (arguably) bizarre appointment of singer Alicia Keys to the post of BlackBerry global creative director back in 2013 goes down alongside Will.I.Am and his smartwatch launch in association with Salesforce at Dreamforce in 2014 as some of the weirdest stuff we’ve seen.

Marketing shenanigans notwithstanding, things changed yet again at BlackBerry… the company that used to be a device company reinvented itself under ex-Sybase CEO John Chen as a trusted security software and services company.

It made sense: BlackBerry had always been popular with governments and big enterprises.

CylanceGUARD

New products from BlackBerry Cylance this year include CylanceGUARD, a managed detection and response (MDR) solution that combines BlackBerry Cylance security experts with the company's native AI platform to provide continuous threat hunting and monitoring.

Acquired by BlackBerry in February of this year, Cylance has been described as the first company to apply artificial intelligence, algorithms and machine learning to cyber security.

Cylance’s machine learning and artificial intelligence technology was a strategic addition to BlackBerry’s end-to-end communications portfolio. Notably, its embeddable AI technology is being used to accelerate the development of BlackBerry Spark, a secure communications platform for the Internet of Things (IoT).

Threat hunting

BlackBerry says that for an enterprise to consider itself an ‘elite security organisation’, threat hunting (rather than just plain old anti-virus provisioning) is needed to take a proactive stance on threat detection. However, only a handful of organisations in industries such as financial services, high-tech manufacturing and defence can claim to have productive threat hunting teams that deliver results.

According to Jason Bevis in his position as vice president of threat hunting at BlackBerry Cylance, CylanceGUARD is a subscription-based piece of software that validates, triages, analyses, prioritises and automates analyst and incident engagement.

“With alert automation, artificial intelligence and an advanced orchestration engine, CylanceGUARD simplifies complex technologies and workflows to dramatically reduce the time it takes to identify intrusions and act against attack proliferation,” noted Bevis and team.

Bevis concludes by noting that BlackBerry Cylance customers can access a web portal for visibility into their security environments, as well as receive mobile warnings on iOS and Android devices, including delivered context to streamline investigations.


July 11, 2019  9:49 AM

Top ops for DataOps in Hitachi Vantara Pentaho 8.3 

Adrian Bridgwater

Pentaho is still Pentaho, but these days it’s a product line and accompanying division inside of Hitachi… and not just plain old Hitachi Ltd, but Hitachi Vantara — a company branding exercise that came about in order to unify Hitachi Data Systems alongside Pentaho and a few other morsels under one label.

Company nomenclature notwithstanding, Pentaho has now reached its 8.3 version iteration.

The technology itself focuses on data integration and analytics… but it is also thought of as a platform for Business Intelligence (BI), with corresponding competencies in data mining and ETL (Extract, Transform, Load).

In keeping with current trends, this new version is designed to support DataOps, a collaborative data management practice that helps users realise the full potential of their data. 

In deeper detail (as linked above), DataOps describes the creation and curation of a central data hub, repository and management zone designed to collect, collate and then onwardly distribute data, such that data analytics can be more widely democratised across an entire organisation and, subsequently, more sophisticated layers of analytics (such as built-for-purpose analytics engines) can be brought to bear.

“DataOps is about having the right data, in the right place, at the right time and the new features in Pentaho 8.3 ensure just that,” said John Magee, vice president, portfolio marketing, Hitachi Vantara. “Not only do we want to ensure that data is stored at the lowest cost at the right service level, but that data is searchable, accessible and properly governed so actionable insights can be generated and the full economic value of the data is captured.”

Will it (data) blend?

New features in Pentaho 8.3 include improved drag-and-drop data pipeline capabilities for accessing and blending data that is otherwise difficult to reach. A new SAP connector offers drag-and-drop blending, enriching and offloading of data from SAP ERP and Business Warehouse.

Pentaho has also addressed the challenge of ingesting streaming data. With a new Amazon Kinesis integration, Pentaho allows AWS developers to ingest and process streaming data in a visual environment (as opposed to writing code) and to blend it with other data, reducing the manual effort.

Amazon Kinesis is an Amazon Web Services (AWS) offering for processing large streams of big data in real time. Developers can use it to create data-processing applications, known as Kinesis Data Streams applications. A typical Kinesis Data Streams application reads data from a data stream as data records.
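To make the ‘writing code’ alternative concrete, here is a minimal sketch of hand-rolled Kinesis consumption using the boto3 AWS SDK for Python, i.e. the kind of plumbing the visual integration is meant to spare developers. The stream name "example-stream" and the region are hypothetical placeholders.

```python
# A minimal sketch of reading a Kinesis stream by hand with boto3;
# the stream name "example-stream" and region are placeholders.
import time

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# A stream is made up of shards; each shard is read through an iterator.
stream = kinesis.describe_stream(StreamName="example-stream")
shard_id = stream["StreamDescription"]["Shards"][0]["ShardId"]

iterator = kinesis.get_shard_iterator(
    StreamName="example-stream",
    ShardId=shard_id,
    ShardIteratorType="LATEST",  # only records arriving from now on
)["ShardIterator"]

# Poll the shard, printing each record's raw payload.
while iterator:
    response = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in response["Records"]:
        print(record["Data"])  # bytes; decode and blend as required
    iterator = response.get("NextShardIterator")
    time.sleep(1)  # stay under the per-shard read limits
```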

There is also improved integration with Hitachi Content Platform (HCP): the company’s distributed object storage system designed to support large repositories of content, from simple text files to images and video to multi-gigabyte database images. 

According to Stewart Bond, research director for data integration and integrity software and Chandana Gopal, research director for business analytics solutions from IDC, “A vast majority of data that is generated today is lost. In fact, only about 2.5% of all data is actually analysed. The biggest challenge to unlocking the potential that is hidden within data is that it is complicated, siloed and distributed. To be effective, decision makers need to have access to the right data at the right time and with context.”

Other details in this news include Snowflake (the data type, not the generation kind) connectivity.

The Pentaho team remind us that Snowflake has quickly become one of the leading destinations for cloud data warehousing. But for many analytics projects, users also want to include data from other sources, including other cloud sources. 

To provide an answer to this situation, Pentaho 8.3 allows blending, enrichment and analysis of Snowflake data along with other data sources. It also enables users to access data from existing Pentaho-supported cloud platforms, including AWS and Google Cloud, in addition to Snowflake.
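As a hedged illustration of what such a blend looks like when done by hand, the sketch below pulls a table from Snowflake with the snowflake-connector-python package and merges it with a local CSV in pandas; the account, credentials, table and file names are all hypothetical placeholders.

```python
# A rough illustration of the kind of cross-source blend Pentaho 8.3
# performs visually. Assumes snowflake-connector-python is installed
# with its pandas extra; all identifiers below are placeholders.
import pandas as pd
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",        # hypothetical Snowflake account identifier
    user="analyst",
    password="...",
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)

# Pull warehoused orders from Snowflake...
orders = conn.cursor().execute(
    "SELECT customer_id, order_total FROM orders"
).fetch_pandas_all()

# ...and blend with data from another source, here a local CRM export.
customers = pd.read_csv("crm_customers.csv")

# Snowflake returns uppercase column names by default, hence the mix.
blended = orders.merge(customers, left_on="CUSTOMER_ID", right_on="customer_id")
print(blended.head())
```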

You can read more on the Pentaho team’s position on DataOps here.

 


July 10, 2019  9:56 AM

Code smell has gone stinky

Adrian Bridgwater

Edward Granger is a front end engineer at Red Ventures and he’s not happy — so what’s his problem?

Granger says he doesn’t like the term ‘code smell’, which is being increasingly popularised.

But what does the term mean in the first place?

Enterprise software commentator Martin Fowler gives us a definition of code smell when he says that a code smell is ‘a surface indication that usually corresponds to a deeper problem in the system’.

“[Code] smells don’t always indicate a problem. Some long methods are just fine. You have to look deeper to see if there is an underlying problem there – smells aren’t inherently bad on their own – they are often an indicator of a problem rather than the problem themselves,” writes Fowler, in this posting.
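To make Fowler’s point concrete, here is a contrived Python example of the classic ‘long method’ smell: the function below is long and repetitive, which smells, yet it is also linear, readable and easy to test, so the smell alone doesn’t prove anything is wrong.

```python
# A contrived "long method" smell that may or may not indicate a real
# problem: long and repetitive, but linear, readable and easy to test.
def build_invoice_report(invoice):
    lines = []
    lines.append(f"Invoice #{invoice['id']}")
    lines.append(f"Customer: {invoice['customer']}")
    lines.append("-" * 40)
    for item in invoice["items"]:
        lines.append(f"{item['name']:<30}{item['price']:>10.2f}")
    lines.append("-" * 40)
    subtotal = sum(item["price"] for item in invoice["items"])
    tax = subtotal * 0.20
    lines.append(f"{'Subtotal':<30}{subtotal:>10.2f}")
    lines.append(f"{'VAT (20%)':<30}{tax:>10.2f}")
    lines.append(f"{'Total':<30}{subtotal + tax:>10.2f}")
    return "\n".join(lines)
```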

But Granger doesn’t like the term.

He says that every organisation will have unique needs and values that determine its standards for code quality. However, what should be non-negotiable is never placing code quality above people or the task at hand.

Ain’t no right-wrong binary deal

Granger contends that to evaluate code as categorically right or wrong does not help engineers perform their jobs any better. In fact, it may be the fastest way to zap team morale.

“When we work on issues of code quality, we should focus on eliminating the extraneous cognitive load in our codebases before scrutinising our peers with textbook scruples. When we focus our attention on correcting smelly code, we end up littering pull requests with critiques completely unrelated to the problem at hand,” writes Granger, in his own rebuttal blog that aims to whiff-away the term itself.

He goes on to say that the term code smell isn’t just pejorative – it’s misleading. 

“Code smells are never an indication of failure. They’re patterns that will inevitably emerge over time – no matter how good of a developer you are. Every engineer on earth is going to find imperfect patterns in their code… because we’re finding solutions in an imperfect world. It’s often that we truly don’t understand what we wanted our code to be until after we’re done writing it,” notes Granger.

Is Granger being a snowflake and trying to care about people and feelings too much? Perhaps not… he certainly knows his cognitive loads from his scope resolution operators and you can read his original post (linked above) to assess your own position on code smell.

Red Ventures is a portfolio of digital businesses that focuses on performance-based digital marketing and integrated e-commerce, data science and other bespoke technologies.

Source: Wikipedia


July 8, 2019  9:51 AM

Women in code series: Joan Touzet

Adrian Bridgwater

The Computer Weekly Developer Network and Open Source Insider team want to talk code and coding. But more than that, we want to talk coding across the diversity spectrum… so let’s get the tough part out of the way and talk about the problem. 

If all were fair and good in the world, it wouldn’t be an issue of needing to promote the interests of women who code — instead, it should and would be a question of promoting the interests of people who code, some of whom are women.

However, as we stand almost two decades into the new millennium, there is still a gender imbalance in terms of people already working as software engineers and in terms of those going into the profession. So then, we’re going to talk about it and interview a selection of women who are driving forward in the industry.

Joan Touzet is an Apache Software Foundation (ASF) Member, Apache CouchDB PMC member and committer, with over 30 years of experience in commercial and open source software development. Based in Toronto, Canada, she currently works with Neighbourhoodie Software, running the CouchDB Development/Production Support team. In her spare time, Joan composes and records music, rides motorcycles, designs and builds electronic musical instruments, and pets cats. Gnomes over ponies.

Joan Touzet

CWDN: What inspired you to get into software development in the first place?

Touzet: My third-grade [US school level for 8-year-olds] teacher at the University of Chicago Laboratory Schools was named Ms. Fano. She had the distinction of having assisted in the running and programming of one of the first computers – UNIVAC. It was directly because of her example and the exciting stories she used to tell about a single console controlling rooms full of vacuum tubes, that I started taking those beige boxes with the colorful apple logo on them a bit more seriously. I can only hope that other girls in my class were just as impressed with those stories of a woman far ahead of her time.

CWDN: When did you realise that this was going to be a full-blown career choice for you?

Touzet: I went to school for electrical and computer engineering – hardware, not software. But fresh out of university, I ended up at an electronic design automation company, writing simulation technology used to verify microprocessors in the wake of the first Pentium scandal. I found I had a knack for it, and it’s where I stayed.

CWDN: What languages, platforms and tools have you gravitated towards and why?

Touzet: Largely, UNIX-based platforms, and one of four languages: Python, for quick-and-dirty work; Erlang, for solving problems that need distributed computing approaches; C, for low-level work when performance is paramount; and Verilog, when doing hardware design and targeting FPGAs.

CWDN: How important do you think it is for us to have diversity (not just gender, but all forms) in software teams in terms of cultivating a collective mindset that is capable of solving diversified problems?

Touzet: How could it possibly be valuable to want the reverse – a lack of diversity in software teams? If problems are diverse, then we need a variety of approaches to resolve them, which will come best from a variety of people. Even more important is to consider all possible angles of the problem – software developers tend to be blind to the social, economic, accessibility and ecological impacts of the problems they solve. So yes, it’s frightfully important that we maintain a broad-based team; we’re no longer just backroom engineers toiling in isolation.

CWDN: What has been your greatest software application development challenge and how have you overcome it?

Touzet: Taking to heart the fallacies of distributed computing and building systems that do not take, as an assumption, any of those principles.

CWDN: Are we on the road to a 50:50 gender balance in software engineering, or will there always be a mismatch?

Touzet: That depends on whether or not the industry chooses to prioritise problems that interest women as much as men.

CWDN: What role can men take in terms of helping to promote women’s interests in the industry?

Touzet: Listen to them when they pitch ideas to you. Fund more of their startups. If they’re your employees, consider their input seriously, even if you think they don’t have the background to offer the opinion they do. And – need I say this? – hands off their bodies without consent.

A no-dichotomy spectrum

CWDN: If men are from Mars and women are from Venus, then what languages or methodologies separate the two (basic) sexes?

Touzet: I don’t subscribe to this dichotomy. Gender is a spectrum, just as there is a variety of opinions from people at both extremes of that range. The secret to involving people who don’t share your immediate interests is to listen to theirs, and find a way to appeal to those instincts. Look beyond the business interests to their personal lives – you’ll find hints of what can light someone else’s fire.

CWDN: If you could give your 21-year-old self one piece of advice for success, what would it be?

Touzet: Be patient, and continue to look for the best in everyone you meet and work with.



July 5, 2019  7:49 AM

Wider DevOps needs sharper identity certificates 

Adrian Bridgwater

DevOps happened, right? So then, users (both Dev… and Ops) now find themselves in a place where they need to manage their digital identities inside increasingly connected systems. This much we can all agree on.

We also know that each user identity (again, both Dev… and Ops) will generally be governed by a certificate (or public key certificate) that establishes ownership.

Further still, we know that Private PKI (Public Key Infrastructure) is an enterprise-branded Certificate Authority (CA) that functions like a publicly trusted CA, but runs exclusively for a single enterprise.

Commercial Certificate Authority provider Sectigo has pointed to a ‘widening world’ of DevOps (where many different platforms are potentially used) and suggests that this helps to validate its position as a provider of automated PKI management software.

The company has recently come to market with its Private PKI service for issuance and management of SSL certificates, private PKI and identity certificates for users, servers, devices and applications. 

What this means is that Sectigo Private PKI enables users to augment or replace their Microsoft Active Directory Services (Microsoft CA) by managing non-Microsoft devices and applications, including mobile, Internet of Things (IoT), email, cloud and DevOps on one platform.

Outside the MSFT stack

“With the explosion of applications managed outside the Microsoft stack, Microsoft Active Directory Certificate Service no longer addresses all critical use cases. Sectigo Private PKI delivers a managed PKI solution to alleviate problems associated with establishing and managing internal PKI,” explained Lindsay Kent, VP of Product Management, Sectigo.

It’s true, Microsoft’s automatic certificate management allows IT administrators to instruct desktops and servers to enroll and renew certificates without employee involvement. 

However, today’s enterprise has many applications that reside outside any Microsoft operating system, leaving administrators (and employees) with the burden of manually tracking, enrolling and renewing certificates and keys. 

According to Sectigo, administrators can discover previously issued certificates and then issue, view, and manage all certificates from a single platform – avoiding the risks, errors, or hidden costs associated with manual installation and renewal.

“DevOps environments require high certificate volumes for the just-in-time needs of many computing processes that may live for just hours or minutes. Whether using self-signed CAs on Kubernetes clusters, issuing SSL/TLS certificates into Docker containers, or automating installation of public SSL certificates, today’s enterprises benefit from Sectigo’s ability to host secure offline roots for customer-premise subordinates embedded into DevOps tools,” said the company, in a press statement.
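For a sense of the manual work being abstracted away, here is a hedged sketch of minting a self-signed private root CA with Python’s cryptography package; the organisation name and ten-year validity below are illustrative only, and a real deployment would also need key storage, revocation and rotation policies.

```python
# A minimal sketch of the plumbing a managed private PKI hides: minting
# a self-signed root CA certificate with the "cryptography" package.
# Names and validity periods are illustrative placeholders.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name(
    [x509.NameAttribute(NameOID.COMMON_NAME, "Example Corp Private Root CA")]
)

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())
)

# PEM-encode the root certificate for distribution to internal trust stores.
print(cert.public_bytes(serialization.Encoding.PEM).decode())
```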

Free, but unworkable 

Because of the difficulty of setting up a private CA, many enterprises turn to free public certificates. What often happens here is that they run up against unworkably low certificate volume caps.

Sectigo claims that its Certificate Manager (in conjunction with Automatic Certificate Management Environment (ACME)) can scale DevOps without such interference.

Private PKI use cases extend well beyond DevOps. The service supports all necessary certificate types in a single SaaS application, providing strong digital identity across the enterprise with the assurance of best-of-breed PKI practices and security.

Sectigo Private PKI service enables issuance and management of dozens of PKI-aware applications from a single platform.

 


July 4, 2019  9:12 AM

Is AIOps on the up and up?

Adrian Bridgwater

Business process services company Wipro and AIOps software firm Moogsoft have snuggled up in bed, by which of course we mean formed a partnership.

Moogsoft’s developer credentials stem from its work in Artificial Intelligence for IT operations (AIOps).

AIOps itself we can define as the umbrella term for the use of big data analytics and various forms of AI to automate the identification and resolution of the common software code management and data workflow issues that typically crop up during the deployment and execution of applications.

As defined here by TechTarget, “The systems, services and applications in a large enterprise produce immense volumes of log and performance data. AIOps uses this data to monitor assets and gain visibility into dependencies within and outside of IT systems.”

Back to the partnership, Wipro is using Moogsoft’s AIOps know-how… so is that it?

There’s a little more meat here: Wipro Holmes is the company’s AI and automation platform — by working together, the firms say they can improve the business availability of IT through unified alert management, root cause analysis, proactive anomaly detection and predictive capabilities.
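Neither vendor publishes its algorithms here, but as a toy illustration of what ‘proactive anomaly detection’ means in practice, the sketch below flags metric samples that drift more than three standard deviations from a rolling baseline; real AIOps platforms do considerably more (event correlation, clustering, root cause analysis).

```python
# A toy illustration of proactive anomaly detection (not Moogsoft's or
# Wipro's actual algorithms): flag samples more than three standard
# deviations away from a rolling baseline of recent readings.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=30, threshold=3.0):
    baseline = deque(maxlen=window)
    for t, value in samples:
        if len(baseline) == window:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield t, value  # candidate incident for triage
        baseline.append(value)

# Example: response-time readings (seconds) with one obvious spike.
readings = [(t, 0.2 + 0.01 * (t % 5)) for t in range(60)] + [(60, 4.8)]
for t, v in detect_anomalies(readings):
    print(f"anomaly at t={t}: {v:.2f}s")
```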

Partnership backslapping

Kiran Desai, senior vice president and global head, cloud and infrastructure services at Wipro Limited said that Moogsoft is lovely and mentioned the importance of a situation-aware incident management approach in modern application management scenarios. 

Phil Tee, CEO and founder of Moogsoft agreed that Wipro is lovely too and mentioned the need for continuous service assurance for modern apps. 

Developer takeaway…

Proactive anomaly detection across complex interlocked application dependencies (some of which may now be even more intricate as a result of the use of microservices and the fabric of Application Programming Interfaces that connects them) may not be the first consideration that developers think of when planning architecturally… but the AIOps market could well be on the up and up.


July 3, 2019  9:41 AM

RavenDB offers developer-friendly ‘operations free’ database

Adrian Bridgwater

Developers use databases — no surprise there.

But, crucially, developers aren’t DataBase Administrators (DBAs), so they tend to prefer database platforms that can provide them with as much out-of-the-box functionality as possible.

In real terms, that means programmers like to gravitate towards databases that provide a more managed way of performing all the daily tasks such as maintaining hardware servers, installation, configuration, monitoring internals and security.

This is the core technology proposition behind RavenDB Cloud’s database cluster offering.
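For a flavour of that proposition, here is a minimal sketch assuming the RavenDB Python client (the ravendb package); the cluster URL and database name are hypothetical placeholders, and the point is that no server installation or configuration happens on the developer’s side.

```python
# A minimal sketch of the "operations free" developer experience,
# assuming the ravendb Python client; the cluster URL and database
# name are hypothetical placeholders.
from ravendb import DocumentStore

store = DocumentStore(
    urls=["https://a.free.mycompany.ravendb.cloud"], database="orders"
)
store.initialize()

class Order:
    def __init__(self, customer, total):
        self.customer = customer
        self.total = total

# No schema setup, no server maintenance: open a session, store, save.
with store.open_session() as session:
    session.store(Order("ACME Ltd", 99.50), "orders/1")
    session.save_changes()

with store.open_session() as session:
    order = session.load("orders/1", object_type=Order)
    print(order.customer, order.total)
```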

Hailing from Hadera in Israel (about an hour’s drive north of Tel Aviv, we checked), RavenDB Cloud includes metrics for measuring each step of indexes and aggregations to deliver cost optimisation.

Features like pull replication enable a hybrid on premise-cloud architecture — and RavenDB Cloud runs on smaller machines.

The technology here is currently available on Amazon Web Services and Microsoft Azure — Google Cloud Platform will follow at the end of 2019.

Grunt work

“Our objective is to completely remove all the grunt work associated with acquiring, configuring and maintaining a database so that users can focus more on their application and how it works with their data,” said Oren Eini, CEO of RavenDB.

RavenDB also offers migration tools to its DBaaS from RavenHQ, SQL databases, major NoSQL databases and on-premise RavenDB solutions.

The company has several tiers of clusters: for higher-end systems, dedicated clusters are available… but users can also choose to run production clusters in a burstable mode, suitable for medium-sized projects, and in doing so reduce database costs by up to 20%.

For hobbyists and small projects, RavenDB offers a free tier.


July 1, 2019  10:30 AM

Can AI solve developers’ “image” problems?

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network written by Tal Lev-Ami in his capacity as co-founder and CTO of Cloudinary.

California and Israel based Cloudinary provides cloud-based image and video management technology that allows users to upload, store, manage, manipulate and ‘deliver’ images and video for websites and applications.

Lev-Ami laments the fact that not everything on the web is as beautiful as it could be and looks to AI-driven routes to a prettier (and, consequently, an altogether more functional) Internet. He writes as follows…

Visitors to Las Vegas’ famous ‘strip’ are dazzled by a profusion of brightly coloured images and videos. Some of the old neon signs haven’t aged very well, with burnt out letters and their intensity fading. On other structures and facades, modern high-definition videos and images seem to come at you in 3D.

It’s the perfect metaphor for today’s online experience — however, a web visitor’s ecstasy can be a developer’s agony.

To meet consumers’ insatiable appetite for dazzling and immersive User eXperiences (UXs), web developers and designers manually solve the same image and video management problems ad infinitum. Delivering thousands of images, or running sites that allow users to upload images, means constantly having to remove backgrounds, crop and resize images, change colours and apply effects.

But aren’t these manual, time-consuming tasks the perfect things to outsource to AI?

Answer: ‘yes’.

Deep learning has completely revolutionised computer vision over recent years — and media asset management is no exception.

Here are two examples where AI is proving particularly useful:

Background removal

Using AI for background removal combines a variety of deep-learning and AI algorithms. They recognise the primary foreground subject of photos and then accurately remove photo backgrounds in seconds.

This seemingly simple task belies a lot going on behind the scenes. The AI engine must first recognise the salient object(s) in the image; then accurately segment those objects; and, finally, separate the foreground onto an alpha layer.

The AI engine must determine which objects to classify as foreground versus background. This classification depends on a scene’s context and composition and must be near-perfect to produce the expected quality. In the case of a picture of a woman wearing a fur coat, for example, the distinction between the fur and hair pixels and the background pixels must be flawless.
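As a hedged sketch, the Cloudinary Python SDK exposes this as an upload option via the vendor’s AI background removal add-on; the account credentials and file name below are placeholders, and the removal itself runs asynchronously on the service side.

```python
# A sketch of requesting AI background removal at upload time with the
# Cloudinary Python SDK; credentials and the file name are placeholders,
# and the background removal add-on must be enabled on the account.
import cloudinary
import cloudinary.uploader

cloudinary.config(cloud_name="demo", api_key="...", api_secret="...")

result = cloudinary.uploader.upload(
    "portrait.jpg",
    public_id="portrait_no_bg",
    background_removal="cloudinary_ai",  # keep foreground, alpha background
)
print(result["secure_url"])  # processed asset once the add-on completes
```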

Image auto-cropping & resizing

Site visitors expect top-quality, quick-loading images that display properly regardless of which device they’re using. This involves delivering the same image in many different aspect ratios and potentially cropping closer or wider on your main subject, depending on size. Where hundreds of images are involved, cropping and resizing becomes incredibly tedious and fiddly. Deep-learning algorithms automate this by detecting the image subject, then resizing and cropping the image to the desired delivery size and aspect ratio.

Again, there’s a lot going on behind the scenes.

To decide where to crop, algorithms first analyse an image’s pixels and prioritise the most salient areas on-the-fly. All auto-cropping algorithms give priority to faces, but there are different techniques for determining other salient image areas. Our software, for example, uses a neural network to predict where people will look in an image.
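Here is a short sketch of what this looks like from the developer’s side, using the Cloudinary Python SDK’s content-aware gravity="auto" cropping; the cloud name and public ID are hypothetical.

```python
# Content-aware cropping with the Cloudinary Python SDK; the cloud name
# and public ID ("team_photo") are hypothetical. gravity="auto" asks the
# service to keep the most salient region (faces prioritised) when
# cropping to each target size.
import cloudinary
from cloudinary import CloudinaryImage

cloudinary.config(cloud_name="demo")  # hypothetical account

# One master asset, three device-specific renditions; the crop point
# is chosen by the AI rather than by hand.
for w, h in [(1600, 900), (800, 800), (320, 480)]:
    url = CloudinaryImage("team_photo").build_url(
        width=w, height=h, crop="fill", gravity="auto"
    )
    print(url)
```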

These are but two examples; AI is applied in lots of other useful ways.

For example, it can help automatically convert landscape mode video formats into mobile-optimised portrait mode. In this case, machine learning automatically determines the optimal focus point, such as faces, subjects, products or moving objects.

Deep learning algorithms for this ‘content-aware’ cropping and scaling make it easy to fit responsive layouts, change product colours and apply effects. Automatic tagging and transcription capabilities use AI to organise and manage images and videos quickly and at-scale. Last but not least, auto-tagging algorithms help developers to better manage, reuse, and analyse incoming user generated content.

Why do I know all this?

Because before my fellow co-founders and I established Cloudinary in 2011, we were working as consulting engineers and found ourselves manually repeating the same image-related tasks time and time again. Fortunately, AI evolved at just the right time to help significantly solve our own ‘image problems’.

Today, we help brands spend less time doing grunt work related to images and more time delivering the kinds of online experiences that boost their businesses.


June 26, 2019  5:21 AM

Edge & the datacentre: scope considerations for developers

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network written by Chris Carreiro in his capacity as CTO of ParkView at Park Place Technologies — the company is a specialist in datacentre maintenance with global staff on every continent.

Carreiro says that he thinks the IT trade press has painted a picture of an ongoing power struggle between the cloud and the edge, with one or the other due to claim supremacy.

He argues that the IoT-driven data volume explosion, our mobile lifestyles and a plethora of forthcoming data-intensive, low-latency technologies (such as augmented reality) demand that traditional compute, storage, network and compute-accelerator resources be moved closer to the end user — i.e. to the edge.

But, as we know, any wholesale replacement of cloud technologies is unlikely.

Instead, he suggests, we are entering a period of ‘forced’ distributed architecture across different levels of ‘edginess’, which will complement centralised capabilities in the cloud and/or enterprise datacentre. Carreiro writes as follows from here…

The hierarchy of edge levels

It is important to note that the edge is not a specific location; it is shorthand for any relocation away from the datacentre in which data is processed closer to the end user. There are many different levels of edge, which will differ by industry.

A hierarchy of edge levels might be regional, neighbourhood, street-level and building-level nodes in a smart city, for example. The levels would be different for consumer mobile technology, autonomous vehicles and so on. Micro-datacentres sprinkled around various corporate facilities could be advantageous in certain cases. In broad terms, there will be a ‘spectrum of edge’ spanning the device level to the gateway toward the cloud.

Practice for the perfect shift

To optimise applications for edge computing, it will be vital to take care regarding where objects are instantiated and in which machine (physical or virtual) memory is allocated. Relocating parts of the business logic processes from monolithic applications hosted on a central server in the datacentre to the edge can potentially raise scope resolution issues or, more likely, novel granularity control problems.

If interdependent objects that would have previously occurred on the same virtual or physical server in the datacentre are now separated, one at the edge and one in the datacentre, there might be no pathway for these objects to reference one another or access a shared memory space. Additionally, when declaring variables and objects in this distributed environment, developers will need to consider which process and in what namespace an object is descended from in order to properly address issues like object persistence, data integrity and read/write permissions.
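A hypothetical Python sketch of that problem: two objects that once shared a process now sit either side of the edge/datacentre boundary, so what used to be a plain attribute access becomes a network round trip. All class names and the /inventory endpoint are illustrative, not a real API.

```python
# A hypothetical sketch: when two formerly co-resident objects are split
# between edge and datacentre, a plain attribute access becomes a
# network round trip. Names and the /inventory endpoint are illustrative.
import requests

class LocalInventory:
    """Both objects in one process: a direct in-memory reference."""
    def __init__(self, stock):
        self.stock = stock

class RemoteInventoryProxy:
    """Edge-side stand-in: every read now crosses the network."""
    def __init__(self, datacentre_url):
        self.url = datacentre_url

    @property
    def stock(self):
        # Each access is a round trip; call this in a tight loop and the
        # "edge" deployment is slower than the monolith it replaced.
        return requests.get(f"{self.url}/inventory").json()["stock"]

def reorder_needed(inventory, threshold=10):
    # Works with either object, but the cost profile differs wildly.
    return inventory.stock < threshold
```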

Control cost, don’t let cost control you

There are costs associated with moving data, pushing data up to process in a central datacentre and sending it back to the edge for the user. It will be essential to ensure that wherever an object/variable is instantiated, or a memory declaration is made, it doesn’t necessitate a trip across the network to carry out its mission. Without careful consideration of where instances exist as part of business process and application design, there is no point in moving processes to the edge, as doing so with a traditional application architecture could easily multiply the workload across the network as edge processes constantly refer back to the centralised servers.

In some ways, efficient development skill at the edge will depart from current practice. Having fewer lines of code isn’t the measure of success. Building an application with fewer variables or shared memory might result in a less bulky application, but hinder efficiency if moved to the edge.

After the balancing act, who is responsible?

The move to the edge can raise important security and compliance problems as well.

For instance, a particular business process that previously existed in the datacentre may require administrator access to function. When that process is decoupled from the cloud and sent to the edge, what security profile will it be using? The process may still require administrator-level permissions, but it is no longer safely stored inside the relative security of the datacentre.

It’s now outside the walls and in the wild.

In a simplified example, a retailer might currently host a secure process in the cloud, where it is protected by physical measures, such as biometric access control on the datacentre, as well as extensive network security. Moving that process to the edge – such as to a closet at the back of a retail location – would represent a substantial change in how ‘locked down’ it is.

The various edge-related issues covered above can more easily be considered during greenfield implementations, where applications can be optimised from the ground up for the edge model. Unfortunately, as edge computing takes hold, most developers will not enjoy the benefit of starting fresh. They will be charged with bending and extending existing business processes designed for older, monolithic technologies into this new distributed topology.

There is no free lunch.

Variable declaration, object instantiation, memory allocation and security profile issues must be resolved as such applications are reconfigured to make the jump to the edge.

Carreiro: there is a ‘spectrum of edge’ spanning the device level to the gateway toward the cloud.


June 24, 2019  7:25 AM

StackShare Stack Decisions is a ‘developer discovery’ platform

Adrian Bridgwater

Software is a multi-faceted thing.

Because of this core reality, we often talk about elements of software code as layers in a fabric, or indeed (to use a more literal term and one that proliferates in programming circles) as a collection of code components and ancillary services that form a stack.

Given the massive variety of choice that programmers now face when looking to populate their stack, the market for decision and product support services has been growing year-on-year.

A new company (founded in 2014) working to carve out a chunk of respect in this market is StackShare — a Silicon Valley-based software discovery platform that lets developers see all the best software tools and (crucially) see who’s using them.

This is essentially a developer-only community of engineers, CTOs and VPs designed and built to help developers share knowledge about the tools they use and why they use them.

StackShare is in some ways redolent of the work already carried out by StackOverflow, a question and answer site for professional and enthusiast programmers — and of course Spiceworks, a technology ‘marketplace’ and collaboration platform company.

Software showdown specifics

This month sees StackShare introduce Stack Decisions, a new service (tool) that lets developers post and browse short posts about why they chose a given suite of software tools to overcome a specific engineering challenge.

Developers can share a few sentences about the problem they faced and the technology they used to solve it, allowing for messages that are longer than a tweet but less time-intensive than a typical Medium post.

“As a founder, I know how difficult and time-consuming these large technical decisions can be,” said Yonas Beshawred, founder and CEO at StackShare. “With Stack Decisions, we’ve introduced a powerful platform for making smarter technology choices — helping our community get back to the hard work of solving the actual technical challenges they face each day.”

Stack Decisions is the latest step forward in StackShare’s pursuit of building the world’s first technology graph, a mission that netted a $5.2 million Series A round at the beginning of the year.

“We’re excited about Stack Decisions because it encourages developers to share their expertise and experiences, allowing for .NET developers to share their platform choices in to-the-point sentences, which helps other developers learn more about the language and framework features and extensibility,” said Maria Naggaga, senior programme manager at Microsoft. “As a member of the .NET team, Stack Decisions gives us the opportunity to see, learn and appreciate how our growing open source developer community is using .NET.”

The .NET community has already started sharing some of the most interesting technology decisions they’ve been making while utilising the .NET platform on StackShare.

