More companies today are investing in AI-based cybersecurity technology to speed up incident detection and response, to better identify and communicate risk to the business, and to gain a better understanding of cybersecurity situational awareness. That’s according to ESG research that found 12% of enterprise organizations have deployed AI-based security analytics extensively, while 27% have done so on a limited basis.
In a recent conversation with SearchCIO, SAP CSO Justin Somaini explained how organizations can implement machine learning algorithms and AI in security to improve their cybersecurity posture. Somaini also highlighted how machine learning and AI in security can be used not just to automate tasks, but also to identify and remediate issues.
Editor’s note: The following transcript has been edited for clarity and length.
What’s the role of machine learning and AI in security?
Justin Somaini: There’s no silver bullet in security; I think we all know that. But I am a very big believer in how we can apply AI, supervised or unsupervised machine learning algorithms, deep learning — this whole space — to help handle a couple of problems.
The first one is scale. Look at any large-scale environment — they have more logs and alerts than they know what to do with. Can we apply an algorithm to the low-hanging fruit, issues that can be automated and immediately responded to? Here’s an issue that we have; the computers themselves can remediate it. I think there’s a good component of workload that can be offloaded to that degree, and there are examples of companies that have done that.
Two, a scale of logs to be able to say, listen, we’re identifying the complex attacks that legacy technology has not been able to surface for us. The third one is really exciting: Security historically has been relegated to infrastructure and application logs, but not necessarily the application content. For example, within Concur we have traveler safety features to help identify at-risk employees while they’re traveling around the globe and make sure they are safe. Can we advance that with machine learning algorithms? Can we do the same thing for employees in an HR system? Can we do the same thing for fraud within a general ledger or financial system? That business application security conversation, driven by AI or machine learning algorithms, is very exciting. It is really the true level of where we need to get to for security.
The robotic process automation market is advancing at a rapid clip. “Market growth is exceeding our greater than 50% CAGR to get from $500 million to $2.8 billion in five years,” said Craig Le Clair, an analyst at Forrester. “That’s very unusual.”
Le Clair, a speaker at Forrester’s recent New Tech and Innovation 2018 conference, said that although not a new technology, the RPA market is seeing a surge in popularity because of the struggles companies have experienced with digital transformation.
A lightweight technology that automates repetitive and routine processes, RPA works through existing desktop UIs, giving CIOs who want to show some kind of digital progress a chance to work with a digitization technology that doesn’t require modernizing any systems, he said.
In response, companies like Automation Anywhere and UiPath are building in features, such as combining bots with advanced analytics, to make their tech competitive and even more relevant. Indeed, Le Clair said text analytics has been a “battleground” for the RPA market this year.
“Being able to rip through documents, rip through emails, use text analytics to categorize, summarize and basically give direction via normalized data sets to RPA bots to do things,” he said. “That’s a killer app that’s raising the value of RPA the most in organizations.”
But not every process is a good fit for RPA. Le Clair suggested companies use his “rule of five” to determine where to use RPA in the enterprise.
- No more than five decisions. Le Clair said that RPA works well for simple applications that operate in high volume. He defined simple as fewer than five decisions. “When you get to more than five, you’re going to need a rules engine, BPM, machine learning to help with rules,” he said. “Because those decisions have to be coded explicitly in a bot — in a script.”
- No more than five apps. RPA doesn’t rely on APIs, which means bots are sensitive to changes in applications. “When applications change, they often break the bot,” Le Clair said. “This has been a huge headwind for the category.” He suggested companies keep the number of applications involved to fewer than five.
- No more than 500 clicks. RPA bots record the way an employee moves through repetitive tasks – cutting and pasting or data entry keystrokes. Le Clair suggested that companies keep the number of keystrokes a bot has to master to less than 500.
“If you find simple processes like these that are in high transaction volume — that’s RPA gold,” he said. “And that’s why this category has really accelerated.”
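Le Clair’s heuristic is easy to express as a screening function. A minimal sketch, with the thresholds taken directly from the rule of five (the function name and the example tasks are illustrative, not from Forrester):

```python
def rpa_fit(decisions: int, apps: int, clicks: int) -> bool:
    """Screen a candidate process against Le Clair's 'rule of five'.

    A process is a good RPA fit when it requires fewer than five
    decisions, touches fewer than five applications, and takes fewer
    than 500 clicks or keystrokes to complete.
    """
    return decisions < 5 and apps < 5 and clicks < 500

# A simple invoice-entry task: 3 decisions, 2 apps, 120 clicks.
print(rpa_fit(3, 2, 120))   # True: high-volume RPA candidate

# A claims-adjudication task with 12 decision points needs a rules
# engine or machine learning instead of a scripted bot.
print(rpa_fit(12, 3, 300))  # False
```

Any one of the three limits failing is enough to disqualify a process, which matches the point of the heuristic: keep bots simple enough that every decision can be coded explicitly in a script.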
New guidance from ITIL — the widely adopted framework for the design, delivery and maintenance of IT services — is scheduled to be published in early 2019. ITIL 4, as it’s called, is the first major update of the IT service management (ITSM) library since 2011 — light years ago in technology years. What will it bring, and will it be enough?
In a recent webinar on new guidance, Akshay Anand, lead architect for the ITIL 4 update, gave a preview of what to expect. He reviewed the challenges facing service management today, the bad practices that result from these challenges, the criticisms of the current version of ITIL, and how ITIL 4 aims to address all of the above.
Let’s start with the challenges identified by the ITIL 4 research team:
- Adoption of frameworks designed to solve local problems for enterprise use
Anand said that frameworks and methods such as IT4IT and DevOps were designed to solve problems for a specific department or team, but they are being misapplied. “What enterprises tend to expect is that these frameworks solve problems at an enterprise scale,” Anand said. While these approaches may be marketed as “that one bullet that will solve all of the organization’s problems,” the actual literature on them says otherwise. “Many of these frameworks acknowledge their own shortcomings, or they acknowledge where their scope starts and ends,” Anand said. “There is no one silver bullet, but unfortunately the message is getting lost.”
- Adoption of bimodal IT as an organizational construct
Bimodal, the approach ballyhooed by Gartner as the way to meet business demands in volatile times, was roundly condemned: “I firmly believe bimodal is dangerous. We should be talking about multimodal IT organizations. It is not just two speeds, it is multispeed organizations,” he said.
- Adoption of ‘product-centric’ thinking
The current fixation on developing software products rather than focusing on IT services needs to change, according to Anand. He argued that even companies like Uber and Netflix are ultimately delivering a service – a transportation service in one case, entertainment in the other.
“They are using products as the primary vehicle for customer engagement and brand reinforcement and customer support, so certainly products have a critical part to play, but they are the mechanisms for delivering a service.”
The ITSM challenges listed above have given rise to some bad practices, or what he called “anti-practices.” These include: “watermelon SLAs,” a “join the dots” approach to ITIL implementations, and “endless maturity improvement initiatives.”
- Watermelon SLAs: IT service providers believe they are providing good service (all the indicators are green) but “when they cut into it, everything is red” — the customer is unhappy. This situation stems from misaligned expectations, Anand said. Often it’s the case that metrics are related to outputs, not business outcomes.
- Join-the-dots ITIL: Companies are looking for a linear exercise that will allow them to implement ITIL to solve a particular problem, Anand said, but ITIL doesn’t work this way. “Part of what ITIL has always stressed is that in the management of complex environments, you have to continuously iterate your way to service management success.”
- Maturity improvement initiatives: Endless quests to reach ever higher IT maturity levels “tend to be vanity exercises,” and a waste of money, Anand said. “Yes, a Level 4 is better than a level 3 maturity, but oftentimes what the business needs is Level 3 done better, rather than Level 4.”
Criticisms, ITIL 4 fixes
The scrutiny of ITSM challenges included a hard look at ITIL. Anand said the criticism his team heard “time and again” was that the guidance is too vast to grasp. The current ITIL library, he noted, runs to about 2,000 pages, and people simply don’t know where to start. The team was also told the guidance needed to be more “human readable,” and practical, similar to the approach taken in the new ITIL Practitioner certification published in 2016, which tests the ability to adopt, adapt and apply ITIL concepts in an organization.
In response, the ITIL update team has aimed to make ITIL 4 modular, as well as leaner and more practical. The modular design will allow topics that are evolving quickly (e.g., incident and change management) to be updated more frequently than those that aren’t. The update “trims out the unnecessary fat,” he said, and offers lots of examples, templates and practical advice.
It remains to be seen if ITIL 4 elevates the art and science of ITSM. No matter how modular, lean and practical the guidance turns out to be, one wonders if it could ever be enough to wrangle the technologies and IT services that are eating the world.
ITIL 4 comes at a time when the delivery of enterprise IT services is being disrupted nonstop by new technologies — cloud, IoT, AI, to name the biggies — and in an age when IT-based services, from Google to Netflix to Uber, are disrupting entire industries. How do you get a handle on that?
It’s no secret that data equals power in the digital marketplace, making strategies to protect that data a valuable business asset. The fast pace of IT advancement also makes the cybersecurity market ripe for disruption, and at the Gartner Security & Risk Management Summit in National Harbor, Md., last month, Gartner, Inc. research vice president Peter Firstbrook presented the six trends that Gartner research analysts voted the most influential in the security and risk market.
In part two of this two-part blog, get a rundown of the second half of Gartner’s cybersecurity trend list.
Machine learning is providing value in simple tasks and elevating suspicious events for human analysis.
Firstbrook said there is huge value in using machine learning to improve cybersecurity, but the technology is certainly not perfect from a data protection standpoint and still requires human input to work properly.
“We cannot escape the immutable fact that humans and machines complement each other, and they can perform better together than either individually,” Firstbrook said.
For example, machines can sort through huge amounts of data and detect anomalies, but still need a human to closely evaluate these anomalies to weed out the false positives.
“There are always outliers in the data that are hard for the machine to analyze, because it can’t analyze the intent of the person or the event,” Firstbrook said. “Machine learning can elevate things that we need to see; it can help the humans get better at their jobs.”
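The division of labor Firstbrook describes can be sketched in a few lines: the machine does the statistical heavy lifting and emits a short list of anomalies, and a human analyst reviews that list to weed out false positives. This is a toy z-score detector, not a production tool, and the threshold and the login data are invented for illustration:

```python
from statistics import mean, stdev

def flag_outliers(values, threshold=2.0):
    """Return indices of events that deviate from the mean by more
    than `threshold` standard deviations. These flagged events are
    escalated to a human analyst; everything else is filtered out."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values)
            if sigma and abs(v - mu) / sigma > threshold]

# Hourly login counts for one account; the machine elevates the
# spike at index 5, and a human decides whether it is an attack
# or, say, a scheduled batch job.
logins = [12, 15, 11, 14, 13, 480, 12, 16]
print(flag_outliers(logins))  # [5]
```

The point of the sketch is the hand-off: the algorithm can say *this is statistically unusual*, but only the human can judge the intent behind the event.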
Security buying decisions are increasingly based on geopolitical factors along with traditional buying considerations.
A cold war is “raging” in cybersecurity right now, Firstbrook said, as countries such as China and Russia are suspected of hacking tech companies to steal trade secrets. As a result, cybersecurity leaders have to start taking geopolitical risk into account when they make purchasing decisions.
“That doesn’t just mean the product, but also the supply chain of the product,” he said. “Where did it come from? Who provided the component parts to build this product?”
In 2017, NATO established new cyber-command centers designed to incorporate cybersecurity in operational planning. This came months after NATO extended cybersecurity help to nonmember Ukraine in the wake of NotPetya ransomware that targeted Ukrainian institutions. Although NotPetya targeted the Ukrainian government, it also affected organizations around the world that were using the compromised software.
Geopolitical risk will continue to play a role in IT companies’ relationship with business partners, especially as business processes are increasingly digitized, he said.
“I have to be very careful about trusting a major equipment supplier that has that many dependencies on external third parties, because it may end up influencing our environment,” Firstbrook said.
Dangerous concentrations of digital power are driving decentralization efforts at several levels in the ecosystem.
The last item on Firstbrook’s list was more of an emerging cybersecurity trend that he and other Gartner analysts are watching: the risk associated with a concentration of power in the tech industry. He pointed to how Amazon, Google and Microsoft are the major players in the cloud market, or how social media sites run by Facebook and Google dominate digital advertising.
This makes these companies and services a target, creating risk for these huge organizations, he said.
“There are concentrations of power and that makes economic sense,” Firstbrook said, but “there’s a certain point it becomes monopolies and monocultures, and then it starts to become dangerous for security.”
Leading security organizations are starting to understand and communicate the security implications surrounding the concentration of power in the tech industry, while trying to find ways to avoid it.
“When you have a business plan that requires you know all those component parts, be conscious of where you might be creating a single point of failure,” Firstbrook said. “When you can, explore alternative, decentralized architectures and digital planning initiatives.”
As business leaders continue to realize the bottom-line value of data protection, the cybersecurity market is already ripe for disruption. At the Gartner Security & Risk Management Summit in National Harbor, Md., last month, Gartner, Inc. research vice president Peter Firstbrook presented the six trends that Gartner research analysts voted the most influential in the security and risk market.
Firstbrook made clear to the audience that the cybersecurity trends he listed were not predictions, but changes happening right now that will continue to have a major influence on IT security in the next several years. In part one of this two-part blog, get a rundown of the first half of Gartner’s cybersecurity trend list.
Senior business executives are finally aware that cybersecurity has a significant impact on the ability to achieve business goals and protect the corporate reputation.
Firstbrook said that there is “no question” that senior business execs are paying more attention to cybersecurity because of the fallout from major data security breaches in recent years. He pointed to examples including the $350 million discount Verizon received on its purchase of Yahoo as a result of Yahoo’s 2016 data breach, and the huge Equifax data breach that cost the company’s CEO, CIO and CSO their jobs.
But to sustain that executive interest, an organization’s cybersecurity leadership must change their mindset to prove to business leaders that security processes bring business value. Cybersecurity leaders must understand the organization’s appetite for risk and how it fits into achieving the organization’s goals, Firstbrook said.
“You have to articulate the risks that you experience, or that you know of, in the context of their business objectives,” he said. “If they want to improve brand loyalty, if they want to improve revenue, if they want to improve or create new business opportunities — you have to explain all the things you’re doing in the context of that.”
Legal and regulatory mandates on data protection practices are impacting digital business plans and demanding increased emphasis on data liabilities.
Although business executives are starting to get better about grasping cybersecurity’s business value, their understanding about the liability of data is often lacking, Firstbrook said. Ignoring data liability has huge implications, he said, noting the huge hit Facebook’s brand took after news broke that Cambridge Analytica was allowed access to more than 50 million users’ personal data.
Data-specific regulatory compliance rules also pose a major liability: companies that violate the GDPR rules implemented this year can be fined up to 20 million euros or 4% of their annual global revenue, whichever is higher.
“Leading digital businesses are starting to understand and use the full liability cost of data in their digital business plans,” Firstbrook said.
When done correctly, companies can even turn data liability risk into business opportunities. The GDPR rules require companies to tell customers exactly how they use their data, for example. Compliant organizations can advertise how transparent the company is about the way it uses their customers’ data, Firstbrook said.
“You can create a whole new brand experience for your customers that will actually differentiate you from the competition,” he added.
Security products are rapidly exploiting cloud delivery to provide more agile solutions.
Cloud delivery provides numerous benefits over on-premises security solutions, Firstbrook said, most notably advantages of scale that allow providers to offer more service opportunities, and to make updates quickly.
“It eliminates a lot of the maintenance burden on you to stay current,” Firstbrook said, adding that he’s talked to companies that are using endpoint security software that is five years out of date. “Guess what? You’re not going to stay in the game if you’re that far behind.”
As a result, leading security organizations are now critically reviewing new on-premises security solutions to decide whether the cloud might be a better option, Firstbrook said.
“They’re way more agile,” he said of cloud-delivered security solutions. “They can change the detection technology, they can change the way that thing works overnight. With an on-prem solution you know every quarter they update it and you probably update it two quarters after that — if you do it at all.”
Artificial intelligence is modern-day alchemy, according to Qirong Ho, the CTO and co-founder of Petuum Inc., an AI and machine learning startup.
“Alchemy is about chasing the most exciting applications of the science of the time — and turning that into gold,” he said at the recent Forrester’s New Tech and Innovation 2018 conference. “We have many turning lead into gold examples in AI.”
The examples produce results but are also a distraction from a larger, more common IT problem that ultimately creates headaches down the road: Today’s AI is bespoke, customized tech that Ho has referred to as “artisanal” and “hand crafted.” That’s why he’s called for the industrialization of AI, where building an end-to-end AI application — from data collection to user interface — is standardized.
A row of new houses
Ho said to think of building an AI application like you’re building a row of 10 new houses. “You call up the contractor, and the contractor comes with the foreman and all of the workers,” said Ho during a fireside chat with Forrester’s J.P. Gownder. That’s when you realize that instead of everyone using the same set of bricks and nails, planks and shingles, the workers arrive with different sized bricks and nails, planks and shingles.
“They say trust us, we can build this for you,” said Ho. And in most cases, they succeed. The workers are able to put together customized homes that work just fine — for a while. But when something in the house needs to be changed or fixed — a new bathroom installed or a pipe replaced — that’s when the headache starts.
Ho said customized AI applications that will have to change as company needs or data formats change are going to produce a very similar outcome. Without standards, every AI application is built using different tools and systems, and that makes it difficult to maintain or even repair an application — not to mention find someone capable of doing that kind of work.
A call for the industrialization of AI
The industrialization of AI isn’t just about standardizing things like TensorFlow or PyTorch. These are sometimes thought of as standalone systems but, as deep learning libraries, they’re really just a couple of the bricks often used to build an AI application.
Instead, Ho said the industrialization of AI includes standardizing everything upstream and downstream from tools like these, from data collection to application deployment.
It’s a process of moving from alchemy to chemical engineering, where technicians, engineers and operators are trained in specialized disciplines and can be called upon to do repair work when needed. “Just in the same way if I want to repair my house, I don’t need to call the guy who built it to repair it,” Ho said. “I can call up some other maintenance technician to build it or repair it.”
Ho also warned that renters should be cautious as well. Cloud vendors may claim to provide standardization for AI, but CIOs should question “how far do they go in standardizing every last step in the AI industrial engineering process,” he said.
One example: X-rays, thermal scans and smartphone images should all be processed differently. “I mean, you treat salt water differently than sewage water when you’re trying to produce water for human consumption,” Ho said.
At Microsoft Inspire 2017 last July, the software giant discussed its reorganization along industry verticals such as financial services, healthcare and government. CIOs can expect the Microsoft industry focus to endure as the company enters its 2019 fiscal year.
At this year’s annual business partner event, company executives said Microsoft would continue down that path in FY19, which began July 1.
This industry-oriented go-to-market strategy is part of Microsoft’s bid to tap what it views as a $4.5 trillion market in digital transformation. Enterprises adopting new business models and designing new customer experiences need industry know-how as well as technical acumen, so the thinking goes.
At Microsoft Inspire 2018, Judson Althoff, executive vice president of worldwide commercial business at Microsoft, cited customer wins at Starbucks, Unilever, Dun & Bradstreet, and Komatsu among others as evidence that Microsoft’s vertical market push is already producing results.
“You have to give credit to the focus on our industry strategy,” he said.
Althoff also credited Microsoft’s partners, who often contribute the critical industry knowledge that drives digital transformation projects.
A case in point
A partnership involving Microsoft, professional services firm KPMG Australia and the Commonwealth Bank of Australia exemplifies the Microsoft industry focus. The companies are collaborating on a software platform dubbed Wiise, which integrates online banking functionality within a business management system. Wiise, which will be sold to small and medium-sized businesses, is set to launch in August 2018.
As the digital revolution continues, the ideal CIO is one who possesses both the technical knowledge and business acumen to help drive their company’s digital transformation initiative forward. One popular trend has CIOs leading digital transformation efforts shouldering additional responsibilities that a chief digital officer would usually undertake. In an interview at the recent MIT Sloan CIO Symposium, Naufal Khan, senior partner at McKinsey & Company, explained the growing popularity of the CIO-CDO dual role.
In this Q&A, Khan discussed why having IT leaders in the dual CIO-CDO role works better than the chief information officer and chief digital officer working in silos. He also explained the reasoning behind creating the CDO role and the influence it had on CIOs.
Editor’s note: The following interview has been edited for clarity and length.
There are CIOs who are undertaking the CIO-CDO dual role. Are you seeing more of that duality within the CIO role?
Naufal Khan: We are seeing more of that duality, and I think it’s the right thing. It started where you had a chief digital officer, you have a chief information officer, and they were peers. But companies realized over the last few years that this model wasn’t working very well. You can see that over the last three, four years where companies are trying to combine them and have a single technology leader. It’s the right move, which is going to take things in the right direction.
What’s not working is taking a digital organization and just mushing it with your IT organization. There’s a need for a better operating model between the two and how they will work together.
Why is having that CIO-CDO dual role better?
Khan: In order to make the digital assets work, you need technology to work with it. It just did not make sense to have them be separate. The siloed approach simply does not work here, mainly because of the high level of interdependence. I also feel that the CDO was often the CIO that the CEOs wish they had.
There was an attempt to get away from having two different technology leaders who are often going in completely different directions, which was not working. At the time, it also created a bit of the second class citizen feeling among the employees who are in these various areas, where they would feel digital is getting all the attention and IT is not, when the work that is being done is actually so integrated that it should be done by a single person.
Do you think that creation of the CDO role in the last couple of years was sending a message to CIOs that they weren’t getting the job done and they needed to step up?
Khan: It certainly was sending that message; I don’t know if that was intentional or not. But for many of the CIOs, it got them a little bit concerned. They started to think more about the value proposition of IT, particularly with cloud and many of these third-party solutions coming in where the traditional role of IT was diminishing. IT did need a new value proposition, and it did put CIOs on guard. Right now is a great time for CIOs to actually seize the opportunity.
In his book, The Fourth Transformation, Shel Israel makes the argument that augmented reality — the superimposition of digital information on the real world — is the next technology innovation that will transform our lives. AR follows three other technologies that are widely recognized as big-time game changers: the mainframe, the personal computer and the smartphone.
Israel, CEO at Transformation Group LLC, also predicted in The Fourth Transformation that the enterprise would drive early adoption of AR headsets, not the consumer space. Why?
“You don’t have to be fashionable [in the enterprise],” Israel said at the recent LiveWorx event in Boston. “If you’re tethered to something with a wire, that’s ok. In fact, it’s a piece of equipment for you to do your job better, safer and more productively rather than something to make you a cool kid in Silicon Valley.” (Let’s ignore for the moment the irony that it’s the cool kids in Silicon Valley who are developing these AR products.)
In making the case for enterprise AR headsets, Israel emphasized the hands-free element. He said the headsets allow employees to have their hands free in situations where a computer is a vital assistant. With AR, the computing moves in front of workers’ eyes, allowing them to observe the environment while crucial information appears in their field of vision.
Handheld devices, like tablets or smartphones, are awkward and can distract workers from their surroundings during tasks that require a good deal of focus, said Israel.
Beyond that, Israel said research shows employees like AR headsets, and once they start using them, it’s hard for them to go back to working without them.
Start with a use case that’s ‘lose-able’
There is no shortage of enterprise AR use cases already making waves across industries — AR is being used in warehouses to help workers assemble orders, by surveyors to assess storm damage and for all manner of training. Many of the companies incorporating AR into their processes have a lot of talent and research at their disposal, but Israel said that shouldn’t stop smaller companies from experimenting with the technology.
“If you’re not already playing with this stuff, start soon,” he said. “As with anything you start that’s new and foreign to you, do something that is lose-able. Make it a pilot program that’s low-cost and has a short timeline. Don’t risk your job for it, but just get to understand it better.”
A question Israel gets often is “Which AR headsets or services should our company choose?” He says it depends on the use case.
“Leave the devices and the software out of the conversation until you know what you want to do,” Israel said. Different AR offerings have different strengths and different purposes, he added, but he also wanted to make one thing clear.
“Any tool you buy today will be replaced by another tool tomorrow, either by the companies you’ve already heard of or companies you’ve never heard of,” Israel said.
A cringeworthy AR question
Another question Israel gets asked a lot is “What are the best practices around AR?” He cringes every time he hears it.
“‘Best practices’ means we’ve developed things to a level of maturity that all you have to do is copy what the other guy did,” Israel said. “Things are changing so rapidly with AR that there are no best practices yet because there are no systems in place.”
As Israel said of the enterprise AR movement, “It’s only just begun.”
The Federal Reserve Bank of Boston cares about the future of blockchain, and here’s why.
In the 1990s, the Boston Fed employed over 400 people just to cash paper checks. “Today all those jobs are gone,” said Paul Brassil, the bank’s vice president of information technology and head of IT innovation strategy.
Speaking at the recent Forrester New Tech & Innovation 2018 Forum in Boston, Brassil said technology is as disruptive to the nation’s central banking system as it is to other industries. Formed by Congress in 1913 to provide a safer and more stable monetary and financial system, the quasi-governmental Federal Reserve not only conducts the nation’s monetary policy and supervises the nation’s banks, it also plays a major role in U.S. payment and settlement systems. The latter are some of the largest retail and wholesale payment platforms in the world — and they have changed over time. Distributed ledger technology could represent a big change.
“The Fed needs to look forward to how we might be Uberized,” Brassil said, adding the central bank should not think of itself as a government-backed monopoly. “If the private sector can do it better than the Fed, we need to be aware of it.”
Grappling with the future of blockchain
With that caution in mind, the Boston Fed launched a blockchain project in 2016. One of 12 banks in the central banking system, Boston has a long tradition of researching monetary policy, Brassil said, taking advantage of resources at Harvard and MIT. This project not only explores what blockchain may look like 10 years down the road — its impact on everything from credit and mobile payments to bringing banking to underserved markets — but is also hands-on.
“We’re building a proof of concept around Hyperledger Fabric,” Brassil said, referring to the open source blockchain platform hosted by the Linux Foundation and commercialized by IBM, Oracle, Microsoft and others.
His team is building a non-mission-critical app for employee appreciation rewards. “We want to plug that into our environment to see what the 24/7 challenges are when running this platform.”
If the future of blockchain involves banking, IT leaders need to know what the risks are, Brassil said. “Are we going to have storage issues, issues with smart contract development, cyber risks?”
Fed role in future of blockchain?
Another aim is more far-reaching: namely, determining whether, and what kind of, supervisory role the Fed might play in a potential “spiderweb of banks” doing transactions over a massive blockchain environment.
In a mark of how fluid the blockchain platform market is now, the Boston Fed’s initial work was done on open source Ethereum, because it was the first decentralized platform to use smart contracts: computer programs that directly control the transfer of digital currencies or assets between parties when certain conditions are met.
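The “certain conditions” part is the essence of a smart contract: an asset moves only when agreed-upon conditions are satisfied, and the code itself enforces the transfer. A toy escrow-style sketch of that idea in plain Python (all class and variable names are invented for illustration; this is not an Ethereum or Hyperledger Fabric API):

```python
class EscrowContract:
    """Toy illustration of a smart contract: funds move between
    parties only when a predefined condition evaluates true.
    Not a real blockchain API."""

    def __init__(self, buyer, seller, amount, condition):
        self.buyer, self.seller = buyer, seller
        self.amount = amount
        self.condition = condition  # callable returning True when met
        self.settled = False

    def execute(self, ledger):
        """Attempt settlement; transfer happens at most once."""
        if self.settled or not self.condition():
            return False
        ledger[self.buyer] -= self.amount
        ledger[self.seller] += self.amount
        self.settled = True
        return True

ledger = {"alice": 100, "bob": 0}
delivered = {"status": False}
contract = EscrowContract("alice", "bob", 40,
                          lambda: delivered["status"])

contract.execute(ledger)    # condition not met; nothing moves
delivered["status"] = True  # goods confirmed delivered
contract.execute(ledger)    # condition met; the transfer fires
print(ledger)               # {'alice': 60, 'bob': 40}
```

On a real platform the condition check and the ledger update run on every node of the network, which is what removes the need for a trusted intermediary; the sketch only shows the conditional-transfer logic itself.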
A “pivotal trip” to the Bank of England and learning about the adoption of Hyperledger Fabric by European central banks changed that.
Brassil said his team intends to publish a white paper on the project by the end of the year.