How much time as a manager do you spend with your team beyond the day-to-day activities? Do you know what motivating photons are and how easy it is to create them? Having regular meetings with your team members is one thing. Those are the routine activities to follow up, review, monitor, and discuss plans and actuals. In fact, they all pertain to your running projects. But the lives of your team members hold more than that. How many times do you relax with them and go a little informal? Spend some moments with them talking about their aspirations in life. Who is in their family? What are their interests and hobbies? Which sport do they like most? I feel you should not have a regular agenda for these activities. That makes it boring and turns it into an activity to showcase to your management and HR.
The actual goal is not that. The actual goal is to create a personal connection, to create Motivating Photons. Creating a personal touch with each of your team members is an art. It takes a heart to do that. In fact, these are the things that stay in our minds forever. I still remember such informal moments and feel they create a deeper impact in life. This kind of activity returns you more than what you give to others. And this give-and-take doesn’t come on record in any official books, because it is part of the personal growth mechanism. These are, in fact, small packets of energy that create an additional positive stream towards achieving your official goals. Besides, we all get less and less time for our families and friends because of increasing work pressures.
Motivating Photons are Magical In Nature
Even at home, at times, we are mentally not at home due to deadlines and upcoming meeting agendas. Under those circumstances, what is the harm in creating some lighter moments at the office? That is, in fact, one of the best ways to create motivating photons at the workplace. And it creates magic in life and improves personal life too.
Well, you are a successful team member working on various projects from time to time. Your success is visible to everyone. And now the management thinks it is the right time to elevate your position. Thus, you rise from the position of a senior developer to that of a project leader. And a job transition phase starts. Instead of the deep and singular engagement in development and nothing else that you had earlier, things are now different. You have to manage a team of developers and their deadlines. You have to report to your project manager weekly about the progress of the project, along with a note of key deviations that are impacting the deadlines. Not only that, now you have to plan and strategize to mitigate the risks of these delay-causing issues.
The main idea is to get things back on track so that the overall health of the project remains intact. As a project leader, you and your teams now have to achieve all the milestones in time. At the same time, with this job transition, you need to keep the progress well in shape, because now you manage the success of the project and everything lies in your hands. It happens that yesterday’s peers become your subordinates and start reporting to you, because of your position change from a senior developer to a project leader. In fact, this is not an easy situation to handle. It might become a cause of conflicts and differences between you. To avoid that, you have to manage things well.
Job Transition is not easy to manage
Soon after your job transition, it is better to make things clear right in the first go. You need to make it clear that friendship remains as it is, but the team will have to follow the new rules of the game. At the same time, your technical engagements will decrease and managerial activities will increase. Now, your discussions will also be with non-tech stakeholders. And it will not be only the development team that you manage; other teams like QC will also come into your spectrum.
If you are in product development, you would be well aware that in addition to stability, scalability, and usability, security is the fourth pillar of functionality. These four components, in fact, have become an integral part of the basic functionality of any product, project, or service. Security has become so important because most business data is shifting to the cloud. Higher exposure will definitely demand a higher amount of security. A product has to be stable. The role of quality control is prominent in this regard. A product under development or after development has to undergo all kinds of quality control checks to ensure it is not going to fail easily. Similarly, any product that you launch in the market has to be scalable in most cases. Especially for a business application, it has to cater to all kinds of existing and future business needs.
Quality assurance has an altogether different role to play. It is quality assurance that is responsible for evaluating and assessing quality control on a regular basis. Security is one of the critical features that QC ensures to test during product testing. But it is QA that has to ensure the process and methodology to adopt to achieve 100% testing, and thus make the fourth pillar of functionality as strong as possible. Not only that, QA is the agency to draft out which security features are important for a particular kind of application. This can, or rather should, be done jointly with the information security experts in the organization. If you have global certifications like ISO 27001 in place, it makes your job easier.
The Fourth Pillar of Functionality Should be As Strong As The Other Three
But still, a constant watch on changing trends across the globe is important so that you can deploy the necessary ones depending on your organizational needs. Security, in fact, has become a basic need of an application. And that is all about the fourth pillar of functionality.
As a Project Manager, you always need to take stock of your current state and the state you want to acquire. Are you a victim, survivor, or transformer? Beyond whatever amount of knowledge and power you have, the most important factor in your success is effort. It is, in fact, the amount of effort that matters in your success or failure. A continuous habit of pumping in a fresh quantum of effort is always good for transformation. Acquiring the position of a successful project manager is good to have. There is always scope for improvement in life and work. A change is always welcome in that regard if it helps you upscale your current state. If you are a victim, you will always blame your team for their problems. In that case, you will hesitate to take any onus. Nor will you put in any effort.
It is important for a project manager to look into a mirror often and ask himself: am I a victim, survivor, or transformer? A victim project manager will lack a deep connection with his teams. His engagement in the project will also lack depth and appear shallow. Despite having good knowledge, if you are a project manager in this category, you will not be living your project, which is very important for the success of a project. A person who is not able to take care of his own wounds will not be able to serve others well. On the other hand, a transformer always looks back to pick the best and worst parts of his experience. It helps in learning ways to enhance his own skills and inspire others to excel. That is why understanding your role is very important to ensure its success.
Victim, Survivor, Or Transformer – What Are You?
Among victim, survivor, and transformer, a victim will always blame others for everything. A project manager in survival mode will become obsessive, fastidious, and workaholic. Work will be his safest place against any kind of disappointment or mistake. This category of project managers define themselves by their success and tell the story of their career often. For both these categories of project managers, learning how to transform is very important.
I recently read a beautiful book, The Way of The Cat: Surviving Metastasized Cancer by Itzhak Be’er. Though the book is about fighting cancer, there are a number of learning points in it for project managers. These ten project management lessons are quite thoughtful and can give a good amount of boost and courage to manage any kind of project and drive it to a successful ending.
When we say a successful project, it means it meets the three measures of time, financials, and resources. Since every resource has a direct or indirect linkage to financials, effectively the two key factors that remain with us to make any project successful are time and budget (or financials). So let us look at ten project management lessons to make any project a win.
- A War Just Broke Out And You are The Commander: Take every project as a war. Obviously, it is the project manager who has to lead and create winning situations under any kind of circumstances.
- What Are You Willing To Do To Make It Successful: Put everything at stake with no compromise. Keep introspecting and probing within if there is a hole in the basket that needs a fix.
- Knowledge is Power: Of course, it is. A lack of knowledge in any field of project management is a critical gap that can mar the progress of a project. The key areas where you need to ensure a good amount of knowledge are the project, team, resources, budget, timelines, and stakeholders. In case you feel there is a gap in any of these areas, fill it fast before someone else takes advantage of it.
Ten Project Management Lessons
- The Time Factor is Critical: Closure of each task as per plan is important. Any task with a substantial variance, positive or negative, needs assessment until it comes to a logical conclusion about its cause.
- Never Trust Anybody: Stay firm and on top of everything happening in and around your project. Every onus falls on the project manager. There are deviators and confusers everywhere.
- Do Not Let Statistics Discourage You: Carry a clear insight and keep your instincts alive. At times statistics may become discouraging, but keep driving the vehicle with full control.
- Innovation Is Key: This is a very interesting point, especially when things are getting slower and statistics are becoming discouraging. A project manager needs to find newer ways and create better roadmaps towards the goals.
- Synergy is Important: Every single unit of a project is a small unit of energy. All these energies need to be synergized in order to ensure every step moves in the right direction, together.
- Find Your Support Circles: It is not bad to seek support from various functions in case of a crisis. Don’t skip any opportunity to knock on a door in need. But it is important to know which door to knock on for assistance.
- Stretch Yourself and Your Teams: It is good to create celebrating moments in life. For this, you need to stretch at the time of a war. Keep all your energies intact during the journey. Also, upscaling is a very important factor.
Hopefully, these ten project management lessons will help every project manager to excel, thereby ensuring a successful project closure every time.
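The time-factor check from the lessons above can be sketched in code. This is a minimal illustration, not something from the book; the task data and the 10% variance threshold are assumptions for the example.

```python
# A rough sketch of the "time factor" lesson: flag tasks whose actual
# duration deviates from plan, in plus or minus, beyond a threshold.
# The task data and the 10% threshold are illustrative assumptions.

def schedule_variance(planned_days, actual_days):
    """Variance as a fraction of plan; positive means an overrun."""
    return (actual_days - planned_days) / planned_days

def tasks_needing_assessment(tasks, threshold=0.10):
    """Return names of tasks whose absolute variance exceeds the threshold."""
    return [name for name, (plan, actual) in tasks.items()
            if abs(schedule_variance(plan, actual)) > threshold]

# Hypothetical plan-vs-actual data in days.
tasks = {"design": (10, 10.5), "build": (20, 26), "test": (8, 5)}
print(tasks_needing_assessment(tasks))  # ['build', 'test']
```

Note that a task finishing well ahead of plan is flagged too; an early finish can signal scope that was silently dropped, which is why the lesson says to assess variance in both directions.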
An Interview with Danny Santee, Enterprise Systems Supervisor, Information Technology, City of Aurora
Tell me a little about the City of Aurora and its priorities.
DS: As the 3rd most populous city in Colorado with over 350,000 people, the City of Aurora has over 3,000 employees who together adhere to four core values: integrity, respect, professionalism, and customer service. The City of Aurora’s guiding mission is to be an innovative national leader across all facets of local government. This means not only ensuring that the city is a safe and desirable place to live, work, and visit, but also maintaining a superior civic infrastructure.
In terms of infrastructure maintenance, what role does the IT department play?
DS: The City of Aurora’s IT organization supports every single city department, some of which include police, fire, safety, 911, city management, and parks and rec. We have two true datacenters that support our employees’ IT requirements, and we also maintain around-the-clock access to our public website for city residents and others who wish to visit. Our datacenters support a wide range of standard city management applications, which include everything from computer-aided dispatch, geography information systems, financial and payroll systems, and police records management to water and facilities management, building permits, tax and licensing, golf and recreation registration, storage and management—the list goes on.
What challenges has the City of Aurora faced in supporting all city departments in terms of its data centers?
DS: The city’s datacenters supported a storage area network (SAN) that was both oversubscribed and thinly provisioned. Back in 2005, Microsoft SQL Server sprawl had started to reach unmanageable proportions. Back then, the IT team had turned to what at the time was a relatively new technology, VMware, to try to address the problems. This solution involved virtualizing about 80 percent of the city’s servers. The city also turned to PolyServe to address the issue—but recently, PolyServe reached end-of-life, so we needed to find a new solution.
How did you go about identifying another solution that could help with SQL Server management?
DS: As soon as we found out that all product updates and support for PolyServe would be eliminated, we started exploring other options. We knew our datacenters would continue to grow and that we would continue to need to add and refresh infrastructure and applications. We were guided to DH2i (www.dh2i.com) by a very helpful HP Support Manager who recommended we check it out. DH2i—which was co-founded by two senior members of the PolyServe development team—was a relative newcomer to the space. But the HP Support Manager said he thought DH2i offered an even more powerful solution for SQL Server management and could overcome the PolyServe technology’s flaws and shortcomings. So as the Enterprise Systems Architect, I immediately reached out to DH2i to find out more about whether this solution could work for the City of Aurora.
How did you make your final decision?
DS: After a few exploratory calls, we learned that we could start out with a “try before you buy” 30-day evaluation program, which really helped our IT organization decide to pull the trigger. So during this trial period, we used the DH2i solution on our three Dell Sandy Bridge Servers, each with two sockets and eight cores per socket, plus hyperthreading. This totaled 32 CPUs per server plus 128 gigabytes on each. The testing period was a success, so we moved forward with the DH2i solution. It initially went live to support our fleet systems, which are responsible for the city’s cars and trucks inventory—as well as maintenance and repair records.
In upgrading to this more advanced system, was cost a concern for the city?
DS: Yes, costs were definitely top of mind since our goal is always to save our residents money and spend the city’s budget wisely. Fortunately, the reasonable costs for the DH2i solution were part of what made this such a huge win for the City of Aurora. First, we were very happy with the licensing model since we could run multiple instances. For SQL Server 2012, based on our purchase window of SQL Server 2008 R2 with SA, we could grandfather-in socket purchases, which meant all sockets would receive a free upgrade.
How did the costs compare with your previous solution?
DS: We saved a lot of money with DH2i since the licensing costs for SQL Server on VMware were crazy! Specifically, the Enterprise Edition cost four times the amount per CPU, despite the fact that from a functional standpoint, we really only needed the Standard Edition. This model just didn’t fit for us since it’s a priority to spend our citizens’ money wisely. We don’t have the silo mentality that many still have of “if you are doing it for your VMware, then do it for all your databases.” In terms of a specific cost comparison, using a VMware-based solution with hardware, Microsoft SQL Server Enterprise Edition, plus software support would have cost us around $750,000. The DH2i solution cost less than one-quarter of that.
Besides the cost-savings, are there other benefits that the city realized from switching over to the DH2i solution?
DS: Today, we are achieving cost reductions of over 70% because Microsoft SQL Server sprawl has been eliminated, servers are fully optimized, and licensing costs have basically been cut in half. Our database administrators are much more productive, we’ve achieved much greater utilization of our infrastructure, and our systems are more highly available.
Can you share some specific improvements that led to these positive outcomes?
DS: I’d point to three very tangible improvements that led to this:
First, the new solution overcame the restrictive Microsoft SQL Server deployment model, since DH2i’s InstanceMobility feature removed the rigid binding of one instance to one server. We can now move instances more quickly, reliably, and safely.
Second, the new solution enabled affordable and reliable high availability. DH2i uses prioritized failover and other advanced features to ensure the highest levels of availability without requiring expensive hardware or Enterprise SQL Server licensing.
Third, DH2i delivers the fastest and highest level of SLA assurance by enabling dynamic, automated load balancing of SQL Server instances to servers with the lightest workloads.
In short, we’ve been able to practically eliminate the time needed for labor-intensive “firefighting,” so to speak, when it comes to IT support in these areas and instead rededicate the resources toward activities that support and enhance the IT services provided to the city departments that also benefit the people of the City of Aurora.
What are your future goals in relation to SQL Server management?
DS: The City of Aurora currently has 9.56 terabytes of SQL Server on our DH2i solution. Our Compellent SAN, which is where the production DH2i software is configured, uses all three tiers with our SQL. We place logical logs in our write-intensive SSD Tier 1. We use both Tier 1 and our read-intensive SSD Tier 2 for our databases. For our large blob spaces, we additionally use the 7.2K HDD Tier 3. We just successfully finished upgrading 13 of our DH2i SQL instances to three new Cisco UCS B200 M4 Blade servers, each with 256 GB of RAM and two sockets with 8 cores each plus hyper-threading, running Windows Server 2012 R2 Standard, which can host SQL Server 2008, 2012, 2014, and 2016 (as we licensed the Standard Edition of SQL Server 2016 per core). We’re excited about the future with DH2i on board.
AI (Artificial Intelligence) and AR (Augmented Reality) are not merely buzzwords anymore. Nor are the two technologies just a topic of discussion or research. The two have become a reality of life and are getting visible in many real-life projects. Robots are becoming an integral part of our life. Industry verticals like travel, space, hospitality, healthcare, automotive, etc. are already adopting these technologies fast to leverage them. Obviously, whenever a new technology comes, there are early adopters, observers, and followers. Early adopters always have an edge over others in terms of acquiring new learning by experience, which others in the fray miss. And this learning comes either way, irrespective of whether you succeed or fail. In fact, failures equip you better for risk anticipation and mitigation well in advance.
The fact of the matter is that our personal and professional lifestyles are changing. Work is becoming easier with the adoption of newer technologies. Nonetheless, challenges never exit life, whatever advancements you achieve and adopt. The fear of robots replacing the workforce for production and other repetitive jobs is quite genuine, but then that itself creates another pool of opportunities. In any case, production through robots will definitely increase efficiency and productivity. That also increases the risk factor. A small fiddling with a piece of code can bring everything to a halt. All it takes to stop robots from functioning, or to make them malfunction, is an alteration in the code. That is something very serious and can lead to disasters. Using AI, companies like Ola, Uber, Netflix, MakeMyTrip, Oyo, etc. have been able to change the complete business model in the logistics, travel, hospitality, and entertainment sectors.
Will Robots Replace Humans Completely?
In fact, the human-machine partnership is going to acquire a new shape altogether. Robots will replace humans in many fields in no time. As a matter of fact, machines will be performing the roles of human beings, who in turn will be busy creating more tasks for these intelligent machines while making them more intelligent.
It is becoming a world of hyper-personalization on a mass scale. Mary Meeker brings out her Internet Trends report every year. And in her report, she brings a lot of interesting insights about India. We are not a desktop economy anymore. Mobility is the paradigm-changing agent. Predictive support systems are becoming a reality. The process of discovery determines the cost in the healthcare industry, while in the food industry that is not the case. For instance, when you go to a restaurant, you are able to guess the cost of food on the basis of its location, size, level, etc. But the same is not the case when you go for a health checkup. You never know what may come up during diagnosis and how much that discovery may cost, depending on the severity of the disease discovered.
A recent study says that advancements in medical technology will make people live to 150-200 years of age. But then, who will those people be? That will create a wide gap on the basis of race, social status, etc. The trends are changing to cater to hyper-personalization. In that context, simplification, seamless delivery, liquidation, etc. are a few of the challenges for technology. Changes in processes have more to do with changes in mindsets. There is a lot of data. Every company is sitting on piles of data. The key missing point is the person who joins the dots. Technology is growing very fast, thereby creating a lot of scope for industries to take enough leverage. Analytics is not only about buying a tool. It is rather a cultural issue. Management’s vision plays a major role in this.
Hyper-Personalization Is The New Global Trend
Analytics is not the end of a journey. It is, in fact, the beginning of a new one, which becomes even more relevant when we talk about hyper-personalization. A data strategy is quite important. Somebody from an analytics company said that merely capturing the name and mobile number of a customer is worthless. In my opinion, that is more than enough in terms of providing you a lot of useful data, if only you apply your brain and use various APIs. In fact, this can provide you with a person’s lifestyle, family, friends, travel patterns, hobbies, and a lot more. The customer has always been the key to the success of a business. Hyper-personalization is the catalyst to it.
The potential of robotic process automation is endless. The only limitations are in ideation, development, and deployment. IoT is no longer merely a point of discussion. In fact, it has become a business case in many ways. Every business is now technology- and data-driven. As a result, successful business cases become industry-shaping agents, especially the innovative ones. When we think of technology these days, it comes to mind that everybody is embracing it. It is not that industry drives technology adoption. In fact, it is the consumer that forces technology adoption by an industry. The speed of adoption is increasing tremendously. Every new initiative in technology is making things better, faster, and cheaper. At the same time, data privacy concerns are rising at the same pace. Governments are coming up with their own data protection regulations. Every new development, in fact, comes with a new set of threats.
If you look at the top three concerns of CEOs, 83% of them have a high focus on the speed of technology change. Talent and cyber threats are the next topmost concerns. In today’s digital world, technology and data are the tools for progress. Most organizations struggle with charting a roadmap. Is technology a disrupter? Image-based processing, AI, social media, wearables, automation, and chatbot engines are playing a major role in industries. One must understand the key themes to deal with this disruption, such as data analytics and AI. Data is the new currency, in fact. Big data analytics can help in customer service. The same is true of robotic process automation, which can reduce operational costs and at the same time increase quality and productivity. Digital framework and strategy; collaboration tools, processes, and culture; and integration are some of the differentiators.
Robotics Is Going To Be A Big Changemaker
Air taxis are changing the whole concept of transportation. Disruptions are forcing the birth of innovations. As a matter of fact, robotics is going to change the complete proposition between an employee and an employer.
Every project is important for an organization. No organization would wish for a project to fail. But still, project failures happen, despite all the good efforts of the respective stakeholders. Then why do projects fail? And why do we have all kinds of genuine reasons behind those failures? For a number of years, there has been a big hue and cry that 80% of projects get delayed or fail because of changes in specifications or business requirements. If that is true, why are we not able to get a concrete mitigation to this most threatening risk to a project? Probably, an Agile approach is a solution to it. But to what extent is that true? At least, I am not sure if this is a 100% foolproof safe bet to avoid the failure of a project. Not all projects are a top priority in an organization. But their importance doesn’t go away.
In fact, every project has something or other at stake. Otherwise, why would a project start? And as I say, no project starts with an intention to fail. Logically, a project failure could cost an organization a customer. It can get worse to that extent. Losing money on a project is not as crucial as losing the reputation of the organization. Money you can create in another project, but reputation building takes more than that. Reputation loss, in fact, leads to multifold losses. Different studies show that getting a new customer is five times costlier than retaining an existing one. That is a proven fact. Khalid Saleh, author of Conversion Optimization: The Art & Science of Converting Prospects to Customers, says that around 45% of organizations spend more effort on acquiring a customer than on retaining an existing one. Interestingly, the other 55%, who focus more on retention, progress fast.
Project Failure Should Be A Big NO At Any Cost
What it means is that putting more effort into avoiding a project failure returns in a better way than otherwise. Engaging the customer and the quality team throughout reduces the risk to a large extent.
Data privacy is of utmost importance in healthcare organizations. Especially, the data that pertains to patients requires complete safety and protection. This data, if leaked to unreliable sources, can lead to blunders that might become difficult to handle and control. That is why healthcare industries need to be proactive in their approach in this regard. In fact, there are ways available in the market, thanks to advancements in technology, that can pre-emptively control issues pertaining to data privacy. As a matter of fact, every healthcare organization must have a strong mechanism in place for data security. It should be a top-priority component of all their projects. Cyber attacks like ransomware and malware are of serious concern in today’s world, when most data is online, thus increasing the extent of threats and vulnerabilities. There is a tremendous risk, in turn.
We all know that threats like malware and ransomware strongly impact patient care, finances, workflow, operations, and reputation. Cybersecurity and data privacy in the healthcare industry are a patient safety issue. Thus, protecting patients also includes protecting their information. In fact, this is a norm these days. It comes by default. As Jacki Monson, VP and Chief Privacy and Information Security Officer at Sutter Health in Sacramento, California, says, “Our cybersecurity team is constantly threat hunting, and if they find potential threats, they work with engineers to address them before an attack happens. We share information with other organizations and participate in various task forces to obtain threat information. We also have a 24/7 monitoring service. If there is a legitimate threat, they notify us and we go into incident response immediately.” In fact, there is a task force that regularly evaluates and monitors cybersecurity issues.
Data Privacy Is A Topmost Concern
In a nutshell, data privacy is a collective concern for patients as well as healthcare organizations. It is probably one of the top concerns.
DHS (the Department of Homeland Security) has come out with a new regulation about an email security program. It is now compulsory for US federal agencies to deploy DMARC (Domain-based Message Authentication, Reporting, and Conformance). The purpose is to control all kinds of hackers, scammers, and other online risks. There have been a number of cases of impersonation of government email addresses. So much so that, reportedly, one in every four emails from .gov addresses is spam with malicious and criminal intent. Hence, it becomes necessary to understand why DMARC is so important and what exactly it does. Basically, it is a reporting protocol that authenticates email and checks the policies behind it. In fact, it works on top of two other very popular and important protocols, SPF and DKIM. It adds a linkage to the sender’s domain name and the relevant policies the recipient should apply for authentication.
DMARC checks for authentication failures on the basis of the above. Along with these, it also checks for identification failures at the recipient’s end and handles reporting from recipient to sender. The whole purpose is to protect the domain from unauthorized emails and improve the whole process. SPF is the Sender Policy Framework, which authenticates an email on the basis of the path it takes right from its point of origin. Similarly, DKIM is DomainKeys Identified Mail. This, too, is an email authentication process, based on the signature of the sender. In fact, it merges DomainKeys with the email specifications, thus helping in tracking and identification. As a matter of fact, DMARC prevents spoofing of emails. It is important for controlling hackers who are experts in making their emails appear to come from completely authentic sources. It is, in fact, important for controlling the online ecosystem in a scalable manner.
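To make the policy linkage concrete, here is a minimal Python sketch of how a published DMARC TXT record breaks down into its tags. The record and domain below are hypothetical examples; a real record is fetched via a DNS TXT lookup of _dmarc.&lt;domain&gt;.

```python
# A minimal sketch of how a DMARC TXT record (published in DNS at
# _dmarc.<domain>) breaks down into its policy tags. The record below is a
# hypothetical example; a real one comes from a DNS TXT lookup.

def parse_dmarc(record):
    """Split a record like 'v=DMARC1; p=reject; rua=mailto:...' into a dict."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # quarantine
```

Here the p tag tells receiving servers what to do with mail that fails SPF/DKIM alignment (none, quarantine, or reject), and the rua tag is the address where aggregate failure reports are sent back to the domain owner.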
DMARC is a strong protocol to control spoofing
DMARC, to summarize, not only authenticates the sender and receiver but also the traffic and the path. That is how it is able to control spoofing, phishing, and hacking in an impressive manner.
Bug Taxonomy is a practice that is becoming prominently important in software testing. It not only enriches the testing team’s experience but also enhances the whole testing mechanism. You can call it a tool to make your software testing process stronger and more fruitful. In fact, it is a process of categorizing and listing the possible bugs in a module, or in a piece of code that performs a particular function within a module. While this listing becomes a valuable repository for the quality control team, it also helps in optimizing their productivity. As we all know, there is always employee turnover in any organization, especially in software companies. So when an experienced tester leaves the organization and a new one joins in his or her place, the Bug Taxonomy becomes handy for adding a level of maturity and experience, thus giving a thrust to testing. It has many other benefits as well.
Bug Taxonomy removes duplication of work by testers, who otherwise invest a good amount of energy and time in reinventing the wheel every time they start new testing. It also helps senior testers in the team to keep improving it by evaluating it on a regular basis and brainstorming with others. Test-case maturity also reaches a new level with it. It is, in fact, always good to keep reviewing and evolving it from time to time. Whether you are a software company selling a single product or one with multiple customers demanding multiple products entirely different from each other, this technique always works wonders. And it is always true that even in a new product there are a lot of functions that already exist. The existing collection of taxonomies is helpful in those cases. You need to track and record each taxonomy for that matter.
Bug Taxonomy is a handy powerful tool
Bug Taxonomy helps in removing redundancies and inefficiencies across the team. In fact, it creates a repository of knowledge and experience of teams working on various projects. It also builds a strong bond between engineers with a long experience and those who are fresh in the field.
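As a rough sketch of the idea, a taxonomy can be as simple as a categorized list of known bug patterns that any tester, new or experienced, can turn into a checklist before testing starts. The categories and entries below are purely illustrative, not from any standard taxonomy:

```python
# A minimal sketch of a bug taxonomy as a reusable checklist.
# Category names and bug entries are illustrative examples only.
BUG_TAXONOMY = {
    "boundary": ["off-by-one in loops", "empty input", "maximum field length"],
    "state": ["double submit", "stale cache after update"],
    "concurrency": ["race on shared counter", "deadlock on two locks"],
}

def checklist(categories):
    """Flatten the selected taxonomy categories into a test checklist."""
    items = []
    for cat in categories:
        for bug in BUG_TAXONOMY.get(cat, []):
            items.append(f"[{cat}] {bug}")
    return items
```

A new tester joining the team can call `checklist(["boundary", "state"])` for the module at hand and immediately inherit the team's accumulated experience, which is exactly the handover benefit described above.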
Business Process Automation is important for business continuity. It is not the case that organizations that don’t go for automation have no business processes in place or no business continuity plans. But if there is scope for automation that helps in enhancing your business, then why not go for it? In fact, with so many technological advancements, organizations are using robotics, artificial intelligence, and machine learning for scaling up automation. Obviously, there has to be a specific maturity level in the processes in place. Only then can you think of their automation. Maturity includes standard processes in place with benchmarking and plans to succeed. As a matter of fact, it requires consistency, patience, and knowledge to standardize business processes. It is not that you just establish a process and start thinking of its automation. First, you need to ascertain that the process has reached that level of maturity.
Since there is a tremendous increase in the data an organization has to handle, demanding a high volume of repetitive tasks, it is good to go for business process automation. Utilizing intelligent manpower for such activities is a waste of talent and resources. It will, in fact, impact the growth of not only the individual but also the organization. Logically, business and IT share a symbiotic relationship. IT has to help business grow with the help of the latest machines and information systems. At the same time, business has to ensure a continuous flow of funds for the upkeep and upgrade of existing machines, systems, and manpower. Neither of the two can afford to become stagnant and thus lose importance in the organizational ecosystem. In fact, the same applies to any two departments or functions in the same way. That is how business runs successfully.
Business Process Automation Is The Need Of The Hour
Modernization of IT systems is important to stay ahead in this world of technology. Organizations don’t hesitate to overhaul the complete setup if the need arises to that extent. It depends on how old the legacy systems are and how deep the business dependencies on them run. A continuous review of existing business processes is crucial.
A Dynamic File Management system comes into the picture where flexibility is in demand. In fact, it is every enterprise’s requirement. But think of the environment of an auditing or investigating firm where access and authorizations change on a regular basis depending on customers and the executives handling those customers. In fact, this kind of requirement is always there, in varying proportions, in most organizations. For all such organizations, this solution is one of the top priorities. All in all, it has to cater to four key functions in order to serve well. These four functions are Analyze, Move, Manage, and Explore. As we all know, data is the new asset for any organization. That is why no enterprise can dare to ignore this aspect. Most organizations believe in automating file workflow. It requires proper file analysis on various parameters.
File analysis parameters are the size and type of a file, its owner, access type, duration of access, and so on. All this is a part of Dynamic File Management. A concrete log mechanism has to be there in order to control any ambiguities in rules. Logs also come in handy for auditors and for forensic purposes. If automation to a larger extent is able to suffice the purpose, why not go for it? Automation in workflows is always beneficial in terms of increasing mobility, productivity, and accuracy. If the complete automation is accurately built, the automation part avoids manual intervention and the accuracy part brings an error-free mechanism. This brings a high level of consistency to the whole ecosystem. That, in fact, helps in raising the level of trust in the organization. As a result, it leads to a higher employee satisfaction level. Such companies achieve a higher success rate.
Dynamic File Management Helps In A Big Way
Business rules and roles & responsibilities help in defining the authorization and access levels of individuals. Accordingly, the file movement takes place in the system. In fact, an alert and escalation process is a useful add-on in a Dynamic File Management system. The system generates auto-alerts and escalates things to the right level of the hierarchy depending on the exception and its severity. It is always good to create meaningful rules for exceptions.
Data Storage Management is one of the top five priorities of any enterprise. Though ownership lies with centralized IT, that is only to the extent of the technical solution. The sanctity, usage, and usefulness have to rest with the respective departments or functions, which are the controlling agents of their data coming from any source. Obviously, data is increasing at a tremendous speed and so are the risks associated with it. Now all kinds of data theft are happening online after the world has become digital. The funny part is that many organizations that accumulate a huge pool of data in their data centers don’t use it to that extent. It impacts them in two ways. The cost of data management goes higher and creates a deficit, since no significant use of that data is happening. This is quite a sad state.
On the other hand, there are organizations that are taking full advantage of data and driving their business with the help of analytics tools in the best possible manner. In any case, data storage management requirements are increasing, demanding that it scale up for higher reliability and availability. Four prominent factors that ensure reliability are scale, distance, security, and availability. The size of an organization is immaterial here. Rather, it depends on the line of business and the severity of the data. So the volume of data definitely plays a major role. If there are multiple data centers, the distance between those is also an important consideration. In a way, file management becomes a sub-component of data management. Security will have a triangular aspect. The three corners of that security triangle would be global, local, and industry-specific. And it is important to cater to all three.
Data Storage Management Solutions Have To Be Secured
Automation is the key in today’s world of technology and digital business. It is important to create a strong shield around your Data Storage Management solution. The new digital economy very clearly identifies data as the most precious asset of any enterprise.
The New Digital Economy says data is the new currency. In fact, in today’s world, it is the top-valued asset of an enterprise. But most organizations don’t understand this and thus don’t take it seriously. They forget one important learning of business: anything relevant to your business that you ignore or don’t give proper value to becomes a powerful tool for your competitors. In an organization, most of the data remains in the custody of IT, who don’t know how to accrue value from it in business terms. Or even if they try to get something out of it, it is from IT’s perspective and not the business perspective. That is the reason the cost of data becomes a burden rather than accruing benefit. What could be the reason? Is it a lack of business engagement or involvement?
Or is it an indication of too much dependency on IT? It might also be plain ignorance. Whatever the case, the New Digital Economy is becoming a topmost priority for a business to drive it in the right direction. Logically, every business unit needs to have its own data management mechanism, dashboard, and set of intelligent tools. Going a step further, within a business unit, each of the key departments should have a similar kind of setup. The first step removes the dependency of a business unit on centralized IT. At the same time, the second removes the similar dependency of departments on business-unit IT. This can happen only if organizations follow these two steps. The way data is growing at a tremendous speed, it is becoming the need of the hour. Otherwise data will keep on accumulating and servers will become data dumping yards.
New Digital Economy Is The Key To Success
It is strange that data storage and upkeep are treated as IT’s requirement alone. But when something happens to data, like data loss, the respective departments start blaming IT. This relationship between IT and the respective departments or business units needs to be strengthened, right from the point of origin of the data, with a co-ownership model.
I recently had the opportunity to sit down with Cuong Le, Senior Vice President of Field Operations for Data Dynamics to discuss what has become a very hot topic – “digital transformation.” He shared with me his thoughts on the challenges being faced by those hoping to reap digital transformation’s numerous benefits, as well as strategies and technologies that are successful in overcoming them. We then went on to discuss the recent introduction of the Data Dynamics StorageX 8.0 dynamic file management platform.
Q: Digital transformation is topping virtually every business and IT professional’s priority list. And while the benefits are numerous, the obstacles are abundant as well. What are the primary challenges your customers are facing today?
C.L.: Organizations must adapt to meet the requirements of the new digital economy where data is the most prized asset. Unfortunately, the management of the underlying storage for this most valuable of assets still remains with centralized IT, who look at data from an infrastructure perspective, as a cost to manage, rather than the asset that it is.
The challenge is to shift the management of data storage from centralized IT to individual business units. Agile management of data storage requires a modern file management solution that can scale to today’s storage needs and can address the challenge of distributed, heterogeneous storage.
Data growth is compounded by distributed data repositories that create technical challenges and place a large burden on the IT staff.
Proprietary storage resources create technology barriers. They prevent or obstruct file movement, and depending on where data is relocated, new proprietary technology creates lock-in once again, continuing the cycle of vendor lock-in.
For organizations who successfully make this transition, it will lead to a more focused approach to each application’s and user’s need for data and their ability to access it when and where they want it.
Q: Do these challenges differ across geographies, vertical markets, and/or size of organization? Likewise, are you seeing differences in challenges between business and governmental agencies?
C.L.: Three factors that pressure data storage management are scale, distance, and security – all of which vary depending on organization size, the distance between data centers, and industry-specific security issues. For a large, multi-national corporation, successful data storage management requires sophisticated file management solutions that can scale to petabytes of data and perform with high reliability.
Automated file management workflows enabled by APIs are key when managing petabytes of data. GUI storage management consoles are sufficient for managing tens of terabytes of data, but when the job scales to hundreds of terabytes or petabytes, a GUI console is not practical and is prone to operator error. APIs empower DevOps, IT Service Management, and line-of-business organizations to implement file data management practices in their applications. This ensures file data is managed the way they need it to be, and also that it is available to provide the highest value to the business. The most efficient data management is always done by those who know and own the data.
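To make the idea of an API-driven workflow concrete, here is a small sketch of a policy engine of the kind described: given file metadata and a migration policy, it plans which files to move. All names here are hypothetical illustrations, not the StorageX API:

```python
# Illustrative sketch only: a policy-driven file workflow of the kind the
# interview describes. Field and policy names are hypothetical, not the
# StorageX API.
def run_policy(files, policy):
    """Apply a migration policy to file metadata records and return the
    planned moves, without touching the data path itself."""
    plan = []
    for f in files:
        # Move only large files that have not been accessed recently.
        if (f["size"] >= policy["min_size"]
                and f["last_access_days"] >= policy["min_age_days"]):
            plan.append({"path": f["path"], "target": policy["target"]})
    return plan
```

The point of such an API-level function, as opposed to a GUI console, is that a DevOps pipeline can call it for millions of records, review the returned plan, and only then execute the moves.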
C.L.: Data security is an important issue for all industries, in particular for government, financial, and healthcare organizations, which are popular targets for hackers. File permissions are a vital part of an organization’s security; therefore, when moving files and restructuring file systems, a modern file management solution is needed that can ensure that all file permissions remain intact before, during, and after the migration. Business is also very dynamic; with mergers, acquisitions, and divestitures, organizations need the ability to change and establish the right file permissions as needed.
Q: What strategies/technologies are topping the list as possible solutions?
C.L.: A dynamic file management solution tops the list. A dynamic file management solution is responsible for three key functions: analyze, move, and manage.
Q: You recently launched StorageX 8 – could you tell us a bit about it and how it addresses the aforementioned challenges?
C.L.: The StorageX 8.0 dynamic file management platform empowers you to analyze, move, manage and modernize your data where you need it and when you need it, from your data center to the cloud. StorageX is built using industry standards and operates completely out of the data path, freeing your data from technology lock-in, complexity and risk. Enterprises who are consolidating data centers and modernizing legacy applications rely on StorageX, the most trusted name in file management.
StorageX’s powerful analytics empower you to deeply explore your managed storage resources based on name, location, creation, last access, attributes, and SID. Analytics directly feed automated policy workflows for Phased Migration, Archival Migration, file-to-object conversion, and more. Using StorageX, you are in control of your data. You can move files confidently, file-to-object or file-to-file, and place your data in the location you want to optimize your business strategy. It empowers its users to:
The overall scope of Endgame Testing varies from product to product, but its sole purpose is to perform the testing that is not part of sprint testing in Agile. This would include complete functional, load and performance, security, usability, and integrity testing. Basically, it covers testing that involves the complete product rather than its parts or sprints. As the book Agile Testing: A Practical Guide for Testers and Agile Teams says, in this case you need to concentrate on overall product functionality. In fact, it should “confirm that the application is working correctly, give you added confidence in the product, and provide information for the next iteration or release.” Logically, including exploratory testing as part of Endgame Testing makes a lot of sense. It, as a matter of fact, helps in identifying defects in a very effective manner. This testing happens just before the release.
Here the user’s perspective plays a major role. The emphasis is on a smooth flow between the various components that different teams develop as part of an Agile project. Integration, user experience, and logical flow are the key points of focus in this testing. The purpose is to find the defects or flaws in the product that are not possible to identify inside agile teams. And it includes flawless flow across the product. In the Waterfall approach, there is ample scope for testing the product as a whole. But the same is missing in Agile, where there is no such test group. Rather, in Agile, each team handles testing and development of only the component it owns. Even the forward or backward integration testing is partial, done with the help of emulators or mocks. More or less, every component’s testing happens in isolation.
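The distinction can be sketched in code. Within a sprint, each of the two (illustrative) components below would be tested alone with mocks; an endgame-style check exercises the real flow across them, which is where integration defects surface:

```python
# Two toy components, illustrative of pieces built by separate Agile teams.
def reserve_stock(inventory, item, qty):
    """Inventory component: reserve stock or fail."""
    if inventory.get(item, 0) < qty:
        raise ValueError("out of stock")
    inventory[item] -= qty
    return True

def checkout(inventory, cart):
    """Checkout component: the end-to-end flow a user actually exercises.
    An endgame test runs this against the real inventory component,
    not a mock of it."""
    for item, qty in cart.items():
        reserve_stock(inventory, item, qty)
    return {"status": "confirmed", "items": dict(cart)}
```

A sprint test would verify `reserve_stock` and `checkout` separately with stand-ins; the endgame test runs the whole `checkout` path, including the out-of-stock failure a user could hit.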
Endgame Testing is the last step before production
Overall, the goal of endgame testing is to explore and evaluate a product in terms of the value it provides to the customer.
Security Testing is not a new phenomenon, but its depth is increasing compellingly, because a security flaw in an app becomes an invitation for hackers. If we follow proper security testing steps, everything can be taken care of to a larger extent. In fact, it is essential to understand these steps thoroughly.
A concrete plan plays a major role in the success of any project. Without planning, execution is always prone to failure. You can form a strategy only if you have a solid plan in place. Especially in the case of security testing, the plan has to be exceptionally good in terms of identifying all the vulnerable areas to look into. Rather, a scenario-wise plan will be a better proposition. Actually, the flow of business logic goes into coding. After coding (or during coding) you need to spend some time with developers to get the crux of the flow of the same logic in the application. That means the business flow now becomes application flow. In addition to helping in mapping the two, this also helps in the identification of logical vulnerabilities. Though automated tools help in testing, vulnerabilities like authorization bypass should still be taken care of in manual testing.
Threat modeling is the next of the Security Testing Steps to go for. If you design a model of high-level threats to the application, it helps a lot in creating proper test cases. Identification of development components like the coding language, technology stacks, technology platforms, etc. is also part of the same step. With the help of historical data from other projects, you can ascertain the pros and cons of each of these components.
Security Testing Steps Provide Guideline
Selection of right testing tools is critical. Open source tools like Zed Attack Proxy and Nmap are good in that zone.
Don’t perform testing just for the sake of it. These Security Testing Steps are just a guideline. Relying completely on automation in testing is another weakness. Hackers would be happy if you don’t apply your mind in customizing it and taking a step ahead of standard style of testing.
Security is the key driver in all Security Testing Steps. Don’t ignore SAST (Static Application Security Testing) and DAST (Dynamic Application Security Testing).
An increasing presence on the internet demands a higher level of security. In the same way, enterprises are moving to the cloud to host their valuable data and applications. All this attracts potential risks, especially in terms of viruses, ransomware, and malware. In fact, hackers only need a small security flaw to penetrate your servers, thus causing big harm to your data and applications. Not only that; this, in turn, also impacts business continuity and reputation. In addition, there are huge financial losses due to heavy ransom demands for unlocking or decrypting your data. Any vulnerability in code can produce leakages and security gaps. There are many leakage possibilities that you need to think of while writing code. As a matter of fact, testing has to be the strongest area in the whole development cycle. You need to find the best tools, methodologies, and skills to tackle that.
The most common gap in coding causing a security flaw is Hidden Field Manipulation. This is most prominent in e-commerce portals. You need to adopt extra protection for an e-commerce website due to the kind of transactions it handles. Recently there was a case reporting a billion in losses during a month due to amounts getting debited from the company account instead of customers’ accounts for purchase transactions. Whether it was due to a flaw in code or an intentional move by an employee is yet to be ascertained. In Hidden Field Manipulation, applications encapsulate some hidden fields within web pages. Due to immature handling of coding standards, some of these fields carrying highly crucial information might land a company in big trouble.
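The standard defense is simple: never let a hidden field decide anything security-critical. A server-side sketch (the catalog, SKUs, and prices in cents are illustrative) recomputes the price from its own data and ignores whatever price the form submitted:

```python
# Sketch: never trust a hidden form field for anything security-critical.
# Prices are held in integer cents to avoid float rounding; catalog
# entries are illustrative.
CATALOG = {"sku-001": 4999, "sku-002": 999}

def charge_amount(form):
    """Return the amount to charge (in cents) from server-side data only."""
    sku = form["sku"]
    if sku not in CATALOG:
        raise ValueError("unknown item")
    # form may contain a hidden "price" field tampered with client-side;
    # it is deliberately ignored here.
    return CATALOG[sku] * int(form["qty"])
```

Even if an attacker edits the hidden `price` field in the page to one cent before submitting, the charge is computed from `CATALOG`, so the manipulation has no effect.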
Security Flaws Can Land An Enterprise Into Big Trouble
The second most common factor responsible for security flaws due to code vulnerability is Cross-site Scripting. This, in fact, is more prone to happen because of careless coding. It becomes a golden gate for hackers, letting them steal sessions or inject malicious content, thus defacing a webpage with vulnerable content or redirecting users to malicious sites.
The third most common loophole in coding is Cross-site Request Forgery. This kind of security flaw happens due to the negligence of coders while coding. If a coder doesn’t understand the value of random tokens and reauthentication on a critical data transaction page, it could cause havoc. In fact, if these two factors are missing, an attacker becomes free to perform transactions on behalf of users. Depending on the accessibility rights of a user, the intruder can cause any volume of damage to an organization.
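The random-token defense mentioned above can be sketched with Python's standard library: issue a fresh random token into the session, embed it in the form, and verify it in constant time before performing the critical transaction. The session dict here stands in for whatever session store a real framework provides:

```python
import secrets
import hmac

def issue_csrf_token(session):
    """Attach a fresh random token to the session and return it so the
    form can embed it as a (non-guessable) field."""
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token

def verify_csrf_token(session, submitted):
    """Allow the transaction only if the submitted token matches the
    session's token, compared in constant time to avoid timing leaks."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)
```

A forged cross-site request cannot know the per-session token, so `verify_csrf_token` rejects it; combining this with reauthentication on the most critical pages covers both factors the paragraph names.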
An increasing presence on the web is also exposing our applications and data to higher risk. These risks include cyber attacks that are now increasingly creating panic for enterprises. That is why security testing for all kinds of web applications is very important. In fact, important to an extent that there should be no scope for an iota of compromise in that. It requires not only a suitable taskforce and tools but also exhaustive security testing plans. This planning should start right from the beginning of a project. The overall purpose is to ensure not only a secure application launch but also a risk-free user experience. Achieving this is not a difficult task. But you need to have some fundamental things in place before you start, because building a strong security fence within the code gives you a good amount of confidence to go ahead. Adopting the right path is important.
Petabytes of data transactions happening across the internet through a large number of web applications call for a tight security testing methodology. An end user will always expect a hassle-free experience in this regard. Monitoring every transaction is an impossible task. Hence, a foolproof exception handling mechanism needs to be in place, with the respective provisions for automatic alerts and alarms. There is an overall atmosphere of panic due to continuous cyber attacks. A higher number of virus attack incidents also indicates an increasing threat in our digital space. This, in turn, demands adopting the right tools, people, and methodologies to tackle these issues. In fact, there are a number of reasons to increase the intensity of security testing for web applications. The key is realization. Without realization, nobody understands the gravity of the matter. It is not necessary to wait for an incident.
Security Testing Should Be a Top Agenda
Enterprises need to act smart in creating a comprehensive security testing agenda rather than launching a web application without bothering about it. After all, creating a secure customer experience is the call of the day.
Three industry experts in their respective fields join hands to adopt open edge computing IoT. The aim is to adopt innovative practices. Huawei joins hands with Infosys and Wapwag at Huawei Connect 2017 for this mission. Infosys is a leading provider of Information Technology and business consulting services in the global arena. Wapwag is a provider of smart water affair solutions. Huawei, as we know, is a leading global ICT (information and communications technology) solutions provider. The ultimate goal is to enrich lives and enhance efficiency by connecting the world in a better way. Together they release a series of important innovative practices that include smart robots/machine tools and smart water affairs based on open edge computing IoT. This, in turn, will definitely help in accelerating the implementation of industry applications. Industries like manufacturing and water management are developing towards intelligent IoT because of digital transformation.
Despite all technological advancements, the network front still encounters a number of challenges. Those include managing heterogeneous connections, data analysis, data processing, and device management. That is where the need for smart industrial robot or machine tool solutions arises. For this, Huawei and Infosys have been working together for some time to find solutions using open edge computing IoT. The solution also supports interconnection with industrial robots or machine tools from different suppliers. In addition, it rapidly adapts to intelligent data processing requirements in various industrial manufacturing scenarios. Obviously, this helps manufacturers in a great way to proactively predict faults, thereby improving the maintenance efficiency of robots and machine tools. In addition, Huawei and Wapwag jointly launch the innovative smart water affairs solution at Huawei Connect 2017. This solution works on the basis of edge computing IoT and has a high level of adaptability under different working conditions.
Huawei Connect 2017 witnesses 116 partnerships
The solution presented at Huawei Connect 2017 works perfectly even when there are many types of interfaces and protocols. Thus, it meets the demand of intelligent data processing at the edge in different water management scenarios. In addition, it realizes intelligent connections of water supply devices. It also manages scenarios where old and new water supply devices coexist, or where you deploy devices from multiple plants. As of now, Huawei edge computing IoT has plenty of use cases in fields like elevator connection networks, smart manufacturing, power IoT, smart cities, smart water affairs, and lighting IoT. As a matter of fact, Huawei recently formed 116 partnerships in the industry in order to initiate the Edge Computing Consortium (ECC). In addition, these partnerships also aim to promote rapid innovation and accelerate digital transformation through open architecture.
Already more than 100 countries and regions across the globe are using Huawei enterprise network products and solutions. To accelerate enterprise digital transformation there is an intense need of smart connection of everything. That has been the primary aim of Huawei Connect 2017. As a matter of fact, as of now, 197 of the Fortune 500 and 45 of the top 100 have selected Huawei as their digital transformation partner.
Fragkiskos Filippaios is the co-author of the paper “Social Career Management: Social Media and Employability Skills Gap”. He is also associate dean for graduate studies at the University of Kent. Social media education, according to him, has become an integral part of education. Higher education, especially, has to take a lead in this. “Universities and colleges fail to appreciate the need to include the use of online social networks in the curriculum,” he says. “Therefore, there is an urgent need to equip graduates and future professionals with those tools.” Practically, if we see, social media is penetrating (or rather has already penetrated) every level of society, in personal and professional capacities, substantially. In fact, businesses that function like social media platforms are getting successful. Similarly, business apps that work like social media platforms get more appreciation from their users, especially millennials, who form a substantial share of any organization.
Gradually a time will come when every employee at a workplace will contribute equally to the organization’s social media management. That is why social media education becomes more important and essential. As a matter of fact, many enterprises are already taking it quite seriously. There is a paradigm shift from assigning social media responsibilities to a particular team or person to recognizing that it has become the responsibility of all employees. Otherwise too, it makes good sense. You can’t keep your employees away from social media platforms. If that is the case, let them also be stakeholders in the responsibilities. As a matter of fact, a study published by Altimeter two years back said that 47 percent of organizations provide social business training for their staff. In 2013 it was 45 percent.
You Can Get Online Social Media Education Easily
That is the reason for the existence of social media training and certification sites offering online courses, like SocialB, Splash Media U, Expert Rating, Hootsuite Academy, Market Motive, and Mediabistro. In fact, a Social Media Education certificate from these and similar online academies carries substantial weightage.
There is a lot happening around new technologies like IoT, IIoT, Big Data, Cloud Migration, Cloud Security, Analytics, and so on. For any technology to mark success, its adoption and mass engagement are important. The same is true for IoT Products too. While we talk about smart homes, smart buildings, smart cities, smart transport, etc., IoT, perhaps, is the basis of all this. But before arriving at a stage where you really start working on smart technologies, a basic framework has to be in place. In this kind of project, mass engagement and mass acceptance are among the most important winning criteria. And how do you achieve that? Obviously, design something that appeals to the masses. And why will something appeal to the masses? When they find some benefit in that technology. Because we know the power of technology. Also, we all crave betterment in life.
IoT Products can bring revolution to life. But there are certain guidelines for that. Firstly, it is about delivery. What are you going to deliver to the customer or user? What additional services, functionalities, and capabilities will there be, and how will they enhance the value of customer adoption? Secondly, any IoT project requires some seamless outcomes. For this, you need to develop or adopt suitable strategies right from data gathering to implementation. Thirdly, whenever there is a good level of adoption and engagement, you need to materialize it to study user behavior, level of satisfaction, risks, etc., and then accordingly enhance your product. User experience is the key thing in the whole cycle. Something has to be there that is beyond customer expectation. That something extra could add huge value to the customer experience. These are some basic things to keep in mind while developing the product.
IoT Products need to have a direct impact on consumer’s life
Above all, it is always wise to keep looking into the future. Whatever IoT Products we talk about, think of, develop, or deploy, they have to have a substantial impact on the consumer’s life.
Social media presence is becoming as important for an enterprise as its presence in the physical world. Any thread related to the organization on any of the social media platforms – good or bad – can’t be ignored. It needs to be addressed in a timely manner. In fact, in an almost real-time environment. Paradigms are changing fast. If you ignore social media, networking, and analytics, it may become one of the key reasons for a downfall in business revenue and reputation. As a matter of fact, your ignorance might become bliss for your competitors. Social collaboration and social networking are becoming two important factors for employees, external stakeholders, and customer connect. Gone are the days when Facebook, Twitter, LinkedIn, etc. were not permitted during office hours. In fact, now social media presence matters a lot to any organization. Human Resources, CXOs, and Marketing & Sales have to be an integral part of it.
A mere social media presence, however, will not serve the purpose. Many enterprises have an account on Twitter, for instance, but are not at all active there. You need to understand the power of social collaboration and social analytics. As a matter of fact, these can give you ample information about your customers and their likes and dislikes regarding your products or services. In fact, social media has overtaken the contact number and email address of an organization: as a customer, if you want to start a communication thread with an organization, social media is the first choice. You find organizations replying faster to social media posts than to emails and phone calls. That is the power of social media. In fact, social media teams now act as a strong middle layer between the external world and an organization.
Social Media Presence Counts A Lot
Taking advantage of your social media presence, you need to create an effective data pool.
Shanghai International Port Group is getting its new information management platform with the help of Huawei and Accenture. This comes through the existing Huawei-Accenture strategic agreement, under which Huawei will provide its hardware and software expertise and services. That will include Huawei’s hyper-converged infrastructure (HCI) FusionCube, the OpenStack-based cloud operating system FusionSphere, integration servers, software-defined storage, and network hardware. This is probably the latest cooperative achievement in the port information field. On the other hand, Accenture will help Shanghai International Port (Group) Co., Ltd (SIPG) to establish a comprehensive information management platform that will run on Huawei’s FusionCloud solution. SIPG is a huge business empire involved in port operations and related businesses like port handling, stevedoring services, warehousing, logistics, and real estate development services. The group aims to become one of the global top-tier providers in this industry. To achieve this, SIPG is addressing several challenges on various fronts.
These challenges include capital scarcity, the streamlining of its management structure, ensuring service quality, and controlling operating costs. The overall design of the information management system will be a new landmark for SIPG. In addition to the above, Huawei and Accenture are also working together on the implementation of SIPG’s engineering system, human resources system, and master data management system. Not only that, they are also providing a private cloud platform. As a matter of fact, the implementation and optimization of their business intelligence (BI) system is also part of this project.
New Information Management Platform
The deployment of the new information management platform is one of their most challenging projects. Matt Ma, Huawei’s President of IT Cloud Computing & Big Data Platform Product Line, says, “Cloud has accelerated the digital journey for the majority of enterprises, and the integration of Huawei’s FusionCloud with OpenStack ensures the openness of the cloud platform. This will enable SIPG to establish and manage the private cloud, public cloud, and hybrid cloud, resulting in the provision of more innovative and valuable services to their customers.”
Woolf Huang, Managing Director of Accenture’s Products operating group in Greater China says, “I am pleased that SIPG has chosen Accenture to help design and implement a customized private cloud solution with highly industrialized applications to help refine its port operation and improve efficiencies. It is critical that we have the right technology and skills in place to make this project a success, and together with Huawei we are confident that we will be able to help SIPG make its digital transformation journey a success.”
In fact, the development of the information management platform brings together one of the best possible combinations: Huawei’s industry-leading software and hardware portfolio and Accenture’s expertise in consulting, systems integration, and outsourcing.
This post is in continuation of my previous post. Even the best-performing CRM app may come to an end. Obviously, there are reasons for it. In fact, you start getting enough signals from various stakeholders like customers, sales, finance, or business. These signals will highlight what the existing app fails to provide that is very much possible otherwise. Worse are the times when a key business app becomes a shadow app in the organization for such reasons. And then all kinds of support mechanisms evolve, like spreadsheets or manual processes, to keep living with such apps forcefully while eventually losing ground in terms of effectiveness, productivity, and usability. It is always better to wake up in time and take appropriate action than to let it impact the business in terms of reputation and revenue. Let us discuss below the key factors that render an app redundant and thus demand quick action.
- Every execution needs resources. It is important to assess whether the time you spend pumping in data is worth the outputs you get. Time is crucial in business. Getting information in time and taking timely actions is critical. Automation and capturing data at its point of origin are two important factors in that respect.
- Priorities keep changing with the changing scenarios of the business. Any change in leadership brings a new philosophy. Sometimes, it is for the betterment of the organization. While in other cases, it is to demonstrate authority which might be harmful.
- A change in other business apps like ERP might cause integration issues with the existing CRM app. But it is important to resolve these, because the business can’t survive without integration.
- Expansion in business might create a need for a new and powerful CRM app which the existing one is not able to cater to.
Continuous assessment of existing CRM app is important
- Legacy experts of the existing app might leave the organization, leaving the whole mechanism in the doldrums, especially when you can’t find such experts in the market because the technology is older.
Any CRM system must add substantial value to the business. It should have good data quality, effective integration, and ease of use across the organization. Mobility is another factor that is quite important these days. Employees want to be productive on the move, and that is only possible if your business apps provide that provision. Nobody now wants to wait until they reach their desk to perform an action on a desktop machine. You need to equip your employees with mobile devices and access to apps so they can clear pending items as and when they appear. Of course, security is another angle that is quite significant to look into. Any business app like CRM can become redundant with time. Technology is changing quite fast. You need to assess business apps frequently to ascertain whether you need to scrap or refresh them.
Scrapping a CRM system means you throw out the existing one and create or implement a new one in its place. Refreshing means you build some new features into the existing app to make it more productive and appealing. We all know that no application is immortal. Every app, in fact, has a shelf life. The value of an app deteriorates with time and changing technologies. The same is the case with any CRM system. When you first deploy it, it is among the best available in the market. But with the changing ecosystem, the whole gamut starts changing. Change over time brings new blood into the environment. Stakeholders keep raising new demands regarding automation and mobility that you need to take care of. And if you don’t take care of them soon enough, they will start impacting your business.
CRM System goes obsolete with time
I will continue the same discussion in my next post by taking up the crucial points that create the need for revamping your existing CRM system or, for that matter, any business application.
I recently had the opportunity to sit down with DH2i’s Director of Business Development, Connor Cox, to discuss prevailing trends in the industry around what continues to be a very hot topic – digital business transformation.
There is a lot of talk in the industry about digital business transformation. What trends are you seeing, what are you hearing from customers?
CC: The most significant trend we’ve witnessed in the industry and also had our customer-base attest to is growing management complexity. This is a result of the rapid growth in technology solutions available—and the inevitable heterogeneity of modern IT environments.
Today, in our experience, most mid-level to enterprise-sized organizations’ IT environments consist of a huge mix of different OS and application versions. Many of the customers we work with are simultaneously maintaining Windows and Linux installations due to various business and application requirements.
What challenges are customers facing in transforming?
CC: The giant conglomerate of technologies any given organization has under management in its IT environment makes any sort of transformation a complex and labor-intensive process. This means customers are forced to commit huge amounts of time and resources to large migration projects for any sort of modernization or transformation they want to undertake.
Customers are also frequently faced with artificial limitations on clustering like OS and application versioning-standardization requirements. Working through these barriers often results in sprawling, inefficient environments in which customers aren’t able to get efficient utilization out of available IT infrastructure.
So, what’s the answer?
CC: The answer here is pretty simple. New Smart Availability technology allows you to bring your disparate technologies into one unified management framework for Windows and Linux instances or containers. The technology leverages standalone workloads for the most simple and flexible management experience. Built-in intelligent automation ensures that all these managed workloads remain compliant with SLAs and business requirements. So not only does this technology simplify overall management and modernization, it also proactively ensures peak performance for your workloads under management.
Can you explain the difference between high availability (HA) and smart availability? And, why should customers care?
CC: Smart Availability is a much more comprehensive solution than traditional high availability. Not only does it fulfill a larger scope of purposes, but it is totally geared towards holistic optimization of your IT environment. It destroys the paradigm that all high availability solutions are complex to manage, extremely expensive and only good for dealing with unplanned outages.
IT pros need to care about Smart Availability because it provides the means for organizations to achieve nearest-to-zero total downtime—planned and unplanned. The technology is “smart” because built-in, intelligent automation keeps Windows and Linux environments running at peak performance and efficiency through a proactive focus on maintaining best execution venues for all native and containerized workloads.
Smart Availability is so valuable to customers because it intelligently allocates instances and containers across any mix of physical, virtual and cloud hosts in an IT environment to make sure all SLAs and business requirements are met. It also provides unparalleled clustering flexibility by leveraging standalone instances and containers—culminating in easy workload portability from any host, to any host, anywhere, at any time. This high degree of agility and unified management pane for Windows and Linux makes IT modernization extremely easy and unlocks all sorts of other benefits such as cost savings and consolidation.
You just launched DxEnterprise version 17 software. It has been a solution embraced by Microsoft SQL Server and Oracle environments, and now you have added Docker container support. Can you tell us about the new solution and what you describe as “stateful containers?”
CC: Absolutely, we’re very excited about DxEnterprise v17 software and the huge innovations it is bringing to our product capabilities—support for Docker and Linux OS. This product is a multi-platform Smart Availability solution for Windows, Linux, and Docker that helps customers drastically simplify HA while saving money and consolidating their environments.
Stateful containers are any containers that contain an application instance that requires stored data to do its job. Typically state is stored in a database, cache, file, etc. In the scope of DxEnterprise v17, our greatest focus is on stateful, containerized RDBMS applications that make up the backbone for businesses. DxEnterprise Smart Availability technology allows us to manage these stateful, containerized workloads on Windows or Linux and enable easy failover across different OS versions or Linux distributions respectively—all while maintaining data persistence. In addition to data persistence, a huge strength of this technology is the ability to manage these stateful containers alongside non-containerized application instances from a unified management platform.
Anything else you feel IT professionals should know and/or be thinking about?
CC: Just as a general word of encouragement—if management complexity is really making life tough, don’t be afraid to look past convention. So many IT pros have been stuck in the same inefficient practices with the same inflexible technology for years, and they never cross the threshold of thinking about a different approach because what they’re doing is “How things have always been done.”
We are fortunate to be living in an era of IT where tons of new startups are flourishing and bringing groundbreaking technology to the market at a rapid pace. So don’t be afraid to evaluate new technology, because many of these new vendors are bringing forward technology that can positively impact your work in the IT industry in a huge way. I, for one, can guarantee you’d be surprised at what you might find.
Ransomware is one of the biggest threats to any business of any size. There is only one way to escape once you are in the trap of hackers, and that is to pay the ransom. In fact, depending on the level of the hacker, you never know whether the complete data is safe or not. What if the hacker replicates the whole data during this process? You will still be in a trap despite paying a heavy amount. That is where CryptoSafeGuard comes into the picture. BackupAssist has launched this ransomware protection tool for small and medium-sized businesses (SMBs). The new software protects backups from encryption by ransomware. That means, as the product claims, SMBs need never pay a ransom again. BackupAssist® is a frontrunner in automated Windows server backup and recovery software for small and medium enterprises (SMEs). This is the worldwide launch of CryptoSafeGuard™ ransomware protection.
CryptoSafeGuard™ ransomware protection is robust enough to protect an enterprise at a very affordable price. Basically, CryptoSafeGuard protects backups by stopping the backup of infected files, thus preventing the encryption of backups. That proves its strong capability against ransomware. It can complement existing anti-malware solutions, adding an extra layer of detection at the data level and, in turn, extra shielding around backups. As they call it: active, simple, and non-intrusive.
Basically, there are four key pillars to this product. The capabilities of CryptoSafeGuard are as below:
- Protects – It protects your backups from corruption originating from the BackupAssist computer.
- Detects – It continuously scans and detects the effects of ransomware activities in the source files under backup protection.
Key Ingredients Of Ransomware Protection Tool
- Responds – It automatically raises alerts to your administrator upon detection of crypto-corrupted files. These alerts can come via SMS and email.
- Preserves – It intelligently blocks future backup jobs from running thereby keeping the last-known good backup safe and intact.
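To make the detection idea concrete: one common heuristic for spotting crypto-corrupted files (an illustrative assumption here, not BackupAssist’s published method) is byte-entropy analysis, since encrypted content looks like random data while ordinary documents do not. A minimal Python sketch:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte: plain text sits around 4-5, encrypted data near 8."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Flag possibly crypto-corrupted content; the threshold is illustrative.
    Real tools combine several signals to avoid false positives on
    compressed archives, which are also high-entropy."""
    return shannon_entropy(data) >= threshold
```

A production detector would pair a heuristic like this with other signals, such as sudden file-extension changes or ransom-note patterns, before deciding to block a backup job.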
“As a public services organization supported by tax dollars, we are extremely cognizant and cautious in how our budget is allocated,” says Mike Luu, Information Services Director, City of Milpitas, California. “We didn’t have an ‘enterprise-sized’ budget to spend on ransomware protection. However, we would be negligent if we didn’t invest in protection against the type of malicious software that could block access to our computers and data, and therefore our city services, until a sum of money is paid. And, that sum can be large – too large for an organization like ours. This is exactly the position some of my colleagues have found themselves in – it was ugly – and it wasn’t anything I wanted even the remotest possibility of having to deal with here. BackupAssist’s CryptoSafeGuard will deliver the extra layer of protection organizations need at an affordable price.”
A 2016 Osterman Research report “Best Practices for Dealing with Phishing and Ransomware” offers some interesting statistics. It says 51% of those responding to a survey had been successfully infiltrated by ransomware, malware, or a hacker one-to-five times. “Ransomware is a very real and critical problem that every organization regardless of size must address through a variety of means, including good backup processes and technology – such as BackupAssist and its CryptoSafeGuard solution,” says Michael Osterman, President, Osterman Research.
Ransomware Protection Is Essential For An Enterprise
“Many businesses that are hit by ransomware have to pay ludicrous fees to restore operations – some are even forced to shut down. But, ransomware has one big weakness. If you’ve got clean backups, you can simply wipe the infected system, and restore your data – all without paying a cent in ransom. But, ransomware has gotten smarter – it knows this and now looks for ways to encrypt your backups too so you can’t use them,” says Troy Vertigan, Vice President of Sales and Marketing, BackupAssist. “That’s where CryptoSafeGuard comes in – actively protecting your backups – eliminating the threat of ransomware infection.”
CryptoSafeGuard is available September 1, 2017, and is included with BackupCare. A BackupCare subscription starts at US$106 for 12 months when you bundle it with a new purchase. In fact, you can request a free trial of the CryptoSafeGuard software from BackupAssist.
Cyber threats are increasing on a larger scale than ever. That calls for a higher level of cyber security, irrespective of the size of the business. Of course, data security is of prime importance for any business. Creating a security-positive culture in your organization becomes even more important while you are scaling up in terms of manpower and revenue. As far as cyber security is concerned, it is not the task of a single person. Of course, fencing is IT’s task in terms of creating a cyber security checklist. But if an employee breaches it for lack of education and awareness, the whole effort goes to waste. That is why it is important to make others understand what you do and why you do it. Thus, it becomes part of the organizational culture. In addition, there has to be a review mechanism in place.
What you do today may not suffice tomorrow. While the business scales up, technology also needs to scale up in parallel to cater to business needs. Well, if you don’t treat a cyber attack as a risk to your business, then there is something seriously wrong, because cyber crime is increasing at a tremendous speed. Hackers are looking for potential targets, which could be small, medium, or large businesses. Are you ready for a cyber attack? Let us have a look at the cyber security checklist and go for a quick self-audit, starting with the most critical points to take care of:
- Passwords tend to be the least important component in employees’ eyes. You need to ensure that every employee has a strong password, and you also need to educate them on why each point is important to adhere to. Ensure that there is a password policy in place. In fact, automate it in such a manner that your employees have to change their passwords every fortnight or so.
Key components of Cyber Security Checklist
- Two-factor authentication is another important factor. People don’t hesitate to share passwords, but when it comes to sharing mobile phones, 90% of them hesitate to do so. Hence, ensure there is a two-factor authentication mechanism in place. There are many ways to do so, like SMS authentication, OTP, thumbprint, retina, or hand scan. Which mechanism to adopt depends on the severity.
- Restrict device usage to an extent that minimizes the chances of malware attacks. Any malware attack on a device connected to the business environment might invite data theft.
- Backups have to be intact, complete, and authenticated. Cyber attacks can bring your business to a standstill in a fraction of a second. Ensure you have local and remote backups in, or near to, a real-time environment.
- Ensure that all devices in use have antivirus and antimalware in place, with the latest versions and updates.
- Be cautious in allocating admin roles. Have a complete audit trail of admin roles. The same is true for other critical roles and data access. Restrict rights to extract and distribute data.
- Make everyone aware of phishing emails and the kind of alert they need to raise.
- Encrypt sensitive data. Ensure that any dubious-looking data request, even from a familiar or higher-level email account, undergoes a high level of scrutiny to prove it genuine.
- Ensure all your data in the cloud and on the web has a protection shield.
These are the key ingredients of Cyber Security Checklist.
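To make the two-factor authentication point concrete, here is a minimal sketch of how time-based one-time passwords (TOTP, RFC 6238) – the mechanism behind most authenticator apps – can be derived, using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step=30, digits=6) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = struct.pack(">Q", timestamp // step)    # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The server and the employee’s authenticator app share the secret; both compute the same six digits for the current 30-second window, so a stolen password alone is not enough to log in.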
Data security is the biggest concern for any business or enterprise. Regardless of the size of a business, it is of utmost importance. But how many businesses understand this? In fact, how many businesses understand that data security has a lot to do with organizational culture? Hence, building a security-positive culture is a critical task. As a matter of fact, keeping it a high-priority task is not a bad idea. Rather, there needs to be a constant focus on it. Who performs it, and how, is management’s concern; but the first thing is to understand its gravity and intensity. A business needs to ensure that all apps and servers are secure enough at any point in time. Obviously, as you scale up your business, the existing measures might fall short of providing ample security. That is why there is a need for a review mechanism.
When we talk of a security-positive culture, it is important to understand that the roots need to be very strong in this matter. Of course, there is nothing like a hundred percent secure system. But there has to be a mechanism to find the gaps in the existing system on a regular basis. That is what defines the upgrading of the data security mechanism. There is always a middle way to it. The right balance is a must, for which there has to be a right assessment system. Just as we identify bugs in software and then categorize them on the basis of their severity, the same needs to be done here. Sometimes managers and developers simply go quiet and act as per directions from the business. That is why the business has to be sensitive about data security and hence about a security-positive culture.
Security Positive Culture Is Critical As You Grow
Actually, security awareness is a two-way process. While the business obviously has to understand the gravity of the matter, it is the technology department that has to educate the business on the HOW part. Ownership is important. And more important is clarity of ownership.
Imagine a startup scaling up with a substantial gain of users in its online community. Of course, the business has a business app in place. On a daily basis, there is an increase in its user base. New users are registering regularly, without considering the presence or absence of data security measures in the app. What this means is that there is a constant increase in app downloads on end users’ mobile devices. Meanwhile, the startup knows that its database is not as secure as it should be. Thus the whole of the user information is at risk. The reason is cost. The startup knows there is a cost to embedding security features and making its app and database secure. But needing funds for scaling up, it compromises on spending money to make its system secure.
The app is unique and popular, hence attracting a large user base, but the users are not aware of the vulnerabilities and risks involved in using it. It is a sort of blind game. Nobody at the startup’s end gives high importance to data security. That is what is happening in today’s world. Eight out of ten startups give the least weightage to investing in data security and thus tend to fail sooner or later. After all, customer trust and loyalty don’t come for free. There are many things that a customer takes for granted, and the security of customer data is one of those. The moment a customer comes to know there is a breach of that trust, they back out. As a matter of fact, social media plays a major role nowadays. It takes no time for things to go viral, especially things like this.
Data Security Is A Point Of Customer Trust
Online threats are scaling up at a faster pace. You never know who is eager to steal and exploit your business data. Data security, hence, is of utmost importance. And as a business, not taking care of it at all is a crime against your customers.
The retail sector across the globe is banking more on e-commerce. That means retail becomes e-retail. The impact is so high that it is causing a substantial shift in the US retail segment. In 2016 and 2017, a number of reputed brands closed their physical stores. As a matter of fact, any retail business without the backing of e-retail is prone to this kind of incident. In fact, the retail business in Asian markets has been able to bounce back only because of its increasing dependence on e-retail. Rather, the latter’s growth is redefining the success of the former. Marc Woo, Head of E-commerce at Google, says APAC accounts for more than 40% of the global e-retail business. Mainly China, Australia, South Korea, Japan, and India are dominating this segment in the region. In the same manner, Southeast Asia will be the next leader in the region.
In 2015, Google conducted a study jointly with Temasek, titled “e-conomy SEA: Unlocking the $200 billion digital opportunities in Southeast Asia (SEA)”. The study predicts that the internet economy in the SEA region will reach $200 billion by 2025. In fact, e-commerce alone will account for $88 billion by the end of the same period. That is phenomenal. As a matter of fact, e-retail will outpace physical retail by a wide margin: almost a 32% 10-year CAGR vis-à-vis 7%. It has a realistic potential of reaching a $120 billion market size. But there are certain prerequisites to achieving these mighty targets and estimates. The challenges Southeast Asia faces in becoming a $200 billion market are no less daunting. These include logistics, infrastructure, last-mile delivery, and automation.
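The arithmetic behind these projections is simple compound growth. A quick sketch (the ~$5.5 billion base figure is back-calculated here purely for illustration; the study states only the end figures):

```python
def project(base_billions: float, cagr: float, years: int) -> float:
    """Value after `years` of compound annual growth at rate `cagr`."""
    return base_billions * (1 + cagr) ** years

# At a 32% CAGR, a ~$5.5B e-commerce base grows roughly 16-fold in a
# decade, landing near the study's $88B figure; at 7%, the same base
# would not even double over the same period.
e_retail_2025 = project(5.5, 0.32, 10)
physical_2025 = project(5.5, 0.07, 10)
```

That 16x-versus-2x gap is what "outpacing physical retail by a wide margin" means in concrete numbers.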
E-commerce is the key to success
Barring Singapore, all the other countries need to improve a lot. Unless they pick up the pace, it will be a difficult target to achieve. Logically, a high stake in the digital economy and electronic transactions is the key to success.
Enterprise agility is a direct derivative of business process management, or BPM. The focus of BPM is shifting from business revenues to the customer. It is customer focus that is now creating the need for BPM applications in business. In earlier days, it was the requirement of reducing costs in business processes that drove the need to buy a BPM application. But there has been a complete change in the requirement. To facilitate the customer, businesses are deploying BPM to optimize their business processes and stay more customer-centric. So much so that businesses now set quality and delivery standards according to customer requirements and not business comfort. Rather, the comfort factor is directly proportional to the customer’s: it comes only when your customer confirms that the situation is under control and is thus comfortable.
Now, if we look at business process management and its relation to enterprise agility, many scenarios come into the picture. It is interesting to understand why we deploy business process management. The sole aim is to align business goals and processes with the evolving business. And this becomes, in fact, a continuous process, because business evolution is a never-ending exercise. In that regard, a BPM application assists businesses in defining the steps to perform a business activity. Once this is clear, the next step is to map these definitions to your existing business processes. Then you need to take care of the gaps. This will help in streamlining the existing process, and thus the improvement will reach the next level. The same journey continues thereafter.
Enterprise Agility Is A Direct Derivative of BPM
As a matter of fact, business agility, or as we say enterprise agility, is becoming synonymous with business process management. Even a small increase in BPM maturity brings an exponential increase in enterprise agility.
This is the last post in the series of interactions with Phil Trainor from Ixia. In this post, he talks about cyber security strategy. In the previous posts, we have covered Ixia’s solutions, edge security, inline security, the importance of network monitoring, and the network security landscape. So when we put this question to Phil – what should be the key components of a cyber security strategy for enterprises? – his answer comes as below:
Organizations need to constantly monitor, test, and shift security tactics to keep ahead of attackers in the fast-paced threat landscape we all deal with today. This is especially important as new cloud services and increased IoT devices are routinely being introduced. To do this effectively, organizations must start by studying their evolving attack surface and ensure they have the proper security expansion measures in place. Simple but effective testing and operational visibility can go a long way to improving security.
Cyber Security Strategy Framework
Some of the key components of Cyber Security Strategy for Enterprises are:
What are the key highlights of Ixia’s solutions? Ixia delivers a powerful combination of innovative solutions and trusted insight to support network and security products, from concept to operation. Whether preparing for a product launch, deploying an application, or managing a product in operation, they offer an extensive array of solutions in testing, visibility, and security – all in one place.
Ixia test solutions provide an end-to-end approach for organizations to test devices and systems prior to deployment and assess the performance of networks and data centers after upgrades or changes. To verify new service implementation, new device insertion, or network expansion, Ixia test solutions help organizations perform extensive pre-deployment testing to ensure current network functions are not compromised. This testing must be high capacity and must simulate network and application oversubscription in order to stress network upgrades to their limits.
Ixia security solutions allow organizations to assess network security and resiliency by testing and validating network and security devices with real-world application traffic and attacks. Using these solutions, organizations can perform assessments before production deployment and establish ongoing best practices that harden security by assessing individual devices, networks, and data centers. In operation, their solutions monitor traffic—clear and encrypted—to keep malware out, enable security tools to be more efficient by filtering out known bad traffic, and ensure security is resilient and highly available.
Ixia visibility solutions are uniquely positioned to help organizations manage and monitor change in their networks. Ixia provides 100% access without dropping packets, as well as visibility intelligence, load balancing at line rates, and context knowledge to serve the right data to the right tool. They have a complete visibility portfolio on the market, allowing their customers to build a visibility architecture that best fits their network needs today and in the future.
Ixia’s wireless and IoT test solutions address the complex challenges mobile operators face in rolling out high quality, differentiated services. Mobile operators can use their award-winning LTE and Wi-Fi test systems and services to subject devices and configurations to high stress, high-scale conditions and a wide mix of voice, video, and data applications. Operators can evaluate the subscriber experience in the face of mobility, system overload, and even device failure on a large-city scale. And with IoT test solutions, they can ensure that their Wi-Fi implementations are robust, cause no interference, and operate as specified.
Phil Trainor is a master of network security, inline security, network visibility, and network monitoring. In this post, he explains why Edge Security is not enough. We have been talking to him on these topics in the previous three posts. In the first post, we talk to him about the changing paradigm of the network security framework. How are increasing cyber threats responsible for this? How, on one hand, is dependence on the internet increasing at every step of business? And how, on the other hand, is it increasing risks and threats in the shape of malware, ransomware, and other kinds of cyber attacks?
In the second post, Phil talks about network visibility and network security. The greater the visibility, the easier it becomes to safeguard your network. Of course, you need to deploy the right tools and strategies for that. Going further, in the third post he talks about Inline Security.
Why is mere Edge Security not enough for any network? This was the question to Phil, and here comes his reply. Network security is a critical concern for enterprises, government agencies, and organizations of all sizes. Today’s advanced threats demand a methodical approach to network security. In many industries, enhanced security is not optional. Installing traditional antivirus and firewall software is not going to be enough to combat radical intrusion mechanisms. Even after being discovered and publicly disclosed, vulnerabilities can remain unpatched for months or even a full year, exposing an organization to even simple attacks.
Edge Security Is Not Sufficient
According to research reports, it takes nearly 60-70 days for an average company to fix a vulnerability. This gives attackers plenty of time to gain access to a corporate network. Even a mid-sized company has to fight thousands of vulnerabilities on a monthly basis. An ideal network security application should be able to address varied threat situations impeccably. In order to protect users from the disadvantageous outcomes of malicious interventions, the network security system of any organization should go beyond contemporary threats.
What is Inline Security? How many of us actually know? Phil Trainor continues with us in this third post in the series. While in the first post he enlightens us with insights on the Network Security Landscape, in the second post he talks about the deep correlation between Network Security and Network Visibility. In this post, he is educating us on Inline Security. And in the next post, he will talk about Edge Security. That is not all. He will continue talking to us in a couple of further posts. So, let us read below what Phil Trainor has to tell us all about it.
A resilient Inline Security framework ensures tool failures do not become network failures. Network architects understand that, just as when building a skyscraper, resilience starts at the foundation. A proper network foundation begins with a stable bypass architecture where inline tools can operate at line speed without affecting traffic flow in the event of failure. But with different security tools requiring different data access, a simple bypass may not be enough. Adding a network packet broker to that bypass intelligently routes traffic to different security tools for inspection. Without these two working together, packets could be lost, failures could bring your network down, and security holes could emerge. There are several ways to create an inline security architecture. Creating a resilient inline security architecture requires attention to detail. The result will reduce network downtime, enable upgrading tools with zero network impact, and extend the useful life of your security investments.
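The fail-open behavior a bypass switch provides can be sketched in a few lines of Python. This is a minimal illustration of the idea, not Ixia's actual implementation; the class name and heartbeat threshold are assumptions made for the example:

```python
class BypassSwitch:
    """Sketch of inline bypass logic: if the inline security tool misses
    too many heartbeats, fail open so that a tool failure does not
    become a network outage."""

    def __init__(self, max_missed_heartbeats=3):
        self.max_missed = max_missed_heartbeats
        self.missed = 0
        self.bypassed = False

    def on_heartbeat(self, tool_healthy: bool) -> None:
        """Call once per heartbeat interval with the tool's health status."""
        if tool_healthy:
            self.missed = 0
            self.bypassed = False  # tool recovered: resume inline inspection
        else:
            self.missed += 1
            if self.missed >= self.max_missed:
                self.bypassed = True  # traffic now flows around the tool


switch = BypassSwitch()
for _ in range(3):
    switch.on_heartbeat(False)  # simulate a failed inline tool
print(switch.bypassed)  # True: the network stays up despite the failure
```

A real bypass switch does this in hardware at line rate; the point of the sketch is only the design choice: on tool failure, traffic skips inspection rather than being dropped.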
Ixia’s Inline Security Framework
Ixia’s Inline Security Framework offers a proven solution for optimizing inline deployment of security tools while delivering greater security resilience and lowering cost. An intelligent framework including bypass switches and inline packet brokers helps ensure availability, optimize performance, and streamline operation by network security teams.
What is the importance of Visibility in the Network, and how does it play a critical role in Network Monitoring? We are talking to Phil Trainor from Ixia. This is the second post in the series. In the previous post, he talked about the changing scenario in the network security landscape. Here, in this post, he continues with his insights on network monitoring and its close relationship with network visibility. Let us read below what he has to say on this, in his own words.
IT Networks are growing more complex these days and so are the compliance procedures that are keeping them in check. Network attacks are growing in both volume and frequency, creating a storm of threat data. IT administrators are increasingly realizing the importance of not only the right tools and people but also greater intelligent management. It is here that the importance of network visibility tools and practices play a key role in quickly identifying, interpreting, and acting on threats.
Network visibility provides real-time, end-to-end insight and security across physical, virtual, SDN, and NFV networks, delivering control, coverage, and performance in a seamless fashion to protect and improve crucial networking, data center, and cloud business assets.
Network Monitoring needs complete network visibility
IT security and analytics tools are only as good as the data they are seeing. IT’s fundamental challenge is to ensure that the infrastructure behind these tools delivers applications that are reliable, fast, and secure. This means that IT needs total visibility of the network. With the current level of network security threats and the business’s complete dependence on the data network, you cannot afford partial network visibility. You need lossless visibility. Better network visibility can improve mean time to repair by helping pinpoint problems. A visibility architecture helps eliminate specific concerns by organizing and integrating your monitoring strategy with your security architecture and problem resolution processes.
How has the landscape of Network Security changed during the last decade? This is the question I asked Phil Trainor, Head of Security Business, APAC, Ixia. Phil Trainor has more than 14 years’ experience in senior network security engineering roles for global technology companies and currently leads Ixia’s security business for the APAC region. In addition to his title role, Phil also heads product direction and business development for Ixia’s BreakingPoint Security solution. Ixia, recently acquired by Keysight Technologies, provides testing, visibility, and security solutions, strengthening networks and cloud environments for enterprises, service providers, and network equipment manufacturers. Ixia offers companies trusted environments in which to develop, deploy, and operate. Customers worldwide rely on Ixia to verify their designs, optimize their performance, and ensure the protection of their networks and cloud environments to make their applications stronger. Here is Phil Trainor’s answer on the Network Security Landscape.
Network Security Landscape and Cyber Threats
The world of cyber threats expanded dramatically over the last decade—and not just because of an increase in the amount of malware. Organizations are dealing with larger attack surfaces. Exploits of Internet of Things (IoT) devices have transitioned from speculation to reality. Users and organizations gained direct experience with ransomware as attacks targeted nearly every mobile and desktop operating system (OS)—and ransomware moved from the hands of elite programmers into the hands of novice hackers. Today almost all malware is made to steal money or corporate secrets. Professional hacker gangs are making millions of dollars each day harassing home users and corporations with almost no threat of being caught or prosecuted. Malware has gone from mild viruses and worms to identity-stealing programs and ransomware.
Network Security Landscape and Cyberspace
The explosion of outbreaks has forced corporates to take a hard look at the threats posed by cyberspace – most companies now have Chief Information Security Officers, dedicated staff, and improved funding for security. Cyber security is a field heavily influenced by technology trends such as digitization, the rise of fintech, and ‘connected’ cars and homes, among others, and supply-side cyber security companies persistently develop new solutions to combat the cyber threats arising from these trends.
Network Security Landscape and IDC Forecast
IDC forecasts worldwide revenues for security-related hardware, software, and services will reach $81.7 billion in 2017, an increase of 8.2% over 2016. Global spending on security solutions is expected to accelerate slightly over the next several years, achieving a compound annual growth rate (CAGR) of 8.7% through 2020, when revenues will touch $105 billion. The swift growth of digital transformation is forcing companies across all industries to proactively spend on security to shield themselves against known and unknown threats.
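As a quick sanity check, compounding the 2017 figure at the stated CAGR over the three years to 2020 does land close to IDC's $105 billion forecast:

```python
revenue_2017 = 81.7   # $ billions, per IDC
cagr = 0.087          # compound annual growth rate through 2020

# 2017 -> 2020 is three compounding periods
projected_2020 = revenue_2017 * (1 + cagr) ** 3
print(round(projected_2020, 1))  # 104.9, i.e. roughly the forecast $105 billion
```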
Something important happening in terms of Security Assessment Programs is worth pondering. Australian Insurtech enterprise Cyber Indemnity Solutions (CIS) is collaborating with Fortinet on a critical program that will keep analyzing weaknesses in organizations’ security frameworks. Fortinet, as we know, is a global security solution provider. The two companies have launched the program across Australia/New Zealand. In fact, they don’t stop there. They have aggressive plans to cover the Asia Pacific region soon. Greg Hodgkiss, CEO, CIS, acknowledges that this collaboration is a key component in risk management. As a matter of fact, threats to on-premise and cloud environments have apparently become the major portion of cyber risk for enterprises all across the globe. As the risks are increasing manifold in the cloud spectrum, it is imperative for enterprises to focus on the security aspect and thus look for the right partners to ensure the right solutions.
A Security Assessment program is not simple to deploy. It will vary from industry to industry. In fact, it consists of a number of technologies and covers various aspects, such as technology to impart best practices, strong protocols, user behavior, and usage patterns. Only cyber insurance solution providers covering this complete paradigm can think of appropriate insurance cover. As Hodgkiss says, “Businesses face an ever-increasing range of complex and evolving cyber-security threats yet most businesses lack the budget or expertise to deal with these threats effectively. The most significant damage to any business is the permanent loss of critical business data, which can be a result of an attack, employee maliciousness, or simply human error.”
Security Assessment Program Is Important
This Security Assessment program between CIS and Fortinet includes Fortinet’s Security Fabric, the FortiGate enterprise firewall, and FortiAnalyzer central logging. Jon McGettigan, Senior Director, Australia, New Zealand, and South Pacific Islands, Fortinet, confirms that this solution is important because both organizations have an exhaustive presence in the region in terms of investment and resources. In fact, Australia’s new data breach law demands this level of security assessment program for cyber insurance.
McGettigan adds, “Over the next several months, we will monitor the market response to Fortinet and CIS’s joint offering. If it meets our targets, we will extend the collaboration to more markets in the Asia Pacific region.” He concludes by saying, “The opportunity for the channel is to have a different conversation at a high level in the organisation – cyber insurance needs to be flexible and bespoke to each customer. Creating a robust cyber security posture helps each organisation in being aware of the importance of protecting their critical data, and creates an opportunity for the channel to add value through implementation and services to create that solution.”
CIS will extend this Cyber Security Assessment program to ‘Crimson Risk’. In fact, Crimson Risk, of which CIS is an integral part, is an association of cybersecurity companies providing advanced risk assessment, monitoring, and consulting services. Thus, the new collaboration with Fortinet will also help CIS in extending the Security Fabric to Crimson Risk clients.
Security Assessment Program Is To Mitigate Cyber Risks
Hodgkiss emphasizes that the target is to create a holistic risk assessment framework that covers business risk from all directions. He says, “It takes the form of a comprehensive questionnaire looking at IT, governance and compliance, human capital and third parties, and existing insurance coverage. The assessment report will provide the customer with recommended remediation and mitigation actions including insurance to indemnify them against data loss, an additional layer of protection against cyber threats.”
Most organisations don’t trust standard cyber insurance policies because these policies don’t cover all business risks and full data loss. That is why this program ensures deep risk coverage and a higher indemnity cover for clients with crucial digital data assets. “Ongoing threat monitoring, sophisticated artificial intelligence platforms and constant review of the business changes will continue to provide a high level of protection for the business,” he says.
“Protecting a business against losses associated with cyber risk makes good financial sense and should be a key component of the cyber risk mitigation strategy. To support the risk assessment service, we can now also uniquely offer high indemnity, broad coverage, insurance to compensate data owners for the cost or profit impact of a cyber-attack, or if critical data is permanently lost,” Hodgkiss concludes. Hope it clarifies various aspects of Security Assessment and how critical it is to deploy in today’s life of cyber risks.
Fuzz Testing, or Fuzzing, is quite useful in many ways. In fact, it is a quality assurance (QA) technique to discover coding errors and security loopholes in software, operating systems, or networks. A lot of enterprises, banks, e-commerce sites, etc. use this technique. It involves pumping a massive volume of random data, which we call fuzz, into the target in order to simulate an attack and make the test application crash. As a matter of fact, many organizations do it on their production server. The software tool that generates the fuzz, which we call a fuzzer, also helps determine the potential reasons for a crash. Barton Miller at the University of Wisconsin was the first to develop this concept of Fuzz Testing, in 1989. And gradually the concept has caught on so well that it has become an essential technique to incorporate.
While we use other testing techniques before the launch of a product, Fuzz Testing stays an integral component even after the deployment of an application on the production server. Usually, this technique works best to detect vulnerabilities that can emerge due to buffer overflows, cross-site scripting, denial of service attacks, format bugs, and SQL injection attacks. As a matter of fact, Fuzz Testing is less effective in identifying security threats that do not cause program crashes. These could include spyware, some viruses, worms, Trojans, and keyloggers.
Fuzz Testing Is A Very Useful Technique
The benefit of fuzz testing is that it is quite simple to incorporate, yet it offers a high benefit-to-cost ratio. In fact, that is its strength. This is because it often identifies defects and vulnerabilities that developers and testers overlook when they write, debug, and test the software. In fact, this kind of technique is powerful enough to find the most serious faults, defects, and vulnerabilities. But here is a word of caution. You should not use it to form a complete picture of the overall security, quality, or effectiveness of a program or application. Rather, it works best when you use it in conjunction with extensive black box testing, beta testing, and other debugging methods.
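A minimal fuzzer can be surprisingly small, which is part of why the benefit-to-cost ratio is so high. The sketch below pumps random byte strings into a target function and records every input that makes it crash. The target, its failure condition, and the iteration counts are all invented for illustration; production fuzzers such as AFL or libFuzzer are far more sophisticated:

```python
import random

def fragile_parser(data: bytes) -> None:
    """Hypothetical target: crashes on any input containing a NUL byte."""
    if 0 in data:
        raise ValueError("unexpected NUL byte")

def fuzz(target, iterations=10_000, max_len=64, seed=42):
    """Feed random byte strings (the 'fuzz') to the target and
    collect every input that triggers an unhandled exception."""
    rng = random.Random(seed)  # seeded so crashing runs are reproducible
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256)
                     for _ in range(rng.randrange(1, max_len)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))  # keep crashing inputs for triage
    return crashes

crashes = fuzz(fragile_parser)
# Each recorded (input, exception) pair is a lead for debugging.
```

Note how this matches the caveat above: the fuzzer only ever sees crashes, so a bug that silently corrupts data instead of crashing would sail straight through.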
Cost Optimization means moving from pay-per-provision to pay-per-use. Earlier, you had to pay as per provision irrespective of how much of that provision you were actually going to utilize. But gradually things are turning more in favor of the consumer, and the seller has to change the costing, delivery, and service patterns as per customer requirements. Hence, we have now reached the stage where, in most cases, you have the option to pay per use rather than the complete contractual amount. Of course, pay per use will not work in cases where your utilization is higher. Optimization can happen in many areas, like billing consolidation, removing resources that are not in use, matching usage to the storage class, and right-sizing before migration. But some areas are harder to get right, like planning/architecting for cost before migration, switching instances on and off to match test-staging work hours, and reserved instances.
Still tougher to achieve are spot instances, instance scaling to meet external demand, and right-sizing after migration. As a matter of fact, the scope for optimization is always there, because the conditions don’t remain the same in terms of delivery, consumption, and upkeep. The variation itself calls for observation and evaluation at regular intervals.
Cost Optimization Is A Regular Activity
Five pillars of cost optimization were mentioned during the recent AWS Startup Day, as below:
- Right sizing your instances – CPU, RAM, performance requirements
- Increase elasticity – Automation of start and stop can happen on the basis of the access control system. Tagging is important for elasticity and right-sizing, and setting appropriate alerts is important in those cases. In fact, when we talk about tagging, it can happen on the basis of business tags, technical tags, etc.
- Pick the right pricing model – AWS, for example, offers three pricing models: on-demand, reserved, and spot. The first is for spiky workloads; the second is for committed or baseline workloads.
- Match usage to storage class
- Measuring and monitoring – There has to be a mechanism in place that you are able to sustain, including:
- Cost review process
- Definition of policies
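The on-demand versus reserved trade-off in the pricing-model pillar can be illustrated with a back-of-the-envelope calculation. The hourly rates below are made up for illustration and are not actual AWS prices:

```python
HOURS_PER_MONTH = 730  # average hours in a month

def on_demand_cost(hourly_rate, hours_used):
    """Pay per use: billed only for the hours you actually run."""
    return hourly_rate * hours_used

def reserved_cost(effective_hourly_rate):
    """Reserved: a lower effective rate, but billed for the full month."""
    return effective_hourly_rate * HOURS_PER_MONTH

on_demand_rate = 0.10  # $/hour (illustrative)
reserved_rate = 0.06   # $/hour effective (illustrative)

# A test/staging box running 8 hours a day, ~22 working days a month:
spiky = on_demand_cost(on_demand_rate, 8 * 22)  # $17.60
spiky_reserved = reserved_cost(reserved_rate)   # $43.80

# A baseline server running 24x7:
baseline = on_demand_cost(on_demand_rate, HOURS_PER_MONTH)  # $73.00
baseline_reserved = reserved_cost(reserved_rate)            # $43.80
```

With these numbers, on-demand wins for the spiky workload and reserved wins for the committed baseline, which is exactly the rule of thumb stated above.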
Cloud Migration Benefits are tremendous, as was visible during the AWS Startup Day event. The only pre-requisite is to choose the right partner in your journey to ensure hassle-free migration in the least possible time. And once you are clear about migrating your on-premise servers to the cloud, it becomes easier to convince your management about the key benefits. Of course, this migration will include data, application, and other servers. In fact, these days, it is no longer a set of separate physical servers serving your enterprise server needs on-premise. And once you decide to migrate to the cloud, it is not just about the space and infrastructure that you save on, but a lot more. As a matter of fact, by reducing your data center size you are cutting down operational as well as capital costs on various fronts. In addition, there are other tangible and intangible benefits.
So let us discuss the key Cloud Migration Benefits. These could be as below:
- Lean Innovation and Culture: Migrating to the Cloud not only benefits the IT department but also sets an example for others to follow. In fact, it becomes a key to evolving a Lean Innovation Culture.
- Cloud Economics: There was a mention of this in my previous post along with some useful case studies about some most successful and fast-growing global enterprises.
- Two factors: TCO & Business Value, and Cost Optimization
- Lower infrastructure cost: Of course, your infrastructure cost goes down with this.
- Economies of scale
Cloud Migration Benefits are Absolute
- More infrastructure: In fact, you get more at a lower cost when you migrate to the cloud. It is a myth that you have to invest more. That is not the case. As a matter of fact, if you think of a similar setup on-premise, the investment would be humungous.
- More usage
- More customers
- Reduced process
AWS Startup Day by 10000 Start-ups and Amazon Web Services was held in New Delhi on 28th July. 10000 Start-ups is a NASSCOM initiative. In fact, the launch of apps and features has become easier with web servers and web services. In addition, it has become much easier to scale up and scale down. Not only that, new roll-outs take shorter or almost negligible time. If not, that is, at least, the demand of the business: zero downtime and 100% availability. Although it is impossible to attain this under all circumstances, businesses and enterprises are able to achieve it with the help of the right kind of technologies and reliable partners providing those technologies and support mechanisms. Effectively, the key contributing factor is the cloud. Provisioning faster and with agility is the prime challenge for enterprises in today’s competitive environment.
Pulling and storing data faster and more accurately is quite important for any business. This was one of the key highlights of the AWS Startup Day. There are, of course, pressures to bring down the cost of ownership. These pressures are tremendous, in fact. And the purpose for everyone is to reduce costs. For example, Netflix runs hundreds of microservices on the cloud at the same instance. At any time, it takes a few seconds or a few minutes to spin up a new set of services. All this happens in the cloud. Users just take it for granted that the moment they connect to Netflix, it will work instantly without any hiccups.
AWS Startup Day was quite insightful
Similarly, Hotstar, running over a dozen channels online, was also a point of discussion in one of the case studies during the AWS Startup Day. Traditional on-premise hosting versus cloud was the key evaluation. Finally, the cloud was the winner, keeping in mind that a huge number of viewers would be availing the services. And nobody likes disruptions while watching a movie or a TV program online.
ATMs have cameras, but how many have a tracking and alert mechanism to notify about a person spending time there without making any withdrawal? Why can’t we create an alert mechanism for dubious actions at ATMs and other sensitive locations with the help of these intelligent cameras? How about shop floors? Terrorism? Factory mobility, automation, connected safety, connected supply chains, and improved asset utilization can all be achieved with IIoT. This can, in fact, complement Industry 4.0. We are talking about some real-life IIoT examples.
Sensors in a helmet can send alerts about a person driving drunk, not only to controlling agencies but also to family members. There are non-invasive sensors and invasive sensors (cloud). In a steel plant, helmets are necessary; an alert about any person not wearing a helmet is a good idea. The same goes for gas leakage alerts. Some other useful areas are: monitoring current performance against plan, identifying areas of improvement, and identifying a problem as it occurs. Preventive maintenance and safety measures to improve throughput are another good example. Dust and waste accumulation monitoring can help a lot with automation and an alert mechanism. Similarly, on a production line, noise tells a lot about rolling machines. In fact, deployment of such IIoT examples can really do wonders.
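Most of these examples boil down to the same threshold-and-alert pattern: compare each sensor reading against a safety limit and raise an alert on a breach. A minimal sketch, with sensor names and thresholds invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    metric: str   # e.g. "gas_ppm", "vibration_g", "noise_db"
    value: float

# Hypothetical safety thresholds; a real plant would tune these per site.
THRESHOLDS = {"gas_ppm": 50.0, "vibration_g": 2.5, "noise_db": 95.0}

def check_alerts(readings):
    """Return every reading that breaches its metric's safety threshold."""
    return [r for r in readings
            if r.metric in THRESHOLDS and r.value > THRESHOLDS[r.metric]]

readings = [
    Reading("furnace-3", "gas_ppm", 72.0),    # gas leak: above threshold
    Reading("roller-1", "vibration_g", 1.1),  # normal vibration
]
alerts = check_alerts(readings)  # only the furnace-3 gas reading qualifies
```

In a real deployment this check would run at the edge or in the cloud and fan alerts out to the controlling agencies, family members, or plant operators mentioned above.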
In the same manner, vibration sensors can do wonders. There can be a tremendous improvement in passenger experience at the airport. NFC or thumb-impression entry at airports can ease life at both ends. In addition, deployment of an intelligent baggage system to track whether your baggage is on the same aircraft is also a wonderful idea. As a matter of fact, the oil and mining industries can tackle gas leakage and human safety with IoT. In fact, driverless cars can help decrease accidents. Actually, what India did in IT needs to be done in IoT. That will create another landmark in Industry 4.0 and set an example for others to follow. These are some IIoT examples to start with.
When Dr. Neena Pahuja, Director General at ERNET India, Department of Electronics & IT (DeitY), talks about Industry 4.0 and the Internet of Things (IoT), it comes with a lot of real-life examples. This is how it goes. We all are using the internet, but are you able to connect to your factory in real time from anywhere in the world? Is the Internet in IoT about innovation or connection? What if, instead of IoT, we talk about the internet of everything? Industry 4.0 is a new revolution. A control system talks about how to connect different things. It gives phenomenal strength and value when there is an integration of various components in an Enterprise.
The first wave of industry came in 1782. In fact, today we are witnessing the Industry 4.0 revolution. What are cyber-physical systems? They cover people, products, vehicles, and everything that an industry comprises. If you are able to detect a problem in the industry when it occurs, you’re able to control it much faster. In fact, IoT is a composition of various hardware, software, and actions. Factors like sensing, embedded systems, fog computing, communication, analysis, and action together make up the Internet of Things. Think of an artificial lake, artificial limbs, or personalized products through 3D printing.
Industry 4.0 is a worldwide revolution
In medicine, in our country, bodies are used for teaching, while in some countries they use 3D printing and artificial intelligence instead. Today, doctors no longer need to measure body parts for replacement; in fact, it is done through a body scan. As a matter of fact, assets in airports, hospitals, and hotels can be tracked through IoT. A small smart device at a very nominal cost can help do it. IoT can help in asset tracking, supply chain tracking, noise/vibration tracking, gas/water/oil/power leakage detection, field force management, tracking using intelligent cameras, etc. In industries like oil and gas, leakage can be sensed through sensors in real time. Industries can come to know where their mobile workforce is at any moment. One of the start-ups in Bangalore has an intelligent camera that alerts in real time in case of any incident. These are some of the best examples of Industry 4.0.
Mark Lazarus, CTO APJ, Nimble Storage, brings some engrossing insights about backup, restoration, and a lot more. Isn’t backup data just for restores? What else could I use it for? Maybe for a Test/Dev environment, patch testing, verifying backups, and even DR. What if your backup media couldn’t cope at the time of a crisis? In today’s circumstances, these are the key backup and secondary storage challenges. Mostly we experience slow backups and very slow restores. As a matter of fact, data trapped on the backup device can’t be used without a restore. Then there is specialized and siloed backup infrastructure that demands separate manpower and attention. One option is to move everything to All-Flash, which is still relatively expensive for non-primary storage. Another option is to move everything to the cloud, but that has high latencies and is costly to recover from.
The best solution from Nimble Storage is the Secondary Flash Array (SFA). This is a new type of backup storage. In fact, it is flash-enabled storage for dedupe and capacity optimization. Rather, it is a multi-cloud flash fabric. It comes with many important features. Firstly, it helps in predictive analytics. Secondly, it acts as an optimizer for data protection and secondary storage workloads. Flash-based performance enables instant restores and recovery. It is an application-aware storage mechanism. As a matter of fact, by using flash you can put your backup data to work for Dev/Test, QA, and analytics. It simplifies data management through deep integration. In fact, it is a validated design. In addition, it works seamlessly with your primary data.
Nimble Storage optimises IOPS and Effective GBs
Some of the key features of this new solution from Nimble Storage are:
- Minimize impact on production VMs
- Simplify scheduling
- Application aware consistency
- Verified recoverability
- Multiple uses like Test, Dev, training, and troubleshooting
- Effective capacity
- Deduplication & compression – Inline, 18:1 compression. But then it depends on the type of media and data files
- Read/write IOPS
- Connectivity options
As a matter of fact, Data Efficiency is a function of data and time. The retention period plays an important role, and so do performance and capacity, which means IOPS and effective GBs. In a way, it is a perfect match of primary storage and secondary flash array. SFA helps deliver DRaaS that doesn’t break the bank. Thus Nimble Storage adds a new paradigm in storage and restoration, with speed and accuracy, reducing risk to a minimum. That is actually a boon!
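The "effective GBs" idea above is just raw capacity multiplied by the data-reduction ratio, and retention ties in the time dimension. A quick sketch using the 18:1 ratio mentioned earlier; all other figures (raw capacity, change rate, retention window) are invented for illustration, and actual ratios depend heavily on the media and data files:

```python
def effective_capacity_tb(raw_tb, reduction_ratio):
    """Effective capacity after inline deduplication and compression."""
    return raw_tb * reduction_ratio

# A hypothetical 20 TB raw secondary flash array at 18:1 reduction:
print(effective_capacity_tb(20, 18))  # 360 (TB effective)

def physical_tb_needed(full_tb, daily_change, retention_days, ratio):
    """Physical capacity consumed over a retention window: one full
    backup plus daily incrementals, reduced by dedupe/compression."""
    logical = full_tb * (1 + daily_change * retention_days)
    return logical / ratio

# 20 TB full, 5% daily change, 30-day retention, 18:1 reduction:
print(round(physical_tb_needed(20, 0.05, 30, 18), 1))  # 2.8 (TB on flash)
```

This is why retention period matters so much: the longer the window, the more logical data the reduction ratio has to absorb.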