From Silos to Services: Cloud Computing for the Enterprise


September 19, 2013  12:32 PM

Why did Software Defined * (Everything) happen?

Brian Gracely

In the early 1900s, Henry Ford revolutionized the transportation industry by mass-producing the automobile. It was amazing. People could leave their homes to see the country. Then came a significant need for major infrastructure to enable that “exploration application”. Highways and freeways were built. We marveled at the feats of engineering that built the roads and bridges connecting us from sea to shining sea. At some point we stopped being fascinated with the road, and an amazing ecosystem of hotels, restaurants, amusement parks and other “entertainment applications” sprang up. New cities were born and the economy of the entire country grew as new possibilities became available to more people. The removal of friction from one application led to the growth of many other applications. The standardization of the road enabled incredible economic growth.

Our industry does not lack for hyperbole, and many have come to believe that “Software Defined” is quickly becoming the latest candidate for the buzzword-bingo Hall of Fame. But for a term (or concept) to generate this much inertia, or “noise”, there has to be a reason, because there’s too much money in the IT industry to chase unicorns.

So how’d we get to this point…?

A few basic things happened.

  1. Moore’s Law doesn’t sleep.
  2. The pace of hardware change and software change is mismatched and people no longer have any patience.
  3. Open Source software became more mainstream and visible.
  4. Public Cloud Computing became more mainstream and everyone became IT.

To begin with, almost everyone has access to the same, fast hardware. This might be x86-CPU server boxes, or merchant-silicon networking boxes. There are still companies that do unique things with hardware elements, or create tightly integrated packaging, but it’s no longer a pre-requisite for entry into the market. This significantly lowers the barrier to entry.

On the software side of the equation, we have broadly available software libraries and development tools that are accelerating the pace of development. Combine this with a shift to Agile methodologies and Continuous Integration. Throw in a few layers of abstraction (from the OS or hardware) and the application bits are getting created faster than ever. Continued »

September 19, 2013  12:30 PM

Google’s Potential Strategic Cloud Advantage

Brian Gracely

When Google launched (or beta’d, or preview’d) the Google Compute Engine (GCE), many people thought it was a response to Amazon Web Services and its significant market lead in IaaS. While this might be somewhat true, I tend to believe there are some nuances people are missing that could have a significant impact on different industries than you might expect.

Think about this –

Amazon builds marketplaces.

Google builds platforms.

Amazon thinks about end-user experiences.

Google thinks about platform-user experiences.

So while Amazon has built AWS to be a utility computing platform, with a number of very interesting services, it’s really much more of a utility computing marketplace. They provide the tools to create a new market for IT services and IT applications.

Google on the other hand is all about building platforms to interact with digital information, as an underpinning to drive advertising. They subsidize this amazing collection of information by providing a number of free services that end-users can enjoy (eg. Maps, YouTube, Gmail, Android Mobile, etc.).

It’s possible to believe that Google is attempting to compete with AWS as a next-generation IT platform, but I think they may have very different intentions. I think their bigger ambitions aren’t the IT industry, but rather the broader media industry.

Google now owns the most ubiquitous web property for consuming media – YouTube.

Google controls the fastest-growing next interaction point for billions of humans – the Android OS running on the majority of smartphones. This can obviously be extended to tablets or potentially other large displays (e.g. “formerly called TVs” devices).

Google is beginning to move even closer to users, with a foray into wearable computing, beginning with Google Glass.

Continued »


August 24, 2013  10:06 AM

Top 5 Challenges for Private Cloud Success

Brian Gracely

Several years ago I wrote a post about The 5 Ps of Cloud Computing, back in the early days of IT organizations thinking they could design and operate cloud computing environments internal to their own data centers. My friend Christian Reilly (Chief Cloud & Mobile dude at Bechtel) wrote a variation based on his experience building both an internal cloud and a mobile application store for the business.

Back in 2009 and 2010, the maturity of the technology and the skill-sets within IT needed to take on a transformational project as large as “private cloud” was not really there. During this time, we did see a large number of companies evolve their IT organizations to be more cost-efficient through technologies such as server virtualization or converged infrastructure, but the demands for agility and speed from the business continued to put pressure on these projects to move up through the application layers.

[Sidebar: I saw a great quote on Twitter from Mark Lucovsky (@marklucovsky) about the difference between actually delivering *aaS as a service vs. selling *aaS products for someone else to run - (translation) "the value is in the *aaS portion, not so much the underlying technology"]

 

Continued »


August 17, 2013  5:25 PM

Open Source for Cloud – Projects vs. Products

Brian Gracely

As more and more open-source projects get implemented within Enterprise IT organizations, one of the most frequently confused topics I hear discussed is open-source projects (FOSS – Free Open Source Software) vs. products. It’s not surprising, since the majority of IT organizations purchase the tools they use as “products” (on-premise or off-premise).

Let’s Start with the Basics

There are hundreds of open-source projects (Apache, Linux, etc.) targeting large IT challenges today. They are created by individual developers and fostered by groups of developers who want to further the project. Some of the most popular include:

  • Linux OS
  • Apache Web Server
  • MySQL DB
  • PHP
  • OpenStack (multiple projects)
  • CloudStack
  • Various NoSQL Databases (Hadoop, Cassandra, MongoDB, Couchbase, Riak, etc.)
  • Open vSwitch
  • Various SDN projects (OpenFlow, FloodLight, OpenDaylight)

At this stage, these projects are just code. Anyone can download them, use them or contribute code back to them. There are guidelines to follow depending on the open source license that is used, which is especially important if someone decides to use some of the code to create a new product.

Projects Inside of Projects

One area that’s often confusing is when an open-source project is actually a series of loosely coupled projects. OpenStack is a great example of this. There is no “openstack.exe” or “openstack.rpm”. OpenStack is actually made up of multiple projects (Nova, Horizon, Glance, Swift, Cinder, Neutron, Heat, etc.), some of which are official (in a given release) and others experimental. When implementing OpenStack, a group can use some or all of the projects; it just depends on their specific needs.
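
To make that “pick the projects you need” idea a little more concrete, here is a minimal, purely illustrative Python sketch. The project names are real OpenStack components, but the manifest structure and the build_profile helper are assumptions for illustration only – they are not part of any OpenStack tooling.

# Illustrative only: a hypothetical deployment profile, not an OpenStack artifact.
# The project names are real OpenStack components; everything else is assumed.

OPENSTACK_PROJECTS = {
    "nova":    "compute",
    "glance":  "image catalog",
    "swift":   "object storage",
    "cinder":  "block storage",
    "neutron": "networking",
    "horizon": "dashboard",
    "heat":    "orchestration",
}

def build_profile(needs):
    """Return the subset of projects a group might deploy for its specific needs."""
    return {name: role for name, role in OPENSTACK_PROJECTS.items() if role in needs}

# Example: a team that only wants compute, images and block storage.
profile = build_profile({"compute", "image catalog", "block storage"})
print(sorted(profile))  # ['cinder', 'glance', 'nova']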

Beyond the Projects

In between the projects and commercial products are a series of efforts, typically driven by commercial vendors, to take the next step in simplifying an open-source code base for use by IT organizations. These efforts usually take the shape of a “free” version of the vendor’s commercial product, with one or more of the following characteristics:

  • Often a subset of the commercial product – compatible, but might not have all the “enhanced” features
  • Not as actively supported by the vendor, but rather by the community that has formed around the open-source project.
  • May be more aligned to the most recent fixes, pulls and enhancements in the “trunk” of the open-source project. This would be preferable for developers or customers that need the latest, bleeding-edge capabilities vs. stability.
  • Examples of this include: RedHat Fedora, Basho Riak, 10Gen MongoDB, etc.

Commercial Products 

Some IT organizations are overworked and understaffed. They might love to reduce their IT costs by only using FOSS, but the challenges of limited documentation and support create pressures that they aren’t prepared to manage. For these customers, vendor-offered products, based on open-source projects, might be a good fit. Not only does it offer them the ability to potentially reduce acquisition costs, but it also gives them the following benefits:

  • Access to professional support and documentation, as well as community support that may expand their ability to solve problems.
  • Risk management that they could fall back to the open-source “trunk” code if the vendor they are working with has financial problems or doesn’t deliver the required updates in a timely manner. While this isn’t seamless, it does offer some customers a way to manage the risk of vendor selection.
  • For those IT organizations that do have capable developers, they can access the open-source versions and either better understand how the code works, or submit a pull request for new capabilities that they have developed.
  • Leverage the experience from other IT organizations that are using open source software, either through meetups or via online communities.

I’d love to get your feedback on how (or if) your IT organization is engaging with open-source projects, or if you’re using open-source software internally today.


August 17, 2013  1:54 PM

Cloud Evolution – USB 2.0 is a Long Way Off

Brian Gracely

Back when we started The Cloudcast (.net) podcast in 2011, our first guest was Christian Reilly from Bechtel. At the time, he was a couple years into a multi-year process of evolving his internal IT architecture to a private cloud. One of the most interesting comments he made about their evolution was the lack of interoperability between technologies and platforms claiming to be “cloud”. The way he explained it, this new paradigm was at a crossroads. It could either emulate the early Internet walled-gardens of AOL and CompuServe, or it could embrace the open standards that allowed the Internet to expand into every aspect of our lives.

More than two years later, the debate about cloud interoperability is still raging.

These days, there seem to be three camps of thought about how to deal with interoperability:

Same Cloud Everywhere 

The simplest way to think about how to leverage multiple clouds is to have the same technology everywhere, in theory ensuring interoperability across multiple environments. This is the approach being taken by VMware vCloud Hybrid Service, vCloud Service Providers, Virtustream xStream and various implementations of OpenStack. Some of these offerings are also beginning to offer alternative API support (often AWS API capabilities).

Same API Everywhere

Other projects are attempting to take the path of having similar APIs available on multiple cloud instances. This is the approach being taken by The Amazon-Eucalyptus Partnership (Lydia Leong, @cloudpundit), as well as the OpenStack Foundation. It’s important to note that early implementations of OpenStack were not all the same, but the OpenStack Foundation is attempting to remedy some of this through the RefStack program. Even with these efforts, some people are still concerned that OpenStack will become too fragmented and needs a dominant leader (or vendor) to drive its success.
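
To show what “same API” looks like from a developer’s seat, here is a minimal sketch using boto, the common Python AWS SDK of that era. The point is that only the connection details change when pointing at an EC2-compatible cloud; the host, port and path below are placeholders (loosely following the old Eucalyptus convention), not a real endpoint, and how faithfully a given cloud honors each call depends on its EC2 API implementation.

# Sketch: the "same API everywhere" idea with boto (the Python AWS SDK of the time).
# The host/port/path values are placeholders, not a real endpoint.
import boto

# Talking to AWS itself - boto uses the real EC2 endpoints by default.
aws = boto.connect_ec2(aws_access_key_id="AKIA-placeholder",
                       aws_secret_access_key="secret-placeholder")

# Talking to an EC2-compatible private cloud (e.g. a Eucalyptus-style endpoint).
# Only the connection details change; the application code stays the same.
private = boto.connect_ec2(aws_access_key_id="local-key",
                           aws_secret_access_key="local-secret",
                           host="cloud.example.internal",  # placeholder hostname
                           port=8773,                       # placeholder port
                           path="/services/Eucalyptus",     # placeholder path
                           is_secure=False)

for conn in (aws, private):
    # get_all_instances() is the classic boto call for listing EC2 instances.
    reservations = conn.get_all_instances()
    print(conn.host, len(reservations))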

Cross-Cloud APIs

The third camp is interested in creating a more unified set of APIs that would work across multiple clouds. This is the area that has created the most contention and debate, for obvious reasons – complexity, differentiation, competitive markets, etc. Leading cloud user Netflix has been actively working to open-source many of the tools it uses today, in hopes that other cloud providers (Netflix is AWS’s largest customer) will be able to create competitive offerings that give it flexibility for its business. Other leading cloud visionaries are looking to drive cross-cloud interoperability - An Open Letter to the OpenStack Community: Our Future Depends on Embracing Amazon (Randy Bias, @randybias). Within the OpenStack community, not everyone believes this is a good idea. This debate seems to be heavily divided between the “innovation” crowd and those who claim that ecosystems can be overtaken through commoditization (Can OpenStack dominate IaaS? (Simon Wardley, @swardley)).
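
Another flavor of this camp, worth mentioning purely as an illustration (it isn’t referenced above), is the client-side abstraction library – Apache Libcloud is the best-known Python example. The sketch below shows the pattern; the credentials are placeholders, the Dummy driver is Libcloud’s built-in fake provider for trying the API offline, and the exact constructor arguments can vary by Libcloud version.

# Sketch: one client-side API across multiple clouds, via Apache Libcloud.
# Credentials are placeholders; constructor details vary by Libcloud version.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

drivers = [
    # A real public cloud (would need valid credentials and network access).
    get_driver(Provider.EC2)("AKIA-placeholder", "secret-placeholder", region="us-east-1"),
    # Libcloud's built-in fake provider, useful for exercising the API offline.
    get_driver(Provider.DUMMY)("dummy-creds"),
]

for driver in drivers:
    # list_nodes() is the common "show me my instances" call across all drivers.
    for node in driver.list_nodes():
        print(driver.name, node.name, node.state)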

NOTE: It’s important to remember that just emulating or copying an API doesn’t ensure interoperability between clouds. Those APIs must also be built on top of similarly architected systems, otherwise the same API call won’t deliver the same result on another cloud. This can be especially challenging if the underlying infrastructure (compute, network, storage) has different capabilities between clouds.

As you can probably see, we’re still a long way away from the possibility of “USB 2.0”-like compatibility between clouds. Technology, politics, competition and money are getting in the way. It will be interesting to see if the vendors and open-source projects find ways to work more closely together, or if customers decide to use various 3rd-party tools (Enstratius, RightScale, Ravello Systems, Cloud Velocity, etc.) to get around the lack of interoperability.

NOTE: While quite a bit of architectural level work would need to get done to create interop between systems, it’s important to point out some excellent work being done to educate the market - Architecting OpenStack for VMware vSphere (Kenneth Hui, @hui_kenneth)


August 4, 2013  8:34 PM

What is “Enterprise Ready” in Cloud Computing?

Brian Gracely

Probably more than any other question, I get asked all the time whether I believe that Amazon AWS is “enterprise ready“. Sometimes the question comes from analysts trying to determine the extent to which the IT industry is shifting. Sometimes it comes from vendors trying to determine the pace of change/transformation/disruption. Other times it comes from IT organizations trying to determine what their future strategies look like for procurement, service offerings and future skills evolution.

“Enterprise Ready” is one of those loaded phrases that you really need to be careful about using, because the person you’re speaking with typically has a preconceived notion about what it means. For many people, it means that the service essentially emulates all the aspects of an existing Enterprise IT data center – including all the elements of performance, redundancy, security, compliance, etc. In essence, they expect the new environment to function like the world they are used to. What they don’t want are the long delays to get things provisioned, the long meetings with security and compliance teams telling them everything is insecure, or the long budgeting process to procure the required technology. [Insert analogy about eating cake here]

What I try and explain to people when answering this is to think about “Enterprise IT” in two buckets:

  • Bucket 1 – Applications that you typically associate with IT – Email, ERP, HR, Unified Communications, SharePoint, etc.
  • Bucket 2 – Business requests for technology that typically get turned down by IT

Bucket 1 is all about applications that have long, relatively stable life-cycles, where IT is usually trying to balance cost vs. performance. This is a technology bucket. Known equipment. Known capacity needs. Aligns to depreciation cycles. These applications might be a fit to migrate to a public cloud if the business is facing some ‘change event’ (e.g. M&A, equipment EoL, pending licensing upgrades, new CIO, budget challenges, IT skills challenges, etc.).

Bucket 2 is all about the pace of today’s business world. The world where winning and losing is often measured in how quickly we can transition from a great idea to a great implementation of that idea, via technology + business models. These ideas are responsive to the market, to competition and to changes that weren’t planned for in the annual budgeting meeting. At the time of the ask, they are often the complete opposite of the Bucket 1 applications – unknown capacity needs, shorter usage duration, unknown scalability.

So what might fit into Bucket 2?

  • VP of Marketing would like a smartphone app for the sales-kickoff or annual tradeshow. They aren’t sure if it’ll get 1,000 or 15,000 downloads (unknown capacity, unknown scale). They would like it to be available 2 weeks before the event, and collect data for up to 1 month afterwards. Beyond that it’s not needed (short duration).
  • VP of Operations just got back from a conference discussing “Big Data” and would like to prototype ways to better analyze sales trends and how they are affected by weather, gas prices, seasonality and a few other sources of publicly available data. He needs the prototype completed in 60 days, as he needs to demonstrate an ROI (e.g. better sales insight) to justify a more expansive project. If the ROI doesn’t materialize, the bigger project might be cancelled – quick timeline, potentially wasted capacity beyond 60 days.
  • CIO tells the lines of business that the existing annual IT budget has been exceeded by Q3, as a major project has gone over budget (it occasionally happens), but one of the lines of business has a major opportunity if a new system can be put in place in time. The opportunity is $5-10M in Q4, with a follow-up of $10-20M in Q1. Pace of implementation is of the essence, but where to do it? Sometimes this is called ‘Shadow IT’; it’s just a reality of doing business in the 21st century. Global resources exist, so why shouldn’t a business try and leverage them?

At this point you might be asking why I didn’t explicitly mention AWS and “Enterprise Ready”. Hopefully you’ve figured out that there is more to “Enterprise Ready” than just the underlying technology. In today’s world, there is a place in the public cloud (or in existing data centers) for applications with those characteristics. But there is also unmet Enterprise demand for solving business challenges with technology, now. Those Enterprises, those applications are “Enterprise Ready” too. They are just focused on a different characteristic being the most important element of their success.

So how big could Bucket 2 be? It’s tough to tell (long-term), because we often don’t know how big a new technology segment could be until people and companies understand just what is possible without prior restraints. The Client-Server market was 10x the Mainframe market. It’s not unusual for people to have 3-4 connected devices (smartphones, multiple tablets, laptops, etc.).

Cloud Computing helps level out the short-fall in supply and demand for Enterprise IT. Whether or not the unmet demand is Enterprise Ready is now as much about pace-of-implementation as it is SLAs and IOPs. The forward-looking CIOs are trying to figure out how to deliver both to their Enterprises.


July 30, 2013  11:37 PM

The Beginning or The End of Cloud Computing? It’s confusing…

Brian Gracely

Computing eras tend to last 10-15 years at their high point, and then something else takes over. Along with those changes, a few leaders adapt and continue, while a large number of the leaders fail and new companies (or open-source projects) emerge to take their place and lead the new era.

  • Mainframes: 1960s-70s
  • Minis: 1970s-80s
  • PCs / Client-Server / LANs: 1980s-90s
  • “Web 1.0” / Commercial Internet: 1990s-2000s

Some might argue that the Cloud era got started after the Internet bubble burst (2000-2001) and early SaaS applications started to emerge. Others might say that it really took the next step when Amazon.com introduced Amazon Web Services in 2006-2007 and brought the idea of utility computing into reality (with an H/T to Douglas Parkhill for his early thinking back in the 1960s). Another group might point to the 2009 emergence of the concept of “Private Cloud” as the tipping point where it became a reality for many IT organizations (“shadow IT in my company?”) and signaled that traditional IT vendors were concerned about protecting their existing installed base (which apparently isn’t gaining much functional traction).

While it doesn’t really matter when this new era started, it is useful to try and figure out where in the transition the industry is today. As people like to ask, “are we in the 2nd inning or the 7th-inning stretch?”

Some would argue that we’ve begun to hit the tipping point when the legacy vendors are beginning to show their strains and are starting to fail. While it’s easy to argue that those applications aren’t going away anytime soon (note: IBM still does $1B+ in mainframes, in the 2010s!!), many quarters of misses do begin to signal that they might have missed the big shifts around cloud and open source and could eventually join the boneyards occupied by DEC, Bull, Sun, etc.

Some would argue that we’re beginning to see the makings of “Cloud 2.0”, where standards need to evolve, interoperability needs to evolve, and we may begin to see the classic battles between two technologies that set the tone for a decade to come (e.g. Ethernet vs. Token Ring, IP vs. ATM, VHS vs. Beta, Blu-Ray vs. HD-DVD) – can you say AWS APIs vs. OpenStack APIs?

Still others think the Cloud 1.0 wars are over and it’s time to shift from an industry driven by innovation and vendor-led profit models to one that’s driven by commoditization and the next phase of ideas, economic growth and potential that comes from lower costs and easier access to resources (see: Jevons Paradox).

Confused yet?

Continued »


July 13, 2013  5:54 PM

Change Culture or Move Elsewhere – The IT Decision of the 201x’s

Brian Gracely

In the real world, there are the seven George Carlin words that, if said, will make people uncomfortable, especially if used with the wrong audience or in the wrong context. In the IT world, those words are CHANGE, VALUE-ADD, COMMODITY and AGILITY (in no particular order). Use those words and somebody in the room is going to cringe, or potentially ask you to leave. They are the words that draw the dividing line between vendors, since we mistakenly believe that IT is a zero-sum game in which there are only “winners” and “losers” and the new will always vanquish the old. You do worship at the altar of The Innovator’s Dilemma, don’t you?

They are the words that make IT operators sweat, thinking about the 2 a.m. pager notice they’ll get because the new system, which requires new skills, is operating less than optimally and somebody wants it fixed…now!!

Up until a few years ago, they were words that both IT sellers and IT buyers knew how to balance to keep the ecosystem fairly healthy and constantly evolving. But a few significant changes have come along – namely “cloud computing” and “open source” – and opened up new options that are disrupting the balance that previously existed. The previous two IT options of Build-it-Yourself or Outsource, both of which used similar technologies and skills, now have a third alternative – various Public Cloud options (IaaS, PaaS, SaaS, *aaS).

And these new options are forcing the IT conversation to distinctly change from one centered around new technologies to one that’s centered around the pace of change of either IT economics or IT skills/process, and sometimes both.

Let’s just look at a few recent articles:

Both of these articles center around the idea of either building “abstractions” above layers that are deemed “less valuable” (see: don’t VALUE-ADD), or telling a less valuable group to change (see: CHANGE) or become irrelevant – oh wait, maybe they could be “strategic”, as long as they quickly learn skills that they never needed before, and fast!! Continued »


July 7, 2013  10:24 PM

Will Hardware Vendors Adapt to Changing Expectations?

Brian Gracely

Earlier this week, GigaOm wrote a post discussing the possibilities that the largest web companies would begin designing their own chips (CPUs, etc.). This was following up on the trend of companies like Facebook, Google, and Amazon designing their own servers and networking equipment, or efforts like Open Compute Project open-sourcing designs that could be delivered by ODMs.

While articles like this are interesting for getting a peek into the 0.01% of companies where this is feasible (and needed), since they are running 21st century bit-factories, the question that seemed to emerge was, “how will this affect the companies that sell hardware for a living?“. I’ve written before about how hardware has been rapidly evolving, especially for cloud computing.

When I read articles like this, I tend to think that this is a macro-level trend that is inevitable. The components within hardware are evolving rapidly, and the net value of the hardware (by itself) is decreasing, while the value of the associated (or decoupled) software is increasing. You can have valid arguments about the timeline over which this evolution will occur, but I believe most rational people will generally agree that value is shifting towards software and away from hardware. System integration of the two still has its place, but even where that occurs (in the supply chain) is changing as well.

The more interesting question to me is how the hardware companies will respond to this. Of course they will claim that hardware still matters, especially for performance. They may also claim that visibility is needed at both the hardware and software layers. Fine, that’s to be expected. But will we see any actual changes in how they do business, or how they go to market?

The reason I ask this question is that I’m constantly looking to the manufacturing sector to give me clues about the future of IT, since the two are running on parallel tracks, albeit with IT 10-15 years behind.

When Pivotal Labs publicly launched, there was an interesting discussion between Paul Maritz (CEO, Pivotal) and Bill Ruh (VP Global Software, GE). They were talking about how the airlines (directly or via Boeing/Airbus) were now buying engines. The discussion centered around the idea that engines were no longer purchased as capital assets, but rather paid for based on usage. One of the initiatives by GE was to do a much better job of collecting real-time data about the engines to better manage downtime and associated maintenance costs (all things that would affect GE’s ability to collect revenue for the engines’ usage).

This got me thinking – will we begin to see hardware vendors take a clue from manufacturing and charge usage-based pricing for their equipment, rather than selling it purely as capital assets (paid for directly or via lease)? Will we see them begin to add capabilities to better track how their systems are being used, in near real-time?

The challenge of capital (CAPEX) purchases and long depreciation cycles is one of the biggest barriers to companies being able to successfully deploy “Private Cloud”, as they don’t have the ability to create “agile, dynamically scalable” resource pools when they can’t budget and buy in that manner.

Will the lack of overall success with Private Cloud deployments, plus the specter of lower-margin hardware sales eventually force the hardware-centric vendors to change their selling models in the future? Do they actually expect to fight Moore’s Law without trying to reinvent the business model at the same time?


July 7, 2013  9:57 PM

Thoughts from an Enterprise Start-Up

Brian Gracely

According to this piece in the NY Times, I’m old. I’m not yet an average/median worker in America, but apparently I’d be “Grandpa Gracely” amongst the hipster tech world.

And when you’re “old”, especially in the technology industry, the expectation is that you’re looking for stability, not change. A nice paycheck, generous benefits and maybe a set of responsibilities that are challenging but won’t have you working late nights and weekends. Squeezing in meetings between checking the status of your 401(k).

But I’ve also worked at some of the larger companies mentioned in the article, and the days of never ending meetings and delayed decision-making had me frequently thinking about leaving behind the bigger brands and making my mark with something smaller. About seven months ago, I made the leap. I joined a small company, backed by venture-capital funding, that was somewhere between maturing start-up and early-stage growth company. For a number of my colleagues, the reaction was “are you sure about that?” or “shouldn’t you have done that years ago?“. Maybe, but this was the right time for me to do this. It’s been an interesting ride so far. I get asked about it all the time, especially from people trying to make next-step decisions in their careers, so I thought I’d share some of my learnings.

Jack of Many Trades – In general, the smaller the company, the more it will be expected that you can play multiple roles and leverage multiple skills. That’s definitely been the case for me. I was hired to drive Solutions and Technical Marketing, but that quickly evolved to include running Product Management, managing Strategic Partner relationships, helping to shape future strategy and doing day-to-day Field Enablement. If you like wearing many hats, smaller companies can be a great place to stay challenged and to grow. It can also mean that at times you have overlapping priorities, and you may be asked to lead something that is beyond your comfort zone.

Long Days and Long Nights – Smaller companies have fewer resources. Smaller companies have less brand awareness. Smaller companies don’t have the luxury of outsourcing the tasks that larger companies take for granted. This means the work is on you. This means the hours will be long. Know what you’re signing up for. This is where self-motivation comes into play, because you’ll have to make some personal sacrifices.

Always Be Closing (ABC) – Whether you like it or not, everyone at a small company is part of the selling process. You may not directly carry a quota, but you’ll most likely be interacting with your customers on nearly a daily basis. With a smaller company, those customers are constantly testing anyone they can to see if you’re really able to deliver what you say you can. So while you might not be directly selling the product or service, you’re definitely selling “confidence” and “trust” and “commitment” – the intangible things that customers of smaller companies are evaluating above and beyond the technology. Continued »


