Enterprise computing made simple


August 24, 2014  12:56 PM

Why NetApp for SAP HANA, for dummies!

Alaa Samarji Profile: AlaaSamarji
Storage

SAP HANA has been taunting me for some time now. It's like a mythical creature I could never put my finger on, and whenever I attended a seminar, all the information they threw at us was mind-boggling to me (you can guess I'm not a software guy).

I work at a Tier 1 international vendor, in charge of designing data management solutions based on the NetApp portfolio. At that point in time I had no interest in anything even remotely near the HANA world, until my colleague (an SAP infrastructure consultant) ran up to me one day and said, "Do you know that all of our solutions are based on NetApp?"

I felt like I had been smacked in the head, and I took it as a personal challenge to simplify the matter for myself and my fellow consultants: how do you pitch why this small box called NetApp is necessary in an SAP HANA solution?

Here's the result (why NetApp with SAP HANA, for dummies):

1. HANA is an in-memory database; it uses connectors to link to data sets from different databases. For example, imagine you own a hypermarket and you need a system that correlates items purchased vs. the outside weather vs. the time of year vs. other attributes, all in real time as the purchases are happening. Such information can help management analyse customers' purchase behaviour and guide them toward better stocking and warehouse keeping.

2. As those transactions happen in real life, some businesses require in-line, live analysis of what is going on (as in currency exchange scenarios). Normal disk processing is nowhere near enough; millisecond-class latency won't cut it. Such intense workloads demand microsecond responses, and thus in-memory processing was born.

3. After the data has been processed, it needs to be flushed somewhere to make room for new data, and that is where NetApp comes in.

4. The HANA appliance will dump over NFS to a low-latency external storage system (and who is better than NetApp to provide that NAS functionality? They invented it, for God's sake).

5. After the dump has occurred, NetApp can use its Snapshot technology to take a backup of that analysed and processed data. What about consistency, you may ask? NetApp has the unique ability to integrate with the SAP cockpit (SAP's management node) to make sure the data is consistent and everything is recorded and indexed by the book (a rough sketch of this workflow follows the list).

6. NetApp can also replicate that data block-incrementally from site to site without the need for any host-based system. And yes, since the replication is based on consistent Snapshots, the replica is consistent too.

7. Finally, in an HA environment with multiple HANA appliances in play, you have two options. Either you let the hosts replicate in active-active mode (something like a SQL cluster in the old days), which needs a minimum of 20% more CPU power, not to mention a choked network (some customers went as far as building a separate physical network just for this); or you create one big lump of data across two NFS mount points (node A and node B), so that if the primary node goes down, the standby has no problem: it simply looks at the pointers and picks up as if nothing happened.
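To make point 5 concrete, here is a minimal Python sketch of the quiesce-snapshot-confirm sequence that keeps a storage-level backup consistent. The three helper bodies are hypothetical placeholders; in a real deployment they would call SAP HANA's backup interface and the NetApp snapshot API instead of printing.

def hana_freeze_savepoint() -> int:
    """Ask HANA to write a consistent savepoint and hold it (placeholder)."""
    print("HANA: savepoint written and frozen")
    return 42  # hypothetical backup id

def storage_snapshot(volume: str, name: str) -> None:
    """Take the storage-level Snapshot of the NFS data volume (placeholder)."""
    print(f"Filer: Snapshot {name} of volume {volume} created")

def hana_confirm(backup_id: int, ok: bool) -> None:
    """Tell HANA whether the external snapshot succeeded (placeholder)."""
    print(f"HANA: backup {backup_id} closed, successful={ok}")

# Quiesce first, snapshot while frozen, then always report back to HANA.
backup_id = hana_freeze_savepoint()
try:
    storage_snapshot("hana_data", "daily.0")
    hana_confirm(backup_id, ok=True)
except Exception:
    hana_confirm(backup_id, ok=False)
    raise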

So, in short, those are my inputs on the subject. I hope it was clear enough and can help those who just need some pitching points in their next conversation with an SAP customer.

December 4, 2011  3:01 PM

What should you ask while assessing a VDI solution?

Alaa Samarji Profile: AlaaSamarji

VDI has lately become a big topic that everyone is talking about, and projects are up for grabs in the market to migrate 1,000, 3,000, 10,000 and even 15,000 PCs to a VDI-ready solution. Having personally implemented and consulted on this technology, I have a few points I want to highlight for anyone who is considering it.

1. Study your business. As an IT manager/administrator you need to really and carefully study your infrastructure and identify what kind of loads you have. As an exercise, I generally use an Excel sheet and break my users down by category, where CAT A is the normal office user (Excel, Word, Outlook), CAT B is the more intensive user (MATLAB, AutoCAD, GIS apps) and so on. After doing that, you can use various tools, even primitive ones like Task Manager in Windows, to identify what kind of load each category puts on its local PC; normally it will be around 60-70% of the current RAM and no more than 10% of the CPU's capabilities.
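If you would rather script this than eyeball Task Manager, a quick Python sketch like the one below samples a PC's CPU and RAM use for a minute. It assumes the third-party psutil package is installed (pip install psutil); the thresholds in the closing comment are just the rough figures from point 1, not hard rules.

import psutil

# One sample per second for a minute; cpu_percent(interval=1) blocks for
# the second it measures, so the loop itself paces the sampling.
samples = [(psutil.cpu_percent(interval=1), psutil.virtual_memory().percent)
           for _ in range(60)]

avg_cpu = sum(c for c, _ in samples) / len(samples)
avg_ram = sum(r for _, r in samples) / len(samples)
print(f"average CPU {avg_cpu:.0f}%, average RAM {avg_ram:.0f}%")

# Rough rule of thumb from above: a CAT A office user tends to sit around
# 60-70% RAM and under 10% CPU -- a good consolidation candidate for VDI.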

2. Manage your expectations. If you are looking for a device that does exactly what a PC does, then you are shopping in the wrong place. You need to keep an open mind about the capabilities of a VDI solution. Later on we will discuss the different types of VDI and how they differ from each other. As a general standard, if up to 70% of your users are low-end (office, call centre, browsing), then VDI is a good, feasible approach, but don't try to cram the extra 30% into that play. A very heavy, visually intensive user is better off with a PC of his own.

3. Choose your platform. In today's market you have two ways of going about this. Either you use a fully software-based VDI, where you virtualize a processing/storage pool and project it onto existing PCs, so that each PC acts as a terminal and gets its session and resources from the pool; or you go fully fledged and invest in thin-client/zero-client dummy devices to replace your PCs and act as your terminals. A lot of customers use a hybrid of both: for new deployments they buy thin clients, while utilizing the existing PC as a terminal until it is depreciated by the finance department and becomes feasible to replace.

4. Do a proper sizing. Four aspects are usually involved in the sizing activity:

a. Server infrastructure sizing. This is where all of your VDI sessions will live and play, as well as your brokers, the gatekeepers between your clients and their virtual sessions. The sizing of these servers needs to come from finding the right sizing formula (e.g. each Xeon core can hold up to 10 users, with 1.5 GB of RAM and 100 GB of disk per Windows 7 user; the sketch after point d below turns this formula into code). You also need to make sure you have plenty of PCI NICs going out of the boxes, since this kind of solution draws a lot of I/O bandwidth, so concentrate on that.

b. Storage infrastructure sizing. You will need a good SATA box with proper dual controllers, a decent amount of RAM (24 GB) and a read flash card if possible. You will also need as many PCI NICs going out of the box as you can get. Deduplication, compression and RAID-DP are a must in these scenarios to tackle the bottlenecks you might face on your storage array.

c. Network infrastructure sizing. Make sure all of your connectivity, on all sides, is redundant. Always remember that in projects like these the entire setup is one big single point of failure unless you make a huge investment and duplicate the whole thing, so get good switches with sufficient port counts and, if possible, create 10 Gbps connectivity between server and storage, since it will enhance your solution.

d. HA sizing. Always do HA sizing and planning, especially on the server side. For example, if 200 users can run on two servers at 60% utilization each, that is not HA: as soon as the first server goes down, the second one will not be able to handle the load and will probably crash as well. Always have an N+1 topology in mind when you size.
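Here is the sizing sketch promised in point a: a back-of-the-envelope Python calculator built on the illustrative figures above (10 users per Xeon core, 1.5 GB RAM and 100 GB disk per Windows 7 user; rules of thumb to replace with your own benchmarks, not vendor-validated numbers), with the N+1 spare from point d built in.

import math

USERS_PER_CORE = 10     # concurrent Windows 7 sessions per Xeon core
RAM_PER_USER_GB = 1.5   # RAM per session
DISK_PER_USER_GB = 100  # disk per user

def servers_needed(users: int, cores_per_server: int,
                   ram_per_server_gb: float) -> int:
    """Servers needed to carry the load, plus one spare for N+1."""
    by_cpu = math.ceil(users / (cores_per_server * USERS_PER_CORE))
    by_ram = math.ceil(users * RAM_PER_USER_GB / ram_per_server_gb)
    return max(by_cpu, by_ram) + 1  # +1: survive one failure at full load

users = 200
n = servers_needed(users, cores_per_server=12, ram_per_server_gb=192)
print(f"{users} users -> {n} servers (incl. one spare), "
      f"{users * DISK_PER_USER_GB / 1024:.1f} TB of disk")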

5. Choose your VDI software. In today's market there are four top VDI players, with Citrix XenDesktop on top, followed by VMware, Microsoft and Oracle VirtualBox. In concept all four perform in the same manner; however, they compete mainly on GPU acceleration, compression and security. I would advise testing all four to see what suits your taste, weighing the free licensing of Microsoft and Oracle against the capabilities of Citrix and VMware. That is a decision you need to take based on your running applications and environment.

6. Choose your client. Virtually everybody today has a thin/zero client in their portfolio, be it Dell, HP, IBM, Oracle or Fujitsu; even Huawei has a $75 Linux-based device that performed surprisingly well in our benchmark. When this all started, client devices were dummies for basic operations; as IT and the business evolved, there was demand for the client to do more and more, up to the point of running graphically intensive applications. Nowadays a client with a GPU, a small RAM chip and a built-in OS is very popular and in demand, but relatively more expensive to purchase and maintain. One of the leading GPU-enabled client providers would have to be Fujitsu, while Oracle/Sun still presides over the zero-client line as the leader in that field, with the easiest-on-the-eye design in its new Mac-look Sun Ray 3i all-in-one 24-inch machine.

At the end of it, it all comes down to how you are planning to use VDI. Today you can run your VDI on your pad, your mobile, PCs, thin clients and even browsers running Java; tomorrow the sky is the limit for this technology, since it is seen as the user interface to the cloud and an inseparable part of the modern infrastructure.


December 4, 2011  2:55 PM

How to turn your 100TB Oracle DB into a 10TB DB?

Alaa Samarji Profile: AlaaSamarji

In today's market we find that everybody who is somebody has some form of Oracle database running somewhere in their datacenter, for the simple reason that it's just that good versus its competition. But what do we do when the data starts getting bigger and bigger? We start facing slow query performance, the risk of silent corruption grows over time, and we find ourselves constantly upgrading storage disks and buying redundancy software just to keep the performance we had when we first purchased the license.

Some time back, Oracle launched the Exadata machines: huge, million-dollar DB boxes that could take you to the moon with the sheer power they were packing. They used a secret weapon to achieve that crazy performance, and that weapon was named Hybrid Columnar Compression.

So what is Hybrid Columnar Compression? By definition, Hybrid Columnar Compression (HCC) on Exadata enables the highest levels of data compression and provides enterprises with tremendous cost savings and performance improvements due to reduced I/O. HCC is optimized to use both database and storage capabilities on Exadata to deliver tremendous space savings and revolutionary performance. Average storage savings can range from 10x to 15x depending on which compression level is implemented; real-world customer benchmarks have reportedly produced storage savings of up to 204x.

I remember the first time I saw those numbers I thought it was just a marketing stunt, until I popped the hood and took a look inside the technology, and what I found was pretty impressive.

Traditionally, data has been organized within a database block in a 'row' format, where all the column data for a particular row is stored sequentially within a single database block. An alternative approach is to store data in a 'columnar' format, where data is organized and stored by column. That saves space; however, it can dramatically bring down performance for row-oriented access.

Oracle's Hybrid Columnar Compression technology is a new method of organizing data within a database block. As the name implies, it uses a combination of both row and columnar methods for storing data. This hybrid approach achieves the compression benefits of columnar storage while avoiding the performance shortfalls of a pure columnar format.

So, simplifying, this is how a row-format database usually stores your data: the values of each row sit together, with the columns interleaved row after row:

[row 1: A1 B1 C1]  [row 2: A2 B2 C2]  [row 3: A3 B3 C3]  ...

With hybrid columnar, the rows inside a compression unit are pivoted so that the values of each column sit together, and similar values stored side by side compress far better:

[column A: A1 A2 A3 ...]  [column B: B1 B2 B3 ...]  [column C: C1 C2 C3 ...]

By regrouping the blocks in this manner, the compression is immense and the results really are incredible.
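If you want to kick the tires, here is a minimal sketch of turning HCC on for a table and comparing segment sizes, written in Python with the cx_Oracle driver. It assumes an Oracle database running on HCC-capable storage (Exadata, or a ZFS appliance as discussed below); the connection details and the SALES table are hypothetical placeholders.

import cx_Oracle

conn = cx_Oracle.connect(user="scott", password="tiger", dsn="dbhost/orclpdb")
cur = conn.cursor()

# Rebuild the table with HCC. QUERY HIGH is the usual warehouse sweet spot;
# ARCHIVE HIGH compresses hardest, for data that is rarely read.
cur.execute("""
    CREATE TABLE sales_hcc
    COMPRESS FOR QUERY HIGH
    AS SELECT * FROM sales""")

# Compare the uncompressed and compressed segment sizes.
cur.execute("""
    SELECT segment_name, ROUND(bytes / 1024 / 1024) AS mb
    FROM user_segments
    WHERE segment_name IN ('SALES', 'SALES_HCC')""")
for name, mb in cur:
    print(f"{name}: {mb} MB")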

Now, having said that, everybody got really excited, as Oracle had found the solution to a problem that had haunted the masses for years. Who would say no to a technology that can bring a 100TB database down to 10TB and save thousands and thousands of dollars on storage array disks, licenses and maintenance charges? And who would say no to a 10x performance improvement? Everybody knows a smaller database runs faster than a larger one, so it's just common sense. And why say no if this feature comes free with the box and won't cost a thing? Or will it?

Those Exadata machines come with a bill, and a big one at that, so all those super features, like cutting DB size by 10x and increasing performance by 10x, looked a bit far-fetched to the SMB market, until Oracle declared that this technology is now available on the ZFS appliances. That means even the cheapest $20K ZFS box can do what the million-dollar Exadata does, so the real big question to ask after that is: what are you still waiting for?


December 4, 2011  2:49 PM

Can you really consolidate your SANs and NASes?

Alaa Samarji Profile: AlaaSamarji

I remember, a couple of years back, the storage business was not a business to look out for. The market didn't really understand or accept the concept of putting all your file data in a single place; everybody just loved having tons of file servers and spinning disks in RAID 5.

Back then, a storage case at a customer site would truly make everyone run for their money, since SAN storage was strongly associated with high price and high risk. It was simply not worth it, so people tended to keep their focus on the server/networking business and leave storage as a secondary target.

Today I am quoting and consulting for storage boxes as if they were PCs. Everybody just needs to get one, whether they need it or not; it just happens that storage is the next big thing in the IT world and everybody is talking about it.

It has been increasingly noted that more and more businesses have multiple storage brands: for example, one customer will have an EqualLogic box and an LSI box, while another will be running an EMC box alongside a NetApp LSI box, etc. I might agree with those businesses that multi-vendor means not putting all our eggs in a single basket; however, with this huge data explosion happening everywhere, we end up with as many as four storage boxes per site, or more, in a multi-vendor scenario, and this is making the storage administrator's life a living hell.

One way to solve this would be to migrate the entire thing onto one big, solid storage box; however, try getting a quotation for a migration plan from one box to another, and you will never hear the end of it from your CFO.

Other technologies have emerged from the abyss to tackle such challenges, so we find software like VMware's offerings, Symantec Storage Foundation, or even RELDATA, which virtualize the disk arrays and let your applications move freely between boxes, since, thanks to this software layer, the boxes all speak the same language.

For me, using one of those solutions is just stretching the highway before hitting the wall, because eventually you will hit that wall!

All of those solutions will work for now, but they will not give you the satisfaction of really merging all of those storage arrays. However, I believe I have found the solution!

Introducing the all-new NetApp V-Series. It's not all that new, but for some reason not a lot of people have heard about it. So what is a V-Series?

If you're familiar with NetApp controllers, a V-Series is just a controller without disks, looking like the 3200 or 6200 FAS series, and what it does is actually swallow a third-party storage system whole!

Yes, this is not a typo: a V-Series actually swallows your HP, EMC, Dell or even old NetApp boxes and assumes command of their disks in a very seamless and time-effective manner.

What NetApp is saying is that there is no need to throw away your old disks, no need for a migration plan, no need for major planned downtime and the risks that come with it. A NetApp V-Series controller will sit on top of your old controller (HP, EMC, IBM, NetApp, etc.), bypass it and connect directly to the storage arrays you are currently using.

Upon doing that, the V-Series assumes full control over those arrays, maintains the RAID configuration and takes over the role of their processing controllers.

But that's not all. The storage arrays will now get all the nice features and software that come with NetApp, like FlexCache, SnapMirror, SnapRestore, deduplication and so on. Also, the V-Series doesn't just branch out onto a single storage array; it will branch out across your entire storage pool. In other words, one V-Series will swallow all the storage brands you have, at once and at the same time, boost performance dramatically and give you the ability to keep growing your data and storage arrays without ever going back to the original vendor.

So what are we looking at eventually?

We are looking at a high-performing controller with all the I/O, PCI slots, flash cards, bells and whistles that you find in a high-end storage controller. This controller will assume control of all the disk arrays in your entire datacenter, regardless of where those arrays came from or from which vendor.

No migration is required, since everything stays in place. You have just improved the performance per storage pool, but you are not moving it anywhere, unless you want to, with a couple of clicks.

You are saving tons of money by not throwing away the old hard disks and replacing them with new ones.

You are getting a consolidated storage administration platform, Data ONTAP, to manage the whole thing.

You are now free to purchase new disk arrays and simply add them to the V-Series, and keep growing that tree.

At the end of it, a V-Series is not a cheap solution, nor is it for faint-hearted IT managers. But if you have a multi-platform datacenter and you just wish you could club all of those machines and maintenance contracts into a single box with minimum downtime and cost, have a look at the V-Series, because it is the answer to your prayers.

