Category Archives: General Info

HP ends Hitachi relationship

Well, this may be a bit premature and I don’t have any insight into Leo’s agenda, but when you apply some common sense and logic you cannot draw any conclusion other than that this will happen within the foreseeable future. “And why would that be?” you say. “They (HP) have a fairly solid XP installed base, they seem to sell enough of it to make it profitable, and they have also embarked on the P9500 train.”

Yes, indeed. However, take a look at it from the other side. HP currently has four lines of storage products: the MSA, inherited through the Compaq merger, which comes out of Houston and is specifically targeted at the SMB market; the EVA, from the Digital/Compaq StorageWorks stable, which has been the only HP-owned modular array to do well in the SME space; the XP/P9500, obviously through their Hitachi OEM relationship; and, since last year, the 3-Par kit. When you compare these products they overlap in many areas, especially in the open systems space. The R&D budgets for all four products therefore eat up a fair amount of dollars. Besides that, HP also has to set aside a huge amount of money for sales, pre-sales, services and customer support, for training, marketing and so on, to be able to offer solutions of which a customer will only ever choose the one that fits their needs. So just from a product perspective there is a 1:4 sales ratio, and that is before mentioning the choices customers have from the competition. For the lower part of the pie (MSA and small EVA) HP relies heavily on its channel, but from a support and marketing perspective this still requires a significant investment to keep those product lines alive. HP has just released the latest generation of the EVA but, as far as I know, has not commented on future generations. It is to be expected that as long as the EVA sells like it always has, its development will continue.

With the acquisition of 3-Par last year HP dived very deep into its money pit and paid 2.3 billion dollars for the company. You don’t make such an investment just to keep a certain product out of the hands of a competitor (Dell in this case). You want this product to sell like hotcakes so you can shorten your ROI as much as possible, and Leo has quite a few shareholders to answer to. It then comes down to where you get the most margin, and it is very clear that when you combine 3-Par’s ROI needs with the margins HP will obviously make on that product, HP will most likely prefer to sell 3-Par before the XP/P9500, even if the latter would be a better fit for the solution the customer needs. When you put it all together you’ll notice that even within HP’s storage division there is a fair amount of competition between the product lines, and none of their R&D departments wants to lose. So who has to give?

There are two reasons why HP would not end its relationship with Hitachi: mainframe and customer demand. None of the native HP products has mainframe support, so if HP decides to end the Hitachi relationship it will certainly lose that piece, and it runs the risk that the same customer chooses the competition for the rest of the stack as well. Also, XP/P9500 customers who have already made significant investments in Hitachi-based products will most certainly not like a decision like this. HP, however, is not reluctant to make these harsh decisions; history proves they’ve done it before (abruptly ending an OEM relationship with EMC, for example).

So, if you are an HP customer who just invested in Hitachi technology, rest assured you will always have a fallback scenario, and that of course is to deal with Hitachi itself. Just broaden your vision and give HDS a call to see what they have to offer. You’ll be very pleasantly surprised.

Regards,
Erwin

(post-note 18-05-2011) Some HP customers have already been told that 3-Par equipment is now indeed the preferred solution HP will offer unless mainframe is involved.

(post-note 10-07-2011) More and more proof is surfacing. See Chris Mellor’s post on El Reg over here.

Is there anything Linux does not have??

I’ve been using Linux since 1997, and back in the “good old days” it could take weeks to get a proper setup that actually had some functionality beyond the Royal Kingdom of Geekness. It was a teeth-pulling exercise to get the correct firmware and drivers for a multitude of equipment, and if they didn’t exist you relied on the willingness of hardware vendors to open up their specs so you could work on it yourself.


So much has changed over these last 15 years: my refrigerator and my phone now run Linux, as does the Large Hadron Collider, and even space stations run on Linux. A vast number of manufacturing consortiums are actively developing on and for Linux, and it looks like the entire IT industry is shifting from proprietary operating systems to this little open source project Mr. Torvalds kicked off almost two decades ago. His fellowship in the IT Hall of Fame is well deserved.

One area where Linux is hardly seen is the regular desktop in people’s homes and offices, and this is one of the big shortfalls Linux still has. All of the examples mentioned above are really specialised, tailored environments where Linux can be “easily” adapted to suit exactly that particular need, and it does an incredible job at it. The people who use Linux either have a more than average interest in computing or fall into the coke and chips/pizza category (yes, geeks, that is). Just walk into a computer store, ask for a PC or laptop, but have them remove the Windows operating system, subtract the MS license fee from the invoice, and ask for Fedora/Ubuntu/“you name one of the hundreds of distros” to be installed instead. Chances are fairly high you’ll get some glazed eyes staring at you. This is the big problem Linux faces.

From a hardware support level most of it is fairly well covered. Maybe not under open source licenses, but from a usability perspective this doesn’t really matter.

Although the Linux Foundation does a good job in promoting and evangelizing Linux, it will never have the operational and financial power of companies like Microsoft, so a commercial head-on attack is doomed to fail. The best approach, I think, although perceived as long-term thinking, is via the educational system: make sure young children get in touch with different operating systems so they have a choice of what to use in the future. I recently knocked Windows off my somewhat older laptop and installed Ubuntu. My kids are now using it for all sorts of things. My son discovered the command line and he’s getting curious. (He thinks he’s smart, so I use SELinux; pretty annoying for him. :-))
The thought behind all this is that they also get another view of what computers can do, and that there is more than MS.

As for day-to-day apps, I think Linux still falls short on office automation. Regarding functions and features it still can’t compete with MS, but the catch-up game has begun.

Cheers
Erwin van Londen

Server virtualisation is the result of software development incompetence

Voila, there it is. The fox is in the hen-house.

Now let me explain before the entire world comes down on me. 🙂
First, I am not saying that software developers are incompetent. In fact I think they are extremely smart people.
Second, the main reason for my statement is that, because Moore’s law is still active, we have more or less got used to seemingly unlimited resources with respect to CPU, memory, bandwidth and so on, so developers most of the time write their own ideas into the code without looking at better or more appropriate alternatives.


So let’s take a look at why this server virtualization got started in the first place.

The mainframe guys in the good old days already acknowledged the problem that application developers didn’t really give a “rat’s ass” about what else had to be installed on a system. They assumed that their application was the most important and deserved a dedicated system with the resources to match. Now this was a mainframe environment, which already had strict rules regarding system utilization, yet the problem remained that conflicting shared libraries caused application havoc. So instead of flicking the application back to the developers, IBM had to come up with something else, and virtual instances were born. Now I’m too young to recollect the year this was introduced but I assume it was somewhere in the 70’s.

When Bill Gates came to power in the desktop industry, and later in the server market, you would assume they had learned something from the mistakes made in the past. Instead they came out with MS-DOS (I’m ignoring the OS/2 bit, which they had some involvement in as well).
Now I’m fully aware that an Intel 8086 CPU had nowhere near the capabilities of the CPUs in mainframes or minis, but the entire architecture was built for single-system, single-application use.
They ignored the fact that one system could do more than one task at the same time, and application developers wrote whatever they saw fit for their particular needs. Even today, with Windows and the Unixes, you are very often stuck with conflicting dependencies on libraries, compiler versions and so on. Some administrators have called this DLL hell.

I’ve been personally involved in sorting out this mess with different applications that had to run on single systems, so in that sense I know what I’m talking about.

So, since the OS developers were constrained by business requirements (in the sense that they could not enforce hard restrictions on application development), they more or less had no means to overcome this problem.

Then came some smart guys and dolls from Berkeley who started with a product that let you install an entire operating system in a software container, with every resource needed by that operating system directed through the container, and voila: VMWare was born.

From my perspective this design has been the stupidest move ever made. I’m not saying the software is not good, but from an architectural point of view it was a totally wrong decision. Why should I waste 30% or more of my resources by needing to install the same kernel, libraries and functionality two, three, ten, twenty times?

What they should have done was build an application abstraction layer which makes an inventory of the underlying OS type, functionality, libraries and so on. (You can safely assume that in current server farms each server has the same inventory if deployed from a central repository; even if not, this abstraction layer could detect and fix that.) This way you can create lightweight application containers which share all common libraries and functionality from the OS that sits below this layer, and if that is not enough, or something conflicts with these shared libraries, they use a library or other settings locked inside the application container.
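Just to make the idea a bit more concrete, here is a minimal Python sketch of what that inventory step could look like, assuming a Linux-style host. The manifest format and the library names are purely hypothetical, and only the presence of a library is checked, not versions or conflicts.

```python
# Sketch of the "application abstraction layer" inventory step: compare the
# libraries an application declares against what the host OS already provides
# and bundle privately only what is missing. Manifest and names are made up.
import ctypes.util

# Hypothetical application manifest: libraries this application needs.
app_manifest = ["z", "ssl", "fancydb"]   # zlib, OpenSSL, and something exotic

def plan_container(manifest):
    """Split the manifest into libraries shared from the host OS and
    libraries that have to be locked inside the application container."""
    shared, private = {}, []
    for lib in manifest:
        found = ctypes.util.find_library(lib)   # e.g. 'libz.so.1' or None
        if found:
            shared[lib] = found                 # reuse the host copy
        else:
            private.append(lib)                 # host doesn't ship it: bundle it
    return shared, private

if __name__ == "__main__":
    shared, private = plan_container(app_manifest)
    print("Shared from host OS:    ", shared)
    print("Locked inside container:", private)
```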

Now here comes the fun part. If I need to move applications to another server I don’t need to move entire operating systems that rely on underlying storage infrastructures; instead I can move, or even copy, this application container to one or multiple servers. After that has been done you should be able to keep the application containers in sync, so if one gets a corrupt file for whatever reason the abstraction software can correct it. This way you’re also assured that if I need to change anything, I only need to do it in the configuration within that container.
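Again purely as an illustration of the idea, a small Python sketch of such a sync-and-repair pass; the container paths are made-up placeholders.

```python
# Keep application containers in sync: compare file checksums in a replica
# against a reference copy and repair anything missing or corrupt.
import hashlib
import shutil
from pathlib import Path

def checksum(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def sync_container(reference: Path, replica: Path) -> None:
    """Copy any file from the reference container whose replica copy is
    missing or has a different checksum (e.g. corruption)."""
    for src in reference.rglob("*"):
        if not src.is_file():
            continue
        dst = replica / src.relative_to(reference)
        if not dst.exists() or checksum(dst) != checksum(src):
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            print(f"repaired {dst}")

if __name__ == "__main__":
    # Hypothetical paths: master copy on one server, replica on another.
    sync_container(Path("/srv/containers/app1"), Path("/mnt/server2/app1"))
```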

This architecture is far more flexible and can save organizations a lot of money.

The problem is: this software doesn’t exist yet. 🙂 (except maybe in development labs which I don’t have visibility of.)

You can’t compare it to cloud computing, since currently that is far too limited in functionality. Clouds are built with a certain subset of functionality, so although on the front end you see everything through a web browser, that doesn’t mean it operates the same way on the back end in the data centres. Don’t make the mistake of thinking that setting up a cloud infrastructure will solve your problems; you need a serious amount of real estate to even think about cloud computing.

The application container architecture mentioned above lets you grow far more easily.

Cheers,
Erwin

P.S. I used VMWare as an example since they are pretty well known, but the same goes for all other server virtualisation technologies like Xen, Hyper-V etc.

Open Source Storage

Storage vendors are getting nervous. The time has come that SMB/SME-level storage systems can be built from scratch with just servers, JBODs and some sort of connectivity.

Most notably SUN (or Oracle these days) has been very busy in this area. Most of the IP was already within SUN, the Solaris source code has been made available, and they have an excellent file system (ZFS) which scales enormously and has a very rich feature set. Now extend that with Lustre** and you’re steaming away. Growth is easily accomplished by adding nodes to the cluster, which simultaneously increases the IO processing power as well as the throughput.


But for me the absolute killer app is COMSTAR. With it you can create your own storage array from commodity hardware and make your HBAs Fibre Channel targets. Present your LUNs and connect other systems to them via a Fibre Channel network; better yet, even iSCSI and FCoE are part of it now. Absolutely fabulous. These days there is little reason to buy an expensive proprietary array rather than use the kit you already have. Oh yes, talking about scalability: is 8 exabytes on one filesystem, over a couple of thousand nodes in a cluster, enough? If you don’t have these requirements it works on a single server as well.
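To give an idea of how little is involved, here is a rough sketch of the usual COMSTAR iSCSI workflow on an OpenSolaris host, driven from Python purely for illustration. The pool and volume names are placeholders, and the LU GUID has to be replaced with the value printed by sbdadm.

```python
# Rough sketch of a COMSTAR iSCSI target setup, wrapped in Python for
# illustration only. Assumes an OpenSolaris/illumos host with the COMSTAR
# packages installed; 'tank/lun0' and the GUID are hypothetical placeholders.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# 1. Carve a block volume out of a ZFS pool.
run(["zfs", "create", "-V", "100G", "tank/lun0"])

# 2. Register the zvol as a SCSI logical unit; the command prints its GUID.
print(run(["sbdadm", "create-lu", "/dev/zvol/rdsk/tank/lun0"]))

# 3. Expose the LU (replace the placeholder with the GUID printed above).
run(["stmfadm", "add-view", "600144F0..."])

# 4. Create an iSCSI target that initiators can log in to.
run(["itadm", "create-target"])
```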

The only thing lacking is mainframe support, but since the majority of systems in data centres are Windows or some sort of Unix farm anyway, this can be an excellent candidate for large-scale open source storage systems. Now that should make some vendors pretty nervous.

Regards,
Erwin

**ZFS is not yet supported in Lustre clusters but it is on the roadmap for next year.

Address space vs. Dynamic allocation

This article is somewhat of a successor to my first blog, “The future of storage”. I discussed that article with Vincent Franceschini in person a while ago, and although we have different opinions on some topics, in general we agree that we have to get more insight into the business value of data. This is the only way we can shift the engineering world to a more business-focused mindset. Unfortunately, today the engineering departments of all the major storage vendors still rely on old protocols like SCSI, NFS and CIFS, which all have some sort of limitation, generally address space.

To put this in perspective, it’s like building a road of a certain length and width which has a capacity for a certain number of cars per hour. It cannot adapt dynamically to a higher load, i.e. more cars; you have to build new roads, or add lanes to existing ones if that is possible at all, to cater for more cars. With the growth of data and the changes companies are facing today, it’s time to come up with something new. Basically this means we have to step away from technologies which have limitations built into their architecture. Although this might look like boiling the ocean, I think we cannot afford the luxury of trying to improve current standards while the “data boom” is running like an avalanche.
Furthermore, it is becoming too hard for IT departments to keep up with the knowledge needed in every segment.

The question is: how do we accomplish this? In my opinion the academic world, together with the IT industry, has huge potential to develop the next generation of IT. In current IT environments we run into barriers of all sorts: performance, capacity, energy supply, and so on.

So here’s an idea. Basically every word known to mankind has been written millions of times, so why do we need to write it over and over again? What can be done instead is reference these words to compose an article. This leads both to a reduction in the storage capacity needed and to a referenceable index which can be searched. The index information can be kept in a SNIA XAM format, which also enables storage systems to leverage this information and dynamically allocate the required capacity, or attach business value to these indexes. This way the only things that need to be watched are the integrity of the indexes and of the word catalog. Another benefit is that when a certain word changes its spelling, the only thing that needs to be changed is that word in the catalog; since all articles just hold references to this word, the spelling is adjusted accordingly. (I’ll bet I will get some comments about that. :-))
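A toy Python sketch of the idea, just to show the mechanics: each unique word is stored once in a catalog and a document is nothing more than a list of references into it.

```python
# Toy illustration of the "write every word only once" idea.
catalog = []     # each unique word stored exactly once
index_of = {}    # word -> position in the catalog

def store(text):
    """Return a document as a list of catalog references."""
    refs = []
    for word in text.split():
        if word not in index_of:
            index_of[word] = len(catalog)
            catalog.append(word)
        refs.append(index_of[word])
    return refs

def retrieve(refs):
    return " ".join(catalog[i] for i in refs)

doc = store("the quick brown fox jumps over the lazy dog")
print(doc)            # [0, 1, 2, 3, 4, 5, 0, 6, 7] -- 'the' is stored once
print(retrieve(doc))  # the quick brown fox jumps over the lazy dog

# Changing the spelling once in the catalog changes it in every document
# that references the word.
catalog[index_of["fox"]] = "foxx"
print(retrieve(doc))  # ... brown foxx jumps ...
```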

As you can see, this kind of information storage and retrieval totally eliminates the need for de-duplication, since everything is written only once anyway, which in turn has major benefits for storage infrastructures, data integrity, authority and so on. Since the indexes themselves don’t have to grow, thanks to automatic elimination based on business value, the concept of Dynamic Allocation has been achieved. OK, there are some caveats around different formats, languages and overlapping context, but these can be taken care of by linguists.

The Smarter Storage Admin (Work Smarter not Longer)

Let’s start off with a question: who is the best storage admin?
1. The one who starts at 07:00 and leaves at 18:00
2. The one who starts at 09:00 and leaves at 16:00

Two simple answers, but they can make a world of difference to employers. Whenever an employer answers with no. 1, they often remark that this admin does a lot more work and is more loyal to the company. They might be right; however, the daily time spent at work is not a good measure of productivity, so the amount of work done might well be less than that of no. 2. This means that an employer has to measure on other points and define clear milestones that have to be fulfilled.


Whenever I visit customers I often get the complaint that they spend too much time on day-to-day administration: digging through log files, checking status messages, restoring files or emails, and so on. These activities can occupy more than 60% of an administrator’s day, and much of that can be avoided.
To be more efficient, one has to change the mindset from knowing everything to knowing what doesn’t work. It’s a very simple principle, but to get there you have to do a lot of planning.
An example: when a server reboots, do I want to know that its switch port goes offline? Maybe I do, maybe I don’t. It all depends on the impact of that server. Was the reboot planned or not, or does the server perhaps belong to a test environment, in which case I don’t want to get a phone call in the middle of the night at all?
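This “know what doesn’t work” filtering can be captured in a few lines. Here is a toy Python sketch; the event fields and the planned-change list are hypothetical, not taken from any real monitoring product.

```python
# Toy "management by exception" filter: only page someone when an event
# concerns production equipment and is not part of a planned change.
planned_changes = {"server42"}          # hosts with an approved change window

def should_alert(event):
    if event["environment"] != "production":
        return False                    # test gear never wakes anyone up
    if event["host"] in planned_changes:
        return False                    # expected as part of a planned reboot
    return event["severity"] in ("warning", "error")

events = [
    {"host": "server42", "environment": "production",
     "severity": "error", "message": "switch port offline"},
    {"host": "server77", "environment": "test",
     "severity": "error", "message": "switch port offline"},
    {"host": "server13", "environment": "production",
     "severity": "error", "message": "switch port offline"},
]

for e in events:
    if should_alert(e):
        print("PAGE ON-CALL:", e["host"], "-", e["message"])
```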

The software and hardware in a storage environment consist of many different components, and they all have to work together. The primary goal of such an environment is to move bytes back and forth to disk, tape or another medium, and they do that pretty well nowadays. The problem, however, is the management of all these different components, which requires all sorts of different management tools, learning tracks and operational procedures. Even if we shift our mindset to “what doesn’t work”, we still have to spend a lot of time and effort on things we often don’t want to know about.

Currently there are no tools available that support the whole range of hardware and software, so for specific tasks we still need the tools the vendors provide. However, for day-to-day administration there are some good tools which can be very beneficial for administrators. These tools can save more than 40% of an administrator’s time, so they can do more work in less time. It takes a simple calculation to determine the ROI, and another plus is that the chance of making mistakes is drastically reduced.

Another thing to consider is whether these tools fit into the business processes, if these are defined within the company. Does the company have ITIL, Prince2 or any other IT service management method in place? If so, the storage management tool has to align with these processes, since we don’t want to do things twice.

Last but not least is support for open standards. The SNIA (Storage Networking Industry Association) is a non-profit organization which was founded by a number of storage vendors in the late 90’s. The SNIA works in conjunction with its members around the globe to make storage networking technologies understandable, simpler to implement, easier to manage, and recognized as a valued asset to business. One of its standards, recently certified by ANSI, is SMI-S. This standard defines a very large subset of storage components which can be managed through a single, common methodology. This means that you get one common view of all your storage assets, with the ability to manage them through a single interface independent of the vendor. If your storage management tool is based on this standard you do not have a vendor lock-in, and day-to-day operations will be more efficient.
This does imply that the vendor has to support the SMI-S standard, so make sure you make the right choice if you are looking for a storage solution, and ask the vendor whether they support the SMI-S standard and to what extent.
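To show what vendor-independent management looks like in practice, here is a minimal sketch using the third-party pywbem library to query an SMI-S provider over WBEM/CIM. The host, credentials and namespace are placeholders and differ per provider, and the properties queried may not be populated on every array.

```python
# Minimal sketch of vendor-neutral storage reporting over SMI-S.
# Requires the third-party 'pywbem' package; connection details are made up.
import pywbem

conn = pywbem.WBEMConnection(
    "https://smi-s-provider.example.com:5989",   # placeholder SMI-S provider
    ("admin", "secret"),                         # placeholder credentials
    default_namespace="root/cimv2",              # check your provider's namespace
)

# CIM_StorageVolume is part of the standard SMI-S model, so the same query
# works against any compliant array, regardless of vendor.
for vol in conn.EnumerateInstances("CIM_StorageVolume"):
    size_gb = (vol["BlockSize"] * vol["NumberOfBlocks"]) / 1024**3
    print(vol["ElementName"], round(size_gb, 1), "GB")
```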

Greetz,
Erwin