Category Archives: General Info

Linux Citrix Receiver client does not connect to server farm

I’ve had the issue where the Citrix XenApp Receiver on Fedora wouldn’t start because it popped up with a screen telling me I had decided not to trust a Verisign root certificate. Well, I hadn’t decided anything. The bloody app just didn’t install the certificates with it, nor did it use the standard location in /etc/ssl/certs for the certificates. Instead Citrix decides to install the Receiver app in the /opt/Citrix/receiver folder, which contains a “keystore/certs” subfolder.

Now there are a couple of things you can do here, but the easiest one is to download the CA root certificates from Verisign (over here: http://www.verisign.com/support/roots.html), extract them all into the above-mentioned folder, rename the *.cer files to *.crt and restart the app.
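For instance (a rough sketch; the exact folder depends on your Receiver install, and I’m assuming you saved the Verisign bundle as ~/roots.zip):

cd /opt/Citrix/receiver/keystore/certs               # the subfolder mentioned above
unzip ~/roots.zip                                    # extract the downloaded root CA bundle
for f in *.cer; do mv "$f" "${f%.cer}.crt"; done     # rename *.cer to *.crt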

Secondly, when you’re on a 64-bit version of Fedora you’re missing a shitload of 32-bit libraries which Citrix Receiver still seems to need, even though the RPM is labelled 64-bit, which gives a really false impression of the app:

wfica: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped

So in order to fix this you have to install some additional libs. The ones below worked for me; the install command follows the list.

alsa-lib.i686
motif-2.3.4-3.fc17.i686
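On Fedora that boils down to something like this (the exact motif version will differ per release):

yum install alsa-lib.i686 motif.i686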

Yes, these are also 32-bit libraries, so the app stays crippled to 32-bit.

That should fix it.
Hope this helps.

Cheers
Erwin

ipcalc on Fedora

One of the most handy utilities I’ve used for a long time is ipcalc. It basically gives you all sorts of information on IP addressing, subnets, etc. The output looks like this:

[screenshot: ipcalculator output]

It seems the Red Hat engineers have a mind of their own, so they came up with a different version which doesn’t resemble the above picture at all but looks like this, while using the same name:

[screenshot: ipcalc output]

So in case you still want the above, do a “yum install ipcalculator” and add an ipc alias as an abbreviation in your .bashrc file.
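For example, add this to ~/.bashrc (the alias name is just the abbreviation I use):

alias ipc='ipcalculator'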

ipc x.x.x.x/xx then gives the output shown above.

Cheers,
Erwin

Oracle (and when the SUN doesn’t shine anymore)

I’m not hiding the fact that I’ve been a SUN Microsystems fan all my life. They had great products, a great engineering philosophy and, best of all, great people who knew how to pick a potato. The problem was that they went down the same path as DEC (huhhh, who…???) DEC, Digital Equipment Corporation: one of those other fabulous engineering companies who fell prey to the PC world due to their lack of marketing knowledge and sales strategy. Google around for that.

Oracle was by far the worst company to acquire SUN. They have a massively different company mindset which is 100% focused on getting another boat for Uncle Larry (“I want you… oh no, I want your money”) and this went on to be a head-on collision with the SUN philosophy. The fact that Oracle had a massive war chest while SUN was struggling to stay afloat allowed them to get the entire SUN IP for a nickel and a dime.

The worst thing for Oracle was that all of a sudden they inherited a hardware division with, let’s be honest, great products, but also a huge drag on sales numbers (which was likely the reason for SUN’s struggles). No easy way out here, since product support and near-term roadmap line-ups had to be fulfilled. Oracle is, has always been, and always will be a software company, so over the last couple of years you could already see that the majority of all hardware products were starved to death. Don’t expect any new developments here.

SUN was bought for two reasons: Java and Solaris. Well, only certain parts of Solaris: COMSTAR was one of them and ZFS the other. Java of course was the biggest fish, since that piece of the pie runs in almost every device on the planet from cell phones to toasters. ZFS allowed Oracle to create Exadata and tailor it to very specific workloads (duhhhh… lemme guess: Oracle databases). The funny thing is that they almost give this Exadata box away, since they know it only performs well with their database, and that is where you start paying the big bucks.

So let’s get back to what is left of SUN. SUN was also a very big supporter of the open-source world. Projects like OpenOffice, NetBeans, GlassFish etc. are all neglected by Oracle and left to die a certain death. OpenOffice (originally acquired by SUN as StarOffice) had a really nice spin, since some developers had absolutely no trust in Oracle anymore and forked the entire code branch into LibreOffice, which is now the most actively maintained office suite outside M$ Office. Oracle is still hanging on to MySQL and allows some people to put some effort into this project. The reason is obvious: it’s a stepping stone to one of Oracle’s own big-bucks databases and suites, so the biggest sponsoring goes into migration software from MySQL to Oracle DB itself. If Oracle decides to pull the plug on MySQL it will simply be forked as well and continue under another name where Oracle has absolutely no insight and loses any business advantage. Don’t ever expect any Oracle IP to go into MySQL. Larry needs a bigger boat.

Another product SUN “donated” to the open-source world was OpenSolaris, a free (as in free beer) spin of SUN’s mainstream operating system. SUN’s intention for OpenSolaris was to provide a free platform which gave developers easy access to Solaris. This would allow more applications to become available, and as such a larger ecosystem to live on in the companies using those applications. The stepping stone to a revenue-generating operating system for those applications would then be real easy; a similar fashion to what Microsoft has followed for quite a while (provide a really cheap consumer product for developers to hook onto and sell at a premium to companies). Unfortunately it wasn’t meant to be, so as soon as Oracle took over, the OpenSolaris project was starved to death.

So when taking into account all the things that happened with the SUN acquisition, it is very sad to see such great products and philosophy butchered by pure greed. Many distinguished engineers like James Gosling, Tim Bray and Bryan Cantrill left immediately and many more followed. The entire Drizzle team resigned, as well as all of the JRuby engineers. In fact the only SUN-blooded executive to stay was John Fowler, who held onto his hardware group.

In retrospect, the only thing Oracle bought for those 5.6 billion dollars is Java, which is a very heavy price for a piece of software (soon to become obsolete) and an empty shell.

This once more shows that great products will always lose against marketing, an effective salesforce and a money-hungry CEO.

Don’t get me wrong, I’m not against someone making a fair chunk of money, but effectively killing off an entire company and leaving so many people out in the cold doesn’t really show any form of ethics. A good friend of Larry’s, the late Steve Jobs, had similar characteristics; however he also had a heart for great products.

Regards,
Erwin

PS. The comment about Java becoming obsolete is because many major new web technologies are now being put in place to bridge the gap to Java. This includes semantics, document and extensive data control, device control, etc. Within 5 years Java will likely have a serious competitor which allows developers more freedom and interoperability than Java can now provide.

tcsd service failed to start

With Fedora comes an option to have tcsd installed. Well, it’s not really an option; it apparently installs by default. This got me a bit baffled to see failed services on an almost brand-new PC.

So what exactly is this tcsd service?

It turns out that tcsd is a user-space daemon to interact with the TPM kernel module, which is needed for hardware-provided encryption services. For this you need a TPM chip, and since I don’t have one (nor am I likely to need one in the near future) I’m fine turning this off with “systemctl disable tcsd.service”.
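A quick sketch of the check-and-disable sequence (the /dev/tpm* node only appears when a TPM chip and its kernel module are present):

ls /dev/tpm*                     # "No such file or directory" means no TPM is exposed
systemctl disable tcsd.service   # don't start it at boot
systemctl stop tcsd.service      # and stop the failed unit now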

The tcsd service is a small section in the overall Trusted Computing Platform stack of solutions. The overall goal is to have a piece of hardware covering encryption services for all levels of the computing stack. The idea is to have a separate, bulletproof section in the system providing a trust chain that doesn’t rely on memory and storage. This prevents rootkits and other types of malicious stuff from infecting your system. By system I don’t specifically mean a PC or server, since the stack is meant to be open for all sorts of equipment; if you need to secure your toaster you could potentially do so. You’ll also find the TPM architecture used by companies like Hitachi, Boeing, Cisco and Microsoft. From a storage perspective TPM also plays a role in the SNIA Storage Security Industry Forum.

The overall specification is outlined by the Trusted Computing Group. A fairly large group of companies who define and contribute to the specification and develop products for this specific purpose.

Many open-source resources exist on the web, but for the best start go to the above-mentioned link. The TrouSerS libraries are the Linux open-source interfaces, mainly developed by IBM with help from many around the world.

See http://trousers.sourceforge.net

This page provides a short overview of what sits where in the TCG stack.

What I don’t know (yet) is where this all might play in the UEFI discussion Microsoft kicked off a while ago. They either complement each other or you’ll have conflicts. Don’t know yet. Might be worthwhile investigating.

Cheers,
Erwin

NVIDIA card and Nouveau

So with the new box I ordered an NVIDIA GeForce GT 640 graphics card. I need some desktop real estate and thus a card capable of a very high resolution. This one sat very nicely in the middle from a price and performance perspective.

Since a couple of kernel versions ago, Linux comes with the open-source Nouveau drivers, the alternative for the official NVIDIA drivers which are still closed source. I’m not the kind of guy who buys a very good piece of machinery to let it be crippled by incomplete drivers. (No offence to the Nouveau developers; it’s not their fault NVIDIA doesn’t play nice with the open-source world.) So I do want to use the official drivers, but that runs you into a problem since the Nouveau drivers are loaded by default.

This calls for some blacklisting, so in /etc/modprobe.d you add a new file called blacklist-nouveau.conf with a one-liner:

blacklist nouveau
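From a root shell this is a one-liner, e.g.:

echo "blacklist nouveau" > /etc/modprobe.d/blacklist-nouveau.conf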

This prevents the nouveau driver from being loaded at boot time. At least that’s what you think 🙁

Then install the official NVIDIA driver with “yum localinstall “.

It turns out that the Nouveau driver is also included in the initramfs boot image, so you have to copy or rename that one and use dracut to create a new one which takes your blacklisted Nouveau driver into account.
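First keep a copy of the existing image, for example:

#> cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak

and then rebuild it: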

#> dracut -f /boot/initramfs-$(uname -r).img $(uname -r)

Then reboot the system once more and you’re done.

lsmod then shows you a line like this:
nvidia              11262717  41
and the nouveau driver is out of the picture.
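To check both in one go:

#> lsmod | grep -E 'nvidia|nouveau'

which should only return the nvidia line.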

Cheers
Erwin

Some disk settings I adjusted

Given the fact I now have an SSD drive running the /boot and root partitions, I do want to make the most of it. So in order to improve things, and keep that improvement over time, I did the following.

I first reduced the amount of “swappiness” to the minimum. The box has 16G of RAM so I have enough headroom, plus I moved the swap partition to the spinning disk.

As shown in the “sysctl -a” output:

vm.swappiness = 1
vm.vfs_cache_pressure = 100
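To apply these at runtime (they also need to go into /etc/sysctl.conf to survive a reboot):

#> sysctl -w vm.swappiness=1
#> sysctl -w vm.vfs_cache_pressure=100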

I enabled the discard option on the ext4 filesystems to enable TRIM, in order to free up blocks upon release.

In fstab:
/dev/mapper/vg_monster-lv_root /                       ext4    defaults,discard        1 1
UUID=3de72813-da36-4a6e-89e1-4805b0fc03ea /boot                   ext4    defaults        1 2
/dev/sdb1             swap                    swap    defaults        0 0

So the vg_monster-lv_root sits on the SSD drive and the swap space + /home partition on the spinning rust.

There are two reasons for this.
1. I can monitor the rotating disk for increasing faults. By default any spinning disk has some spare blocks, so it can either try to rewrite the failing block to a good one or just mark the block as bad, so I would most likely lose just one block or sector.
2. SSDs don’t have the option of marking a single block as bad. Most likely an entire cell fails, which in general will brick the disk. I can rebuild an OS fairly quickly, but my home drive with all settings and data is a much larger piece of work. In addition, it’s much easier to rsync a single directory than the entire box to another medium. 🙂

In addition I changed the default CFQ scheduler to deadline for the SSD, in order to get optimal queueing and read/write expiry deadlines. This scheduler prevents processes from having to wait too long on requests issued by other processes and timing out.

[root@monster ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@monster ~]# cat /sys/block/sdb/queue/scheduler
noop deadline [cfq]
[root@monster ~]#

I added some udev rules to sort this out on boot:

[root@monster ~]# cat /etc/udev/rules.d/60-disk-scheduler.rules
# set deadline scheduler for non-rotating disks
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"

# set cfq scheduler for rotating disks
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="cfq"
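You can also flip a scheduler on the fly to test the effect before rebooting, e.g.:

#> echo deadline > /sys/block/sda/queue/scheduler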

Some more to come when I figure some stuff out.

Cheers
Erwin

dot desktop in Gnome

OK, so this one got me going for a while. Yes, I did not read the developer and administrator guides. Maybe I should have.

This week I received a new PC with some serious grunt. Boot time on Fedora 17 takes about 4 seconds, including a shitload of daemons.

I also did not want to lose any of my settings and data, so I rsync-ed the entire ~ folder from my old PC to this one. Besides the usual packages that are installed I also have some seriously modified settings, but one of the most annoying things I could not figure out was that many icons in the Gnome grid showed up as square boxes, and I was also missing some other icons I would have expected to be in the grid.

On any normal interface you do a right-click and get presented with a dialogue box which lets you add/remove/muck-up these icons. Not so in Gnome Shell. It turns out you have to do this by hand by adding so-called “xxx.desktop” files in the ~/.local/share/applications folder. Most app packages provide this file and yank it in there, but if you have some which don’t, then just copy and modify an existing one.
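A minimal sketch of such a file (every name below is made up; point Name, Exec and Icon at your own app):

[Desktop Entry]
Type=Application
Name=MyApp
Comment=Hypothetical example entry
Exec=/usr/local/bin/myapp
Icon=myapp
Terminal=false
Categories=Utility;

Save it as, say, ~/.local/share/applications/myapp.desktop and the icon shows up in the grid.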

I do seriously hope the Gnome devs will sort this out asap since this looks like going back to the stone age.

Cheers
Erwin

Brocade vs Cisco. The dance around DataCentre networking

When looking at the network market there is one clear leader, and that is Cisco. Their products are ubiquitous from home computing to the enterprise. Of course there are others like Juniper, Nortel and Ericsson, but these companies only scratch the surface of what Cisco can provide. They rely on very specific differentiators and, given the fact they are still around, do a pretty good job at it.

A few years ago there was another network provider called Foundry, and they had some really impressive products; that’s mainly why these are only found in the core of data centres which push a tremendous amount of data. The likes of ISPs or internet exchanges are a good fit. It is for this reason that Brocade acquired Foundry in July 2008. A second reason was that Cisco had entered the storage market with the MDS platform, which left Brocade without a counterweight in the networking space to provide customers with an alternative.

When you look at the storage market it is the other way around. Brocade has been in the Fibre Channel space since day one. They led the way with their 1600 switches and have outperformed and out-smarted every other FC equipment provider on the planet. Many companies that have been in the FC space have either gone broke or been swallowed by others. Names like Gadzoox, McData, CNT, Creekpath, Inrange and others have all vanished, and their technologies either no longer exist or have been absorbed into the products of the vendors who acquired them.

With two distinctly different technologies (networking and storage), both Cisco and Brocade have attained a huge market share in their respective specialities. Since storage and networking are two very different beasts this has served many companies very well, and no collision between the two technologies happened. (That is, until FCoE came around; you can read my other blog posts for my opinion on FCoE.)

Since Cisco, being bold, brave and sitting on a huge pile of cash, decided to also enter the storage market, Brocade felt its market share declining. It had to do something, and thus Foundry was on the target list.

After the acquisition Brocade embarked on a path to align the product lines with each other, and they succeeded with their own proprietary technology called VCS (I suggest you search the web for this; many articles have been written). Basically what they’ve done with VCS is create an underlying technology which allows a flat layer 2 Ethernet network to operate on a flat fabric, something they have had experience with since the beginning of time (storage networking, that is, for them).

Cisco wanted to have something different and came up with FCoE as the technology to enable that merger. Cisco uses this extensively across their product set, and it is the primary internal communications protocol in their UCS platform. Although I don’t have any indicators yet, it might well be that, because FCoE will be ubiquitous in all of Cisco’s products, the MDS platform will be abolished pretty soon from a sales perspective and the Nexus platforms will provide the overall merged storage and networking solution for Cisco data centre products, which in the end makes good sense.

So what is my view on the Brocade vs. Cisco discussion? Well, basically, I like them both. As they have different viewpoints on storage and networking, there is not really a good vs. bad. I see Brocade as the cowboy company providing bleeding-edge, up-to-the-latest-standards technologies like Ethernet fabrics and 16G Fibre Channel, whereas Cisco is a bit more conservative, which improves stability and maturity. What the pros and cons for customers are I cannot determine, since the requirements mostly differ.

From a support perspective on the technology side I think Cisco has a slight edge over Brocade, since many of the hardware and software problems have been resolved over a longer period of time, and, by nature, Brocade providing bleeding-edge technology with a “first-to-market” strategy may sometimes run into a bumpy ride. That being said, since Cisco is a very structured company they sometimes lack a bit of flexibility, and Brocade has an edge on that point.

If you ask me directly which vendor to choose when deciding on a product set or vendor for a new data centre, I have no preference. From a technology standpoint I would still separate Fibre Channel from Ethernet and wait until both FCoE and Ethernet fabrics have matured and are well past their “hype cycle”. We’re talking data centres here and it is your data; not Cisco’s and not Brocade’s. Both FC and Ethernet are very mature and have a very long track record of operation, flexibility and stability. The excellent knowledge available on each of these specific technologies gives me more peace of mind than the outlook of having to deal with problems bringing the entire data centre to a standstill.

Erwin

Beyond the Hypervisor as we know it

And here we are again. I’ve been busy doing some internal stuff for my company, so the tweets and blogs were put on low maintenance.

Anyway, VMware launched its new version of vSphere, and the amount of attention and noise it received is overwhelming, both positive and negative. Many customers feel they are being ripped off by the new licensing schema, whereas from a technical perspective all admins seem to agree the enhancements are fabulous. Being a techie myself, I must say the new and updated stuff is extremely appealing and I can see why many admins would like to upgrade right away. I assume that’s only possible after the financial hurdles have been cleared.

So why this subject? “VMware is not going to disappear and neither does MS or Xen”, I hear you say. Well, probably not; however, let’s take a step back to why these hypervisors were initially developed. Basically, what they wanted to achieve was the option to run multiple applications on one server without any sort of library dependency which might conflict with, disturb or corrupt another application. VMware wasn’t the initiator of this concept; the birthplace of it all was IBM’s mainframe platform. Even back in the 60s and 70s they had the same problem: two or more applications had to run on the same physical box, but due to conflicts in libraries and functions IBM found a way to isolate this and came up with the concept of virtual instances which ran on a common platform operating system: MVS, which later became OS/390 and is now z/OS.

When the open systems guys, spearheaded by Microsoft, took off in the 80s and 90s, they more or less created the same mess as IBM had seen before. (IBM did actually learn something and pushed that into OS/2; however, that OS never really took off.)
When Microsoft came up with so-called Dynamic Link Libraries this was heaven for application developers. They could now dynamically load a DLL and use its functions. However, they did not take into account that only one DLL with a certain function could be loaded at any one particular point. And thus, when DLLs got new functionality and therefore new revision levels, sometimes they were not backward compatible and very nasty conflicts would surface. So we were back to zero.

And along came VMware. They did for the Windows world what IBM had done many years before and created a hypervisor which would let you run multiple virtual machines, each isolated from the others, with no possibility of binary conflicts. And they still make good money off it.

However, the application developers have not been sitting still either. They too have seen that they can no longer use the development model they relied on for years. Every self-respecting developer now programs with massive scalability and distributed systems in mind, based on cloud principles. Basically this means that applications are almost solely built on web technologies with JavaScript (via node.js), HTML5 or other high-level languages. These applications are then loaded onto distributed systems like OpenStack, Hadoop and one or two others. These platforms create application containers where the application is isolated and has to abide by the functionality of the underlying platform. This is exactly what I wrote almost two years ago: the application itself should be virtualised instead of the operating system. (See here)

When you take this into account you can imagine that the hypervisors, as we know them now, will at some point render themselves useless. The operating system itself is not important anymore, and it doesn’t matter what these cloud systems run on. The only things that are important are scalability and reliability. Companies like VMware, Microsoft, HP and others are not stupid and see this coming. This is also the reason why they have started building these massive data centres: to accommodate the customers who adopt this technology and start hosting these applications.

Now here come the problems with this concept: SLAs. Who is going to guarantee you availability when everything is out of your control? Examples like the outages of Amazon EC2, Microsoft’s cloud email service BPOS, VMware’s Cloud Foundry and Google’s Gmail service show that even these extremely well-designed systems at some point run into Murphy, and the question is whether you want to depend on these providers for business continuity. Be aware that you have no vote in how and where your application is hosted; that is totally at the discretion of the hosting provider. Again, it’s all about risk assessment versus cost versus flexibility and whatever other arguments you can think of, so I leave that up to you.

So where does this take you? Well, you should start thinking about your requirements. Does my business need this cloud-based flexibility, or should I adopt a more hybrid model where some applications are built and managed by myself/my staff?

In any case you will see more and more applications being developed for internal, external and hybrid cloud models. This brings us back to the subject line: the hypervisors as we know them today will cease to exist. It might take a while, but the software world is like a diesel train; it starts slowly, but when it’s on a roll it’s almost impossible to stop, so be prepared.

Kind regards,
Erwin van Londen