Tag Archives: security

EvL Consulting Authorized 1Password reseller

As you may have seen, EvL Consulting has selected 1Password as the password manager of choice for individuals, families, teams and businesses.

The decision to select 1Password was not a random one. There are various products and solutions on the market, which all have their pros and cons. 1Password was selected on a range of criteria, of which security and product architecture were at the top of the list, followed by ease of use across the various subscription options.

1Password has proven to be the solution of choice for many organisations and is under constant active development, leading the way by actively collaborating on and adopting industry standards such as FIDO2.

Furthermore, 1Password is continuously assessed by various external security auditors and the reports are made public over here: https://support.1password.com/security-assessments/

Our shop has attractive options for families and teams. If you are part of a larger organization where 100+ licence seats are required, please contact us and we will be happy to discuss the options.

Cyber Security awareness

As the range of risks in cyberspace keeps expanding, it is imperative to understand them and reduce your areas of vulnerability.

From a consumer perspective this mostly touches on a few points:

  1. Reduce exposure
  2. Only provide what is required
  3. Secure credentials
  4. Maintain retention policies
  5. Change credentials often and keep them unique

So what do I mean by the above, as these do not really sound like “consumer” terms? Let's go through them one by one.

Continue reading

Brocade Network Advisor vulnerable to SWEET32

OK, OK, don’t panic. In 99.999% of all cases your BNA management system is dug deep inside the datacentre behind a fair few layers of firewalls, switch ACLs and other physical or non-physical borders, so bad dudes actually being able to exploit the vulnerability is relatively unlikely. Just in the event you still want to prevent this from even being remotely possible, here is a procedure to remove the underlying issue as well as some older, less secure, protocols.
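
Before touching anything, it is worth confirming whether the BNA web interface still offers the 64-bit block ciphers (3DES) that SWEET32 targets. Below is a hedged sketch run from any Linux box that can reach the server; <bna-host> and the port are placeholders for your own environment, and the exact cipher names available depend on your OpenSSL build.

## List which TLS ciphers the server offers (nmap's ssl-enum-ciphers script)
nmap --script ssl-enum-ciphers -p 443 <bna-host>

## Or probe specifically for a 3DES suite with OpenSSL; a successful
## handshake means the weak cipher is still enabled.
openssl s_client -connect <bna-host>:443 -cipher 'DES-CBC3-SHA' < /dev/null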

Continue reading

Open Source Software (OSS) and security breaches in proprietary firmware

It is no secret that many vendors use open source software in their products and solutions. One of the most ubiquitous examples is Linux, which often forms the base of these products and is used as the core OS because of its flexibility and freely available status, without the need (to some extent) to keep track of licences and costs.

These OSS tools have different development backgrounds and are subject to the policies of the person (or people/companies) who develop them. This obviously means that defects or bugs may result in security issues, especially when network-related applications are involved. Recently the bugs in OpenSSL and Apache have gained much traction, as some of these are fairly significant and can result in access breaches or denial of service.

Continue reading

Management Security in Brocade FOS

If you’re in my business of looking at logfiles to determine what’s going on or what has happened, one of the most annoying, and frightening, things to see is a sheer number of failed login attempts. In most cases these are simply genuine mistakes where a lingering management application or forgotten script is still trying to log in to obtain one piece of information or another from the switch/fabric. The SAN switches are often well inside the datacentre management firewalls, so attacks from outside the company are less likely to occur; however, looking at security statistics over the last decade or so, it turns out that threats are more likely to originate from inside the company boundaries. Employees mucking around with tools like nmap, MITM software like Cain & Abel, or even an entire Kali Linux distro hooked up to the network “just to see what it does because a mate of mine recommended it”.

In 99.999% of the install bases I have looked at, the normal embedded username/password mechanism is used for authentication and authorisation. This also means that if security management is not configured on these switches, a not-so-good Samaritan is able to use significant brute-force tactics to try and obtain access to these switches without anyone knowing. When using an external authentication mechanism like LDAP or TACACS+, chances are there are some monitoring procedures in place which monitor and alert on these kinds of symptoms, but the main issue is that the attack has already occurred and there is no mechanism to prevent these sorts of attacks at a level that really protects the switch.

It is fairly simple to overload a switch with authentication attempts by firing off thousands of ssh, telnet and http(s) sessions (easily done from any reasonably priced laptop these days with a Linux distro like Kali installed), thereby crippling the poor e500 CPU on the CP. This can have significant ramifications for overall fabric services in that switch, which can bring down a storage network. Now, obviously there is a mechanism to try and prevent this via iptables, but it has a number of drawbacks.
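
To give an idea of the concept, here is a minimal, generic Linux iptables sketch that rate-limits new connection attempts to the ssh port. FOS fronts this with its own IP filter policy commands rather than raw iptables edits, so treat this purely as an illustration of the idea; the port, list name and thresholds are placeholders.

## Track new ssh connection attempts per source address...
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name MGMT_SSH
## ...and drop any source that opens more than 5 new sessions within 60 seconds.
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 5 --name MGMT_SSH -j DROP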

Continue reading

7 – Fabric Security

This topic is hardly ever touched on when fabric designs are developed and discussed among storage engineers, but for me it always sits on my TODO list before hooking up any HBA or array port. It is as important in the storage world as it has been in the IP networking sector for decades. Historically the reasoning for not paying attention to this topic was that the SAN was always deeply embedded in tightly controlled data centres with strict access policies. Additionally, the use of fibre optics and architectures that look relatively complex to the storage uninitiated further, and unfairly, devalued the perceived necessity of implementing security policies.

Let me make one thing clear: gaining access to a storage infrastructure is like an archaeologist finding the holy grail. If no storage infrastructure security is implemented, it allows you to obtain ALL data for good or bad purposes, but even worse, it also allows the uninvited guest to corrupt and destroy it. In this chapter I will outline some of the procedures I consider a MUST and some which you REALLY should take a good look at and, if possible, implement.

Continue reading

Brocade AAA authentication problem

Do you go nuts over all the usernames and passwords you need to remember across all these different systems, platforms and applications? LDAP or RADIUS is your friend. When you make a mistake, however, it can also be your biggest enemy.

Brocade offers you the option to hook up a switch to LDAP or RADIUS for central authentication (authentication only).
An incorrect LDAP or RADIUS configuration on a Brocade switch may lock you out of network access. This applies to telnet, ssh, webtools and SMI-S.

When the AAA configuration is done via the CLI it is very important to specify the correct parameters and, specifically, the double quotation marks. If LDAP is configured with the local database as fall-back, the command would be aaaconfig --authspec "ldap;local". If the quotation marks are omitted, the semicolon is interpreted as a command-line separator. (These commands are executed in a so-called restricted Linux bash shell and as such have to abide by the rules of that shell.) In essence two commands will then be executed separately:

aaaconfig --authspec ldap
local

The first command will succeed and change the authentication method to LDAP, and will immediately log out all logged-in users. If LDAP is incorrectly configured, all authentication requests will fail and network access is no longer possible.
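
To make the quoting behaviour concrete, here is a harmless, generic shell illustration (run on any Linux box, not on the switch) of how an unquoted semicolon splits one intended command into two:

## Without quotes the shell ends the first command at the semicolon and then
## tries to run "local" as a second command.
echo --authspec ldap;local
## With quotes, ldap;local is passed as a single argument to the first command.
echo --authspec "ldap;local"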

To fix this you will need to attach a serial cable to the switch (or the active CP):

  1. Connect the serial cable to the switch serial management port. (On a bladed system like a DCX or 48000, connect to the active CP.)
  2. Log in with either the root or admin account. (Console access is still allowed.)
  3. Modify the AAA configuration with the command aaaconfig --authspec "ldap;local". (A quick verification sketch follows below.)
  4. Depending on the LDAP authentication timeout settings, the login will fall back to the local user database for authentication.
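
Once you are back in, it is worth double-checking that the fall-back is really in place before closing the console session. If memory serves, FOS shows the active authentication settings with aaaconfig --show; take the lines below as a hint rather than a verbatim transcript and check the command reference for your FOS release.

## Display the currently active authentication configuration; it should now
## list ldap with local as the fall-back.
aaaconfig --show

## And, from another host, confirm that a normal management login works again.
ssh admin@<switch-ip>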

Cheers,
Erwin

SCSI UNMAP and performance implications

When listening to Greg Knieriemen's podcast on Nekkid Tech, there was some debate on VMware's decision to disable the SCSI UNMAP command in vSphere 5.something. Chris Evans (www.thestoragearchitect.com) had some questions about why this happened, so I'll try to give a short description.

Be aware that, although I work for Hitachi, I have no insight into the internal algorithms of any vendor, but the T10 (INCITS) specifications are public and every vendor has to adhere to these specs, so here we go.

With the introduction of thin provisioning in the SBC-3 specs, a whole new can of options, features and functions came out of the T10 (SCSI) committee, which enabled applications and operating systems to do all sorts of nifty stuff on storage arrays. Basically it meant you could give a host a 2 TB volume whilst in the background you only had 1 TB physically available. The assumption with thin provisioning (TP) is that a host or application won't use that 2 TB in one go anyway, so why pre-allocate it.

So what happens is that the storage array provides the host with a range of addressable LBAs (Logical Block Addresses) which the host is able to use to store data. In the back-end on the array, these LBAs are only allocated upon actual use. The array has one or more so-called disk pools where it can physically store the data. The mapping between the “virtual addressable LBAs” which the host sees and the back-end physical storage is done via mapping tables. Depending on the vendor's implementation, certain “chunks” out of these pools are reserved as soon as one LBA is allocated. This prevents performance bottlenecks from a housekeeping perspective since the array doesn't need to manage each single LBA mapping. Each vendor has different page/chunk/segment sizes and different algorithms to manage these, but the overall method of TP stays the same.

So let's say the segment size on an array is 42 MB (:-)) and an application writes to an LBA which falls into this chunk. The array updates the mapping tables, allocates cache slots and does all the other housekeeping that happens when a write IO comes in. From that moment the entire 42 MB is allocated to that particular LUN which is presented to that host. Any subsequent write to any LBA which falls into this 42 MB segment is just a regular IO from an array perspective; no additional overhead is needed or generated w.r.t. TP maintenance. As you can see this is a very effective way of maintaining an optimum capacity usage ratio, but as with everything there are some things you have to consider as well, like over-provisioning and its ramifications when things go wrong.

Let's assume that is all under control and move on.

Now what happens if data is no longer needed or deleted? Let's assume a user deletes a 200 MB file (a video, for example). In theory this file occupied at least 5 TP segments of 42 MB. But since many filesystems are very IO savvy, they do not scrub the entire file back to zero but just delete the FS entry pointer and remove the inodes from the inode table. This means that only a couple of bytes have effectively been changed on the physical disk and in the array cache.
The array has no way of knowing that these couple of bytes, which have been returned to 0, represent an entire 200 MB file, and as such those segments are still allocated in cache, on disk and in the TP mapping table. This also means that these TP segments can never be re-mapped to other LUNs for more effective use if needed. To address this there have been some workarounds, like host-based scrubbing (putting all bits back to 0), defragmentation to re-align all used LBAs and scrub the rest, and some array-based solutions which check whether segments contain only zeroes and, if so, remove them from the mapping table and make them available for re-use.
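
Just to make the arithmetic of that example explicit, here is a tiny sketch (purely illustrative numbers, nothing vendor-specific):

## How many 42 MB TP segments does a 200 MB file touch at minimum?
seg_mb=42
file_mb=200
segments=$(( (file_mb + seg_mb - 1) / seg_mb ))   # ceiling division -> 5
echo "${file_mb} MB spans at least ${segments} segments, i.e. $(( segments * seg_mb )) MB stays allocated after deletion"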

As you can imagine this is not a very effective way of using TP. You can be busy clearing things up on a fairly regular basis so there had to be another solution.

So the T10 friends came up with two new things, namely “write same” and “unmap”. WRITE SAME does exactly what it says: it issues a write command with a certain bit stream and tells the array to write that stream to a whole range of LBAs. The array then executes this, offloading the host from issuing all the individual write commands so it can do more useful stuff than pushing bits back and forth between itself and the array. This can be very useful if you need to deploy a lot of VMs, which by definition have a very similar (if not exactly the same) pattern. The other way around there is a similar benefit: if you need to delete VMs (or just one), the hypervisor can instruct the array to clear all LBAs associated with that particular VM, and if the UNMAP command is used in conjunction with the WRITE SAME command you basically end up with the situation you want. The UNMAP command instructs the array that certain LBAs are no longer in use by this host and can therefore be returned to the free pool.
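
For a feel of how this surfaces on a plain Linux host, here is a hedged sketch using standard tools (sg3_utils and util-linux). Device names and LBA ranges are placeholders, discarding the wrong range is destructive, and the exact option spellings should be checked against the man pages of your distribution.

## Does the device advertise discard/UNMAP support at all?
lsblk --discard /dev/sdX

## Explicitly unmap a range of logical blocks (sg3_utils).
sg_unmap --lba=0x100000 --num=2048 /dev/sdX

## Or let the filesystem pass down discards for blocks it has freed.
fstrim -v /mnt/datastore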

As you can imagine, just issuing the UNMAP command is very fast from a host perspective and the array can handle it very quickly, but here comes the catch. When the host instructs the array to UNMAP the association between the LBAs and the LUN, it is basically only a pointer that is removed from the mapping table; the actual data still exists, either in cache or on disk. If that same segment is then re-allocated to another host, in theory this host can issue a read command to any given LBA in that segment and retrieve the data that was previously written by the other system. Not only can this confuse the operating system, it also implies a huge security risk.

In order to prevent this, the array has one or more background threads to clear out these segments before they are effectively returned to the pool for re-use. These tasks normally run at a pretty low priority so as not to interfere with normal host IO. (Remember that it is still the same CPU(s) that have to take care of this.) If the CPUs are fast and the background threads are smart enough, under normal circumstances you hardly see any difference in performance.

As with all instruction-based processing, the work has to be done either way, be it by the array or by the host. So if there is a huge amount of demand, where hypervisors move around a lot of VMs between LUNs and/or arrays, there will be a lot of deallocation (UNMAP), clearance (WRITE SAME) and re-allocation of these segments going on. It depends on the scheduling algorithm at which point the array decides to reschedule the background and front-end processes such that there will be a delay in the status response to the host. On the host it looks like a performance issue, but in essence what you have done is overload the array with too many commands which normally (without thin provisioning) would have to be handled by the host itself.

You can debate whether using a larger or smaller segment size would be beneficial, but that doesn't really change the picture: with a smaller segment size the CPU has much more overhead managing the mapping tables, whereas with bigger segment sizes the array needs to scrub more space on deallocation.

So this is the reason why VMware disabled the UNMAP command in this patch: a lot of “performance problems” were seen across the world when this feature was enabled. Given that it was VMware that disabled it, you can imagine that arrays from multiple vendors might be affected in some sense; otherwise they would have been more specific about array vendors and types, which they haven't been.

OpenDNS with DNS-O-Matic

A while ago I wrote a short article about a nice way to “secure”, or at least monitor, my children's web behavior, called OpenDNS. I soon found out that you have at least one problem, and that is the dynamic IP address your ISP shoves at you when you link up your router. The problem is these are never the same and the DHCP lease time is effectively 0 seconds. So even on a small link bounce of 2 or 3 seconds you can get a new IP address on your WAN side.

This renders the security features of OpenDNS (DNS domain blocking) more or less useless, since the DNS queries made from one of the PCs on your LAN side now reach OpenDNS from a different public IP address, and OpenDNS can therefore not link this address to your profile.

So let's take an example:
Your internal LAN uses 10.1.2.0/24 and is NAT-ed on your router to the outside world. Your ISP provides you with an address of, let's say, 152.43.50.2.

On the OpenDNS website you create a profile called “My Home network” and you link this address to the profile. The profile also allows you to block certain websites manually, or entire categories like Adult, Weapons, Gambling etc., all in all important things to keep away from your children.
Now, when one of your computers does a DNS query, OpenDNS takes the source address (i.e. your public IP address 152.43.50.2), links it to your profile to check whether the requested page/domain falls under one of the criteria you configured, and if the action for this site is to block it, it redirects you to a page which simply explains why the site is blocked. You can customize this page as well.
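
A quick way to see this behaviour for yourself is to query the OpenDNS resolvers directly with dig; the domain below is just a placeholder for something in a category you have blocked.

## Ask an OpenDNS resolver (208.67.222.222 or 208.67.220.220) directly.
dig @208.67.222.222 some-blocked-domain.example +short
## If your profile matched your public IP and the category is blocked, the
## answer points at the OpenDNS block page instead of the real site.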

The problem, however, is that when your ISP-provided address changes, OpenDNS can no longer link this WAN address (152.43.50.2) to your profile and will just return the real IP address of the site, after which your computer connects to it and shows the page.

This so-called dynamic IP address problem is also acknowledged by OpenDNS, and their recommendation in these cases is to install a little tool which checks at regular intervals whether this address has changed and, if it has, updates your OpenDNS profile with the new address. “Problem solved” you might say. Well, not exactly. The problem is that this little tool has to be installed on a PC running either Windows or MacOS. Secondly, this PC has to be secured from tampering, since kids get smarter as well and it gives them the option to just remove the tool or fumble around as they see fit, which in essence renders it useless. I also don't want too many of these tools installed on PCs; since I'm seen as the household admin, I want to do as little as possible. Admins should be lazy. Improves effectiveness 🙂

I decided not to use this agent, which put me in some sort of catch-22 situation. Again, I want to be lazy from an admin standpoint, so I have neither the time nor the urge to check the OpenDNS website every 10 minutes to see whether my address has changed. So I worked something out with another service from OpenDNS called DNS-O-Matic (DOM). This service allowed me to write a simple script which enabled me to automate the entire process.

So in my case I've done the following.
I have an OpenDNS account with a network profile which blocks certain categories of websites.
Next to that I created a DOM account and linked the OpenDNS service to it. This basically means that if I update DOM with my new, ISP-provided, IP address, it will propagate this to my OpenDNS account. (DNS-O-Matic provides many more services you can link to, but I leave it up to you to check those out.)

Now you might say “How does this fix things?”. Well, the solution is easy. DOM provides a simple API which you can write a script or program against. This allows you to update DOM automatically via this API, which in turn updates your OpenDNS profile with your new IP address. So the first thing you need to do is obtain your current IP address. If you query the OpenDNS servers for myip.opendns.com, they will always return your actual (ISP-provided) IP address. (This is basically the source address to which the OpenDNS service returns its answers.)
The next thing you need to do is verify whether this address is the same as your “old” address and, if not, update DOM with the new address.

I made a little script which I hooked up to cron so it does this for me automatically every 5 minutes.

#!/bin/bash
## Script to update OpenDNS and DNS-O-Matic
## Check www.dnsomatic.com. OpenDNS is linked to this.
##
## Documentation
## https://www.dnsomatic.com/wiki/api
##
## This script runs in cron every 5 minutes.

## First get your public IP address as seen by OpenDNS
ip=$(dig @208.67.222.222 myip.opendns.com +short)
## Get the IP address recorded last time from a hidden file
oldip=$(cat /home/erwin/.oldip)

## If the address changed, update DNS-O-Matic. If not, do nothing.
## (Quoting matters here: the URL contains & characters and an unquoted
## variable would break the test if dig returned nothing.)
if [ -n "$ip" ] && [ "$ip" != "$oldip" ]
then
    ## Your DNS-O-Matic username and password go in front of the @
    curl "https://:@updates.dnsomatic.com/nic/update?hostname=all.dnsomatic.com&myip=$ip&wildcard=NOCHG&mx=NOCHG&backmx=NOCHG"

    ## Write the new IP address to the hidden file again.
    echo "$ip" > /home/erwin/.oldip
fi

That's it. I'm sure this can be achieved on Windows as well with batch files, PowerShell cmdlets or VBScript, but I just had bash at hand.

My crontab entry looks like this:

*/5 * * * * /home/erwin/Desktop/scripts/DNS-O-Matic/update.sh

And it works perfectly I must say.

Now there are two “Gotchas”:

  1. How do you prevent the kids from just choosing another DNS service, like the default ones that come from your ISP?
  2. This still requires you to have your computer online.

The answer to 1 is to create a redirect rule in your router firewall so that every DNS query (UDP port 53) is forced to OpenDNS; a hedged sketch follows below. And the answer to 2 is “You are correct :-)”. Since I work from home, my Linux box is always on (at least during the time I'm working and during the time my kids are allowed on the net).
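
For a Linux-based router (OpenWrt and the like) the idea looks roughly like this; the LAN interface name is a placeholder for whatever your router uses, and consumer routers usually expose the same thing through their own web UI.

## Force all DNS queries from the LAN to the OpenDNS resolver, regardless of
## what the client has configured.
iptables -t nat -A PREROUTING -i br-lan -p udp --dport 53 -j DNAT --to-destination 208.67.222.222
iptables -t nat -A PREROUTING -i br-lan -p tcp --dport 53 -j DNAT --to-destination 208.67.222.222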

Some newer-generation routers have this functionality built in, so it's a one-time setup on your router and you don't have to worry about it anymore.

Hope this helps in one of your situations.

Regards,
Erwin