Richard's Project Blog

In this blog you will be reading about my final-year computer science project at university. Feel free to leave comments and suggest ideas!

Friday, October 07, 2005

My project - a brief overview

Well, all is well and good: I've looked at various bits of infrastructure, I still have to complete my literature review (including a few books on software I now know I don't need, like one on Beowulf clusters), and I've come up with a project.

As mentioned before, I have looked into X11 forwarding, but the requirement I have set myself is to build a "high-availability, load-balanced cluster for X programs."

Monday, October 03, 2005

The library - and a good day's work

This has been a successful weekend of research, and of actually putting that research into practice. I have been studying the X Window System and the X11 protocol, and trying to set up my own X server. To be honest I was really stumped as to how to get my Ubuntu machine to display xclock, but with a lot of effort, and a little help from my friends and wikis all over the Google world, I got it working.

The basics of it: you make an SSH connection (with X11 forwarding enabled) to the remote machine that has the X programs on it; issuing the command "xclock &" there makes the xclock window appear on the client OS, even though the program itself runs on the remote server.
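
For reference, here is roughly what that looks like in practice. A minimal sketch - the username and hostname are made up, and it assumes the remote machine's sshd has "X11Forwarding yes" set in /etc/ssh/sshd_config:

    # On the local (client) machine: -X enables X11 forwarding
    ssh -X richard@remote-host

    # Now, in that session on the remote machine, start xclock
    # in the background; the window appears on the local display
    xclock &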

Testing:

I wanted to ensure that "xclock" was not running on the client OS, so I opened up a new terminal and typed "ps -ef": importantly, "xclock" was NOT in the process list. Running the same command (ps -ef) on the remote server, sure enough, there xclock was being run!
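
A quicker way to run the same check - the square-bracket trick stops grep from matching its own process in the list:

    # On the client: should print nothing
    ps -ef | grep '[x]clock'

    # On the remote server: should show the running xclock process
    ps -ef | grep '[x]clock'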

Quite simple really! But here's where the fun begins: failover and load balancing. I've had an idea involving further SSH tunnelling, but there are some serious flaws. I will be presenting my findings to my project tutor, so I will see how it goes...

I've also got some light reading from the library: SSH: The Secure Shell - it's an O'Reilly book so it should be good - and a book on Windows server clustering.

Monday, September 12, 2005

Fault tolerant web servers and DNS - on the cheap

The simplest ideas are the best, and despite a small amount of ridicule, I think I have seen possibly the most efficient failover solution yet. It is quite simple: there are two web servers, say A and B, and one DNS server. According to the DNS server, A is www.wlv.ac.uk; server B is an exact clone, but is not used. A script is then used to ping (or similar) A; when A does not reply, the script considers it to be down and switches the DNS entry to B's IP address. An email is sent and action is taken to fix server A, whilst the script continues to monitor B.
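
A rough sketch of what that monitoring script might look like. All the names, addresses, and intervals below are made up, and it assumes the DNS server is BIND configured to accept dynamic updates via nsupdate:

    #!/bin/sh
    NAME=www.wlv.ac.uk
    A_IP=192.168.0.1            # server A
    B_IP=192.168.0.2            # server B, the idle clone
    DNS_SERVER=192.168.0.53
    ADMIN=admin@wlv.ac.uk

    while true; do
        if ! ping -c 3 -q "$A_IP" >/dev/null 2>&1; then
            # A is not answering: point the name at B
            nsupdate <<EOF
    server $DNS_SERVER
    update delete $NAME A
    update add $NAME 60 A $B_IP
    send
    EOF
            echo "$NAME failed ping, DNS switched to $B_IP" \
                | mail -s "Failover triggered" "$ADMIN"
            break   # someone now fixes A while B is monitored
        fi
        sleep 30
    done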

This is obviously not an ideal solution for a serious company thinking about failover, as there are still single points of failure (the DNS server and the monitoring script itself), but hey - if it works it works.

Thursday, September 01, 2005

The cost of high availability (HA)

I have been looking at HA, and in its current form it is expensive. In a previous blog post I discussed the use of a live distribution to act as a backup, so that if anything happened a CD could be inserted into a spare piece of hardware to take over the role of the server.

This is feasible: initial investigation into live Linux distributions clearly showed that, with a small amount of configuration, they could already take on the role of a web server.

I have been discussing with my project tutor the benefits, but also the major cost implications, of HA. HA does "exactly what it says on the tin": it gives a server complete failover by removing every single point of failure.

An HA server is only one part of a complete infrastructure, as the underlying components, such as routers and switches, have to be doubled up too.

A recent example of "supposed" HA was experienced whilst I was working one day last week. As the company I work for provides services to customers, it is imperative that we have two Internet links into the building - which we do - the only problem being that the same supplier provides them both. Both lines go off to different exchanges and take different routes to the backbone, but when the backbone goes down (which it did) we are still left with unhappy customers. So it seems that no matter how hard you try, and how much money you spend on HA, there is still a single point of failure.

I have two ideas:

  • High availability on the cheap, and
  • Dynamic high availability.

High availability on the cheap:

  • Throw-away parts
  • Cheap and reusable
  • Many nodes

Dynamic high availability:

  • Able to add nodes easily (and cheaply)
  • Low configuration cost

Wednesday, August 31, 2005

High availability and heartbeats

I'm sitting at the moment on my 'new' laptop, with its amazing specification of a near-300 MHz PII processor and 128MB of RAM. I am happy to be sitting here listening to the fan whizzing away, because when I installed Ubuntu (www.ubuntulinux.org/) a common problem with ACPI got in the way of most of the laptop's power management. After many failed attempts to get it working, I rebuilt the laptop with the boot command "linux no acpi", so it defaulted to APM, which sorted out all my problems.

Anyway, to the point: the reason I installed Ubuntu was down to a friend (Linux hardcore, that chap) who pretty much said "It's Linux for human beings". Sounds good to me, I thought. So, despite the shoddy power management, I have spent some time playing with its various features.

One of the best features is the Synaptic package manager: a simple interface where you point and click on the packages you want to install or remove. The best thing about it is that it sorts out all the package dependencies for you... Enter heartbeat!

My initial research into high availability pointed to heartbeat (http://www.linux-ha.org/), and when a somewhat lazy attempt to install it on my RedHat 9 dual-boot box failed, I thought there must be something more to this small package. Ubuntu, thankfully, did all the nasty work for me, so I am sitting with a fully installed system and the possibility of making my laptop a high-availability server in one of the following domains:

  • Web servers
  • LVS director servers
  • Mail servers
  • Database servers
  • Firewalls
  • File servers
  • DNS servers
  • DHCP servers
  • Proxy caching servers
  • etc.

My next step is to configure a virtual machine to be my other heartbeat node, and then I can really play and see what this thing can do. Plus I really need an idea!
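
For when the virtual machine is ready, here is a rough sketch of a minimal two-node heartbeat (version 1.x) setup. The node names, address, and service are made up, and it assumes both machines can reach each other on eth0:

    # Install it (Ubuntu/apt sorts the dependencies out)
    sudo apt-get install heartbeat

    # /etc/ha.d/ha.cf - how the nodes talk to each other
    keepalive 2              # seconds between heartbeats
    deadtime 30              # declare a node dead after 30s of silence
    bcast eth0               # broadcast heartbeats on eth0
    auto_failback on         # hand resources back when the primary returns
    node laptop vmnode       # must match `uname -n` on each machine

    # /etc/ha.d/haresources - laptop normally owns the virtual IP and apache
    laptop IPaddr::192.168.0.50 apache

    # /etc/ha.d/authkeys - heartbeat packet signing (must be chmod 600)
    auth 1
    1 crc

With something like that in place, stopping heartbeat on the laptop should see the virtual IP (and apache) move over to the other node, which is exactly the failover behaviour I am after.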

Tuesday, June 28, 2005

This could be a good idea ...

My head's buzzing with an idea I had at work yesterday. OK, so your web server goes down - that don't impress you much. Your server monitor says "hey, I'm down, it's 2AM, come into work and fix it", and you have a choice:

1. Come into work
2. Stay in bed / club /pub

I know I'd prefer "2", but it's not always practical - especially if you have to meet uptime targets, or even if you're paid to do that.

The point I'm trying to get at is: if the server has informed me that it is down and needs my immediate attention, why can't it inform another peripheral device that it is its turn to take over for a short while?

The best, but rather expensive, solution is to have two of every server: if web server "A" goes down, web server "B" takes over. As I say - expensive - but it will give you those 100% uptime figures you dream of.

Another solution could be to have a single server as a backup for all your servers, so if one goes "pop" you can rebuild the backup with a new OS. But this is time-consuming, and impractical when the server only needs covering for a short period of time.

But if you had a quicker way of doing this (installing the OS), it would be perfect. Ah - but we have, in the realm of live Linux distributions such as Knoppix. I've recently been playing with Knoppix and I've found it to be a great learning aid for fairly new Linux users (me), and it's great for fixing the old FAT-filesystem-based Windows installs.

After being suitably impressed, I have decided to buy a book on the subject, with the thought of making a "live Linux distribution backup" of a server, and therefore changing the face of the backup and disaster recovery procedure, as there will be a live CD that can be plugged straight in to take over a server.

My vision is: if a server goes down, your server monitor informs another system, which boots your live distribution with a copy of your database / web server / mail server, etc...

Thursday, June 16, 2005

Brain dump ...

What services do you monitor? A service to help you choose what to monitor, based on what you have installed, which then recommends how to do it and what software to monitor it with.

This could be something you run on the client, or on the monitoring server? Based on RPMs, or default Windows packages?

You may be able to do this on-line?

The above probably makes little sense - but it is, after all, a brain dump!
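
Even so, the detection part is easy to sketch. A rough example for an RPM-based client - the package names and suggestions below are only placeholders:

    #!/bin/sh
    # Suggest a monitoring check for each package that turns
    # out to be installed on this machine.
    suggest() {
        if rpm -q "$1" >/dev/null 2>&1; then
            echo "$1 is installed -> $2"
        fi
    }

    suggest httpd    "monitor TCP port 80 and the httpd process"
    suggest mysql    "monitor TCP port 3306"
    suggest sendmail "monitor TCP port 25"
    suggest bind     "monitor DNS lookups against this host"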