1)	Provide support for logging accesses to a FILE* in buffered mode so
	as not to slow down the system under heavy load. At the moment I
	use syslog(3), which can bog the system down when the proxy is busy.
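	A minimal sketch of what buffered FILE* logging could look like.
	The names log_open() and log_access() are hypothetical, not
	existing code:

```c
/* Sketch only: buffered access logging to a FILE*.  log_open() and
 * log_access() are illustrative names, not part of the current code. */
#include <stdio.h>

static FILE *log_fp;
static char log_buf[BUFSIZ];

/* Open the log file and attach a full buffer, so access lines are
 * batched in memory instead of going through syslogd on every hit. */
static int log_open(const char *path)
{
	log_fp = fopen(path, "a");
	if (log_fp == NULL)
		return -1;
	setvbuf(log_fp, log_buf, _IOFBF, sizeof log_buf);
	return 0;
}

static void log_access(const char *client, const char *request)
{
	fprintf(log_fp, "%s \"%s\"\n", client, request);
	/* fflush(log_fp) only periodically (or on SIGHUP) to keep I/O cheap */
}
```

	The point of _IOFBF is that a burst of requests costs one write(2)
	when the buffer fills, rather than one syslog call per access.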

2)	Add a non-daemonising (foreground) mode (probably a -d option).

3)	Overhaul the request parsing mechanism. We want to be able to parse
	multiple requests from the client to the proxy, and translate the
	first line of each request. Hopefully this will allow full HTTP/1.1
	persistent connections. For example, the following request stream...

		GET /index.html HTTP/1.1\r\n
		Host: www.yahoo.com\r\n
		[More headers terminated with \r\n]
		\r\n
		GET /background.gif HTTP/1.1\r\n
		Host: www.yahoo.com\r\n
		[More headers terminated with \r\n]
		\r\n

	Would be translated into...

		GET http://www.yahoo.com/index.html HTTP/1.1\r\n
		Host: www.yahoo.com\r\n
		[More headers terminated with \r\n]
		\r\n
		GET http://www.yahoo.com/background.gif HTTP/1.1\r\n
		Host: www.yahoo.com\r\n
		[More headers terminated with \r\n]
		\r\n

	The problem is look-ahead. One approach is to read until the sequence
	\r\n\r\n is found, storing the data as we go. Then search back through
	the stored request for the Host: header and extract its contents. Then
	rewrite the request line, write the modified request out, and repeat
	for the next request. However, this reading should be merged into the
	main select loop, which also handles errors such as reset connections.
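	The buffering and rewrite steps above could be sketched like this
	(request_end() and rewrite_request() are hypothetical names, and the
	sketch ignores case-insensitive header matching and absolute-URI
	request lines):

```c
/* Sketch of the look-ahead and rewrite steps described above. */
#include <stdio.h>
#include <string.h>

/* Find the end of a complete request (the \r\n\r\n terminator).
 * Returns a pointer just past it, or NULL if more data is needed --
 * in which case the caller keeps buffering inside the select loop. */
static const char *request_end(const char *buf)
{
	const char *p = strstr(buf, "\r\n\r\n");
	return p ? p + 4 : NULL;
}

/* Rewrite "METHOD /path HTTP/1.x" into "METHOD http://host/path HTTP/1.x"
 * using the request's own Host: header.  Returns 0 on success. */
static int rewrite_request(const char *req, char *out, size_t outlen)
{
	char method[16], path[1024], version[16], host[256];
	const char *h = strstr(req, "\r\nHost: ");	/* naive: case-sensitive */

	if (sscanf(req, "%15s %1023s %15s", method, path, version) != 3)
		return -1;
	if (h == NULL || sscanf(h + 8, "%255[^\r\n]", host) != 1)
		return -1;
	snprintf(out, outlen, "%s http://%s%s %s\r\n",
		 method, host, path, version);
	return 0;
}
```

	With the first request from the stream above, rewrite_request()
	produces the translated first line, and the remaining headers can
	be copied through unchanged.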

4)	Convert it into a non-forking server that handles multiple connections
	in a single process, either via a threads library or the more
	conventional state-machine approach. I like the threads library
	approach. Use Squid if you like the state-machine approach.
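	For reference, the state-machine alternative boils down to a struct
	per connection driven from one select() loop. All names here are
	hypothetical, not existing code:

```c
/* Sketch: per-connection state for a single-process select() server. */
#include <string.h>
#include <sys/select.h>

enum conn_state { READING_REQUEST, CONNECTING, RELAYING, CLOSED };

struct conn {
	int client_fd;		/* browser side */
	int server_fd;		/* remote side, -1 until connected */
	enum conn_state state;	/* where this connection is in its lifecycle */
	char buf[8192];		/* partial request being accumulated */
	size_t used;
};

static void conn_init(struct conn *c, int client_fd)
{
	c->client_fd = client_fd;
	c->server_fd = -1;
	c->state = READING_REQUEST;
	c->used = 0;
}

/* Build the read set for select() from all live connections and
 * return the highest fd seen (for select's nfds argument). */
static int build_fdsets(struct conn *conns, int n, fd_set *rset)
{
	int i, maxfd = -1;

	FD_ZERO(rset);
	for (i = 0; i < n; i++) {
		if (conns[i].state == CLOSED)
			continue;
		FD_SET(conns[i].client_fd, rset);
		if (conns[i].client_fd > maxfd)
			maxfd = conns[i].client_fd;
	}
	return maxfd;
}
```

	The main loop would then call select(), and for each ready fd
	advance that connection's state machine one step -- which is exactly
	the bookkeeping a threads library hides for you.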
