
		     TO DO LIST FOR FUTURE VERSIONS OF DOMTOOLS

* networktbl, netmasktbl don't need "-n" and "-d" options, just call "type"!

* Write real manpages already.

* Combine  address, hinfo, uinfo, mx, txt, wks, cname  into a single script (argv[0]).
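
  The argv[0] trick could look something like the sketch below: install one
  script, hard-link it under each tool name, and branch on the basename.
  The name-to-RR-type mapping here is illustrative, not actual Domtools code:

```shell
# Hypothetical dispatch for a merged RR-lookup script: the RR type to
# query is chosen from the name the script was invoked as ($0).
rrtype_for() {
  case "$(basename "$1")" in
    address) echo A ;;
    hinfo)   echo HINFO ;;
    uinfo)   echo UINFO ;;
    mx)      echo MX ;;
    txt)     echo TXT ;;
    wks)     echo WKS ;;
    cname)   echo CNAME ;;
    *)       echo "unknown tool name: $1" >&2; return 1 ;;
  esac
}
# The merged script would then run:  dig "$domain" "$(rrtype_for "$0")" ...
```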

* Combine  soa, ns  into a single script (argv[0]).

* CAN'T REALLY combine  ptr, axfr, any  into a single script (argv[0]).

* RFC1183 support: AFSDB RR, X25 RR, ISDN RR, RP RR, RT RR

* RFC2052 support: SRV RR

* RFC1876 support: LOC RR

* IPv6 support
   * Add tools for the new RR specified in RFC1886 (AAAA)
   * Support new IP6.INT. domain (in-addr.arpa. equivalent)
   * Colon-separated hex quads for IP6 numeric addresses
		i.e. 4321:0:1:2:3:4:567:89ab
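
  Generating the IP6.INT. reverse-lookup name from such an address could be
  sketched in awk as below.  This sketch assumes a fully written-out address
  (no "::" compression) and is a hypothetical helper, not an existing tool:

```shell
# Convert an uncompressed colon-hex IPv6 address into its IP6.INT.
# reverse-lookup name: zero-pad each quad to 4 nibbles, then emit the
# nibbles in reverse order, dot-separated.
ip6_to_ip6int() {
  echo "$1" | awk -F: '{
    nib = ""
    for (i = 1; i <= NF; i++) {
      g = $i
      while (length(g) < 4) g = "0" g   # zero-pad quad to 4 hex digits
      nib = nib g
    }
    out = ""
    for (j = length(nib); j >= 1; j--)  # reverse the nibble order
      out = out substr(nib, j, 1) "."
    print out "IP6.INT."
  }'
}
```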

* Rewrite all sh and awk scripts in Perl.

* Add an option to disable the sorting of domain names in lookups,
  as in the "hosts" and "subdom" tools.  Higher-level tools may do
  their own sorting, and there's no reason to slow things down
  in the lower levels!  Except that you may need some kind of uniq
  to be done at these lower levels, which requires sorting first.
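
  A quick demonstration of why the uniq step still forces a sort:
  uniq(1) only removes *adjacent* duplicates, so skipping the sort in
  the low-level tools would let duplicates through.

```shell
# Non-adjacent duplicate survives uniq alone:
printf 'beta\nalpha\nbeta\n' | uniq      # "beta" still appears twice
# True de-duplication forces a sort anyway:
printf 'beta\nalpha\nbeta\n' | sort -u   # alpha, beta
```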

* Write a program to search for all nets and all gateways like this:
	1. For every gateway, make sure every network it's connected to
	   has a "PTR" record that says this machine is a gateway to it.
	2. For every network, make sure every gateway it knows about
	   has an "A" record for that network.
	TRICKY PART: must do netmasks properly!  Require RFC1101?
	   What about sites that don't (won't) implement RFC1101?
	   How to generate the error message ("...or you may have set up
	   your RFC 1101 records incorrectly.")?
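
  The "do netmasks properly" step could be sketched as below.  This is a
  hypothetical helper (not an existing Domtools tool), and it assumes the
  usual contiguous mask octets (255, 254, 252, ...), which avoids
  gawk-only bitwise operators:

```shell
# Apply a dotted-quad netmask to an address to get its network number.
# For a contiguous mask octet m, (a AND m) == int(a / (256-m)) * (256-m).
net_of() {  # usage: net_of ADDRESS NETMASK
  echo "$1 $2" | awk '{
    split($1, a, "."); split($2, m, ".")
    for (i = 1; i <= 4; i++) {
      blk = 256 - m[i]                   # block size for this octet
      printf "%d%s", int(a[i] / blk) * blk, (i < 4 ? "." : "\n")
    }
  }'
}
```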

* Domain lint program: verify (recursively) that all the records in a domain
  seem reasonable.  My Dlint implementation is 95% finished, WITHOUT Domtools.
  Rewrite it to use Domtools, I guess; that should make it much smaller!
  Things to examine:
	* ...lots from Dlint package...
	* Each WKS record for a host includes an IP address; make sure the
	  host has an associated A RR for that address!
	* see if any records have "#" as the first character; if so, warn
	  that the administrator may have tried to use the "#" sign to
	  comment out records (zone files use ";" for comments)!
	* scan a "net" for all "gateways" that connect to it.
	  scan a "gateway" for all "nets" it is connected to.
	  report differences between these two lists.
	* loop through all hosts (A RRs) recursively in a domain and
	  make sure they each have an in-addr.arpa. domain PTR record
	  pointing back to their hostname.
	* loop through all in-addr.arpa. type records (PTR RRs) recursively
	  and make sure they each point to a real A RR.
	* check for any name in the zone having more than one CNAME
	  record on it (e.g., two hosts' 3-char abbreviations are
	  identical but the administrator didn't notice.)
	  ("agate" and "agassiz" might both have "aga IN CNAME ..." recs!)
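
  That last check could be sketched as a small awk filter over a zone
  dump.  The field positions below assume lines of the form
  "owner IN CNAME target" (no TTL column); other dump formats would need
  different field numbers:

```shell
# Flag owner names carrying more than one CNAME record.
find_dup_cnames() {
  awk '$2 == "IN" && $3 == "CNAME" {
    if (seen[$1]++) print "more than one CNAME for:", $1
  }'
}
```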


* The resource-record tools now query each nameserver in turn if no records
  can be found.  This results in a slow response if there are no records
  of the requested kind, even if authoritative answers are coming back saying
  "there are NO records like that for the domain you are specifying!"
  We need to be able to get back two types of error-responses:
	1. "couldn't find the answer" (so we ask next nameserver)
	2. "the answer is NO RECORDS" (so we stop looking & error)
  This gives us a total of 3 (actually 4) types of responses to handle.
  Studying the output from many dig queries by hand, we should deal with:
	1. dig returns ";; ANSWERS:", so print answers & exit 0.
	2. dig returns nothing between its header & footer lines,
		so ERROR and exit 1.  (authoritative answer)
	3. dig returns ";; AUTHORITY RECORDS:" that contain an SOA record,
		so try querying the primary server listed therein.  If it
		gives a better answer, return it; else ERROR & exit 1.
	4. dig returns ";; AUTHORITY RECORDS:" that contain NS records,
		so continue looping through name-servers.  One of them should
		get us a better answer, otherwise ERROR & exit 1.
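
  The four-way dispatch above could be sketched like this, operating on a
  saved dig output file.  The ";;" header strings follow the cases listed
  above; real dig versions vary, so treat the patterns as placeholders:

```shell
# Classify saved dig output into the four cases above.
classify_dig() {  # usage: classify_dig FILE
  if grep -q '^;; ANSWERS:' "$1"; then
    echo print-and-exit-0               # case 1: got answers
  elif grep -q '^;; AUTHORITY RECORDS:' "$1"; then
    if grep -q ' SOA ' "$1"; then
      echo query-primary                # case 3: SOA in authority section
    else
      echo next-nameserver              # case 4: NS in authority section
    fi
  else
    echo no-records                     # case 2: authoritative empty answer
  fi
}
```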

* Write a perl filter that parses all DiG output lines, and generates
  output lines that are much easier to parse, in the form:
		HEADER status NXDOMAIN
		HEADER some other header line
		ANSWER nau.edu. SOA ...
		ANSWER nau.edu. NS ...
		ANSWER nau.edu. NS ...
		AUTHORITY nau.edu. NS ...
		AUTHORITY nau.edu. A ...
		[...]
  Then, each particular tool could be recoded:
		dig ... | perl thisscript | sed -n -e 's/^ANSWER //p'
  This extracts only ANSWER lines from the DiG output.  Some tools need
  only answer lines, others need a variety.  (This would replace the
  digoutany.awk script).  The output of that, for the
  "soa" tool for example, could be sent thru a last filter:
		perl -n -e 'if (/(.*) SOA (.*)/) {print $1," ",$2,"\n"}'
  This says to print the domain name and all the SOA fields on stdout
  for any SOA lines seen.  All non-SOA lines are ignored.
  This method is more concise and easier to code, read, and debug.
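
  The normalizing filter could be sketched as below (in awk here rather
  than perl, but the idea is the same).  The ";;" section headers are
  placeholders for whatever the local dig version actually prints:

```shell
# Tag every dig output line with its section name so later stages can
# simply grep for "^ANSWER " etc.
tag_sections() {
  awk '
    /^;; ANSWERS:/           { sec = "ANSWER";    next }
    /^;; AUTHORITY RECORDS:/ { sec = "AUTHORITY"; next }
    /^;;/                    { print "HEADER", substr($0, 4); next }
    NF                       { print (sec ? sec : "HEADER"), $0 }
  '
}
# usage:  dig ... | tag_sections | sed -n -e 's/^ANSWER //p'
```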

* Experiment with Domtools behind a firewall.
