My part of the ATM switch project at GTE Government Systems was to build an SNMP agent for monitoring and managing the switch.

An ATM switch is a complicated beast, not really because of the ATM (that's Asynchronous Transfer Mode, not Automatic Teller Machine ;-), but because it needs to interface to other sorts of network hardware. The GTE ATM switch supports a number of cards, including Ethernet (of course), SONET, HIPPI, and so on.

The GTE switch design has these cards as autonomous processors running their own embedded control software, and talking to each other using an internal Ethernet. Part of the configuration is a Sun workstation, which runs the non-realtime control and management software. This is where the SNMP agent normally runs. It talks UDP to the outside world, and also uses UDP to talk to the control software that runs as a flock of processes (daemons) on the Sun.
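As a rough illustration of that arrangement (a sketch in the spirit of the design, not the project's actual code; the internal port number is invented), the agent's main loop essentially multiplexes two UDP sockets, one facing the management network for SNMP and one on the internal side for the control daemons:

    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/select.h>
    #include <netinet/in.h>

    static int open_udp(unsigned short port)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in sin;

        memset(&sin, 0, sizeof sin);
        sin.sin_family      = AF_INET;
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        sin.sin_port        = htons(port);
        if (fd < 0 || bind(fd, (struct sockaddr *)&sin, sizeof sin) < 0)
            return -1;
        return fd;
    }

    int main(void)
    {
        int  snmp_fd = open_udp(161);    /* standard SNMP port              */
        int  ipc_fd  = open_udp(5001);   /* internal port: invented number  */
        char buf[2048];

        for (;;) {
            fd_set rfds;
            int    maxfd = (snmp_fd > ipc_fd ? snmp_fd : ipc_fd) + 1;

            FD_ZERO(&rfds);
            FD_SET(snmp_fd, &rfds);
            FD_SET(ipc_fd,  &rfds);
            if (select(maxfd, &rfds, NULL, NULL, NULL) < 0)
                break;

            if (FD_ISSET(snmp_fd, &rfds)) {
                ssize_t n = recv(snmp_fd, buf, sizeof buf, 0);
                /* decode the SNMP PDU; answer from the cache if possible,
                   otherwise queue it and fire off requests to the daemons */
                (void)n;
            }
            if (FD_ISSET(ipc_fd, &rfds)) {
                ssize_t n = recv(ipc_fd, buf, sizeof buf, 0);
                /* a control daemon replied: cache the data and re-check
                   the queue of waiting SNMP requests */
                (void)n;
            }
        }
        return 0;
    }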

The SNMP agent was based on the public-domain CMU agent, but with a number of major alterations, most of them driven by the need for a proxy agent. The term "proxy" in this case just means that the agent gets part of its data by sending a request to another process and waiting for the reply.

Taken literally, this description implies very poor performance, because of the delays in inter-process messaging. So, rather than waiting, I modified the agent so it could send off a request and then put the SNMP request on its "to-do" list. Later, when a reply comes in, the agent puts the data into its cache, and then scans the outstanding SNMP requests, looking for any that want data contained in the reply.
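In outline, the bookkeeping might look something like the following sketch; all of the names, types, and numbers here are invented for illustration, not lifted from the GTE code:

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_VARS   16
    #define CACHE_SIZE 256

    /* Values learned from daemon replies, indexed by a small integer
       standing in for the variable's OID. */
    struct cache_entry { int oid; long value; int valid; };
    static struct cache_entry cache[CACHE_SIZE];

    /* An SNMP request parked on the "to-do" list until every variable
       it asked for has shown up in the cache. */
    struct pending_req {
        struct pending_req *next;
        long req_id;                 /* SNMP request-id, for the response */
        int  nvars;
        int  oid[MAX_VARS];
    };
    static struct pending_req *pending;

    struct cache_entry *cache_lookup(int oid)
    {
        for (int i = 0; i < CACHE_SIZE; i++)
            if (cache[i].valid && cache[i].oid == oid)
                return &cache[i];
        return NULL;
    }

    /* Called for each (oid, value) pair unpacked from a daemon's reply. */
    void cache_store(int oid, long value)
    {
        struct cache_entry *e = cache_lookup(oid);

        if (e == NULL)                       /* not cached yet: find a slot */
            for (int i = 0; i < CACHE_SIZE; i++)
                if (!cache[i].valid) { e = &cache[i]; break; }
        if (e != NULL) {
            e->oid   = oid;
            e->value = value;
            e->valid = 1;
        }

        /* Re-scan the to-do list: any request whose variables are now all
           in the cache can be answered and taken off the list. */
        for (struct pending_req **pp = &pending; *pp; ) {
            struct pending_req *r = *pp;
            int done = 1;

            for (int i = 0; i < r->nvars; i++)
                if (cache_lookup(r->oid[i]) == NULL) { done = 0; break; }
            if (done) {
                printf("request %ld complete, sending SNMP response\n", r->req_id);
                *pp = r->next;
                free(r);
            } else {
                pp = &r->next;
            }
        }
    }

    int main(void)                           /* tiny demonstration */
    {
        struct pending_req *r = calloc(1, sizeof *r);
        r->req_id = 42; r->nvars = 2; r->oid[0] = 301; r->oid[1] = 302;
        pending = r;

        cache_store(301, 1234);   /* first reply: request still waiting */
        cache_store(302, 5678);   /* second reply: request completes    */
        return 0;
    }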

Part of the reason for doing it this way is that the various components of the GTE ATM switch exchange messages in an internal binary format, and most messages contain more than one datum. Some are hundreds of bytes in length, and contain dozens of related data fields. So when a message is sent to the SNMP agent, there is a good chance that it contains the data for a number of SNMP variables in a number of requests. The most straightforward way of handling this is via a cache of recent messages, and a queue of incomplete SNMP requests.
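To continue the sketch above (again with invented names and an invented field layout, since the real formats were GTE-internal), one daemon reply can feed several cached variables at once:

    /* Hypothetical layout for one internal message.  A single reply from a
       card's daemon carries several related counters, each of which backs
       a separate SNMP variable. */
    struct card_status_msg {
        unsigned short msg_type;       /* which message this is             */
        unsigned short card_slot;      /* which card it describes           */
        unsigned long  cells_in;       /* ATM cells received                */
        unsigned long  cells_out;      /* ATM cells transmitted             */
        unsigned long  crc_errors;     /* header CRC failures               */
        unsigned long  buffer_drops;   /* cells dropped for lack of buffers */
    };

    void cache_store(int oid, long value);   /* from the sketch above */

    /* Unpack one reply, handing every field to the cache; that single
       call chain may complete several queued SNMP requests at once. */
    void unpack_card_status(const struct card_status_msg *m)
    {
        int base = m->card_slot * 100;       /* invented OID numbering */

        cache_store(base + 1, (long)m->cells_in);
        cache_store(base + 2, (long)m->cells_out);
        cache_store(base + 3, (long)m->crc_errors);
        cache_store(base + 4, (long)m->buffer_drops);
    }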

An interesting part of this project was the way in which the various inter-process messages were standardized. They were kept in a single large file, which was an Excel spreadsheet. New versions of this would appear on a weekly basis, as development proceeded. Translating them into data declarations in the various languages (C, C++, Ada) is potentially a huge sinkhole for developers' time, and it is all too easy to miss changes.

My solution to this (which was adopted by several other developers on the project) was to write a perl program that reads the spreadsheet, and generates much of the needed code (C #defines and structs, C++ classes, Ada consts and records). The spreadsheet thus became the source, and the C/C++/Ada files were derived from it. This greatly speeds up dealing with changes to the message formats, and the compilers will tell you about most of the formatting mistakes.
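To give the flavor of it, a single spreadsheet row might come out the other end looking roughly like this (the message names, numbers, and fields here are made up for illustration, not the project's real ones):

    /* ---- machine-generated from the message spreadsheet; do not edit ---- */

    #define MSG_CARD_STATUS_REQ  0x0101   /* Sun -> card: request status  */
    #define MSG_CARD_STATUS_RSP  0x0102   /* card -> Sun: status reply    */

    struct card_status_req {              /* body of MSG_CARD_STATUS_REQ  */
        unsigned short msg_type;
        unsigned short card_slot;
    };

The same row would also come out as the matching C++ class and Ada record, so all three languages stayed in step with the weekly spreadsheet updates.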

I also wrote a perl program that parses ASN.1 MIB files, and produces the agent's MIB table. In the case of variables in the switch's inter-process messages, this translator could very often generate the agent's GET routines, too, fully automating the usually laborious task of writing SNMP agent routines for a private MIB.
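The generated GET code amounted to little more than "find this field of that message". Here is a hedged sketch, using the invented cache interface from the earlier examples rather than the CMU agent's real variable-handler interface:

    /* Sketch of a generated GET hook (invented interface).  The point is
       that the MIB translator knows both the variable's OID and the message
       field it maps to, so it can emit this routine itself. */

    struct cache_entry { int oid; long value; int valid; };   /* as above */
    struct cache_entry *cache_lookup(int oid);                /* as above */

    /* GET routine for a hypothetical cardCellsIn variable. */
    int get_cardCellsIn(int card_slot, long *value_out)
    {
        struct cache_entry *e = cache_lookup(card_slot * 100 + 1);

        if (e == NULL)
            return -1;     /* not cached yet: the agent queues the request
                              and asks the card's daemon for the data     */
        *value_out = e->value;
        return 0;
    }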

The resulting agent had a private MIB of around 700 variables and, on a Sun SPARC 10, could field around 300-500 requests per second. This is an aggregate throughput, of course; delays in the UDP inter-process messaging would typically make a single request take 0.1 to 1 second.

Another aspect of this project was working on some GUI tools. The most successful were a set of tools written in tcl.

To be continued ...