As with other service-providing processes in QNX Neutrino, the networking services execute outside the kernel. Developers are presented with a single unified interface, regardless of the configuration and number of networks involved.
This architecture allows networking protocols, filters, and drivers to be started, stopped, and reconfigured dynamically, without modifying or restarting the kernel.
Our native network subsystem consists of the network manager executable (io-net), plus one or more shared library modules. These modules can include protocols (e.g. npm-qnet.so, npm-tcpip.so), drivers (e.g. devn-ne2000.so), and filters.
The io-net process
The io-net component is the active executable within the network subsystem. Acting as a kind of packet redirector/multiplexer, io-net is responsible for loading protocol and driver modules based on the configuration given to it on its command line (or via the mount command after it's started).
Employing a zero-copy architecture, the io-net executable efficiently loads multiple networking protocols, filters, or drivers (e.g. npm-qnet.so, npm-tcpip.so) on the fly -- these modules are shared objects that install into io-net.
The io-net framework lets you set up a filter module, which can be registered above or below a producer module, such as a driver or protocol module.
A filter module allows you to intercept data packets as they're passed from a producer module, letting you modify, drop, or simply monitor them. You can also direct packets to other interfaces (e.g. to bridge or forward). Typically, a filter module would be registered above a network driver module (e.g. Ethernet).
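As a sketch of how a filter might be brought into a running system, the commands below load an Ethernet driver and then mount a filter module on top of it. The module name nfm-nat.so is used purely for illustration; the filter modules available depend on your installation.

```shell
# Start io-net with an NE-2000 Ethernet driver loaded.
io-net -dne2000 &

# Mount a filter module (nfm-nat.so is an illustrative name)
# into the running io-net instance.
mount -T io-net nfm-nat.so
```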
The basic role of the converter module is to encapsulate and de-encapsulate packets as they pass from one network layer to another (e.g. IP to Ethernet). You use converters to connect producer modules together (e.g. a network protocol stack to a network driver).
These modules may also implement the protocols used to resolve the addressing used by the network protocol module to the physical network addresses supported by the network driver. For example, the ARP protocol (IP-to-Ethernet address translation) could be implemented as part of a converter module.
The networking protocol module is responsible for implementing the details of a particular protocol (e.g. Qnet, TCP/IP, etc.). Each protocol component is packaged as a shared object (e.g. npm-qnet.so). One or more protocol components may run concurrently.
For example, the following line from a buildfile shows io-net loading an NE-2000 driver (via its -d option) and two protocols, TCP/IP and Qnet, via its -p protocol command-line option:
io-net -dne2000 -ptcpip -pqnet
Qnet also provides Quality of Service policies to help ensure reliable network transactions.
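A QoS policy can be selected per transaction through the pathname used to reach a remote node. The sketch below assumes the node~policy:interface pathname convention; the node name lab2 and interface name en0 are placeholders, not names from this document.

```shell
# Access a remote node's pathname space, restricting Qnet to a
# single link (the "exclusive" QoS policy) on interface en0.
# lab2 and en0 are assumed, illustrative names.
ls /net/lab2~exclusive:en0/proc
```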
For more information on the Qnet and TCP/IP protocols, see the chapters on native networking (Qnet) and TCP/IP networking in this book.
The network driver module is responsible for managing the details of a particular network adaptor (e.g. an NE-2000 compatible Ethernet controller). Each driver is packaged as a shared object and installs into the io-net component.
Once io-net is running, you can dynamically load drivers at the command line using the mount command. For example:
io-net &
mount -T io-net devn-ne2000.so
would start io-net and then mount the driver for an NE-2000 Ethernet adapter. All network device drivers are shared objects of the form devn-*.so (e.g. devn-ne2000.so).
Once the shared object is loaded, io-net will then initialize it. The driver and io-net are then effectively bound together -- the driver will call into io-net (for example when packets arrive from the interface) and io-net will call into the driver (for example when packets need to be sent from an application to the interface).
You can also use the umount command to unload a driver:
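For example, assuming the driver has registered the first Ethernet interface under /dev/io-net/en0 (the exact entry depends on your configuration), you could unload it like this:

```shell
# Unload the driver bound to the en0 interface entry.
# The /dev/io-net/en0 path is an assumption about this setup.
umount /dev/io-net/en0
```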
For more information on network device drivers, see their individual utility pages (devn-*) in the Utilities Reference.
Although several network drivers are shipped with the OS, you may want to write your own driver for your particular networking hardware. The Network Driver Development Kit makes this task relatively easy. The DDK provides full source code for several sample drivers as well as detailed instructions for handling the hardware-dependent issues involved in developing custom drivers for the io-net infrastructure.