Console dæmons for Linux

This is a white paper outlining a proposal for modifying Linux to employ application-mode console dæmons, removing console functionality from the kernel itself and addressing some problems that have been in Linux for over a decade.

Specifications: What this is intended to achieve

Linux consoles should support the full range of Unicode output. In other words: All Unicode glyphs (or, at the very least, all glyphs for all code points in the Basic Multilingual Plane) should be displayable simultaneously.

Currently, this is not the case. This is for the simple reason that in order to display more than 256 glyphs simultaneously one needs a so-called "framebuffer console" (i.e. where the console is displayed using the display adapter in graphics mode, with character glyphs being rendered in software), but this in turn requires that font bitmaps be loaded into kernel space (for use by the console driver). A font such as Code2000 would occupy 998KiB of nonpageable kernel space (16 bytes per glyph × 63888 glyphs).

Linux should support simple multi-head operation.1 In other words: Linux should not require administrators to jump through hoops simply in order to get two sets of entirely separate virtual consoles to come up on a machine with two mice, two keyboards, and two display adapters.

Currently, this is not the case. Aside from the fact that virtual consoles all come up on a single display adapter initially, and have to be explicitly moved (using con2fb) to another display adapter, virtual consoles on different display adapters are more strongly coupled to one another than they should be. The virtual console switching mechanism, in particular, is geared towards there only being one actual user.

A simple display mechanism should remain available in the kernel for boot-time kernel messages, kernel trap ("OOPS") reports, and shutdown messages issued after all user processes have been terminated. Such messages are issued in circumstances where user processes are not running, or cannot be run. Non-fatal messages should go through another mechanism, however, and the functionality of the in-kernel display mechanism need not be that of a full console — merely enough to display the fatal and bootstrap/shutdown messages.

Design: Where we are intending to go

Our aim is as follows:

Virtual console devices are completely divorced from actual hardware. A virtual console device is similar to a pseudo-terminal, in that it has a "master" side and a "slave" side. The "slave" side remains the same as it was (i.e. character devices with major number 4), presenting the same I/O API to applications programs, complete with cooked and raw input queues, TTY ioctl()s, and whatnot; whereas the "master" side allows applications programs to read and consume the output stream written to the console (from the "slave" side) and to generate and inject keyboard and mouse events into the console's input queue.

One difference from pseudo-terminals is that the "slave" side of a virtual console does not require that there be an open file descriptor attached to the "master" side in order to operate. If the "master" side is not open, "slave" side operations continue to append to the output queue and to drain the input event queue as normal.

The physical hardware is presented to applications programs as "raw" devices. The kernel exports a "frame buffer" device for each physical display adapter, a "raw keyboard" device for each physical keyboard, a "raw mouse" device for each physical mouse, and so forth.

A "raw" keyboard device simply returns the individual scan codes obtained from the keyboard, and provides ioctl() interfaces for switching scan code sets, turning LEDs on and off, and similar low-level functionality. It does not know about any kind of keyboard maps, hot keys, or even about shift states. Mapping from scan codes to virtual key codes, and thence to characters, is the purview of user-mode processes, not kernel-mode code.

Similarly, a "raw" mouse device simply returns the raw mouse data and provides ioctl() interfaces for configuring the mouse hardware at a low level (such as altering its sensitivity, or changing the mouse data packet format that it employs). The device does not actually interpret the raw data stream sent from the mouse.

The "frame buffer" device exports mechanisms for directly accessing the frame buffer, setting and querying graphics modes, controlling any associated graphics acceleration hardware, controlling hardware sprites (used if available for the cursor and the pointer), and so forth. The "frame buffer" device driver is responsible for handling cleanup of the device hardware to a known state when a server process unexpectedly dies. It is also responsible for plug-and-play resource allocation and PCI/AGB bus control, alleviating the need for application-mode processes to unilaterally manage these.

The same ethos is adopted here as for the original KGI project in 1998: "Frame buffer" device drivers only export "safe" operations, i.e. manipulations of the display hardware that cannot crash the machine (by, for example, locking the PCI bus).

If a display adapter card supports multiple frame buffers, the frame buffer device driver presents these as individual "frame buffer" devices.

The actual rendering of virtual consoles on physical hardware is the responsibility of console dæmon processes. Think of these processes as a cross between an X Window System server and the screen program if you like. Each process opens a set of raw human-interface devices; employs one or more "virtual console capture" devices; and operates the "master" sides of one or more virtual console devices. Exactly which devices an individual console dæmon process controls, and how many console dæmon processes are run, are both configured by the system administrator.

Thus all of the tasks of (a) understanding individual mouse protocols, (b) mapping keyboard scan codes to characters, (c) turning escape codes into character cell operations, and (d) looking up font bitmaps in order to rasterize character cells, are all performed in user mode. In particular, all of the maps for doing so (such as Unicode fonts) and code for doing so (such as the ANSI/VT100 escape code interpreter) are held in application space, not in kernel space, and are pageable.

Console dæmons are responsible for updating "virtual console capture" devices (i.e. character devices with major number 7). Console dæmons can, in essence, use virtual console capture devices as shared memory areas containing the character cell buffers and cursor position information that their terminal emulator code updates when processing output. The kernel does not manipulate these devices itself, but simply acts as a passive provider of a communications mechanism between the console dæmons and any other process that wants to access the actual character cell buffers and cursor position information.

Indeed, the VCC devices can simply be replaced by shared memory areas, or by ordinary files.
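
To make this concrete, a console dæmon's terminal emulator and (say) a screen-reader process could share a character cell buffer along the following lines. The layout and the helper shown are hypothetical; the actual format of a VCC is something that the proposal, not this sketch, would define.

/* Sketch only: a hypothetical layout for a VCC (virtual console
 * capture) object realized as an ordinary file or shared memory area. */
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

struct vcc_cell {
    uint32_t code_point;    /* Unicode code point */
    uint8_t  fg, bg, attr;  /* colours and attributes */
};

struct vcc_header {
    uint16_t columns, rows;
    uint16_t cursor_x, cursor_y;
    uint16_t pointer_x, pointer_y;
    struct vcc_cell cells[];  /* rows × columns character cells */
};

static struct vcc_header *vcc_map(const char *path, size_t len)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return NULL;
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    return p == MAP_FAILED ? NULL : p;
}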

The terminal capabilities provided to kernel-mode code are an exceedingly thin layer over the raw frame buffer devices. Basically the kernel contains a very simple terminal emulator that contains a 256-glyph font, that understands a few basic control codes such as backspace, horizontal tab, vertical tab, carriage return, form-feed, and line-feed, and that does nothing else. The in-kernel terminal emulator does not need to implement escape sequences, font maps, or alternate character sets, or to provide any sort of input mechanism at all. It is just enough of a terminal emulator to provide the underpinnings for "glass tty" output using printk(), and nothing more.

The in-kernel terminal emulator is not even capable of changing display modes. The frame buffer device is constrained so that the in-kernel terminal emulator is always capable of rendering characters in whatever the current display mode happens to be. In general, this merely requires that the frame buffer device provide direct access to video RAM and information about the current colour depth, "stride" of the display, and so forth that is necessary to bit-blit character bitmaps directly into video RAM.
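
As a sketch of just how thin this layer is, blitting one glyph of an 8×16, 1-bit-per-pixel font into (say) a 32-bits-per-pixel frame buffer needs nothing more than the base address and the stride that the frame buffer driver reports. The function and parameter names here are illustrative only, not an existing kernel interface.

/* Sketch only: blit one 8x16 glyph from a 1bpp font into a 32bpp frame
 * buffer, given only the base address and the stride (bytes per scan
 * line) reported by the frame buffer driver. */
#include <stdint.h>

void blit_glyph(volatile uint8_t *fb_base, unsigned stride,
                const uint8_t glyph[16], unsigned cell_x, unsigned cell_y,
                uint32_t fg, uint32_t bg)
{
    for (unsigned row = 0; row < 16; ++row) {
        volatile uint32_t *line = (volatile uint32_t *)
            (fb_base + (cell_y * 16 + row) * stride) + cell_x * 8;
        for (unsigned bit = 0; bit < 8; ++bit)
            line[bit] = (glyph[row] & (0x80 >> bit)) ? fg : bg;
    }
}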

Each frame buffer device driver registers itself with the in-kernel terminal emulator when it is initialized, providing this information (and keeping it up-to-date as video modes change) for each individual frame buffer device.

The in-kernel terminal emulator needs to know how to switch power states, in order to turn on the display when a kernel trap report is displayed. There is also, optionally, an ioctl() API provided by frame buffer devices so that console dæmons and the in-kernel terminal emulator can share a notion of the current text output position. (Frame buffer drivers make no use of this information themselves. They merely act as conduits of shared state between the terminal emulators in the console dæmons and the in-kernel terminal emulator.)

On IBM PC compatible platforms, the font used by the in-kernel terminal emulator can just be the EGA font in the PC firmware ROM, snaffled by architecture-dependent code at bootstrap time. On other systems, it can be a compiled-in font.

Examples of this design in action

For the sake of example, posit a system with three display adapters, three keyboards, and three mice (in other words, the physical hardware for three "heads"), and with sixteen virtual consoles.

In /etc/inittab (System V style init), or via /service (runit), the system administrator spawns a mingetty process on each of the virtual consoles, /dev/vc/0 to /dev/vc/15. This is unchanged from before: login on virtual consoles proceeds exactly as it always has.

In the initial startup script (e.g. /etc/runit/1 on runit systems), console dæmons are spawned in the background. The (example) dæmon program is console-dæmon, and it simply takes a list of physical devices, the locations of the VC "master" and VCC device directories, and virtual console numbers as its arguments (in this simplistic example, the arguments are just a plain list; in a real implementation, a console dæmon would be capable of combining an arbitrary number of physical devices, and would also take parameters to specify the initial font, terminal type, and suchlike):

/sbin/console-dæmon /dev/fb/0 /dev/kbd/0 /dev/mouse/0 /dev/vcm /dev/vcc 0 1 2 3 4 5 6 7 &
/sbin/console-dæmon /dev/fb/1 /dev/kbd/1 /dev/mouse/1 /dev/vcm /dev/vcc 8 9 10 &

The first console dæmon uses the first frame buffer, keyboard, and mouse devices, and controls the first 8 virtual consoles, employing their associated virtual console capture devices. The second console dæmon uses the second set of physical devices, and controls the next 3 VCs. As soon as the dæmons are running, the virtual consoles become "attached" to the physical devices.

The system administrator decides, for whatever reason, to move VCs 8, 9, and 10 from the second "head" to the third. Xe simply kills the existing console dæmon process and starts another one (in this simplistic example, no effort is made to preserve font and terminal emulator state; in a real implementation, the console dæmon would probably have a saved state file of some sort, or use an augmented VCC device for saving state):

kill ${console-dæmon-1-pid}
/sbin/console-dæmon /dev/fb/2 /dev/kbd/2 /dev/mouse/2 /dev/vcm /dev/vcc 8 9 10 &

The user on the first head presses ctrl-alt-F2. The first console dæmon receives this as a series of three keyboard scancode sequences from /dev/kbd/0. The first two simply manipulate its current shift state. It recognizes F2, when ctrl and alt are pressed, as a hotkey. In response it repaints the contents of /dev/vcc/1 onto /dev/fb/0 and switches its input focus to VC 1, thereafter injecting input events via /dev/vcm/1.

The user on the second head presses ctrl-alt-del. The second console dæmon receives this as a series of three keyboard scancode sequences from /dev/kbd/1. The first two simply manipulate its current shift state. It recognizes del, when ctrl and alt are pressed, as a hotkey. In response it issues an ioctl() against /dev/vcm/8 that instructs it to simulate a terminal hangup.
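
The shift-state tracking and hotkey recognition involved amount to a small state machine along these lines. PC scan code set 1 is assumed (0x1D is left ctrl, 0x38 is left alt, 0x3B through 0x44 are F1 through F10, and bit 7 marks a key release), and the mapping of hotkeys to VC numbers is purely illustrative.

/* Sketch only: track ctrl/alt state and recognize ctrl-alt-Fn hotkeys
 * from raw scan codes (PC scan code set 1 assumed).  Returns the VC
 * number to switch to, or -1 if the scan code is not a hotkey. */
static int ctrl_down, alt_down;

int hotkey(unsigned char scancode)
{
    int release = scancode & 0x80;
    unsigned char code = scancode & 0x7F;

    switch (code) {
    case 0x1D: ctrl_down = !release; return -1;   /* left ctrl */
    case 0x38: alt_down  = !release; return -1;   /* left alt  */
    }
    if (!release && ctrl_down && alt_down && code >= 0x3B && code <= 0x44)
        return code - 0x3B;   /* F1..F10 -> VC 0..9; F2 -> VC 1 as above */
    return -1;
}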

Consequences of this design

Some of the consequences of this design, in no particular order, are:

The mapping from virtual consoles to heads is not fixed. In other schemes for multi-headed operation, the mapping from virtual consoles to heads is fixed, and hardwired into the kernel. The Nth bank of M virtual consoles maps to head number N, in such schemes. In this scheme, not only is the mapping not hardwired, it is not even required that each head comprise the same number of virtual consoles or physical I/O devices as all of the others, or that virtual consoles be grouped in contiguous ranges.

Virtual consoles can be migrated across heads without rebooting. Migrating virtual consoles across different physical I/O devices is simply a case of stopping, reconfiguring, and restarting the relevant console dæmon processes. Since the "slave" sides of virtual consoles can operate without an open file descriptor attached to their "master" sides, temporarily shutting down a console dæmon is entirely invisible to the processes that are actually using the virtual console as their TTY.

There is no strong coupling between printk() and ordinary console output. In particular, there is no need for the console dæmon's terminal emulator to be in a sane state when printk() is called. The console dæmon could be halfway through receiving an ANSI CSI sequence, for example, and the operation of printk() would be unaffected.

Similarly, there is no coupling of font and character set state. The in-kernel terminal emulator is wholly ignorant of the font and character set state of the console dæmon's terminal emulation code. printk() always displays in the hardwired font, and thus is immune from escape sequences that would, if a single shared terminal emulator were used, render kernel panic messages illegible.

User-visible virtual console operation is unchanged. With the exception that each head's set of virtual consoles is now wholly independent of all other sets, the ctrl-alt-Fn hotkeys operate exactly as before.

Capturing console output with the TIOCCONS ioctl() is obsolete. There's nothing left to capture. Normal non-fatal kernel messages pass through the /proc/kmsg mechanism, and kernel panic and trap screens are not, and should not be, captured in the first place.

Large swathes of kernel code and data simply vanish, their functionality transferred to application-mode programs. The kernel no longer contains the code of a full ANSI/VT100 terminal emulator. It no longer contains session switching code. It no longer contains charset→Unicode translation code for terminals. It no longer contains keyboard scancode→character translation code. It no longer contains fonts and keymaps.

Accessibility support, for the disabled, becomes an applications programming issue, and is no longer a kernel programming issue. For example: A console dæmon does not have to realize its character cell buffers by blitting glyphs onto a frame buffer device. For blind users, one could implement an alternative console dæmon that took the contents of the character cell buffers and sent them through a speech synthesizer. The point to note here is not the detail of the implementation but that choosing between the two, or implementing a third approach, is purely an applications programming exercise. No kernel mode programming is required.

The recognition and handling of the secure attention key becomes purely an application-mode issue. The kernel no longer need know anything at all about SAKs. Issues of handling SAKs from interrupt mode simply go away entirely.

Perhaps the simplest way for a console dæmon to handle an SAK is to force a hangup on the current VC, using the "simulate a terminal hangup" ioctl() on its "master" side described above.

The character cell buffers need not occupy kernel memory space. VCCs need only support read and write operations for access to the character cell buffer, size, cursor position, and pointer position information. They can just be ordinary files.

Thus all VCC code can be removed from the kernel, and major device number 7 can be re-used for something else. Prime candidates for "something else" are the new "master" side devices for VCs. Thus it is not required that a new major device number be allocated for those.

There's no special "system console". There's no need for one. Having non-fatal kernel messages visible via a hot-key is done the same way with this design as it was before: One allocates a virtual console to the task (and doesn't run a mingetty on it), configures syslogd to write to that virtual console, and configures a console dæmon to display that virtual console. Boot-time kernel output and fatal kernel messages (trap screens) come up on all frame buffers (once the drivers for those frame buffers have actually been loaded). And the secure attention key is not properly a function of a "system console" in the first place.

There is strong security partitioning between console dæmons and the processes using VCs, and console dæmons do not require superuser privileges. Console dæmons only need permission to open and to use the human interface devices and the VC "master" devices. They do not need superuser privileges, merely read/write/ioctl access to the devices. Moreover, the physical human interface devices need not, and should not, be directly accessible to ordinary users. All user access to the physical devices is mediated either by a console dæmon process or an X server process.

Console dæmons execute with normal user privileges. The device driver API is required to only export "safe" operations that an unprivileged user (with permission to use the device, of course) may execute. No superuser privileges are required. Even the secure attention key does not require a console dæmon to have superuser privileges. (The secure attention key causes a terminal hangup. It is the login process executing on the "slave" side of the VC that then calls vhangup().)

Console dæmons do not execute with the same privileges as the logged on user. There isn't necessarily a 1-to-1 relationship in the first place between console dæmons and users logged on to the virtual consoles that they manage. Different VCs may have different users logged on.

In a vanilla configuration, console dæmons execute under the aegis of a special-purpose user account ("console", say) and the human interface devices are owned by that account and have rw------- permissions.

No mechanism is thus required to adjust the ownership of human interface devices on the fly as users log in and log out (this is, in itself, one less thing in the system that requires superuser privileges).

A console dæmon is required when the system is booted to "single-user" mode. However, it can be as simple as one that serves a single VC on the first head:

/sbin/console-dæmon /dev/fb/0 /dev/kbd/0 /dev/mouse/0 /dev/vcm /dev/vcc 0 &

Indeed, it can even be a special-purpose console dæmon that is compiled solely for use in "single-user" mode. (e.g. It may be statically linked. It may contain only a limited set of device drivers, comprising solely those for the physical human interface devices used in single-user mode. It may omit the code for performing session switching, for performing secure attention key processing, and for handling multiple fonts and keyboard mappings. It may have merely "basic" device drivers that do not exploit advanced features of the human interface device hardware.)

Interaction with X

On machines where all applications employ a textual user interface (albeit one that employs the whole of Unicode), the console-dæmon described in the preceding sections suffices. In other words: If no X server is required, the described console-dæmon is all that need be employed.

However, many machines require the presence of an X server, which will compete with the console dæmon for use of the physical I/O hardware.

Previously, this would have involved VT_WAITACTIVE and friends, mechanisms that have been acknowledged as badly designed (because they are prone to race conditions and lost events) since at least 1996. With console dæmons, there are two ways of adding X servers to the mix. Neither requires VT_WAITACTIVE et al.

Inferior method: Use IPC

The simplest means of arranging for a console dæmon and an X server to share the same hardware is to realize that this is nothing more than a simple exercise in normal inter-process communication and negotiation over the use of a shared resource. The solution for this is well-known: mutual exclusion.

So the solution is to just use the normal Linux IPC mechanisms. Each "head" is assigned a mutual exclusion semaphore. Both the console dæmon and the X server contend for that semaphore. The owner of the semaphore is the one that has control over the physical I/O devices. The name of the semaphore is configured by an administrator, and passed to the console dæmon and to the X server when they are started up. A sketch of the resulting handoff appears after the two procedures below.

As far as the X server is concerned, session switches proceed as follows:

  1. The X server begins in a state where it has acquired the mutex semaphore.

  2. The X server reads a ctrl-alt-Fn hotkey sequence, from the raw keyboard device, that indicates some session other than its own should be switched to.

  3. The X server saves the current state of the framebuffer, keyboard, and mouse devices, and notes that it no longer controls those devices.

  4. The X server releases ownership of the mutex semaphore.

  5. The X server attempts to regain ownership of the mutex semaphore. Until it gains ownership, it does not touch the physical I/O devices at all. It does not consume any keyboard or mouse events, and drawing requests from X clients are either buffered or discarded, as appropriate.

  6. Once it regains ownership of the mutex semaphore, the X server restores the prior saved state of the physical I/O devices and resumes its consumption of input events and its drawing on the frame buffer.

As far as the console dæmon is concerned, session switches proceed as follows:

  1. The console dæmon begins in a state where it has acquired the mutex semaphore.

  2. The console dæmon reads a ctrl-alt-Fn hotkey sequence, from the raw keyboard device, that indicates the X server session should be switched to.

  3. The console dæmon saves the current state of the framebuffer, keyboard, and mouse devices, and notes that it no longer controls those devices.

  4. The console dæmon releases ownership of the mutex semaphore.

  5. The console dæmon attempts to regain ownership of the mutex semaphore. Until it gains ownership, it does not touch the physical I/O devices at all. It does not consume any keyboard or mouse events. It still processes output received from the virtual console "master" devices that it is listening to, processing terminal escape sequences and writing to the VCC devices. But processing stops there. It does not paint the character cells from any VCC device onto the frame buffer.

  6. Once it regains ownership of the mutex semaphore, the console dæmon restores the prior saved state of the physical I/O devices and resumes its consumption of input events and its drawing on the frame buffer.
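
A minimal sketch of this handoff, common to both procedures above, might use a POSIX named semaphore as the per-head mutex. The semaphore name and the save/restore helpers here are placeholders for whatever the administrator configures and the individual programs actually implement.

/* Sketch only: the release/reacquire cycle shared by the console dæmon
 * and the X server, using a POSIX named semaphore as the per-head
 * mutex.  The semaphore name and the helpers are placeholders. */
#include <fcntl.h>
#include <semaphore.h>

extern void save_device_state(void);
extern void restore_device_state_and_repaint(void);

void switch_away_and_wait(sem_t *head_mutex)
{
    save_device_state();                 /* step 3 */
    sem_post(head_mutex);                /* step 4: release ownership */
    sem_wait(head_mutex);                /* step 5: block until regained */
    restore_device_state_and_repaint();  /* step 6 */
}

/* The semaphore itself is opened at startup, for example:
 *     sem_t *m = sem_open("/head0", O_CREAT, 0660, 1);
 */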

Superior method: Just have one server

The superior method of arranging for a console dæmon and an X server to share the same hardware is to realize that the two are in fact doing exactly the same job. They both have a set of physical I/O devices. They both have a set of clients that they poll for display output requests (sockets for communicating with X clients in the case of the X server, "master" sides of virtual consoles in the case of the console dæmon). They both consume raw input events from the keyboard and mouse, process them, and hand them over (in a post-processed form) to their clients.

It is a logical approach, therefore, to just have one single server process that does both tasks, for any given "head".2 This has several benefits:

  1. There is no longer any negotiation over the physical I/O devices at all. A single process owns them for the lifetime of the "head", so the mutex semaphore and the saving and restoring of device state in the inferior method simply disappear.

  2. Switching between a TUI session and a GUI session becomes an internal matter for that one server, handled in just the same way as switching between two virtual consoles.

Note that because virtual consoles may be migrated from one console dæmon to another on the fly, it is possible to start a system up with a TUI console dæmon on its own, and then later on transfer control of the VCs that it manages to another all-in-one combined X server and console dæmon.

Implementation: How to get there from here

Write a basic (not unified with an X server) console dæmon

Most of the code for this already exists, in one form or another. The screen program contains ANSI/VT100 character processing, as does the existing kernel, for example. And the X server contains keyboard scan code processing. Such code can simply be migrated over wholesale. screen also shares the same basic event-driven structure as a console dæmon, so there's already a prior example of how to implement such a program. (A console dæmon doesn't do everything that screen does, though. In particular, it isn't involved in spawning the actual processes that use VCs.)

A basic console dæmon's operation is as follows:

  1. Open handles for all of the physical devices, VCs, and VCCs.
  2. Repaint the current VC onto the frame buffer, using the character cell buffer in the associated VCC.
  3. Enter a loop, calling poll() with a timeout on all of the handles that it owns (i.e. that the X server does not own), and process events as they occur: raw keyboard and mouse input is decoded and injected, via the "master" sides, into whichever VC has the input focus; output drained from the "master" sides is run through the terminal emulator, updating the relevant VCCs; and changes to the current VC's VCC are repainted onto the frame buffer. (A minimal sketch of such a loop follows.)
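
This sketch assumes nothing beyond poll() over ordinary file descriptors; the handler functions and the 250 millisecond timeout are placeholders.

/* Sketch only: the core event loop of a basic console dæmon.  Each
 * descriptor (raw keyboard, raw mouse, VC "master" sides) is paired
 * with a handler; the handlers themselves are placeholders. */
#include <poll.h>

extern void repaint_if_dirty(void);   /* current VCC -> frame buffer */

void daemon_loop(struct pollfd *fds, void (**handlers)(int), nfds_t count)
{
    for (;;) {
        /* The timeout serves such things as cursor blinking. */
        if (poll(fds, count, 250) < 0)
            continue;
        for (nfds_t i = 0; i < count; ++i)
            if (fds[i].revents & POLLIN)
                handlers[i](fds[i].fd);
        repaint_if_dirty();
    }
}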

Decouple VC devices from all physical hardware

VC devices should have "master" sides that are like the "master" sides of PTY devices. If a process writes a character to the "slave" side of a VC device, it should be simply punted through to the "master" device after passing through the line discipline. con_write() should be much more like pty_write().

They aren't quite the same, though. "slave" sides of VCs can operate without any handle to the associated "master" side being open. (This allows console dæmons to detach from and reattach to VCs without disturbing the processes that are using those VCs.)

Modify the X server

The X server allocates a VC and uses it to negotiate ownership of the frame buffer, mouse, and keyboard devices with the VC subsystem in the kernel. It uses various ioctl()s to allocate a VC, switch into "graphics" mode (so that the VC subsystem doesn't attempt to draw text on it), and switch the keyboard and mouse devices into "raw" mode. There are two known and insoluble race conditions with this design. (Two servers could grab the same VC, if they've both been configured to allocate the first free VC; and activation events can become lost.)
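
For contrast, an abridged sketch of that legacy negotiation follows (the ioctl()s are the real ones from <linux/vt.h> and <linux/kd.h>; error handling is omitted). The race is visible in the window between the VT_OPENQRY query and the subsequent open().

/* Abridged sketch of the legacy VT negotiation that an X server must
 * perform today, and that this design eliminates.  Note the window
 * between VT_OPENQRY and open(): a second server querying at the same
 * time can be told the same "first free" VT. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/kd.h>
#include <linux/vt.h>

int legacy_acquire_vt(void)
{
    int vtno;
    char name[32];
    int fd0 = open("/dev/tty0", O_WRONLY);
    ioctl(fd0, VT_OPENQRY, &vtno);        /* "first free" VT number */
    close(fd0);
    /* ... the race: another server may be told the same number ... */
    snprintf(name, sizeof name, "/dev/tty%d", vtno);
    int fd = open(name, O_RDWR);
    ioctl(fd, VT_ACTIVATE, vtno);
    ioctl(fd, VT_WAITACTIVE, vtno);       /* activation events can be lost */
    ioctl(fd, KDSETMODE, KD_GRAPHICS);    /* stop the kernel drawing text */
    ioctl(fd, KDSKBMODE, K_RAW);          /* raw keyboard */
    return fd;
}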

With a console dæmon, all of this goes away. The X server does not need to have any dealings whatsoever with VC devices, and all of the code in it that deals with VCs can be removed. Negotiation of ownership of the physical devices with the console dæmon is an ordinary IPC issue involving a mutex semaphore; and the input devices are always in "raw" mode.

Eliminate the code and data from the kernel that are now in application space

Almost all of include/linux/consolemap.h, include/linux/console_struct.h, drivers/char/consolemap.c, drivers/char/keyboard.c, drivers/char/vt.c, and drivers/char/vt_ioctl.c are superseded by the console dæmon. In other words: Most of the code that deals with a struct vc_data, and the keyboard scancode handlers (called from kbd_keycode()), now lives in application space.


Footnotes

  1. Don't confuse multi-head operation with the simpler "dual monitor" operation. Dual monitor operation is where a single user employs two or more displays. Multi-head operation is where more than one user each has xyr own separate set of displays, mice, and keyboards. Also don't infer that multi-head operation requires multiple display adapter cards. A single display adapter card that provides two or more separate frame buffers and outputs can be employed for multi-head operation.

  2. This is not a new idea. The CSRSS process in Windows NT versions 3.1 and 3.5 was exactly this kind of unified console server and graphics server. It employed access to "raw" physical I/O devices, and a client-server inter-process communication mechanism for communicating with both TUI and GUI client processes. (Hence the name "Client-Server Runtime Subsystem".) Graphics drivers were simply DLLs loaded into the CSRSS process.


© Copyright 2006 Jonathan de Boyne Pollard. "Moral" rights asserted.
Permission is hereby granted to copy and to distribute this web page in its original, unmodified form as long as its last modification datestamp information is preserved.