
DragonFly BSD is a free, Unix-like operating system which was forked from FreeBSD 4.8. Matt Dillon, a long-time FreeBSD and Amiga developer, started work on DragonFly BSD in June 2003 and announced it on the FreeBSD mailing lists on July 16, 2003 [http://lists.freebsd.org/pipermail/freebsd-current/2003-July/006889.html].

Dillon started DragonFly in the belief that the methods and techniques being adopted for threading and SMP in FreeBSD 5 would lead to a poorly performing system that would be very difficult to maintain. He sought to correct these suspected problems within the FreeBSD project. Due to ongoing conflicts with other FreeBSD developers over the implementation of his ideas, and other reasons, his ability to directly change the FreeBSD code was eventually revoked. Despite this, the DragonFly BSD and FreeBSD projects still work together contributing bug fixes, driver updates and other system improvements to each other.

Intended to be "the logical continuation of the FreeBSD 4.x series", DragonFly is being developed in an entirely different direction from FreeBSD 5, including a new Light Weight Kernel Threads (LWKT) implementation and a lightweight ports/messaging system. Many concepts planned for DragonFly were inspired by AmigaOS.

Kernel design


Like most modern kernels, DragonFly is a hybrid, containing features of both monolithic kernels and microkernels, and attempting to make the best use of both technologies: the message-passing capability of microkernels enables larger portions of the OS to benefit from protected memory, while the speed of monolithic kernels is retained for certain critical tasks. The messaging subsystem being developed is similar to those found in microkernels such as Mach, though it is less complex by design. DragonFly's messaging subsystem can act in either a synchronous or asynchronous fashion, and it attempts to use this capability to achieve the best performance possible in any given situation.
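
As a rough illustration of this dual-mode design, the following userland C sketch shows a message port whose target may either complete a message immediately (synchronous) or queue it for later completion (asynchronous). The names and the one-slot queue are simplifications invented for this example; DragonFly's real interfaces (lwkt_msg/lwkt_port and the EASYNC convention) are considerably more involved.

    /*
     * Minimal sketch of sync/async message dispatch. Hypothetical
     * names; not DragonFly's actual API.
     */
    #include <stdio.h>

    struct msg;

    /* A port owns a handler; the handler may finish a message at once
     * or queue it and complete it later. */
    struct port {
        int (*handler)(struct port *, struct msg *);
        struct msg *queue_head;          /* simplistic one-slot "queue" */
    };

    struct msg {
        int cmd;
        int result;
        int done;                        /* set when the target completes it */
    };

    #define EASYNC 99                    /* "queued, will complete later" */

    /* Send a message: synchronous if the target finishes it immediately,
     * asynchronous if the target returns EASYNC and completes it later. */
    static int port_send(struct port *p, struct msg *m)
    {
        int error = p->handler(p, m);
        if (error != EASYNC) {
            m->done = 1;                 /* completed synchronously */
            m->result = error;
        }
        return error;
    }

    /* Example target: even commands complete at once, odd ones are queued. */
    static int demo_handler(struct port *p, struct msg *m)
    {
        if (m->cmd % 2 == 0)
            return 0;                    /* synchronous completion */
        p->queue_head = m;               /* defer: owner will reply later */
        return EASYNC;
    }

    int main(void)
    {
        struct port p = { demo_handler, NULL };
        struct msg a = { 2, 0, 0 }, b = { 3, 0, 0 };

        printf("cmd 2 -> %s\n", port_send(&p, &a) == EASYNC ? "async" : "sync");
        printf("cmd 3 -> %s\n", port_send(&p, &b) == EASYNC ? "async" : "sync");
        return 0;
    }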

Progress is being made toward providing both device input/output (I/O) and virtual file system (VFS) messaging capabilities that will allow the remainder of the project goals to be met. The new infrastructure will allow many parts of the kernel to be migrated out into userland, where they will be more easily debugged as smaller, isolated programs, instead of being small parts entwined in a larger chunk of code. The migration of select kernel code into userspace has the additional benefit of making the system more robust; if a userspace driver crashes, it will not crash the kernel.

System calls are being split into userland and kernel versions, as well as being encapsulated into messages. This will help reduce the size and complexity of the kernel by moving variants of standard system calls into a userland compatibility layer, and will help maintain forwards and backwards compatibility between DragonFly versions. Linux and other Unix-like OS compatibility code is being migrated out similarly. Multiple instances of the 'native' userland compatibility layer created in jails could give DragonFly functionality similar to that found in User Mode Linux (UML). Unlike UML (which is essentially a port of Linux to itself, as if the host kernel were a different hardware platform), DragonFly's virtualization will not require special drivers to communicate with the real hardware on the computer.
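
A minimal sketch of the syscall-message idea described above, with entirely hypothetical call numbers and helpers: a system call is packaged as a message, offered first to a userland compatibility layer, and forwarded to the kernel only if that layer does not handle it. Real DragonFly syscall messages are generated from the kernel's syscall tables and carry much more state.

    #include <stdio.h>

    /* Hypothetical message carrying a system call. */
    struct syscall_msg {
        int sysno;
        long args[4];
        long result;
    };

    /* A userland compatibility layer can service old or emulated calls
     * itself and pass everything else through to the kernel. */
    static int userland_layer(struct syscall_msg *m)
    {
        if (m->sysno == 1001) {          /* made-up legacy call */
            m->result = 42;              /* emulate it in userland */
            return 0;
        }
        return -1;                       /* not ours: forward to kernel */
    }

    static void kernel_dispatch(struct syscall_msg *m)
    {
        m->result = m->args[0] + m->args[1];  /* stand-in for real work */
    }

    static long do_syscall(struct syscall_msg *m)
    {
        if (userland_layer(m) != 0)
            kernel_dispatch(m);
        return m->result;
    }

    int main(void)
    {
        struct syscall_msg legacy = { 1001, {0}, 0 };
        struct syscall_msg native = { 4, {2, 3}, 0 };
        printf("legacy -> %ld, native -> %ld\n",
               do_syscall(&legacy), do_syscall(&native));
        return 0;
    }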

CPU localization


In DragonFly, threads are locked to CPUs by design, and each processor has its own LWKT scheduler. Threads are never preemptively switched from one processor to another; they are only migrated by the passing of an "Interprocessor Interrupt" (IPI) message between the CPUs involved. Interprocessor thread scheduling is also accomplished by sending asynchronous IPI messages. One advantage of this clean compartmentalization of the threading subsystem is that the processors' on-board caches in SMP systems do not contain duplicated data, allowing for higher performance: each processor in the system can fill its own cache with the data it is actually working on.
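
The following simulation sketches that pattern: each CPU's run queue is touched only by its owning CPU, so no locks are needed, and migrating a thread means posting a request to the target CPU rather than manipulating its queue directly. The names and the single-slot "IPI mailbox" are invented for this example; the real kernel primitive is the lwkt IPI queue (lwkt_send_ipiq() and friends), which batches many such messages.

    #include <stdio.h>

    #define NCPU 2

    struct thread { const char *name; struct thread *next; };

    struct cpu {
        int id;
        struct thread *runq;             /* touched only by this CPU   */
        struct thread *ipi_pending;      /* simplistic one-slot IPI box */
    };

    static struct cpu cpus[NCPU];

    /* Runs on the *target* CPU in response to the IPI: only the owner
     * ever modifies its own run queue. */
    static void ipi_enqueue(struct cpu *self)
    {
        struct thread *td = self->ipi_pending;
        if (td) {
            td->next = self->runq;
            self->runq = td;
            self->ipi_pending = NULL;
            printf("cpu%d: enqueued %s via IPI\n", self->id, td->name);
        }
    }

    /* Called on the source CPU: never touches the remote run queue
     * directly, just posts the request. */
    static void migrate(struct thread *td, struct cpu *target)
    {
        target->ipi_pending = td;        /* "send" the IPI message */
        ipi_enqueue(target);             /* target services it (simulated) */
    }

    int main(void)
    {
        static struct thread td = { "worker", NULL };
        cpus[0].id = 0; cpus[1].id = 1;
        migrate(&td, &cpus[1]);          /* cpu0 hands worker to cpu1 */
        return 0;
    }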

The LWKT subsystem is being employed to partition work among multiple kernel threads (for example, in the networking code: one thread per protocol), reducing contention by removing the need to share certain resources among various kernel tasks. This thread-partitioning approach to CPU localization is arguably the key differentiating feature of DragonFly's design.

Protecting shared resources


In order to run safely on multiprocessor machines, access to shared resources (files, data structures, etc.) must be serialized so that threads or processes do not attempt to modify the same resource at the same time. Atomic operations, spinlocks, critical sections, mutexes, serializing tokens and message queues are all possible methods that can be used to prevent concurrent access. Whereas both Linux and FreeBSD 5 employ fine-grained mutex models to achieve higher performance on multiprocessor systems, DragonFly does not; to prevent multiple threads from accessing or modifying a shared resource simultaneously, it employs critical sections and serializing tokens. Until recently, DragonFly also employed SPLs, but these have been replaced with critical sections.

Much of the system's core, including the LWKT subsystem, the IPI messaging subsystem and the new kernel memory allocator, is lockless, meaning that it works without mutexes and operates on a per-CPU basis. Critical sections are used to protect against local interrupts; they operate on a per-CPU basis, guaranteeing that the thread currently being executed will not be preempted.
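
The idiom, in a simulated form (modeled on DragonFly's crit_enter()/crit_exit(), though simplified here to a plain counter): a per-CPU nesting count is raised while the section is held, and any interrupts or preemption that arrive in the meantime are deferred until the final exit.

    #include <stdio.h>

    static int crit_count;               /* per-CPU in the real kernel */

    static void crit_enter(void) { crit_count++; }

    static void crit_exit(void)
    {
        if (--crit_count == 0) {
            /* real kernel: run any interrupts/preemption deferred above */
            printf("critical section left, deferred work may now run\n");
        }
    }

    int main(void)
    {
        crit_enter();                    /* no preemption on this CPU ... */
        crit_enter();                    /* ... and sections may nest */
        crit_exit();
        crit_exit();                     /* deferred work runs here */
        return 0;
    }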

Serializing tokens prevent concurrent accesses from other CPUs and may be held simultaneously by multiple threads, with the guarantee that only one of those threads is running at any given time. A blocked or sleeping thread therefore does not prevent other threads from accessing the shared resource, unlike a thread holding a mutex. Among other things, the use of serializing tokens prevents many of the situations that could result in deadlocks and priority inversions when using mutexes, and greatly simplifies the design and implementation of many-step procedures that require a resource to be shared among multiple threads. The serializing token code is evolving into something quite similar to the "read-copy-update" (RCU) feature now available in Linux. Unlike Linux's current RCU implementation, DragonFly's is being implemented such that only processors competing for the same token are affected, rather than all processors in the computer.
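
The sketch below simulates the key semantic difference from a mutex, using illustrative stand-ins for DragonFly's lwkt_gettoken()/lwkt_reltoken(): a token is not owned across a blocking point, so the scheduler releases it when its holder blocks and reacquires it before the holder resumes, and code must therefore revalidate shared state after any blocking call.

    #include <stdio.h>

    struct token { int held; };

    static void gettoken(struct token *t)
    {
        while (t->held) { /* spin or yield until released */ }
        t->held = 1;
    }

    static void reltoken(struct token *t) { t->held = 0; }

    /* When a thread blocks (sleeps, waits on I/O), the scheduler releases
     * its tokens so other threads can run; on wakeup, the tokens are
     * reacquired before the thread continues. */
    static void blocking_point(struct token *t)
    {
        reltoken(t);      /* implicit in the real scheduler */
        /* ... other threads may enter the token-protected code here ... */
        gettoken(t);      /* implicit on resume */
    }

    static struct token vfs_token;

    static void lookup(void)
    {
        gettoken(&vfs_token);
        /* examine shared structures: safe, we are the only runner here */
        blocking_point(&vfs_token);   /* e.g. waiting for disk I/O      */
        /* structures may have changed while we slept: must revalidate  */
        printf("lookup done (state revalidated after blocking)\n");
        reltoken(&vfs_token);
    }

    int main(void) { lookup(); return 0; }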

Additional features


Early on in its development, DragonFly acquired a slab allocator, which replaced the aging FreeBSD 4 kernel memory allocator. The new slab allocator requires neither mutexes nor blocking operations for memory assignment tasks, and unlike the code it replaced, it is multiprocessor safe.
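
In miniature, the lock-free property comes from giving each CPU a private free list, so the common allocation path never touches shared state. The following is an illustrative simulation only, not DragonFly's actual allocator:

    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK 64
    #define PER_CPU_OBJS 8

    struct freeobj { struct freeobj *next; };

    struct cpu_cache {
        struct freeobj *freelist;        /* touched only by its own CPU */
    };

    /* Refill from the backing store; only needed when the local list is
     * empty, so the fast path stays lock-free. */
    static void refill(struct cpu_cache *c)
    {
        for (int i = 0; i < PER_CPU_OBJS; i++) {
            struct freeobj *o = malloc(CHUNK);
            o->next = c->freelist;
            c->freelist = o;
        }
    }

    static void *cpu_alloc(struct cpu_cache *c)
    {
        if (!c->freelist)
            refill(c);
        struct freeobj *o = c->freelist;
        c->freelist = o->next;
        return o;
    }

    static void cpu_free(struct cpu_cache *c, void *p)
    {
        struct freeobj *o = p;
        o->next = c->freelist;           /* return to the local list */
        c->freelist = o;
    }

    int main(void)
    {
        struct cpu_cache cache = { NULL };   /* one per CPU in the kernel */
        void *p = cpu_alloc(&cache);
        printf("allocated %p from the per-cpu cache\n", p);
        cpu_free(&cache, p);
        return 0;
    }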

DragonFly uses SFBUFs (Super-Fast BUFfers) and MSFBUFs (Multi-SFBUFs). An SFBUF is used to manage ephemeral single-page mappings and to cache them when appropriate; it is used for retrieving a reference to data held by a single VM page. This simple yet powerful abstraction provides a broad range of capabilities, such as the zero-copy transfer achieved in the sendfile(2) system call.

SFBUFs are used in numerous parts of the kernel, such as the Vnode Object Pager and the PIPE subsystem (indirectly, via XIOs), to support high-bandwidth transfers. An SFBUF can only be used for a single VM page; MSFBUFs are used for managing ephemeral mappings of multiple pages.
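
A simulated sketch of the usage pattern (the kernel interface is along the lines of sf_buf_alloc()/sf_buf_kva()/sf_buf_free(), though exact signatures vary between the BSDs): map a single VM page, use the mapping, then drop the reference, with the mapping itself cached for reuse when the same page comes around again.

    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    struct vm_page { char data[PAGE_SIZE]; };   /* stand-in for a real page */

    struct sf_buf {
        struct vm_page *page;
        int refs;
    };

    /* Get an ephemeral mapping for one page; a real SFBUF reuses a cached
     * kernel virtual mapping when the same page comes around again. */
    static struct sf_buf *sf_buf_alloc_sim(struct sf_buf *cache, struct vm_page *m)
    {
        if (cache->page != m) {
            cache->page = m;             /* establish the (simulated) mapping */
            printf("new mapping\n");
        } else {
            printf("cache hit: reused mapping\n");
        }
        cache->refs++;
        return cache;
    }

    static char *sf_buf_kva_sim(struct sf_buf *sf) { return sf->page->data; }
    static void  sf_buf_free_sim(struct sf_buf *sf) { sf->refs--; }

    int main(void)
    {
        static struct vm_page page;
        static struct sf_buf slot;       /* one slot of the SFBUF cache */

        struct sf_buf *sf = sf_buf_alloc_sim(&slot, &page);
        memcpy(sf_buf_kva_sim(sf), "hello", 6);    /* e.g. sendfile copy-out */
        sf_buf_free_sim(sf);

        sf = sf_buf_alloc_sim(&slot, &page);       /* same page: cache hit */
        sf_buf_free_sim(sf);
        return 0;
    }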

The SFBUF concept was devised by David Greenman of the FreeBSD Project when he wrote the sendfile(2) system call; it was later revised by Dr. Alan L. Cox and Matthew Dillon. MSFBUFs were designed by Hiten Pandya and Matthew Dillon.

Development and distribution


DragonFly forked from FreeBSD 4.8 and imports features and bug fixes from FreeBSD 4 and 5 where appropriate, such as ACPI and a new ATA driver framework from FreeBSD 4. As the number of DragonFly developers is currently small, with most of them focused on implementing basic functionality, device drivers are being kept mostly in sync with FreeBSD 5.x, the branch of FreeBSD where all new drivers are being written. The DragonFly developers are slowly moving toward using the "busdma" APIs, which will help to make the system easier to port to new architectures, but it is not a major focus at this time.

As with OpenBSD, the developers of DragonFly BSD are actively replacing "K&R" style C code with more modern ANSI equivalents. Also like OpenBSD, DragonFly's version of the GNU Compiler Collection has an enhancement called the "Stack-Smashing Protector" (formerly known as "ProPolice") enabled by default, providing some additional protection against buffer overflow based attacks. As of July 23, 2005, however, the kernel is no longer built with this protection by default.

Being a derivative of FreeBSD, DragonFly has inherited an easy-to-use integrated build system that can rebuild the entire base system from source with only a few commands. Like the other BSD projects, DragonFly uses the CVS version control system to manage changes to its source code. Unlike its parent FreeBSD, DragonFly will keep both stable and unstable releases in a single source tree, due to its smaller developer base.

Like the other BSD kernels (and those of most modern operating systems), DragonFly employs a built-in kernel debugger to help the developers find kernel bugs. Furthermore, as of October 20, 2004, a debug kernel, which makes bug reports more useful for tracking down kernel-related problems, is installed by default, at the expense of a relatively small amount of disk space. When a new kernel is installed, the backup copy of the previous kernel and its modules are stripped of their debugging symbols to further minimize disk space usage.

The operating system is distributed as a live CD that boots into a complete DragonFly system. It includes the base system and a complete set of manual pages, and may include source code and useful packages in future versions. The advantage of this is that with a single CD you can install the software onto a computer, use a full set of tools to repair a damaged installation, or demonstrate the capabilities of the system without installing it. Daily snapshots are available from Simon 'corecode' Schubert via [ftp://chlamydia.fs.ei.tum.de/pub/DragonFly/snapshots/i386/ISO-IMAGES/ FTP] and [http://chlamydia.fs.ei.tum.de/pub/DragonFly/snapshots/i386/ISO-IMAGES/ HTTP] for those who want to install the most recent versions of DragonFly without building from source.

Like the other free, open source BSDs (NetBSD being the notable exception, still preferring the original 4-clause BSD license), DragonFly is distributed under the terms of the modern version of the BSD license.

Releases


Version 1.0



DragonFly BSD 1.0, released July 12, 2004, was meant to be a "technology showcase" rather than an integrated production release. It featured the new "BSD Installer", the LWKT subsystem and the associated lightweight ports/messaging system, a mostly MP-safe networking stack, a lockless memory allocator, and the FreeBSD 4.x ports and packages system (which was very briefly broken following the release).

Amiga-style 'resident' application support was added, which takes a snapshot of a large, dynamically linked program's virtual memory space after loading, allowing future instances of the program to start much more quickly than they otherwise would. This replaces the prelinking capability that was being worked on earlier in the project's history, as the resident support is much more efficient. Large programs like those found in KDE, with many shared libraries, benefit the most from this support.

Other features introduced in this release include variant symlinks and application checkpointing support.

Due to a serious bug in the installer, an updated 1.0A release of DragonFly was released shortly afterward.

Version 1.2


This second release of DragonFly, on April 8, 2005, contained many bug fixes and new features. [http://www.dragonflybsd.org/main/release1_2.cgi] New to this release were TCP SACK, ALTQ and PF (OpenBSD's firewall), TLS (thread-local storage) support, DCONS support (console over FireWire), IPv6 improvements, and the rewritten namecache infrastructure, which is now distinct from the VFS code and allows the DragonFly developers to implement namecache-based security mechanisms.

Like the first release, 1.2 still utilizes the FreeBSD ports system for third party packages, but NetBSD's "pkgsrc" now natively supports DragonFly, and is available as an option.

Dillon has stated that this will be the last release of DragonFly that employs the MP lock in common code paths.

Version 1.4


The third release of DragonFly was made available on January 7, 2006. Many new drivers and bug fixes went into the system. GCC version 3.4 is now required to build the system, and the older compiler suite no longer works, due to the increasing use of TLS support. NetBSD's pkgsrc is now the default packaging system, although the build tools are not yet included in DragonFly's CVS repository. So far there is no official set of prebuilt packages made specifically for this release, and many packages (most notably KDE and GNOME) in the current pkgsrc snapshot do not build cleanly on the system. Citrus, NetBSD's internationalization (i18n) implementation, has also been imported.

Version 1.6


The fourth major release of DragonFly appeared on July 25, 2006. The biggest user-visible changes in this release are a new random number generator, a massive reorganization of the 802.11 (wireless) framework, and extensive bug fixes in the kernel. The developers also made significant progress in pushing the big giant lock inward, and made extensive modifications to the kernel infrastructure with an eye toward DragonFly's main clustering and userland VFS goals. The DragonFly team considers 1.6 to be more stable than 1.4.

Future directions


Supported processors


Currently, DragonFly runs on x86 (Intel and AMD) based computers, both single-processor and SMP models. A port to the x86-64 architecture has been started, but is not yet usable. A port to the PowerPC processor has been discussed as a possibility following the eventual x86-64 port.

Package management


DragonFly previously used FreeBSD's ports system for third-party software, with NetBSD's "pkgsrc" available as an option; since the 1.4 release, pkgsrc [http://leaf.dragonflybsd.org/mailarchive/users/2005-08/msg00347.html] has been the official package management system. By supporting pkgsrc, the DragonFly developers are largely freed from having to maintain a large number of third-party programs, while still having access to up-to-date applications. The pkgsrc developers also benefit from this arrangement, as it helps to ensure the portability of the code.

Pacman has recently been ported to DragonFly BSD [http://wiki.dragonflybsd.org/index.cgi/Pacman_Packages]. It is currently in a working alpha phase.

Threading and messaging


Although both system calls and device I/O have been largely converted to DragonFly's threaded messaging interface, both still operate synchronously. Ultimately, the entire messaging system is to be capable of both synchronous and asynchronous operation.

Userland threading support is also a focus for upcoming releases. Currently, DragonFly has only basic userland threading support that does not take advantage of multiprocessor systems (N:1 threading). Work to address this has been ongoing almost since the inception of the project, and a modern implementation is expected for version 2.0. Dillon has said that ideally an M:N implementation would be preferable, with N userland threads multiplexed onto a smaller number (M) of kernel threads, but the support planned for 2.0 is likely to be a 1:1 implementation, with one kernel thread for every userland thread.

Userland VFS and journaling


Userland VFS, the ability to migrate filesystem drivers into userspace, will take a lot of work to accomplish. Some of this work is already complete, though there is still much to do. The namecache code has been extracted from, and made independent of, the VFS code, and converting the VFS code to DragonFly's threaded messaging interface is Dillon's next major focus. This will be more difficult than converting the device I/O and system calls was, because the VFS system inherited from FreeBSD uses a massively reentrant model.

The userland VFS system is a prerequisite for a number of desired features to be incorporated into DragonFly. Dillon envisions a new package management system based, at least in part, on "VFS environments", which give packages the environment they expect to be in, independent of the larger filesystem environment and its quirks. In addition to system call message filtering, VFS environments are also to play a role in future security mechanisms, by restricting users or processes to their own isolated environments.

A new journaling layer is being developed for DragonFly for the purpose of transparently backing up entire filesystems in real-time, securely over a network. What remains to be done is the ability to restore a filesystem to a previous state, as well as general stability enhancements. This differs from traditional meta-data journaling filesystems in two ways: (1) it will work with all supported filesystems, as it is implemented in the VFS layer instead of in the individual filesystem drivers, and (2) it will back up all of the data contained on a disk or partition, instead of just meta-data, allowing for the recovery of even the most damaged of installations.
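
Purely as a conceptual sketch (all names here are hypothetical): because the journaling hook sits in the VFS layer, above every filesystem driver, it can emit full-data records for any operation on any filesystem, which is what distinguishes it from per-filesystem meta-data journaling.

    #include <stdio.h>

    struct jrecord {
        const char *op;      /* "write", "rename", ... */
        const char *path;
        long offset, len;    /* full data range, not just meta-data */
    };

    /* In a VFS-layer journal this hook sits above every filesystem, so
     * it works regardless of which filesystem handles the file; the
     * record would be streamed to a local or network journaling target. */
    static void journal_emit(const struct jrecord *r)
    {
        printf("journal: %s %s @%ld+%ld\n", r->op, r->path, r->offset, r->len);
    }

    static void vfs_write(const char *path, long off, const char *buf, long len)
    {
        struct jrecord r = { "write", path, off, len };
        journal_emit(&r);    /* record first (with the data, in reality) */
        (void)buf;           /* then hand off to the filesystem driver */
    }

    int main(void)
    {
        vfs_write("/etc/motd", 0, "hi", 2);
        return 0;
    }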

While working on the journaling code, Dillon realized that the userland VFS he envisioned may be closer than he initially thought, though it is still some ways off.

Dillon has also mentioned porting ZFS to DragonFly as a plan for the 1.6 release.

SSI clustering


Ultimately, Dillon wants DragonFly to natively enable "secure anonymous system clustering over the Internet", and the lightweight ports/messaging system will help to provide this capability. Security settings aside, there is technically no difference between messages created locally and those created on another computer over a network. Achieving this "single-system image" capability transparently will be a big job and will take quite some time to implement properly, even with the new foundation fully in place. While some of the short-term goals of the project will be completed in months, other features may take years to complete. SSI clustering will have applications in scientific computing.
This entry uses material from Wikipedia, the leading user-contributed encyclopedia. It is licensed under the GNU Free Documentation License.