🍁 nix-topology
With nix-topology you can automatically generate infrastructure and network diagrams as SVGs directly from your NixOS configurations, and get something similar to the diagram above. It defines a new global module system where you can specify what nodes and networks you have. Most of the work is done by the included NixOS module which automatically collects all the information from your hosts.
- 🌱 Extracts a lot of information automatically from your NixOS configuration:
- 🔗 Interfaces from systemd-networkd
- 🍵 Known configured services
- 🖥️ Guests from microvm.nix
- 🖥️ Guests from NixOS containers
- 🌐 Network information from kea
- 🗺️ Renders both a main diagram (physical connections) and a network-centric diagram
- ➡️ Automatically propagates assigned networks through your connections
- 🖨️ Allows you to add external devices like switches, routers, printers ...
Have a look at the examples on the left for some finished configurations and inspiration.
Why?
I became a little envious of all the manually crafted infrastructure diagrams on r/homelab. But who's got time for that?! I'd rather spend a whole lot more time creating a generator that I will use once or twice in my life 🤡👍. Maybe it will be useful for somebody else, too.
❤️ Contributing
Contributions are wholeheartedly welcome! Please feel free to suggest new features, implement extractors or other improvements, or generally help out if you'd like. We'd be happy to have you. There's more information in CONTRIBUTING.md and the Development Chapter in the docs.
📜 License
Licensed under the MIT license (LICENSE or https://opensource.org/licenses/MIT). Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in this project by you, shall be licensed as above, without any additional terms or conditions.
📦 Installation
Installation should be as simple as adding nix-topology to your flake.nix, defining the global module and adding the NixOS module to your systems:
- Add nix-topology as an input to your flake
inputs.nix-topology.url = "github:oddlama/nix-topology";
- Add the exposed overlay to your global pkgs definition, so the necessary tools are available for rendering
pkgs = import nixpkgs { inherit system; overlays = [nix-topology.overlays.default]; };
- Import the exposed NixOS module
nix-topology.nixosModules.default
in your host configs:
nixosConfigurations.host1 = lib.nixosSystem {
  system = "x86_64-linux";
  modules = [
    ./host1/configuration.nix
    nix-topology.nixosModules.default
  ];
};
- Create the global topology by using
topology = import nix-topology { pkgs = /*...*/; };
Expose this as an output in your flake so you can access it:
topology = import nix-topology {
  inherit pkgs; # Only this package set must include nix-topology.overlays.default
  modules = [
    # Your own file to define global topology. Works in principle like a nixos module but uses different options.
    ./topology.nix
    # Inline module to inform topology of your existing NixOS hosts.
    { nixosConfigurations = self.nixosConfigurations; }
  ];
};
- Render your topology via
nix build .#topology.<current-system>.config.output
The resulting directory will contain your finished SVGs. Note that this can take a minute, depending on how many hosts you have defined. Evaluating many NixOS configurations just takes some time, and the renderer sometimes struggles with handling bigger PNGs in a timely fashion.
Example flake.nix
{
inputs = {
flake-utils.url = "github:numtide/flake-utils";
nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
nix-topology.url = "github:oddlama/nix-topology";
nix-topology.inputs.nixpkgs.follows = "nixpkgs";
};
outputs = { self, flake-utils, nixpkgs, nix-topology, ... }: {
# Example. Use your own hosts and add the module to them
nixosConfigurations.host1 = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
./host1/configuration.nix
nix-topology.nixosModules.default
];
};
}
// flake-utils.lib.eachDefaultSystem (system: rec {
pkgs = import nixpkgs {
inherit system;
overlays = [ nix-topology.overlays.default ];
};
topology = import nix-topology {
inherit pkgs;
modules = [
# Your own file to define global topology. Works in principle like a nixos module but uses different options.
./topology.nix
# Inline module to inform topology of your existing NixOS hosts.
{ nixosConfigurations = self.nixosConfigurations; }
];
};
});
}
🌱 Defining additional things
After rendering for the first time, the initial diagram might look a little unstructured. That's simply because nix-topology will be missing some important connections that can't be derived from a bunch of NixOS configurations, like physical connections. You'll probably also want to add some common devices like an image for the internet, switches, routers and stuff like that.
There are two places where you can add things to the topology: globally in the topology module, or locally in one of the participating NixOS configurations.
Globally
To add something in the global topology module, simply extend the configuration like you would with a classical NixOS module. In this example, we configured the global topology module to include definitions from ./topology.nix:
topology = import nix-topology {
inherit pkgs;
modules = [
# Your own file to define global topology. Works in principle like a nixos module but uses different options.
./topology.nix
# Inline module to inform topology of your existing NixOS hosts.
{ nixosConfigurations = self.nixosConfigurations; }
];
};
So you can add things by defining one of the available node or network options in this file:
# ./topology.nix
{
nodes.toaster = {
name = "My Toaster";
deviceType = "device";
};
}
Locally
The same can be done from within any NixOS configuration that you've given to the global module by specifying nixosConfigurations = ... above. All topology options are grouped under topology.<...>, so they don't interfere with your NixOS configuration. All of these definitions from your NixOS hosts and those from the global module will later be merged together.
So instead of defining the device globally, you could choose to define it in host1. This makes it possible to modify the topology of other nodes from within your node, which can be very handy:
# ./host1/configuration.nix
{
topology.nodes.toaster = {
name = "My Toaster";
deviceType = "device";
};
}
Since it is very common to modify the node assigned to your current NixOS configuration, there's an alias topology.self which refers to topology.nodes.${config.topology.id}:
# ./host2/configuration.nix
{
topology.self.hardware.info = "Raspberry Pi 5";
}
Read through the next pages for examples of defining nodes, networks and connections.
Helpers
You will probably find yourself in a situation where you want to add multiple external devices to your topology. You can of course just define new nodes for them manually, but there are some helper functions that you can use to quickly define switches, routers or other external devices (for example "The Internet").
These helpers are available under config.lib.topology, regardless of whether you are in a topology module or a NixOS module.
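For example, you can bring them into scope inside a module like this (a minimal sketch; which helpers you pull in is up to you):
{ config, ... }: let
  inherit (config.lib.topology) mkInternet mkRouter mkSwitch mkConnection;
in {
  # ... define additional nodes, networks and connections here using the helpers
}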
mkConnection & mkConnectionRev
A small helper that allows you to write mkConnection "host" "lan" instead of the longer form:
{ node = "host"; interface = "lan"; }
It also comes in a reversed variant, mkConnectionRev, which marks the connection as reversed for the renderer. See the chapter on connections for why this is sometimes needed.
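As a small illustration (the node and interface names here are made up), these two definitions are equivalent:
nodes.host.interfaces.lan.physicalConnections = [ (mkConnection "node2" "wan") ];
# is the same as writing:
nodes.host.interfaces.lan.physicalConnections = [ { node = "node2"; interface = "wan"; } ];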
mkInternet
A small utility that creates a cloud image to represent the internet.
It already has an interface called * where you can connect stuff. An optional parameter connections can take either a single connection or a list of connections for this interface.
Example:
nodes.internet = mkInternet {
connections = mkConnection "node1" "interface1";
};
mkSwitch
This function simplifies the creation of a switch. It accepts the name of the switch as the first argument and then some optional arguments in an attrset. In particular, it will:
- Set the device type to switch
- Set hardware image and info if given
- Accept a list called interfaceGroups. This list itself contains multiple lists of interface names. All interfaces are created, regardless of the list in which they appear. All interfaces that stand together in one list will automatically share their network with other interfaces in that specific list, like a dumb switch would.
- Accept an attrset connections where you can specify additional connections for each interface, in the form of a single connection or a list of connections.
Example:
nodes.switch1 = mkSwitch "Switch 1" {
info = "D-Link DGS-105";
image = ./image-dlink-dgs105.png;
interfaceGroups = [["eth1" "eth2" "eth3" "eth4" "eth5"]];
connections.eth1 = mkConnection "host1" "lan";
connections.eth2 = [(mkConnection "host2" "wan") (mkConnection "host3" "eth0")];
# any other attributes specified here are directly forwarded to the node:
interfaces.eth1.network = "home";
};
mkRouter
Exactly the same as mkSwitch, but sets the device type to router instead of switch, which changes the icon shown to the right.
Example:
nodes.router = mkRouter "Router" {
info = "Some Router Type XY";
# eth1-4 are switched, wan1 is the external DSL connection
interfaceGroups = [["eth1" "eth2" "eth3" "eth4"] ["wan1"]];
connections.wan1 = mkConnection "host1" "wan";
};
mkDevice
Exactly the same as mkSwitch, but sets the device type to device instead of switch, which shows no icon to the right by default.
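Example (a sketch only; the printer node, its hardware info and the connection target are made up, and the arguments follow the same pattern as mkSwitch):
nodes.printer = mkDevice "Printer" {
  info = "Brother HL-L2350DW";
  interfaceGroups = [["eth0"]];
  connections.eth0 = mkConnection "switch1" "eth4";
};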
🔗 Connections
The physical connections of a node are usually not something we can know from the configuration alone. To connect stuff together, you can define the physical connections of an interface globally like this:
{
# Connect node1.lan -> node2.wan
nodes.node1.interfaces.lan.physicalConnections = [
{ node = "node2"; interface = "wan"; }
];
}
Or by utilizing the helper:
let
inherit (lib.topology) mkConnection;
in {
# Connect node1.lan -> node2.wan
nodes.node1.interfaces.lan.physicalConnections = [(mkConnection "node2" "wan")];
}
Reversed connections
Sometimes it is easier to define a connection from node1.eth to node2.eth on the "destination" node. While connections are technically undirected, the layouter unfortunately doesn't think so. Since the layouter will try to align as many edges in the same direction as possible with the dominant layouting order being left-to-right, a connection going the other way can cause weird edge routing or unpleasant node locations.
By adding renderer.reverse = true; to the connection attrset (or by using mkConnectionRev), you can change the direction of the edge.
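For illustration (node and interface names are made up), the following defines the connection on node2, while the reversed flag tells the layouter to treat the edge as if it pointed from node1 to node2:
nodes.node2.interfaces.eth.physicalConnections = [
  {
    node = "node1";
    interface = "eth";
    renderer.reverse = true;
  }
  # or equivalently: (mkConnectionRev "node1" "eth")
];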
🖨️ Adding nodes (switches, routers, other devices)
Adding new nodes is in principle very simple. All you need to do is assign an id and an (arbitrary) deviceType. Based on the deviceType, this may pre-select some configuration options such as the rendering style.
{
nodes.toaster = {
deviceType = "device";
hardware.info = "ToasterMAX 3000";
};
}
Nodes have many options, so be sure to read through the option reference if you want to manually add something more complex.
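For illustration, here is a slightly more complete node that uses a few of those options (a sketch; the values are made up and it assumes a "home" network and a "switch1" node exist elsewhere in your topology):
{
  nodes.nas = {
    name = "NAS";
    deviceType = "device";
    hardware.info = "4-bay NAS, 16GB RAM";
    hardware.image = ./images/nas.png;
    interfaces.eth0 = {
      addresses = ["192.168.1.10"];
      network = "home";
      physicalConnections = [{ node = "switch1"; interface = "eth4"; }];
    };
  };
}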
👾 Icons
There are several icons included in nix-topology, which you can access by setting any of the icon options to a string "<category>.<name>". Have a look at the icons folder to see what's available already. You can also add your own icons to the registry by defining icons.<category>.<name>.
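For example, registering a custom icon and referencing it from a node could look like this (a sketch; the category, name and file path are made up):
{
  icons.devices.my-nas.file = ./icons/my-nas.svg;
  nodes.nas.icon = "devices.my-nas";
}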
🖼️ Images
In several places you will be able to set an icon or image to be displayed in a node's rendering. Usually you can either reference an existing icon with "<category>.<name>", or specify a path to an image instead. Currently nix-topology supports svg, png and jpeg files. While svg is always recommended for quality, beware that a viewBox must be set and it must be square, otherwise the image may be stretched.
To create a viewBox for any svg and optimize it, you can use scour and svgo:
nix-shell -p nodePackages.svgo scour
scour --enable-viewboxing -i in.svg -o out.svg
svgo -i in.svg -o out.svg
🖇️ Networks
By defining networks you can have all connections within a network rendered in a common style/color. This will also cause the network connections to appear in the logical network-centric view.
You only need to assign a network to one interface and it will automatically be propagated through its connections and any switches/routers on the path. Networks will even automatically pick a predefined style that isn't used by any other network, unless you override it.
To create a network, give it a name and optionally some information about the covered IPv4/IPv6 address space. Then assign it to any participating interface:
{
networks.home = {
name = "Home Network";
cidrv4 = "192.168.1.1/24";
};
nodes.myhost.interfaces.lan1.network = "home";
}
Some extractors (such as the kea extractor) can create networks automatically, so all you need to do there is to assign a friendly name.
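For example, the complex example flake gives the network created by the kea extractor on host1 a friendly name like this:
networks.host1-kea.name = "Home LAN";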
Style
All connections in a network can be styled by setting the style attribute on the network. You can have solid, dashed or dotted connections with one or two colors:
{
networks.home.style = {
primaryColor = "#70a5eb";
secondaryColor = null; # only relevant for dashed and dotted, null means transparent background
pattern = "solid"; # one of "solid", "dashed", "dotted"
};
}
_module.args
Additional arguments passed to each module in addition to ones like lib, config, and pkgs, modulesPath.
This option is also available to all submodules. Submodules do not
inherit args from their parent module, nor do they provide args to
their parent module or sibling submodules. The sole exception to
this is the argument name
which is provided by
parent modules to a submodule and contains the attribute name
the submodule is bound to, or a unique generated name if it is
not bound to an attribute.
Some arguments are already passed by default, of which the following cannot be changed with this option:
- lib: The nixpkgs library.
- config: The results of all options after merging the values from all modules together.
- options: The options declared in all modules.
- specialArgs: The specialArgs argument passed to evalModules.
- All attributes of specialArgs
Whereas option values can generally depend on other option values thanks to laziness, this does not apply to imports, which must be computed statically before anything else.
For this reason, callers of the module system can provide specialArgs which are available during import resolution.
For NixOS, specialArgs includes modulesPath, which allows you to import extra modules from the nixpkgs package tree without having to somehow make the module aware of the location of the nixpkgs or NixOS directories.
{ modulesPath, ... }: {
  imports = [
    (modulesPath + "/profiles/minimal.nix")
  ];
}
For NixOS, the default value for this option includes at least this argument:
- pkgs: The nixpkgs package set according to the nixpkgs.pkgs option.
Type: lazy attribute set of raw value
Declared by:
assertions.*.assertion
The thing to assert.
Type: boolean
Declared by:
assertions.*.message
The error message.
Type: string
Declared by:
icons
All predefined icons by category.
Type: attribute set of attribute set of (submodule)
Default:
{ }
Declared by:
icons.<name>.<name>.file
The icon file
Type: path
Declared by:
lib
This option allows modules to define helper functions, constants, etc.
Type: attribute set of (attribute set)
Default:
{ }
Declared by:
networks
Defines logical networks that are present in your topology.
Type: attribute set of (submodule)
Default:
{ }
Declared by:
networks.<name>.cidrv4
The CIDRv4 address space of this network or null if it doesn’t use ipv4
Type: null or string
Default:
null
Declared by:
networks.<name>.cidrv6
The CIDRv6 address space of this network or null if it doesn’t use ipv6
Type: null or string
Default:
null
Declared by:
networks.<name>.icon
The icon representing this network. Must be a path to an image or a valid icon name (<category>.<name>).
Type: null or path or string
Default:
null
Declared by:
networks.<name>.id
The id of this network
Type: string (read only)
Default:
"‹name›"
Declared by:
networks.<name>.name
The name of this network
Type: string
Default:
"Unnamed network '‹name›'"
Declared by:
networks.<name>.style
A style for this network, usually used to draw connections. Must be an attrset consisting of three attributes:
- primaryColor (#rrggbb): The primary color, usually the color of edges.
- secondaryColor (#rrggbb): The secondary color, usually the background of a dashed line and only shown when pattern != solid. Set to null for transparent.
- pattern (solid, dashed, dotted): The pattern to use.
Type: (attribute set) or (attribute set) convertible to it
Default:
"<one of the unused predefined styles>"
Declared by:
nixosConfigurations
The list of nixos configurations to process for topology rendering. All of these must include the relevant nixos topology module.
Type: unspecified value
Default:
{ }
Declared by:
nodes
Defines nodes that are shown in the topology graph. Nodes usually correspond to nixos hosts or other devices in your network.
Type: attribute set of (submodule)
Default:
{ }
Declared by:
nodes.<name>.deviceIcon
The icon representing this node’s type. Must be a path to an image or a valid icon name (<category>.<name>). By default an icon will be selected based on the deviceType.
Type: null or path or string
Default:
null
Declared by:
nodes.<name>.deviceType
The device type of the node. This can be set to anything, but some special values exist that will automatically set some other defaults, most notably the deviceIcon and renderer.preferredType.
Type: one of “nixos”, “internet”, “router”, “switch”, “device” or string
Declared by:
nodes.<name>.guestType
If the device is a guest of another device, this will tell the type of guest it is.
Type: null or one of “microvm”, “nixos-container” or string
Default:
null
Declared by:
nodes.<name>.hardware.image
An image representing this node, usually shown larger than an icon.
Type: null or path
Default:
null
Declared by:
nodes.<name>.hardware.info
A single line of information about this node's hardware. Usually the model name or a description of the most important components.
Type: string
Default:
""
Declared by:
nodes.<name>.icon
The icon representing this node. Usually shown next to the name. Must be a path to an image or a valid icon name (<category>.<name>).
Type: null or path or string
Default:
null
Declared by:
nodes.<name>.id
The id of this node
Type: string (read only)
Default:
"‹name›"
Declared by:
nodes.<name>.interfaces
Collects information about a specific interface of this node.
Type: attribute set of (submodule)
Default:
{ }
Declared by:
nodes.<name>.interfaces.<name>.addresses
The configured address(es), or a descriptive string (like DHCP).
Type: list of string
Default:
[ ]
Declared by:
nodes.<name>.interfaces.<name>.gateways
The configured gateways, if any.
Type: list of string
Default:
[ ]
Declared by:
nodes.<name>.interfaces.<name>.icon
The icon representing this interface’s type. Must be a path to an image or a valid icon name (<category>.<name>). By default an icon will be selected based on the type.
Type: null or path or string
Default:
null
Declared by:
nodes.<name>.interfaces.<name>.id
The id of this interface
Type: string (read only)
Default:
"‹name›"
Declared by:
nodes.<name>.interfaces.<name>.mac
The MAC address of this interface, if known.
Type: null or string
Default:
null
Declared by:
nodes.<name>.interfaces.<name>.network
The id of the network to which this interface belongs, if any.
Type: (null or string) or (null or string) convertible to it
Default:
{
_lazyValue = null;
}
Declared by:
nodes.<name>.interfaces.<name>.physicalConnections
A list of other node interfaces to which this node is physically connected.
Type: list of (submodule)
Default:
[ ]
Declared by:
nodes.<name>.interfaces.<name>.physicalConnections.*.interface
The other node’s interface id.
Type: string
Declared by:
nodes.<name>.interfaces.<name>.physicalConnections.*.node
The other node id.
Type: string
Declared by:
nodes.<name>.interfaces.<name>.physicalConnections.*.renderer.reverse
Whether to reverse the edge. Can be useful to affect node positioning if the layouter is directional.
Type: boolean
Default:
false
Declared by:
nodes.<name>.interfaces.<name>.renderer.hidePhysicalConnections
Whether to hide physical connections of this interface in renderings. Affects both outgoing connections defined here and incoming connections defined on other interfaces.
Usually only affects rendering of the main topology view, not network-centric views.
Type: boolean
Default:
false
Declared by:
nodes.<name>.interfaces.<name>.sharesNetworkWith
Defines a list of predicates that determine whether this interface shares its connected network with another provided local interface. Each predicate takes the name of another interface and returns true if our network should be shared with the given interface. It suffices if any of the predicates returns true.
Sharing here means that if a network is set on this interface, it will also be set as the network for any shared interface. Setting the same predicate on multiple interfaces causes them to share a network regardless of which port the network is actually defined on.
An unmanaged switch, for example, would set this to const true, effectively propagating the network set on one port to all other ports. Having two assigned networks within one predicate group will cause a warning to be issued.
Type: list of function that evaluates to a(n) boolean
Default:
[ ]
Declared by:
nodes.<name>.interfaces.<name>.type
The type of this interface
Type: string
Default:
"ethernet"
Declared by:
nodes.<name>.interfaces.<name>.virtual
Whether this is a virtual interface.
Type: boolean
Default:
false
Declared by:
nodes.<name>.name
The name of this node
Type: string
Default:
"<name>"
Declared by:
nodes.<name>.parent
The id of the parent node, if this node has a parent.
Type: null or string
Default:
null
Declared by:
nodes.<name>.renderer.preferredType
An optional hint to the renderer to specify whether this node should preferably be rendered as a full card, or just as an image with a name. If there is no hardware image, this will usually still render a small card.
Type: one of “card”, “image”
Default:
"\"card\" # defaults to card but is also derived from the deviceType if possible."
Declared by:
nodes.<name>.services
Defines a service that is running on this node.
Type: attribute set of (submodule)
Default:
{ }
Declared by:
nodes.<name>.services.<name>.details
Additional detail sections that should be shown to the user.
Type: attribute set of (submodule)
Default:
{ }
Declared by:
nodes.<name>.services.<name>.details.<name>.name
The name of this section
Type: string (read only)
Default:
"‹name›"
Declared by:
nodes.<name>.services.<name>.details.<name>.order
The order determines how sections are ordered. Lower numbers first, default is 100.
Type: signed integer
Default:
100
Declared by:
nodes.<name>.services.<name>.details.<name>.text
The additional information to display
Type: strings concatenated with “\n”
Declared by:
nodes.<name>.services.<name>.hidden
Whether this service should be hidden from graphs
Type: boolean
Default:
false
Declared by:
nodes.<name>.services.<name>.icon
The icon for this service. Must be a path to an image or a valid icon name (<category>.<name>).
Type: null or path or string
Default:
null
Declared by:
nodes.<name>.services.<name>.id
The id of this service
Type: string (read only)
Default:
"‹name›"
Declared by:
nodes.<name>.services.<name>.info
Additional high-profile information about this service, usually the url or listen address. Most likely shown directly below the name.
Type: strings concatenated with “\n”
Default:
""
Declared by:
nodes.<name>.services.<name>.name
The name of this service
Type: string
Declared by:
output
The derivation containing the rendered output
Type: path (read only)
Default:
config.renderers.elk.output
Declared by:
renderer
Which renderer to use for the default output. Available options: elk, svg
Type: null or one of “elk”, “svg”
Default:
"elk"
Declared by:
renderers.elk.output
The derivation containing the rendered output
Type: path (read only)
Declared by:
renderers.elk.overviews.networks.enable
Include a networks overview in the main output
Type: boolean
Default:
true
Declared by:
renderers.elk.overviews.services.enable
Include a services overview in the main output
Type: boolean
Default:
true
Declared by:
renderers.svg.output
The derivation containing the rendered output
Type: path (read only)
Declared by:
⚙️ Architecture
The architecture of nix-topology is intentionally designed in a very modular way in order to decouple information gathering from rendering. While there currently are two closely coupled renderers (svg to create information cards and elk to render them in a layouted diagram), you can simply define your own renderer and it will have access to all the gathered information.
In a nutshell, you create a topology module in your own flake.nix. This is where all the information is gathered and where you have access to all of it, as well as to the rendered outputs. By pointing the topology module to your NixOS systems (e.g. nixosConfigurations), their information can automatically be incorporated into the global topology.
Instead of having the global topology module aggregate your system information, we opted to create a NixOS module that has to be included on all NixOS hosts. This module then exposes a new option topology which reflects the layout of the global topology module and allows you to also add new global information (like a switch, device or connections) from within a node. This also makes it easier to define the structural information of the node assigned to your NixOS host, available as topology.self.
Each NixOS host automatically extracts information from its NixOS configuration and exposes it in a structured format defined by the main topology module. For an overview of what is available, please refer to the available options.
🤬 Caveats
There definitely are some caveats. Currently we use ELK to layout the diagram, because it really was the only option that was configurable enough to support ports, svg embeddings, orthogonal edges and stuff like that. But due to the way the layered ELK layouter works this can create cluttered diagrams if edges are facing the wrong way (even though they are technically undirected). I've considered using D2 with TALA, but as of writing this, it wasn't configurable enough to be a viable option.
Due to the long round trip required to create the svg right now (nix -> html (nix) -> svg (html-to-svg) -> import from derivation (IFD) -> elk (nix) -> svg (elk-to-svg)), we had to accept some tradeoffs like import-from-derivation and having to maintain two additional small tools to convert html and elk to svg.
Extractors
Each NixOS host automatically extracts information from its own configuration and exposes it by defining services, interfaces or networks in its topology. All extractors can be disabled individually, and you can define your own extractors if you want to.
The most prominent extractor is probably the services extractor, which adds service icons and information for each known service that is enabled on a NixOS host. Usually they just consist of one mkIf to show the service if it is enabled. For example, have a look at the vaultwarden service extractor, which is one of the more complex ones (it's still quite simple):
vaultwarden = let
domain = config.services.vaultwarden.config.domain or config.services.vaultwarden.config.DOMAIN or null;
address = config.services.vaultwarden.config.rocketAddress or config.services.vaultwarden.config.ROCKET_ADDRESS or null;
port = config.services.vaultwarden.config.rocketPort or config.services.vaultwarden.config.ROCKET_PORT or null;
in
mkIf config.services.vaultwarden.enable {
name = "Vaultwarden";
icon = "services.vaultwarden";
info = mkIf (domain != null) domain;
details.listen = mkIf (address != null && port != null) {text = "${address}:${toString port}";};
};
If you want to add support for new services, feel free to create a PR. Contributions are wholeheartedly welcome! There's more information in the Development Chapter.
Development
First of all, here's a very quick overview over the codebase:
- examples/: each folder in here should contain a flake.nix as a user would write it for their configuration. Each such example flake will automatically be evaluated and rendered in the documentation.
- icons/: contains all the service, device and interface icons. Placing a new file here will automatically register it.
- nixos/: contains anything related to the provided NixOS module. Mostly information extractors.
- options/: contains shared options. Shared means that all of these options will be present both in the global topology module and also in each NixOS module under the topology option, which will be merged into the global topology. If you need NixOS specific options (like extractors) or topology specific options (like renderers), have a look at nixos/module.nix and topology/default.nix.
- pkgs/: contains our nixpkgs overlay and packages required to render the graphs.
- topology/: contains anything related to the global topology module. This is where the NixOS configurations are merged into the global topology and where the renderers are defined.
Criteria for new extractors and service extractors
I'm generally happy to accept any extractor, but please make sure they meet the following criteria:
- It must provide an enable option so it can be disabled. It may be enabled by default, if it is a no-op for systems that don't use the relevant parts. The services extractor for example guards all assignments behind mkIf config.services.<service_name>.enable to make sure that it works for systems that don't use a particular service.
- The default settings of an extractor (or any part of it) should try not to cause clutter in the generated cards. Take for example OpenSSH, which is usually enabled on all machines. Showing it by default would cause a lot of unnecessary clutter, so it should be guarded behind an additional option that is disabled by default. The same goes for common local services like smartd, fwupd, ...
- Extractors should be made for things available in nixpkgs or the broader list of community projects. So I'm happy to accept extractors for projects like microvm.nix, disko, ..., but if you want to extract information from very niche or personal projects, it might be better to include the extractor there. If you write an extractor for such a third party dependency, make sure that it is either disabled by default, or guarded in a way so that it doesn't affect vanilla systems. Check the microvm extractor for an example which makes sure that the microvm module is actually used on the target system.
Adding a new extractor
To add a whole new extractor, all you need to do is create nixos/extractors/foo.nix and add a new option for your extractor:
{ config, lib, ... }: let
inherit (lib) mkEnableOption mkIf;
in {
options.topology.extractors.foo.enable = mkEnableOption "topology foo extractor" // {default = true;};
config = mkIf (config.topology.extractors.foo.enable && /* additional checks if necessary */) {
topology = {
# Modify topology based on the information you gathered
};
};
}
The file will automatically be included by the main NixOS module.
Adding a service to the services extractor
To add a new service to the services extractor, all you need to do is to add the relevant attribute to the body of the extractor. Please keep them sorted alphabetically.
Imagine we want to add support for vaultwarden. We would start by extracting some information, while making sure there's a sensible fallback value at all times if the value isn't set. Don't depend on any value being set, only observe what the user has actually configured!
vaultwarden = let
# Extract domain, address and port if they were set
domain = config.services.vaultwarden.config.domain or config.services.vaultwarden.config.DOMAIN or null;
address = config.services.vaultwarden.config.rocketAddress or config.services.vaultwarden.config.ROCKET_ADDRESS or null;
port = config.services.vaultwarden.config.rocketPort or config.services.vaultwarden.config.ROCKET_PORT or null;
in
# Only assign anything if the service is enabled
mkIf config.services.vaultwarden.enable {
# The service's proper name.
name = "Vaultwarden";
# An icon from the icon registry. We will add a new icon for the service later.
icon = "services.vaultwarden";
# One line of information that should be shown below the service name.
# Usually this should be the hosted domain name (if it is known), or very very important information.
# For vaultwarden, we use the domain from the config. If you are unsure for your service, just leave
# it out so users can set it manually via topology.self.services.<service>.info = "...";
info = mkIf (domain != null) domain;
# In details you can add more information specific to the service.
# Currently I tried to include a `listen` detail for any service listening on an address/port.
# Samba for example shows the configured shares and nginx the configured reverse proxies.
# If you are unsure what to do here, just leave it out.
details.listen = mkIf (address != null && port != null) {text = "${address}:${toString port}";};
};
Now we still need to add an icon to the registry, but then the extractor is finished.
nix-topology supports svg, png and jpeg files, but we prefer SVG wherever possible.
Usually you should be able to find the original logo of any project in its main repository on GitHub by pressing t and searching for .svg.
But before we put it in icons/services/<service_name>.svg, we should optimize the svg and make sure it has a square viewBox="..." property set. For this I recommend passing it through scour and svgo once, as this will do all of the work automatically, except for making sure that the logo is square:
nix-shell -p nodePackages.svgo scour
scour --enable-viewboxing -i original.svg -o tmp.svg
svgo -i tmp.svg -o service_name.svg
If you open the file and see a viewBox like viewBox="0 0 100 100", then you are ready to go. The actual numbers don't matter, but the last two (width and height) should be the same. If they are not, open the file in any svg editor of your liking, for example Inkscape or boxy-svg.com, and make it square manually. (In theory you can also just increase the smaller number and adjust x or y by half of the difference in the viewBox by hand.) Run it through svgo another time and you are all set.
Adding an example
Examples are currently our tests. They are automatically built together with the documentation and the rendered result is also included in the docs. If you have made a complex extractor, you might want to test it by creating a new example.
All you need to do for that to happen is to create a flake.nix in a new example folder, for example examples/foo/flake.nix. Just copy the simple example and start from there.
The flake should be a regular flake containing nixosConfigurations, just like what a user would write. Beware that you currently cannot add inputs to that flake; let me know if you need that.
To build the example, just build the documentation:
nix build .#docs
Example: complex
Main view
Network view
flake.nix
{
inputs = {
flake-utils.url = "github:numtide/flake-utils";
nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
nix-topology.url = "github:oddlama/nix-topology";
nix-topology.inputs.nixpkgs.follows = "nixpkgs";
};
outputs = {
self,
nixpkgs,
nix-topology,
flake-utils,
...
}:
{
nixosConfigurations.host1 = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
{
networking.hostName = "host1";
# Network interfaces from systemd are detected automatically:
systemd.network.enable = true;
systemd.network.networks.wan = {
matchConfig.Name = "wan";
address = ["192.168.178.100/24"];
};
systemd.network.networks.lan = {
matchConfig.Name = "lan";
address = ["192.168.1.1/24"];
};
# Hosts a DHCP server with kea, this will become a network automatically
services.kea.dhcp4 = {
# ... (skipped unnecessary options for brevity)
enable = true;
settings = {
interfaces-config.interfaces = ["lan"];
subnet4 = [
{
interface = "lan-self";
subnet = "192.168.1.0/24";
}
];
};
};
# We can change our own node's topology settings from here:
topology.self.name = "🧱 Small Firewall";
topology.self.interfaces.wg0 = {
addresses = ["10.0.0.1"];
network = "wg0"; # Use the network we define below
virtual = true; # doesn't change the immediate render yet, but makes the network-centric view a little more readable
type = "wireguard"; # changes the icon
};
# You can add stuff to the global topology from a nixos configuration, too:
topology = {
# Let's say this node acts as a wireguard server, so it would make sense
# that it defines the related network:
networks.wg0 = {
name = "Wireguard network wg0";
cidrv4 = "10.0.0.0/24";
};
};
}
nix-topology.nixosModules.default
];
};
nixosConfigurations.host2 = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
(
{config, ...}: {
networking.hostName = "host2";
# Network interfaces from systemd are detected automatically:
systemd.network.enable = true;
systemd.network.networks.eth0 = {
matchConfig.Name = "eth0";
address = ["192.168.1.100/24"];
};
# Containers will automatically be rendered if they import the topology module!
containers.vaultwarden.macvlans = ["vm-vaultwarden"];
containers.vaultwarden.config = {
imports = [nix-topology.nixosModules.default];
networking.hostName = "host2-vaultwarden";
# This node hosts a vaultwarden instance, which nix-topology
# will automatically pick up on
services.vaultwarden = {
enable = true;
config = {
rocketAddress = "0.0.0.0";
rocketPort = 8012;
domain = "https://vault.example.com/";
# ...
};
};
};
containers.test.config = {
imports = [nix-topology.nixosModules.default];
networking.hostName = "host2-test";
};
# We can change our own node's topology settings from here:
topology.self = {
name = "☄️ Powerful host2";
hardware.info = "2U Server with loads of RAM";
interfaces.wg0 = {
addresses = ["10.0.0.2"];
# Rendering virtual connections such as wireguard connections can sometimes
# clutter the view. So by hiding them we will only see the connections
# in the network centric view
renderer.hidePhysicalConnections = true;
virtual = true; # doesn't change the immediate render yet, but makes the network-centric view a little more readable
type = "wireguard"; # changes the icon
# No need to add the network wg0 explicitly, it will automatically be propagated via the connection.
physicalConnections = [
(config.lib.topology.mkConnection "host1" "wg0")
];
};
};
}
)
nix-topology.nixosModules.default
];
};
nixosConfigurations.desktop = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
{
networking.hostName = "desktop";
# Network interfaces from systemd are detected automatically:
systemd.network.enable = true;
systemd.network.networks.eth0 = {
matchConfig.Name = "eth0";
address = ["192.168.1.123/24"];
};
topology.self = {
name = "🖥️ Desktop";
hardware.info = "AMD Ryzen 7850X, 64GB RAM";
};
}
nix-topology.nixosModules.default
];
};
nixosConfigurations.laptop = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
{
networking.hostName = "laptop";
# This host has a wireless connection, as indicated by the wlp prefix
systemd.network.enable = true;
systemd.network.networks.eth0 = {
matchConfig.Name = "eth0";
address = ["192.168.1.142/24"];
};
systemd.network.networks.wlp1s1 = {
matchConfig.Name = "wlp1s1";
};
topology.self = {
name = "💻 Laptop";
hardware.info = "Framework 16";
};
}
nix-topology.nixosModules.default
];
};
}
// flake-utils.lib.eachDefaultSystem (system: rec {
pkgs = import nixpkgs {
inherit system;
overlays = [nix-topology.overlays.default];
};
# This is the global topology module.
topology = import nix-topology {
inherit pkgs;
modules = [
({config, ...}: let
inherit
(config.lib.topology)
mkInternet
mkRouter
mkSwitch
mkConnection
;
in {
inherit (self) nixosConfigurations;
# Add a node for the internet
nodes.internet = mkInternet {
connections = mkConnection "router" "wan1";
};
# Add a router that we use to access the internet
nodes.router = mkRouter "FritzBox" {
info = "FRITZ!Box 7520";
image = ./images/fritzbox.png;
interfaceGroups = [
["eth1" "eth2" "eth3" "eth4"]
["wan1"]
];
connections.eth1 = mkConnection "host1" "wan";
interfaces.eth1 = {
addresses = ["192.168.178.1"];
network = "home-fritzbox";
};
};
networks.home-fritzbox = {
name = "Home Fritzbox";
cidrv4 = "192.168.178.0/24";
};
networks.host1-kea.name = "Home LAN";
nodes.switch-main = mkSwitch "Main Switch" {
info = "D-Link DGS-1016D";
image = ./images/dlink-dgs1016d.png;
interfaceGroups = [["eth1" "eth2" "eth3" "eth4" "eth5" "eth6"]];
connections.eth1 = mkConnection "host1" "lan";
connections.eth2 = mkConnection "host2" "eth0";
connections.eth3 = mkConnection "switch-livingroom" "eth1";
};
nodes.switch-livingroom = mkSwitch "Livingroom Switch" {
info = "D-Link DGS-105";
image = ./images/dlink-dgs105.png;
interfaceGroups = [["eth1" "eth2" "eth3" "eth4" "eth5"]];
connections.eth2 = mkConnection "desktop" "eth0";
connections.eth3 = mkConnection "laptop" "eth0";
};
})
];
};
});
}
Example: simple
Main view
Network view
flake.nix
{
inputs = {
flake-utils.url = "github:numtide/flake-utils";
nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
nix-topology.url = "github:oddlama/nix-topology";
nix-topology.inputs.nixpkgs.follows = "nixpkgs";
};
outputs = {
self,
nixpkgs,
nix-topology,
flake-utils,
...
}:
{
nixosConfigurations.host1 = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
{
networking.hostName = "host1";
# Network interfaces from systemd are detected automatically:
systemd.network.enable = true;
systemd.network.networks.eth0 = {
matchConfig.Name = "eth0";
address = ["192.168.178.100/24"];
};
# This node hosts a vaultwarden instance, which nix-topology
# will automatically pick up on
services.vaultwarden = {
enable = true;
config = {
rocketAddress = "0.0.0.0";
rocketPort = 8012;
domain = "https://vault.example.com/";
# ...
};
};
# We can change our own node's topology settings from here:
topology.self.interfaces.wg0 = {
addresses = ["10.0.0.1"];
network = "wg0"; # Use the network we define below
type = "wireguard"; # changes the icon
};
# You can add stuff to the global topology from a nixos configuration, too:
topology = {
# Let's say this node acts as a wireguard server, so it would make sense
# that it defines the related network:
networks.wg0 = {
name = "Wireguard network wg0";
cidrv4 = "10.0.0.0/24";
};
};
}
nix-topology.nixosModules.default
];
};
nixosConfigurations.host2 = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
(
{config, ...}: {
networking.hostName = "host2";
# This host has a wireless connection, as indicated by the wlp prefix
systemd.network.enable = true;
systemd.network.networks.wlp3s0 = {
matchConfig.Name = "wlp3s0";
address = ["192.168.178.42/24"];
};
# We can change our own node's topology settings from here:
topology.self = {
name = "🥔 Potato host2";
# ^^-- utf8 small space, required to not collapse spaces
hardware.info = "It's running on a potato, i swear";
interfaces.wg0 = {
addresses = ["10.0.0.2"];
# Rendering virtual connections such as wireguard connections can sometimes
# clutter the view. So by hiding them we will only see the connections
# in the network centric view
renderer.hidePhysicalConnections = true;
type = "wireguard"; # changes the icon
# No need to add the network wg0 explicitly, it will automatically be propagated via the connection.
physicalConnections = [
(config.lib.topology.mkConnection "host1" "wg0")
];
};
};
}
)
nix-topology.nixosModules.default
];
};
}
// flake-utils.lib.eachDefaultSystem (system: rec {
pkgs = import nixpkgs {
inherit system;
overlays = [nix-topology.overlays.default];
};
# This is the global topology module.
topology = import nix-topology {
inherit pkgs;
modules = [
({config, ...}: let
inherit (config.lib.topology) mkInternet mkRouter mkConnection;
in {
inherit (self) nixosConfigurations;
# Add a node for the internet
nodes.internet = mkInternet {
connections = mkConnection "router" "wan1";
};
# Add a router that we use to access the internet
nodes.router = mkRouter "FritzBox" {
info = "FRITZ!Box 7520";
image = ./images/fritzbox.png;
interfaceGroups = [
["eth1" "eth2" "eth3" "eth4" "wifi"]
["wan1"]
];
connections.eth1 = mkConnection "host1" "eth0";
connections.wifi = mkConnection "host2" "wlp3s0";
interfaces.eth1 = {
addresses = ["192.168.178.1"];
network = "home";
};
};
networks.home = {
name = "Home";
cidrv4 = "192.168.178.0/24";
};
})
];
};
});
}