54 Commits

Author SHA1 Message Date
916bd8cf62 WIP: router: wireguard: finalize wg0 config for now 2025-06-03 19:46:38 -07:00
e6a9ab8d29 WIP: router: wireguard: finalize wg0 config for now 2025-06-03 19:27:35 -07:00
b39b5abb3e WIP: router: wireguard: finalize wg0 config for now 2025-06-03 19:18:35 -07:00
fc4cd6e56f WIP: router: wireguard: move wg0 to vars.ifs, streamline some things 2025-06-02 00:29:17 -07:00
fd1e7b4724 WIP: router: wireguard: change wg0 subnets to not conflict with opnsense 2025-06-01 20:44:56 -07:00
378d3a53b3 WIP: router: wireguard: slightly more successful conversion of peers to attrset (see the sketch after the commit list) 2025-06-01 20:44:56 -07:00
38ece9125b WIP: router: wireguard: someone forgot to add the network config for the wireguard interface 2025-06-01 20:44:55 -07:00
1c0871f54e WIP: router: wireguard: attempt to convert wg0Peers from list to attrset (did not go well) 2025-06-01 20:44:54 -07:00
4641775e54 router: firewall: add wireguard interface to lan zone (stupidity moment) 2025-06-01 20:44:54 -07:00
3fb966c728 router: firewall: allow ssh, wireguard input globally 2025-06-01 20:44:53 -07:00
67e1a6ef2f router: firewall: add entries for wireguard 2025-06-01 20:44:53 -07:00
5e19bc16f5 router: wireguard: add wg0 interface with some peers for testing 2025-06-01 20:44:46 -07:00
68d49ad45d refactor: support different pc/laptop configs 2025-06-01 19:09:26 -07:00
2f85b081ab laptop: add initial config for Yura-TPX13 2025-05-30 18:30:05 -07:00
eae016b50c updates: nixpkgs, home-manager, plasma-manager 2025-05-29 13:47:11 -07:00
fce994ae9f flake: fix vm-proxmox package 2025-05-20 23:30:14 -07:00
e0af380656 updates: linux 6.14, nixpkgs, home-manager, nixos-generators 2025-05-20 21:58:24 -07:00
585ff678b8 refactor: add encrypted private.nix to hold private values 2025-05-18 01:07:48 -07:00
80b7bf0ed4 refactor: move user configs into separate dir 2025-05-18 00:00:00 -07:00
4807a091c4 router: add glance (very pretty) 2025-05-15 01:32:38 -07:00
9cee4d75c4 router: dns: remove default adguard rate limit to fix intermittent slow queries 2025-05-13 02:10:38 -07:00
4ffdb4da4f router: caddy http3 and compression 2025-05-12 00:11:03 -07:00
4fce23e446 renovate: add nix lock file to config 2025-05-11 21:41:34 -07:00
49c781c1a8 router: option to disable desktop to save space
# Conflicts:
#	hosts/router/default.nix
2025-05-11 21:36:28 -07:00
1fbba65785 router: add secrix for secrets; add cloudflare api key 2025-05-11 21:35:03 -07:00
bb633e5bce router: services: caddy acme dns provider cloudflare 2025-05-11 20:29:16 -07:00
2aa3d87184 router: services: caddy subpath proxies for grafana and adguardhome 2025-05-11 18:41:59 -07:00
05d558e836 router: refactor firewall nftables config 2025-05-11 17:56:17 -07:00
8f7e00f27a router: add vnStat service 2025-05-11 15:58:51 -07:00
renovate[bot]
5e023e2982 Add renovate.json 2025-05-06 00:27:13 -07:00
0674c870c7 updates: nixpkgs, home-manager; add texlive 2025-04-30 16:58:15 -07:00
e484d6baa3 updates: nixpkgs, home-manager 2025-04-18 14:01:48 -07:00
9487d5bdea router: add static routes to opnsense to fix vpn issues 2025-04-15 10:35:18 -07:00
9bbd0cfbdd updates: linux 6.13, nixpkgs, home-manager 2025-04-09 00:27:22 -07:00
49278204a4 router: ifconfig: disable linux arp proxy behavior by default
By default, Linux will respond to ARP requests for addresses that belong to other interfaces. Normally this isn't a problem, but it causes issues here since my WAN and LAN20 are technically bridged (see the sysctl sketch after the commit list).
2025-03-29 23:01:40 -07:00
02bab65de8 router: firewall: proper filtering for hosts proxied by cloudflare 2025-03-26 15:20:15 -07:00
ac1f427677 router: dns: add more upstream providers; add sysdomain hosts for truenas, debbi, etappi 2025-03-26 00:21:19 -07:00
c353ec4020 router: refactor config into separate files, add workaround for networkd issues 2025-03-26 00:18:45 -07:00
ec6b149bfa updates: nixpkgs, home-manager, plasma-manager, nixos-generators 2025-03-25 20:58:32 -07:00
0a959ac3c7 updates: nixpkgs 2025-03-13 10:56:20 -07:00
a265d9b844 router: migrate remaining VLANs, add ULA prefix router adverts 2025-03-04 21:29:09 -08:00
06dbcec84d WIP: router: migrate vlan 1, 30, 40 from opnsense, add DNS records for alpina services 2025-03-01 22:35:36 -08:00
32b3775709 pc: add gleam 2025-02-27 22:19:34 -08:00
d134f0758e home: add kde keyboard layouts, fish, starship config 2025-02-26 21:37:19 -08:00
d5d34f48b4 WIP: router: add remaining VLANs, temporary network configs
Retiring OPNsense will take a while; in the meantime the two should work together
2025-02-26 00:00:13 -08:00
17e6b33bde home: plasma start with empty session 2025-02-25 23:54:59 -08:00
f2704d6103 pc: disable docker zfs driver. updates: nixpkgs 2025-02-24 23:48:45 -08:00
1923c3814b home: add gnome-keyring, adjust plasma settings 2025-02-21 01:56:42 -08:00
e17d61e5b6 WIP: pc: add plasma-manager, darkman 2025-02-19 21:15:02 -08:00
f3bf750fb2 WIP: pc: add home-manager 2025-02-19 17:25:12 -08:00
ba48dd8706 WIP: router: refactor dhcp configs 2025-02-17 22:56:06 -08:00
b1bc0c3923 updates: nixpkgs 2025-02-17 22:55:22 -08:00
cb3690c6e2 WIP: router firewall refactor 2025-02-12 10:59:15 -08:00
17fdd35fb2 WIP: router interfaces refactor 2025-02-12 10:58:14 -08:00
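
Commits 1c0871f54e and 378d3a53b3 above describe converting the wg0 peer list into an attribute set keyed by peer name. The real definitions live in vars.nix (not shown in this listing), so the following is only a rough sketch of that pattern, with hypothetical peer names, placeholder keys, and the plain networking.wireguard module standing in for the systemd-networkd setup this repo uses:

{ lib, ... }:
let
  # Peers keyed by name instead of an ordered list (hypothetical data).
  wg0Peers = {
    phone  = { publicKey = "PHONE_PUBKEY_PLACEHOLDER";  allowedIPs = [ "10.100.0.2/32" ]; };
    laptop = { publicKey = "LAPTOP_PUBKEY_PLACEHOLDER"; allowedIPs = [ "10.100.0.3/32" ]; };
  };
in
{
  # Convert the attrset back into the list shape the module expects.
  networking.wireguard.interfaces.wg0.peers =
    lib.mapAttrsToList (name: peer: {
      inherit (peer) publicKey allowedIPs;
    }) wg0Peers;
}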
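
Commit 49278204a4 above disables the kernel's default habit of answering ARP requests for addresses that live on other interfaces. The actual change sits in the router's interface config and isn't shown in this listing; the following is only a minimal sketch, assuming plain sysctl settings, of how that behavior is typically restricted on NixOS:

{
  boot.kernel.sysctl = {
    # Only reply to ARP requests for addresses configured on the interface
    # the request arrived on (the default of 0 answers for any local address).
    "net.ipv4.conf.all.arp_ignore" = 1;
    # Prefer a source address from the outgoing interface when sending ARP.
    "net.ipv4.conf.all.arp_announce" = 2;
  };
}
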
39 changed files with 1952 additions and 817 deletions

1
.gitattributes vendored Normal file

@@ -0,0 +1 @@
private.nix filter=crypt diff=crypt merge=crypt

5
.gitignore vendored

@@ -0,0 +1,5 @@
### Nix template
# Ignore build outputs from performing a nix-build or `nix build` command
result
result-*

80
flake.lock generated

@@ -1,5 +1,25 @@
{
"nodes": {
"home-manager": {
"inputs": {
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1748529677,
"narHash": "sha256-MJEX3Skt5EAIs/aGHD8/aXXZPcceMMHheyIGSjvxZN0=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "da282034f4d30e787b8a10722431e8b650a907ef",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "home-manager",
"type": "github"
}
},
"nixlib": { "nixlib": {
"locked": { "locked": {
"lastModified": 1736643958, "lastModified": 1736643958,
@@ -23,11 +43,11 @@
]
},
"locked": {
- "lastModified": 1737057290,
- "narHash": "sha256-3Pe0yKlCc7EOeq1X/aJVDH0CtNL+tIBm49vpepwL1MQ=",
+ "lastModified": 1747663185,
+ "narHash": "sha256-Obh50J+O9jhUM/FgXtI3he/QRNiV9+J53+l+RlKSaAk=",
"owner": "nix-community",
"repo": "nixos-generators",
- "rev": "d002ce9b6e7eb467cd1c6bb9aef9c35d191b5453",
+ "rev": "ee07ba0d36c38e9915c55d2ac5a8fb0f05f2afcc",
"type": "github"
},
"original": {
@@ -38,11 +58,11 @@
},
"nixpkgs": {
"locked": {
- "lastModified": 1738142207,
- "narHash": "sha256-NGqpVVxNAHwIicXpgaVqJEJWeyqzoQJ9oc8lnK9+WC4=",
+ "lastModified": 1748370509,
+ "narHash": "sha256-QlL8slIgc16W5UaI3w7xHQEP+Qmv/6vSNTpoZrrSlbk=",
"owner": "NixOS",
"repo": "nixpkgs",
- "rev": "9d3ae807ebd2981d593cddd0080856873139aa40",
+ "rev": "4faa5f5321320e49a78ae7848582f684d64783e9",
"type": "github"
},
"original": {
@@ -52,10 +72,56 @@
"type": "github"
}
},
"plasma-manager": {
"inputs": {
"home-manager": [
"home-manager"
],
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1748196248,
"narHash": "sha256-1iHjsH6/5UOerJEoZKE+Gx1BgAoge/YcnUsOA4wQ/BU=",
"owner": "nix-community",
"repo": "plasma-manager",
"rev": "b7697abe89967839b273a863a3805345ea54ab56",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "plasma-manager",
"type": "github"
}
},
"root": { "root": {
"inputs": { "inputs": {
"home-manager": "home-manager",
"nixos-generators": "nixos-generators", "nixos-generators": "nixos-generators",
"nixpkgs": "nixpkgs" "nixpkgs": "nixpkgs",
"plasma-manager": "plasma-manager",
"secrix": "secrix"
}
},
"secrix": {
"inputs": {
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1746643487,
"narHash": "sha256-dcB/DArJObCvqE/ZEdQSDW2BZMeDyF83Se5KPfJvz60=",
"owner": "Platonic-Systems",
"repo": "secrix",
"rev": "4c64203fa5b377953b1fb6d5388187df8b60c6d5",
"type": "github"
},
"original": {
"owner": "Platonic-Systems",
"repo": "secrix",
"type": "github"
}
}
},


@@ -5,24 +5,65 @@
nixpkgs = {
url = "github:NixOS/nixpkgs/nixos-unstable";
};
nixos-generators = {
url = "github:nix-community/nixos-generators";
inputs.nixpkgs.follows = "nixpkgs";
};
home-manager = {
url = "github:nix-community/home-manager";
inputs.nixpkgs.follows = "nixpkgs";
};
plasma-manager = {
url = "github:nix-community/plasma-manager";
inputs.nixpkgs.follows = "nixpkgs";
inputs.home-manager.follows = "home-manager";
};
nixos-generators = {
url = "github:nix-community/nixos-generators";
inputs.nixpkgs.follows = "nixpkgs";
};
secrix = {
url = "github:Platonic-Systems/secrix";
inputs.nixpkgs.follows = "nixpkgs";
};
};
- outputs = { self, nixpkgs, nixos-generators, home-manager }: {
+ outputs = { self, nixpkgs, home-manager, plasma-manager, nixos-generators, secrix }:
let
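# Helper: wraps a per-user home config file into a home-manager NixOS module with shared settings.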
hmModule = file: {
home-manager.useGlobalPkgs = true;
home-manager.useUserPackages = true;
home-manager.sharedModules = [ plasma-manager.homeManagerModules.plasma-manager ];
home-manager.users.cazzzer = import file;
# Optionally, use home-manager.extraSpecialArgs to pass
# arguments to home.nix
};
in
{
apps.x86_64-linux.secrix = secrix.secrix self;
nixosConfigurations = {
Yura-PC = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
./modules
./hosts/common.nix
+ ./hosts/common-desktop.nix
./hosts/Yura-PC
./users/cazzzer
# https://nix-community.github.io/home-manager/index.xhtml#sec-flakes-nixos-module
home-manager.nixosModules.home-manager
(hmModule ./home/cazzzer-pc.nix)
];
};
Yura-TPX13 = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
./modules
./hosts/common.nix
./hosts/common-desktop.nix
./hosts/Yura-TPX13
./users/cazzzer
# https://nix-community.github.io/home-manager/index.xhtml#sec-flakes-nixos-module
home-manager.nixosModules.home-manager
(hmModule ./home/cazzzer-laptop.nix)
];
};
VM = nixpkgs.lib.nixosSystem {
@@ -30,25 +71,19 @@
modules = [
./modules
./hosts/common.nix
+ ./hosts/hw-vm.nix
./hosts/vm
+ ./users/cazzzer
home-manager.nixosModules.home-manager
{
home-manager.useGlobalPkgs = true;
home-manager.useUserPackages = true;
home-manager.users.jdoe = import ./home.nix;
# Optionally, use home-manager.extraSpecialArgs to pass
# arguments to home.nix
}
];
};
router = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
+ secrix.nixosModules.default
./modules
./hosts/common.nix
./hosts/router
+ ./users/cazzzer
];
};
};
@@ -59,11 +94,25 @@
modules = [
./modules
./hosts/common.nix
- ./hosts/vm/proxmox.nix
+ ./hosts/hw-proxmox.nix
./hosts/vm
+ ./users/cazzzer
];
format = "proxmox";
};
vm-proxmox = let
image = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
./modules
./hosts/common.nix
./hosts/hw-proxmox.nix
./hosts/vm
./users/cazzzer
];
};
in
image.config.system.build.VMA;
};
};
}

21
home/cazzzer-laptop.nix Normal file

@@ -0,0 +1,21 @@
{ config, lib, pkgs, ... }:
{
imports = [
./modules
];
programs.plasma = {
kwin.virtualDesktops.number = 6;
kwin.virtualDesktops.rows = 2;
shortcuts.kwin = {
"Switch to Desktop 1" = "Meta+F1";
"Switch to Desktop 2" = "Meta+F2";
"Switch to Desktop 3" = "Meta+F3";
"Switch to Desktop 4" = "Meta+Z";
"Switch to Desktop 5" = "Meta+X";
"Switch to Desktop 6" = "Meta+C";
};
};
}

9
home/cazzzer-pc.nix Normal file

@@ -0,0 +1,9 @@
{ config, lib, pkgs, ... }:
{
imports = [
./modules
];
programs.plasma.kwin.virtualDesktops.number = 2;
}

83
home/modules/default.nix Normal file

@@ -0,0 +1,83 @@
{ config, lib, pkgs, ... }:
let
username = "cazzzer";
in
{
imports = [
./fish.nix
./starship.nix
./plasma.nix
];
# Home Manager needs a bit of information about you and the paths it should
# manage.
home.username = username;
home.homeDirectory = "/home/${username}";
# Let Home Manager install and manage itself.
programs.home-manager.enable = true;
home.sessionVariables = {
EDITOR = "micro";
SHELL = "fish";
};
services.darkman = {
enable = true;
settings = {
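# Approximate San Jose, CA coordinates; darkman uses these for sunrise/sunset switching.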
lat = 37.3387;
lng = -121.8853;
};
lightModeScripts = {
plasma-color = "plasma-apply-colorscheme BreezeLight";
};
darkModeScripts = {
plasma-color = "plasma-apply-colorscheme BreezeDark";
};
};
# This value determines the Home Manager release that your configuration is
# compatible with. This helps avoid breakage when a new Home Manager release
# introduces backwards incompatible changes.
#
# You should not change this value, even if you update Home Manager. If you do
# want to update the value, then make sure to first check the Home Manager
# release notes.
home.stateVersion = "24.11"; # Please read the comment before changing.
# The home.packages option allows you to install Nix packages into your
# environment.
# home.packages = [
# # Adds the 'hello' command to your environment. It prints a friendly
# # "Hello, world!" when run.
# pkgs.hello
# # It is sometimes useful to fine-tune packages, for example, by applying
# # overrides. You can do that directly here, just don't forget the
# # parentheses. Maybe you want to install Nerd Fonts with a limited number of
# # fonts?
# (pkgs.nerdfonts.override { fonts = [ "FantasqueSansMono" ]; })
# # You can also create simple shell scripts directly inside your
# # configuration. For example, this adds a command 'my-hello' to your
# # environment:
# (pkgs.writeShellScriptBin "my-hello" ''
# echo "Hello, ${config.home.username}!"
# '')
# ];
# Home Manager is pretty good at managing dotfiles. The primary way to manage
# plain files is through 'home.file'.
# home.file = {
# # Building this configuration will create a copy of 'dotfiles/screenrc' in
# # the Nix store. Activating the configuration will then make '~/.screenrc' a
# # symlink to the Nix store copy.
# ".screenrc".source = dotfiles/screenrc;
# # You can also set the file content immediately.
# ".gradle/gradle.properties".text = ''
# org.gradle.console=verbose
# org.gradle.daemon.idletimeout=3600000
# '';
# };
}

27
home/modules/fish.nix Normal file

@@ -0,0 +1,27 @@
{ config, lib, pkgs, ... }:
{
programs.fish = {
enable = true;
shellInit = "set fish_greeting";
shellAliases = {
# Replace ls with exa
ls = "exa -al --color=always --group-directories-first --icons"; # preferred listing
la = "exa -a --color=always --group-directories-first --icons"; # all files and dirs
ll = "exa -l --color=always --group-directories-first --icons"; # long format
lt = "exa -aT --color=always --group-directories-first --icons"; # tree listing
"l." = "exa -a | rg '^\.'"; # show only dotfiles
# Replace cat with bat
cat = "bat";
};
# alias for nix shell with flake packages
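# e.g. "add ripgrep fd" runs "nix shell nixpkgs#ripgrep nixpkgs#fd"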
functions.add.body = ''
set -x packages 'nixpkgs#'$argv
nix shell $packages
'';
interactiveShellInit = ''
fastfetch
'';
};
}

64
home/modules/plasma.nix Normal file

@@ -0,0 +1,64 @@
{ config, lib, pkgs, osConfig, ... }:
{
programs.plasma = {
enable = true;
overrideConfig = true;
resetFilesExclude = [
"plasma-org.kde.plasma.desktop-appletsrc"
];
# Use tela circle icon theme if installed in system packages
workspace.iconTheme = if builtins.elem pkgs.tela-circle-icon-theme osConfig.environment.systemPackages then "Tela-circle" else null;
fonts = let
defaultFont = {
family = "Noto Sans";
pointSize = 14;
};
in {
general = defaultFont;
fixedWidth = defaultFont // { family = "Hack"; };
small = defaultFont // { pointSize = defaultFont.pointSize - 2; };
toolbar = defaultFont;
menu = defaultFont;
windowTitle = defaultFont;
};
input.keyboard.layouts = [
{ layout = "us"; displayName = "us"; }
{ layout = "minimak-4"; displayName = "us4"; }
{ layout = "ru"; displayName = "ru"; }
];
session.sessionRestore.restoreOpenApplicationsOnLogin = "startWithEmptySession";
shortcuts = {
# kmix.mic_mute = "ScrollLock";
kmix.mic_mute = ["Microphone Mute" "ScrollLock" "Meta+Volume Mute,Microphone Mute" "Meta+Volume Mute,Mute Microphone"];
plasmashell.show-barcode = "Meta+M";
kwin."Window Maximize" = [ "Meta+F" "Meta+PgUp,Maximize Window" ];
"KDE Keyboard Layout Switcher"."Switch to Next Keyboard Layout" = "Meta+Space";
};
hotkeys.commands."launch-konsole" = {
name = "Launch Konsole";
key = "Meta+Alt+C";
command = "konsole";
};
configFile = {
kdeglobals.KDE.AnimationDurationFactor = 0.5;
kdeglobals.General.accentColorFromWallpaper = true;
kwinrc.Wayland.InputMethod = {
value = "org.fcitx.Fcitx5.desktop";
shellExpand = true;
};
dolphinrc.General.ShowFullPath = true;
dolphinrc.DetailsMode.PreviewSize.persistent = true;
kactivitymanagerdrc = {
activities."809dc779-bf5b-49e6-8e3f-cbe283cb05b6" = "Default";
activities."b34a506d-ac4f-4797-8c08-6ef45bc49341" = "Fun";
activities-icons."809dc779-bf5b-49e6-8e3f-cbe283cb05b6" = "keyboard";
activities-icons."b34a506d-ac4f-4797-8c08-6ef45bc49341" = "preferences-desktop-gaming";
};
};
};
xdg.configFile = {
"fcitx5/conf/wayland.conf".text = "Allow Overriding System XKB Settings=False";
};
}

63
home/modules/starship.nix Normal file

@@ -0,0 +1,63 @@
{ config, lib, pkgs, ... }:
{
programs.starship = {
enable = true;
enableFishIntegration = true;
settings = {
format = lib.concatStrings [
"$all"
"$time"
"$cmd_duration"
"$line_break"
"$jobs"
"$status"
"$character"
];
username = {
format = " [$user]($style)@";
style_user = "bold red";
style_root = "bold red";
show_always = true;
};
hostname = {
format = "[$hostname]($style) in ";
style = "bold dimmed red";
ssh_only = false;
};
directory = {
style = "purple";
truncation_length = 0;
truncate_to_repo = true;
truncation_symbol = "repo: ";
};
git_status = {
style = "white";
ahead = "\${count}";
diverged = "\${ahead_count}\${behind_count}";
behind = "\${count}";
deleted = "x";
};
cmd_duration = {
min_time = 1000;
format = "took [$duration]($style) ";
};
time = {
format = " 🕙 $time($style) ";
time_format = "%T";
style = "bright-white";
disabled = false;
};
character = {
success_symbol = " [λ](bold red)";
error_symbol = " [×](bold red)";
};
status = {
symbol = "🔴";
format = "[\\[$symbol$status_common_meaning$status_signal_name$status_maybe_int\\]]($style)";
map_symbol = true;
disabled = false;
};
};
};
}


@@ -10,13 +10,9 @@
./hardware-configuration.nix
# <nixpkgs/nixos/modules/profiles/qemu-guest.nix>
];
- mods.kb-input.enable = true;
+ opts.kb-input.enable = true;
# Bootloader.
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
# boot.plymouth.enable = true;
# boot.plymouth.theme = "breeze";
boot.kernelParams = [
"amd_iommu=on"
"iommu=pt"
@@ -30,239 +26,18 @@
options kvm ignore_msrs=1
'';
boot.loader.timeout = 3;
boot.loader.systemd-boot.configurationLimit = 5;
boot.kernelPackages = pkgs.linuxKernel.packages.linux_6_12;
# https://nixos.wiki/wiki/Accelerated_Video_Playback
- hardware.graphics = {
+ hardware.graphics.extraPackages = with pkgs; [
enable = true;
extraPackages = with pkgs; [
intel-media-driver # LIBVA_DRIVER_NAME=iHD
];
};
environment.etc.hosts.mode = "0644";
networking.hostName = "Yura-PC"; # Define your hostname. networking.hostName = "Yura-PC"; # Define your hostname.
networking.hostId = "110a2814"; # Required for ZFS. networking.hostId = "110a2814"; # Required for ZFS.
# networking.wireless.enable = true; # Enables wireless support via wpa_supplicant.
# Configure network proxy if necessary
# networking.proxy.default = "http://user:password@proxy:port/";
# networking.proxy.noProxy = "127.0.0.1,localhost,internal.domain";
# Enable networking
networking.networkmanager.enable = true;
# Enable the X11 windowing system.
# You can disable this if you're only using the Wayland session.
services.xserver.enable = false;
# Enable the KDE Plasma Desktop Environment.
services.displayManager.sddm.enable = true;
services.displayManager.sddm.wayland.enable = true;
services.desktopManager.plasma6.enable = true;
# Enable CUPS to print documents.
services.printing.enable = true;
# Enable sound with pipewire.
services.pulseaudio.enable = false;
security.rtkit.enable = true;
services.pipewire = {
enable = true;
alsa.enable = true;
alsa.support32Bit = true;
pulse.enable = true;
# If you want to use JACK applications, uncomment this
#jack.enable = true;
# use the example session manager (no others are packaged yet so this is enabled by default,
# no need to redefine it in your config for now)
#media-session.enable = true;
};
# services.qemuGuest.enable = true;
# services.spice-vdagentd.enable = true;
services.openssh.enable = true;
services.flatpak.enable = true;
# services.geoclue2.enable = true;
location.provider = "geoclue2";
# services.gnome.gnome-keyring.enable = true;
security.pam.services.sddm.enableGnomeKeyring = true;
# security.pam.services.sddm.gnupg.enable = true;
# Enable touchpad support (enabled default in most desktopManager).
# services.xserver.libinput.enable = true;
# Define a user account. Don't forget to set a password with passwd.
users.groups = {
cazzzer = {
gid = 1000;
};
};
users.users.cazzzer = {
isNormalUser = true;
description = "Yura";
uid = 1000;
group = "cazzzer";
extraGroups = [ "networkmanager" "wheel" "docker" "wireshark" "geoclue" ];
packages = with pkgs; [
# Python
python3
poetry
# Haskell
haskellPackages.ghc
haskellPackages.stack
# Node
nodejs_22
pnpm
bun
# Nix
nixd
];
};
hardware.bluetooth.enable = true;
hardware.bluetooth.powerOnBoot = false;
# Install firefox.
# programs.firefox.enable = true;
programs.kdeconnect.enable = true;
programs.fish.enable = true;
programs.git.enable = true;
programs.git.lfs.enable = true;
# https://nixos.wiki/wiki/Git
programs.git.package = pkgs.git.override { withLibsecret = true; };
programs.lazygit.enable = true;
programs.neovim.enable = true;
programs.gnupg.agent.enable = true;
programs.gnupg.agent.pinentryPackage = pkgs.pinentry-qt;
# programs.starship.enable = true;
programs.wireshark.enable = true;
programs.wireshark.package = pkgs.wireshark; # wireshark-cli by default
programs.bat.enable = true;
programs.htop.enable = true;
# https://nixos.wiki/wiki/Docker
virtualisation.docker.enable = true;
virtualisation.docker.enableOnBoot = false;
virtualisation.docker.package = pkgs.docker_27;
virtualisation.docker.storageDriver = "zfs";
# https://discourse.nixos.org/t/firefox-does-not-use-kde-window-decorations-and-cursor/32132/3
# programs.dconf.enable = true;
# programs.firefox = {
# enable = true;
# preferences = {
# "widget.use-xdg-desktop-portal.file-picker" = 1;
# "widget.use-xdg-desktop-portal.mime-handler" = 1;
# };
# };
# Allow unfree packages
nixpkgs.config.allowUnfree = true;
# List packages installed in system profile. To search, run:
# $ nix search wget
# https://github.com/flatpak/flatpak/issues/2861
xdg.portal.extraPortals = [ pkgs.xdg-desktop-portal-gtk ];
programs.nix-ld.enable = true;
programs.nix-ld.libraries = with pkgs; [
# Add any missing dynamic libraries for unpackaged
# programs here, NOT in environment.systemPackages
# For JetBrains stuff
# https://github.com/NixOS/nixpkgs/issues/240444
];
# attempt to fix flatpak firefox cjk fonts
# fonts.fontconfig.defaultFonts.serif = [
# "Noto Serif"
# "DejaVu Serif"
# ];
# fonts.fontconfig.defaultFonts.sansSerif = [
# "Noto Sans"
# "DejaVu Sans"
# ];
workarounds.flatpak.enable = true;
fonts.packages = with pkgs; [
fantasque-sans-mono
nerd-fonts.fantasque-sans-mono
noto-fonts
noto-fonts-emoji
noto-fonts-cjk-sans
noto-fonts-cjk-serif
jetbrains-mono
];
# fonts.fontDir.enable = true;
# fonts.fontconfig.allowBitmaps = false;
environment.systemPackages = with pkgs; [
darkman
dust
efibootmgr
eza
fastfetch
fd
ffmpeg
host-spawn # for flatpaks
kdePackages.filelight
kdePackages.flatpak-kcm
kdePackages.kate
kdePackages.yakuake
gcr
gnome-keyring # config for this and some others
gnumake
helix
jetbrains-toolbox # or maybe do invidual ones?
# jetbrains.rust-rover
jetbrains.clion
jetbrains.pycharm-professional
jetbrains.webstorm
android-studio
mediainfo
micro
mpv
nextcloud-client
lxqt.pavucontrol-qt
pinentry
rbw
ripgrep
rustup
starship
tealdeer
tela-circle-icon-theme
virt-viewer
waypipe
whois
yt-dlp
];
# Some programs need SUID wrappers, can be configured further or are
# started in user sessions.
# programs.mtr.enable = true;
# programs.gnupg.agent = {
# enable = true;
# enableSSHSupport = true;
# };
# List services that you want to enable:
# Enable the OpenSSH daemon.
# services.openssh.enable = true;
# Open ports in the firewall.
# networking.nftables.enable = true;
- networking.firewall.allowedTCPPorts = [ 8080 ];
- # networking.firewall.allowedUDPPorts = [ ... ];
+ networking.firewall.allowedTCPPorts = [ 8080 22000 ];
+ networking.firewall.allowedUDPPorts = [ 22000 ];
# Or disable the firewall altogether.
# networking.firewall.enable = false;


@@ -0,0 +1,44 @@
{ config, lib, pkgs, ... }:
{
imports =
[
./hardware-configuration.nix
];
# Bootloader.
boot.kernelParams = [
"sysrq_always_enabled=1"
];
networking.hostName = "Yura-TPX13"; # Define your hostname.
networking.hostId = "8425e349"; # Required for ZFS.
services.fprintd.enable = true;
# Install firefox.
programs.firefox.enable = true;
# Some programs need SUID wrappers, can be configured further or are
# started in user sessions.
# programs.mtr.enable = true;
# programs.gnupg.agent = {
# enable = true;
# enableSSHSupport = true;
# };
# Open ports in the firewall.
# networking.nftables.enable = true;
# networking.firewall.allowedTCPPorts = [ ];
# networking.firewall.allowedUDPPorts = [ ];
# Or disable the firewall altogether.
# networking.firewall.enable = false;
# This value determines the NixOS release from which the default
# settings for stateful data, like file locations and database versions
# on your system were taken. It's perfectly fine and recommended to leave
# this value at the release version of the first install of this system.
# Before changing this value read the documentation for this option
# (e.g. man configuration.nix or on https://nixos.org/nixos/options.html).
system.stateVersion = "24.11"; # Did you read the comment?
}


@@ -5,21 +5,21 @@
{
imports =
- [ (modulesPath + "/profiles/qemu-guest.nix")
+ [ (modulesPath + "/installer/scan/not-detected.nix")
];
- boot.initrd.availableKernelModules = [ "uhci_hcd" "ehci_pci" "ahci" "virtio_pci" "virtio_scsi" "sd_mod" "sr_mod" ];
+ boot.initrd.availableKernelModules = [ "nvme" "ehci_pci" "xhci_pci" "usb_storage" "sd_mod" "rtsx_pci_sdmmc" ];
boot.initrd.kernelModules = [ ];
- boot.kernelModules = [ ];
+ boot.kernelModules = [ "kvm-amd" ];
boot.extraModulePackages = [ ];
fileSystems."/" =
- { device = "/dev/disk/by-uuid/da85e220-e2b0-443a-9a0c-a9516b8e5030";
- fsType = "ext4";
+ { device = "zroot/e/ROOT/nixos/default";
+ fsType = "zfs";
};
fileSystems."/boot" =
- { device = "/dev/disk/by-uuid/3F96-8974";
+ { device = "/dev/disk/by-uuid/C6A4-8931";
fsType = "vfat";
options = [ "fmask=0022" "dmask=0022" ];
};
@@ -31,7 +31,9 @@
# still possible to use this option, but it's recommended to use it in conjunction
# with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
networking.useDHCP = lib.mkDefault true;
- # networking.interfaces.enp6s18.useDHCP = lib.mkDefault true;
+ # networking.interfaces.enp2s0f0.useDHCP = lib.mkDefault true;
+ # networking.interfaces.wlp3s0.useDHCP = lib.mkDefault true;
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
+ hardware.cpu.amd.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
}

167
hosts/common-desktop.nix Normal file

@@ -0,0 +1,167 @@
{ config, lib, pkgs, ... }:
{
opts.kb-input.enable = true;
boot.kernelParams = [
"sysrq_always_enabled=1"
];
boot.kernelPackages = pkgs.linuxKernel.packages.linux_6_14;
boot.loader = {
efi.canTouchEfiVariables = true;
timeout = 3;
systemd-boot = {
enable = true;
configurationLimit = 5;
};
};
# https://nixos.wiki/wiki/Accelerated_Video_Playback
hardware.graphics.enable = true;
environment.etc.hosts.mode = "0644";
# Enable networking
networking.networkmanager.enable = true;
# Enable the X11 windowing system.
# You can disable this if you're only using the Wayland session.
services.xserver.enable = false;
# Enable the KDE Plasma Desktop Environment.
services.displayManager.sddm.enable = true;
services.displayManager.sddm.wayland.enable = true;
services.desktopManager.plasma6.enable = true;
# Enable CUPS to print documents.
services.printing.enable = true;
# Enable sound with pipewire.
services.pulseaudio.enable = false;
security.rtkit.enable = true;
services.pipewire = {
enable = true;
alsa.enable = true;
alsa.support32Bit = true;
pulse.enable = true;
# If you want to use JACK applications, uncomment this
#jack.enable = true;
};
services.flatpak.enable = true;
hardware.bluetooth.enable = true;
hardware.bluetooth.powerOnBoot = false;
programs.kdeconnect.enable = true;
programs.fish.enable = true;
programs.git.enable = true;
programs.git.lfs.enable = true;
# https://nixos.wiki/wiki/Git
programs.git.package = pkgs.git.override { withLibsecret = true; };
programs.lazygit.enable = true;
programs.neovim.enable = true;
programs.wireshark.enable = true;
programs.wireshark.package = pkgs.wireshark; # wireshark-cli by default
programs.bat.enable = true;
programs.htop.enable = true;
# https://nixos.wiki/wiki/Docker
virtualisation.docker.enable = true;
virtualisation.docker.enableOnBoot = false;
virtualisation.docker.package = pkgs.docker_28;
# https://github.com/flatpak/flatpak/issues/2861
xdg.portal.extraPortals = [ pkgs.xdg-desktop-portal-gtk ];
workarounds.flatpak.enable = true;
fonts.packages = with pkgs; [
fantasque-sans-mono
nerd-fonts.fantasque-sans-mono
noto-fonts
noto-fonts-emoji
noto-fonts-cjk-sans
noto-fonts-cjk-serif
jetbrains-mono
];
environment.systemPackages = with pkgs; [
dust
eza
fastfetch
fd
helix
micro
openssl
ripgrep
starship
tealdeer
transcrypt
] ++ [
efibootmgr
ffmpeg
file
fq
gnumake
ijq
jq
ldns
mediainfo
rbw
restic
resticprofile
rclone
ripgrep-all
rustscan
whois
wireguard-tools
yt-dlp
] ++ [
bitwarden-desktop
darkman
host-spawn # for flatpaks
kdePackages.filelight
kdePackages.flatpak-kcm
kdePackages.kate
kdePackages.yakuake
mpv
nextcloud-client
lxqt.pavucontrol-qt
pinentry
tela-circle-icon-theme
virt-viewer
waypipe
] ++ [
# jetbrains.rust-rover
# jetbrains.goland
jetbrains.clion
jetbrains.idea-ultimate
jetbrains.pycharm-professional
jetbrains.webstorm
android-studio
rustup
zed-editor
] ++ [
# Python
python3
poetry
# Haskell
haskellPackages.ghc
haskellPackages.stack
# Node
nodejs_22
pnpm
bun
# Nix
nil
nixd
nixfmt-rfc-style
# Gleam
gleam
beamMinimal26Packages.erlang
];
}


@@ -1,11 +1,4 @@
- { config, pkgs, inputs, ... }: {
+ { config, pkgs, ... }: {
imports = [
inputs.home-manager.nixosModules.home-manager
];
home-manager.useGlobalPkgs = true;
home-manager.useUserPackages = true;
# Allow unfree packages
nixpkgs.config.allowUnfree = true;
@@ -35,4 +28,8 @@
formatted = builtins.concatStringsSep "\n" sortedUnique;
in
formatted;
services.openssh.enable = true;
services.openssh.settings.PasswordAuthentication = false;
services.openssh.settings.KbdInteractiveAuthentication = false;
}

16
hosts/hw-proxmox.nix Normal file

@@ -0,0 +1,16 @@
{ config, lib, pkgs, modulesPath, ... }:
{
imports = [
"${modulesPath}/virtualisation/proxmox-image.nix"
];
# boot.kernelParams = [ "console=tty0" ];
proxmox.qemuConf.bios = "ovmf";
proxmox.qemuExtraConf = {
machine = "q35";
# efidisk0 = "local-lvm:vm-9999-disk-1";
cpu = "host";
};
proxmox.cloudInit.enable = false;
}

26
hosts/hw-vm.nix Normal file

@@ -0,0 +1,26 @@
{ config, lib, pkgs, modulesPath, ... }: {
imports = [
"${modulesPath}/profiles/qemu-guest.nix"
];
boot.initrd.availableKernelModules = lib.mkDefault [ "uhci_hcd" "ehci_pci" "ahci" "virtio_pci" "virtio_scsi" "sd_mod" "sr_mod" ];
fileSystems."/" = lib.mkDefault {
device = "/dev/disk/by-label/nixos";
autoResize = true;
fsType = "ext4";
};
fileSystems."/boot" = lib.mkDefault {
device = "/dev/disk/by-label/ESP";
fsType = "vfat";
};
# Enables DHCP on each ethernet and wireless interface. In case of scripted networking
# (the default) this is the recommended approach. When using systemd-networkd it's
# still possible to use this option, but it's recommended to use it in conjunction
# with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
networking.useDHCP = lib.mkDefault true;
# networking.interfaces.enp6s18.useDHCP = lib.mkDefault true;
# nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
}


@@ -1,38 +1,22 @@
{ config, lib, pkgs, ... }:
let
- domain = "cazzzer.com";
- ldomain = "l.${domain}";
+ vars = import ./vars.nix;
+ enableDesktop = false;
if_wan = "wan";
if_lan = "lan";
if_lan10 = "lan.10";
if_lan20 = "lan.20";
wan_ip4 = "192.168.1.61/24";
wan_gw4 = "192.168.1.254";
lan_p4 = "10.19.1"; # .0/24
lan10_p4 = "10.19.10"; # .0/24
lan20_p4 = "10.19.20"; # .0/24
pd_from_wan = ""; # ::/60
lan_p6 = "${pd_from_wan}9"; # ::/64
lan10_p6 = "${pd_from_wan}a"; # ::/64
lan20_p6 = "${pd_from_wan}2"; # ::/64
ula_p = "fdab:07d3:581d"; # ::/48
lan_ula_p = "${ula_p}:0001"; # ::/64
lan10_ula_p = "${ula_p}:0010"; # ::/64
lan20_ula_p = "${ula_p}:0020"; # ::/64
lan_ula_addr = "${lan_ula_p}::1";
lan10_ula_addr = "${lan10_ula_p}::1";
lan20_ula_addr = "${lan20_ula_p}::1";
in
{
imports =
[ # Include the results of the hardware scan.
./hardware-configuration.nix
./ifconfig.nix
./wireguard.nix
./firewall.nix
./dns.nix
./kea.nix
./glance.nix
./services.nix
];
# Secrix for secrets management
secrix.hostPubKey = vars.pubkey;
# Bootloader.
boot.loader.systemd-boot.enable = true;
@@ -41,502 +25,32 @@ in
"sysrq_always_enabled=1" "sysrq_always_enabled=1"
]; ];
boot.loader.timeout = 2;
boot.loader.systemd-boot.configurationLimit = 5; boot.loader.systemd-boot.configurationLimit = 5;
boot.kernelPackages = pkgs.linuxKernel.packages.linux_6_12; boot.kernelPackages = pkgs.linuxKernel.packages.linux_6_12;
boot.growPartition = true; boot.growPartition = true;
environment.etc.hosts.mode = "0644";
networking.hostName = "grouter"; networking.hostName = "grouter";
# It is impossible to do multiple prefix requests with networkd,
# so I use dhcpcd for this
# https://github.com/systemd/systemd/issues/22571
networking.dhcpcd.enable = true;
# https://github.com/systemd/systemd/issues/22571#issuecomment-2094905496
# https://gist.github.com/csamsel/0f8cca3b2e64d7e4cc47819ec5ba9396
networking.dhcpcd.extraConfig = ''
duid
ipv6only
nodhcp6
noipv6rs
nohook resolv.conf, yp, hostname, ntp
option rapid_commit
interface ${if_wan}
ipv6rs
dhcp6
# this doesn't play well with networkd
# ia_na
# ia_pd 1 ${if_lan}/0
# ia_pd 2 ${if_lan10}/0
# ia_pd 3 ${if_lan20}/0
# request the leases just for routing (so that the att box knows we're here)
# actual ip assignments are static, based on $pd_from_wan
ia_pd 1 -
ia_pd 2 -
# ia_pd 3 -
# ia_pd 4 -
# ia_pd 5 -
# ia_pd 6 -
# ia_pd 7 -
# ia_pd 8 -
'';
networking.useNetworkd = true;
systemd.network.enable = true;
systemd.network = {
# Global options
config.networkConfig = {
IPv4Forwarding = true;
IPv6Forwarding = true;
};
# This is applied by udev, not networkd
# https://nixos.wiki/wiki/Systemd-networkd
# https://nixos.org/manual/nixos/stable/#sec-rename-ifs
links = {
"10-wan" = {
matchConfig.PermanentMACAddress = "bc:24:11:4f:c9:c4";
linkConfig.Name = if_wan;
};
"10-lan" = {
matchConfig.PermanentMACAddress = "bc:24:11:83:d8:de";
linkConfig.Name = if_lan;
};
};
netdevs = {
"10-vlan10" = {
netdevConfig = {
Kind = "vlan";
Name = if_lan10;
};
vlanConfig.Id = 10;
};
"10-vlan20" = {
netdevConfig = {
Kind = "vlan";
Name = if_lan20;
};
vlanConfig.Id = 20;
};
};
networks = {
"10-wan" = {
matchConfig.Name = if_wan;
networkConfig = {
# start a DHCP Client for IPv4 Addressing/Routing
# DHCP = "ipv4";
# accept Router Advertisements for Stateless IPv6 Autoconfiguraton (SLAAC)
# let dhcpcd handle this
Address = [ wan_ip4 ];
IPv6AcceptRA = false;
};
routes = [ { Gateway = wan_gw4; } ];
# make routing on this interface a dependency for network-online.target
linkConfig.RequiredForOnline = "routable";
};
"20-lan" = {
matchConfig.Name = "lan";
vlan = [
if_lan10
if_lan20
];
networkConfig = {
IPv4Forwarding = true;
IPv6SendRA = true;
Address = [ "${lan_p4}.1/24" ];
};
ipv6Prefixes = [
{
# AddressAutoconfiguration = false;
Prefix = "${lan_p6}::/64";
Assign = true;
# Token = [ "static:::1" "eui64" ];
Token = [ "static:::1" ];
}
{
Prefix = "${lan_ula_p}::/64";
Assign = true;
Token = [ "static:::1" ];
}
];
ipv6SendRAConfig = {
Managed = true;
OtherInformation = true;
EmitDNS = true;
DNS = [ lan_ula_addr ];
};
};
"30-vlan10" = {
matchConfig.Name = if_lan10;
networkConfig = {
IPv6SendRA = true;
Address = [ "${lan10_p4}.1/24" ];
};
ipv6Prefixes = [
{
Prefix = "${lan10_p6}::/64";
Assign = true;
Token = [ "static:::1" ];
}
{
Prefix = "${lan10_ula_p}::/64";
Assign = true;
Token = [ "static:::1" ];
}
];
};
"30-vlan20" = {
matchConfig.Name = if_lan20;
networkConfig = {
IPv6SendRA = true;
Address = [ "${lan20_p4}.1/24" ];
};
ipv6Prefixes = [
{
Prefix = "${lan20_p6}::/64";
Assign = true;
Token = [ "static:::1" ];
}
{
Prefix = "${lan20_ula_p}::/64";
Assign = true;
Token = [ "static:::1" ];
}
];
};
};
};
networking.firewall.enable = false;
networking.nftables.enable = true;
networking.nftables.tables.firewall = {
family = "inet";
content = ''
define WAN_IF = "${if_wan}"
define LAN_IF = "${if_lan}"
define LAN_IPV4_SUBNET = ${lan_p4}.0/24
define LAN_IPV6_SUBNET = ${lan_p6}::/64
define LAN_IPV6_ULA = ${lan_ula_p}::/64
define LAN_IPV4_HOST = ${lan_p4}.100
define LAN_IPV6_HOST = ${lan_p6}::1:1000
define ALLOWED_TCP_PORTS = { ssh, https, 19999 }
define ALLOWED_UDP_PORTS = { domain }
chain input {
type filter hook input priority filter; policy drop;
# Allow established and related connections
ct state established,related accept
# Allow all traffic from loopback interface
iifname lo accept
# Allow ICMPv6 on link local addrs
ip6 nexthdr icmpv6 ip6 saddr fe80::/10 accept
ip6 nexthdr icmpv6 ip6 daddr fe80::/10 accept # TODO: not sure if necessary
# Allow all ICMPv6 from LAN
iifname $LAN_IF ip6 saddr { $LAN_IPV6_SUBNET, $LAN_IPV6_ULA } ip6 nexthdr icmpv6 accept
# Allow DHCPv6 client traffic
ip6 daddr { fe80::/10, ff02::/16 } udp dport dhcpv6-server accept
# Allow all ICMP from LAN
iifname $LAN_IF ip saddr $LAN_IPV4_SUBNET ip protocol icmp accept
# Allow specific services from LAN
iifname $LAN_IF ip saddr $LAN_IPV4_SUBNET tcp dport $ALLOWED_TCP_PORTS accept
iifname $LAN_IF ip6 saddr { $LAN_IPV6_SUBNET, $LAN_IPV6_ULA } tcp dport $ALLOWED_TCP_PORTS accept
iifname $LAN_IF ip saddr $LAN_IPV4_SUBNET udp dport $ALLOWED_UDP_PORTS accept
iifname $LAN_IF ip6 saddr { $LAN_IPV6_SUBNET, $LAN_IPV6_ULA } udp dport $ALLOWED_UDP_PORTS accept
# Allow SSH from WAN (if needed)
iifname $WAN_IF tcp dport ssh accept
}
chain forward {
type filter hook forward priority filter; policy drop;
# Allow established and related connections
ct state established,related accept
# Port forwarding
iifname $WAN_IF tcp dport https ip daddr $LAN_IPV4_HOST accept
# Allowed IPv6 ports
iifname $WAN_IF tcp dport https ip6 daddr $LAN_IPV6_HOST accept
# Allow traffic from LAN to WAN
iifname $LAN_IF ip saddr $LAN_IPV4_SUBNET oifname $WAN_IF accept
iifname $LAN_IF ip6 saddr $LAN_IPV6_SUBNET oifname $WAN_IF accept
}
chain output {
# Accept anything out of self by default
type filter hook output priority filter; policy accept;
}
chain prerouting {
# Initial step, accept by default
type nat hook prerouting priority dstnat; policy accept;
# Port forwarding
iifname $WAN_IF tcp dport https dnat ip to $LAN_IPV4_HOST
}
chain postrouting {
# Last step, accept by default
type nat hook postrouting priority srcnat; policy accept;
# Masquerade LAN addrs
# theoretically shouldn't need to check the input interface here,
# as it would be filtered by the forwarding rules
oifname $WAN_IF ip saddr $LAN_IPV4_SUBNET masquerade
# Optional IPv6 masquerading (big L if enabled)
# oifname $WAN_IF ip6 saddr $LAN_IPV6_ULA masquerade
}
'';
};
services.kea.dhcp4.enable = true;
services.kea.dhcp4.settings = {
interfaces-config.interfaces = [
if_lan
];
dhcp-ddns.enable-updates = true;
ddns-qualifying-suffix = "default.${ldomain}";
subnet4 = [
{
id = 1;
subnet = "${lan_p4}.0/24";
ddns-qualifying-suffix = "lan.${ldomain}";
pools = [ { pool = "${lan_p4}.100 - ${lan_p4}.199"; } ];
option-data = [
{
name = "routers";
data = "${lan_p4}.1";
}
{
name = "domain-name-servers";
data = "${lan_p4}.1";
}
];
reservations = [
{
hw-address = "bc:24:11:b7:27:4d";
hostname = "archy";
ip-address = "${lan_p4}.69";
}
];
}
];
};
services.kea.dhcp6.enable = true;
services.kea.dhcp6.settings = {
interfaces-config.interfaces = [
if_lan
];
# TODO: https://kea.readthedocs.io/en/latest/arm/ddns.html#dual-stack-environments
dhcp-ddns.enable-updates = true;
ddns-qualifying-suffix = "default6.${ldomain}";
subnet6 = [
{
id = 1;
interface = if_lan;
subnet = "${lan_p6}::/64";
ddns-qualifying-suffix = "lan6.${ldomain}";
rapid-commit = true;
pools = [ { pool = "${lan_p6}::1:1000/116"; } ];
reservations = [
{
duid = "00:04:59:c3:ce:9a:08:cf:fb:b7:fe:74:9c:e3:b7:44:bf:01";
hostname = "archy";
ip-addresses = [ "${lan_p6}::69" ];
}
];
}
];
};
services.kea.dhcp-ddns.enable = true;
services.kea.dhcp-ddns.settings = {
forward-ddns = {
ddns-domains = [
{
name = "${ldomain}.";
dns-servers = [
{
ip-address = "::1";
port = 1053;
}
];
}
];
};
};
services.resolved.enable = false;
networking.resolvconf.enable = true;
networking.resolvconf.useLocalResolver = true;
services.adguardhome.enable = true;
services.adguardhome.mutableSettings = false;
services.adguardhome.settings = {
dns = {
bootstrap_dns = [ "1.1.1.1" "9.9.9.9" ];
upstream_dns = [
"quic://p0.freedns.controld.com" # Default upstream
"[/${ldomain}/][::1]:1053" # Local domains to Knot (ddns)
];
};
# https://adguard-dns.io/kb/general/dns-filtering-syntax/
user_rules = [
# DNS rewrites
"|grouter.${domain}^$dnsrewrite=${lan_ula_addr}"
# Allowed exceptions
"@@||googleads.g.doubleclick.net"
];
};
services.knot.enable = true;
services.knot.settings = {
server = {
# listen = "0.0.0.0@1053";
listen = "::1@1053";
};
# TODO: templates
zone = [
{
domain = ldomain;
storage = "/var/lib/knot/zones";
file = "${ldomain}.zone";
acl = [ "allow_localhost_update" ];
}
];
acl = [
{
id = "allow_localhost_update";
address = [ "::1" "127.0.0.1" ];
action = [ "update" ];
}
];
};
# Ensure the zone file exists
system.activationScripts.knotZoneFile = ''
ZONE_DIR="/var/lib/knot/zones"
ZONE_FILE="$ZONE_DIR/${ldomain}.zone"
# Create the directory if it doesn't exist
mkdir -p "$ZONE_DIR"
# Check if the zone file exists
if [ ! -f "$ZONE_FILE" ]; then
# Create the zone file with a basic SOA record
# Serial; Refresh; Retry; Expire; Negative Cache TTL;
echo "${ldomain}. 3600 SOA ns.${ldomain}. admin.${ldomain}. 1 86400 900 691200 3600" > "$ZONE_FILE"
echo "Created new zone file: $ZONE_FILE"
else
echo "Zone file already exists: $ZONE_FILE"
fi
# Ensure proper ownership and permissions
chown -R knot:knot "/var/lib/knot"
chmod 644 "$ZONE_FILE"
'';
# https://wiki.nixos.org/wiki/Prometheus
services.prometheus = {
enable = true;
exporters = {
# TODO: DNS, Kea, Knot, other exporters
node = {
enable = true;
enabledCollectors = [ "systemd" ];
};
};
scrapeConfigs = [
{
job_name = "node";
static_configs = [{
targets = [ "localhost:${toString config.services.prometheus.exporters.node.port}" ];
}];
}
];
};
# https://wiki.nixos.org/wiki/Grafana#Declarative_configuration
services.grafana = {
enable = true;
settings.server.http_port = 3001;
provision = {
enable = true;
datasources.settings.datasources = [
{
name = "Prometheus";
type = "prometheus";
url = "http://localhost:${toString config.services.prometheus.port}";
}
];
};
};
services.caddy = {
enable = true;
virtualHosts."grouter.${domain}".extraConfig = ''
reverse_proxy localhost:${toString config.services.grafana.settings.server.http_port}
tls internal
'';
};
# services.netdata.enable = true;
# Enable the X11 windowing system.
# You can disable this if you're only using the Wayland session.
services.xserver.enable = false;
# Enable the KDE Plasma Desktop Environment.
# Useful for debugging with wireshark.
- services.displayManager.sddm.enable = false;
- services.displayManager.sddm.wayland.enable = true;
- services.desktopManager.plasma6.enable = true;
+ hardware.graphics.enable = true;
+ services.displayManager.sddm.enable = enableDesktop;
+ services.displayManager.sddm.wayland.enable = enableDesktop;
+ services.desktopManager.plasma6.enable = enableDesktop;
# No need for audio in VM
services.pipewire.enable = false;
# VM services
services.qemuGuest.enable = true;
services.spice-vdagentd.enable = true;
services.openssh.enable = true;
services.openssh.settings.PasswordAuthentication = false;
services.openssh.settings.KbdInteractiveAuthentication = false;
security.sudo.wheelNeedsPassword = false;
users.groups = {
cazzzer = {
gid = 1000;
};
};
users.users.cazzzer = {
password = "";
openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPWgEzbEjbbu96MVQzkiuCrw+UGYAXN4sRe2zM6FVopq cazzzer@Yura-PC"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIApFeLVi3BOquL0Rt+gQK2CutNHaBDQ0m4PcGWf9Bc43 cazzzer@Yura-TPX13"
];
isNormalUser = true;
description = "Yura";
uid = 1000;
group = "cazzzer";
extraGroups = [ "wheel" "docker" "wireshark" ];
};
programs.firefox.enable = true;
programs.fish.enable = true;
programs.git.enable = true;
@@ -551,19 +65,20 @@ in
eza
fastfetch
fd
- kdePackages.filelight
kdePackages.kate
- kdePackages.yakuake
ldns
lsof
micro
mpv
+ openssl
ripgrep
rustscan
starship
tealdeer
+ transcrypt
waypipe
whois
+ wireguard-tools
];
# This value determines the NixOS release from which the default

133
hosts/router/dns.nix Normal file

@@ -0,0 +1,133 @@
{ config, lib, pkgs, ... }:
let
vars = import ./vars.nix;
domain = vars.domain;
ldomain = vars.ldomain;
sysdomain = vars.sysdomain;
ifs = vars.ifs;
alpinaDomains = [
"|"
"|nc."
"|sonarr."
"|radarr."
"|prowlarr."
"|qbit."
"|gitea."
"|traefik."
"|auth."
"||s3."
"|minio."
"|jellyfin."
"|whoami."
"|grafana."
"|influxdb."
"|uptime."
"|opnsense."
"|vpgen."
"|woodpecker."
"||pgrok."
"|sync."
];
in
{
# https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes
# For upstream quic dns
boot.kernel.sysctl."net.core.wmem_max" = 7500000;
boot.kernel.sysctl."net.core.rmem_max" = 7500000;
services.resolved.enable = false;
networking.resolvconf.enable = true;
networking.resolvconf.useLocalResolver = true;
services.adguardhome.enable = true;
services.adguardhome.mutableSettings = false;
# https://github.com/AdguardTeam/Adguardhome/wiki/Configuration
services.adguardhome.settings = {
dns = {
# Disable rate limit, default of 20 is too low
# https://github.com/AdguardTeam/AdGuardHome/issues/6726
ratelimit = 0;
bootstrap_dns = [ "1.1.1.1" "9.9.9.9" ];
upstream_dns = [
# Default upstreams
"quic://p0.freedns.controld.com"
"tls://one.one.one.one"
"tls://dns.quad9.net"
# Adguard uses upstream and not rewrite rules to resolve cname rewrites,
# and obviously my sysdomain entries don't exist in cloudflare.
"[/${sysdomain}/][::1]" # Sys domains to self (for cname rewrites)
"[/${ldomain}/][::1]:1053" # Local domains to Knot (ddns)
"[/home/][${ifs.lan.ulaPrefix}::250]" # .home domains to opnsense (temporary)
];
};
# https://adguard-dns.io/kb/general/dns-filtering-syntax/
user_rules = [
# DNS rewrites
"|grouter.${domain}^$dnsrewrite=${ifs.lan.ulaAddr}"
"|pve-1.${sysdomain}^$dnsrewrite=${ifs.lan.p4}.5"
"|pve-1.${sysdomain}^$dnsrewrite=${ifs.lan.ulaPrefix}::5:1"
"|pve-3.${sysdomain}^$dnsrewrite=${ifs.lan.p4}.7"
"|pve-3.${sysdomain}^$dnsrewrite=${ifs.lan.ulaPrefix}::7:1"
"|truenas.${sysdomain}^$dnsrewrite=${ifs.lan.p4}.10"
"|truenas.${sysdomain}^$dnsrewrite=${ifs.lan.ulaPrefix}::20d0:43ff:fec6:3192"
"|debbi.${sysdomain}^$dnsrewrite=${ifs.lan.p4}.11"
"|debbi.${sysdomain}^$dnsrewrite=${ifs.lan.ulaPrefix}::11:1"
"|etappi.${sysdomain}^$dnsrewrite=${ifs.lan.p4}.12"
"|etappi.${sysdomain}^$dnsrewrite=${ifs.lan.ulaPrefix}::12:1"
# Lab DNS rewrites
"||lab.${domain}^$dnsrewrite=etappi.${sysdomain}"
# Allowed exceptions
"@@||googleads.g.doubleclick.net"
]
# Alpina DNS rewrites
++ map (host: "${host}${domain}^$dnsrewrite=debbi.${sysdomain}") alpinaDomains;
};
services.knot.enable = true;
services.knot.settings = {
# server.listen = "0.0.0.0@1053";
server.listen = "::1@1053";
zone = [
{
domain = ldomain;
storage = "/var/lib/knot/zones";
file = "${ldomain}.zone";
acl = [ "allow_localhost_update" ];
}
];
acl = [
{
id = "allow_localhost_update";
address = [ "::1" "127.0.0.1" ];
action = [ "update" ];
}
];
};
# Ensure the zone file exists
system.activationScripts.knotZoneFile = ''
ZONE_DIR="/var/lib/knot/zones"
ZONE_FILE="$ZONE_DIR/${ldomain}.zone"
# Create the directory if it doesn't exist
mkdir -p "$ZONE_DIR"
# Check if the zone file exists
if [ ! -f "$ZONE_FILE" ]; then
# Create the zone file with a basic SOA record
# Serial; Refresh; Retry; Expire; Negative Cache TTL;
echo "${ldomain}. 3600 SOA ns.${ldomain}. admin.${ldomain}. 1 86400 900 691200 3600" > "$ZONE_FILE"
echo "Created new zone file: $ZONE_FILE"
else
echo "Zone file already exists: $ZONE_FILE"
fi
# Ensure proper ownership and permissions
chown -R knot:knot "/var/lib/knot"
chmod 644 "$ZONE_FILE"
'';
}

213
hosts/router/firewall.nix Normal file

@@ -0,0 +1,213 @@
{ config, lib, pkgs, ... }:
let
vars = import ./vars.nix;
links = vars.links;
ifs = vars.ifs;
pdFromWan = vars.pdFromWan;
nftIdentifiers = ''
define ZONE_WAN_IFS = { ${ifs.wan.name} }
define ZONE_LAN_IFS = {
${ifs.lan.name},
${ifs.lan10.name},
${ifs.lan20.name},
${ifs.lan30.name},
${ifs.lan40.name},
${ifs.lan50.name},
${ifs.wg0.name},
}
define OPNSENSE_NET6 = ${vars.extra.opnsense.net6}
define ZONE_LAN_EXTRA_NET6 = {
# TODO: reevaluate this statement
${ifs.lan20.net6}, # needed since packets can come in from wan on these addrs
$OPNSENSE_NET6,
}
define RFC1918 = { 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 }
define CLOUDFLARE_NET6 = {
# https://www.cloudflare.com/ips-v6
# TODO: figure out a better way to get addrs dynamically from url
# perhaps building a nixos module/package that fetches the ips?
2400:cb00::/32,
2606:4700::/32,
2803:f800::/32,
2405:b500::/32,
2405:8100::/32,
2a06:98c0::/29,
2c0f:f248::/32,
}
'';
in
{
networking.firewall.enable = false;
networking.nftables.enable = true;
# networking.nftables.ruleset = nftIdentifiers; #doesn't work because it's appended to the end
networking.nftables.tables.nat4 = {
family = "ip";
content = ''
${nftIdentifiers}
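# Static DNAT map: protocol . external port -> internal address . port (tcp/8006 is the Proxmox web UI).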
map port_forward {
type inet_proto . inet_service : ipv4_addr . inet_service
elements = {
tcp . 8006 : ${ifs.lan50.p4}.10 . 8006
}
}
chain prerouting {
# Initial step, accept by default
type nat hook prerouting priority dstnat; policy accept;
# Port forwarding
fib daddr type local dnat ip to meta l4proto . th dport map @port_forward
}
chain postrouting {
# Last step, accept by default
type nat hook postrouting priority srcnat; policy accept;
# Masquerade LAN addrs
oifname $ZONE_WAN_IFS ip saddr $RFC1918 masquerade
}
'';
};
# Optional IPv6 masquerading (big L if enabled, don't forget to allow forwarding)
networking.nftables.tables.nat6 = {
family = "ip6";
enable = false;
content = ''
${nftIdentifiers}
chain postrouting {
type nat hook postrouting priority srcnat; policy accept;
oifname $ZONE_WAN_IFS ip6 saddr fd00::/8 masquerade
}
'';
};
networking.nftables.tables.firewall = {
family = "inet";
content = ''
${nftIdentifiers}
define ALLOWED_TCP_PORTS = { ssh }
define ALLOWED_UDP_PORTS = { ${toString vars.ifs.wg0.listenPort} }
define ALLOWED_TCP_LAN_PORTS = { ssh, https }
define ALLOWED_UDP_LAN_PORTS = { bootps, dhcpv6-server, domain, https }
set port_forward_v6 {
type inet_proto . ipv6_addr . inet_service
elements = {
# syncthing on alpina
tcp . ${ifs.lan.p6}::11:1 . 22000 ,
udp . ${ifs.lan.p6}::11:1 . 22000 ,
}
}
set cloudflare_forward_v6 {
type ipv6_addr
elements = {
${ifs.lan.p6}::11:1,
}
}
chain input {
type filter hook input priority filter; policy drop;
# Drop router adverts from self
# peculiarity due to wan and lan20 being bridged
# TODO: figure out a less jank way to do this
iifname $ZONE_WAN_IFS ip6 saddr ${links.lanLL} icmpv6 type nd-router-advert log prefix "self radvt: " drop
# iifname $ZONE_WAN_IFS ip6 saddr ${links.lanLL} ip6 nexthdr icmpv6 log prefix "self icmpv6: " drop
# iifname $ZONE_WAN_IFS ip6 saddr ${links.lanLL} log prefix "self llv6: " drop
# iifname $ZONE_WAN_IFS ip6 saddr ${links.lanLL} log drop
# iifname $ZONE_LAN_IFS ip6 saddr ${links.wanLL} log drop
# Allow established and related connections
# All icmp stuff should (theoretically) be handled by ct related
# https://serverfault.com/a/632363
ct state established,related accept
# However, that doesn't happen for router advertisements from what I can tell
# TODO: more testing
# Allow ICMPv6 on local addrs
ip6 nexthdr icmpv6 ip6 saddr { fe80::/10, ${pdFromWan}0::/60 } accept
ip6 nexthdr icmpv6 ip6 daddr fe80::/10 accept # TODO: not sure if necessary
# Allow all traffic from loopback interface
iif lo accept
# Allow DHCPv6 traffic
# I thought dhcpv6-client traffic would be accepted by established/related,
# but apparently not.
ip6 daddr { fe80::/10, ff02::/16 } th dport { dhcpv6-client, dhcpv6-server } accept
# Global input rules
tcp dport $ALLOWED_TCP_PORTS accept
udp dport $ALLOWED_UDP_PORTS accept
# WAN zone input rules
iifname $ZONE_WAN_IFS jump zone_wan_input
# LAN zone input rules
# iifname $ZONE_LAN_IFS accept
iifname $ZONE_LAN_IFS jump zone_lan_input
ip6 saddr $ZONE_LAN_EXTRA_NET6 jump zone_lan_input
# log
}
chain forward {
type filter hook forward priority filter; policy drop;
# Allow established and related connections
ct state established,related accept
# WAN zone forward rules
iifname $ZONE_WAN_IFS jump zone_wan_forward
# LAN zone forward rules
iifname $ZONE_LAN_IFS jump zone_lan_forward
ip6 saddr $ZONE_LAN_EXTRA_NET6 jump zone_lan_forward
}
chain zone_wan_input {
# Allow specific stuff from WAN
}
chain zone_wan_forward {
# Port forwarding
ct status dnat accept
# Allowed IPv6 ports
meta l4proto . ip6 daddr . th dport @port_forward_v6 accept
# Allowed IPv6 from cloudflare
ip6 saddr $CLOUDFLARE_NET6 ip6 daddr @cloudflare_forward_v6 th dport https accept
}
chain zone_lan_input {
# Allow all ICMPv6 from LAN
ip6 nexthdr icmpv6 accept
# Allow all ICMP from LAN
ip protocol icmp accept
# Allow specific services from LAN
tcp dport $ALLOWED_TCP_LAN_PORTS accept
udp dport $ALLOWED_UDP_LAN_PORTS accept
}
chain zone_lan_forward {
# Allow port forwarded targets
# ct status dnat accept
# Allow all traffic from LAN to WAN, except ULAs
oifname $ZONE_WAN_IFS ip6 saddr fd00::/8 drop # Not sure if needed
oifname $ZONE_WAN_IFS accept;
# Allow traffic between LANs
oifname $ZONE_LAN_IFS accept
}
chain output {
# Accept anything out of self by default
type filter hook output priority filter; policy accept;
# NAT reflection
# oif lo ip daddr != 127.0.0.0/8 dnat ip to meta l4proto . th dport map @port_forward_v4
}
'';
};
}
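Opening another service to the internet follows the pattern of the syncthing entries above: add a matching element to port_forward_v6 (and, if the host is fronted by Cloudflare, to cloudflare_forward_v6). A minimal sketch, assuming a hypothetical service on port 8096 on the ::12:1 host (etappi in the Kea reservations); neither element exists in the actual config:

    # hypothetical additions to the sets defined in the firewall table above
    set port_forward_v6 {
      type inet_proto . ipv6_addr . inet_service
      elements = {
        tcp . ${ifs.lan.p6}::12:1 . 8096 ,   # expose the hypothetical service directly
      }
    }
    set cloudflare_forward_v6 {
      type ipv6_addr
      elements = {
        ${ifs.lan.p6}::12:1,                 # additionally allow HTTPS via Cloudflare
      }
    }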

166
hosts/router/glance.nix Normal file

@@ -0,0 +1,166 @@
{ config, lib, pkgs, ... }:
let
vars = import ./vars.nix;
domain = vars.domain;
in
{
# Glance dashboard
services.glance.enable = true;
services.glance.settings.pages = [
{
name = "Home";
# hideDesktopNavigation = true; # Uncomment if needed
columns = [
{
size = "small";
widgets = [
{
type = "calendar";
firstDayOfWeek = "monday";
}
{
type = "rss";
limit = 10;
collapseAfter = 3;
cache = "12h";
feeds = [
{ url = "https://rtk0c.pages.dev/index.xml"; }
{ url = "https://www.yegor256.com/rss.xml"; }
{ url = "https://selfh.st/rss/"; title = "selfh.st"; }
{ url = "https://ciechanow.ski/atom.xml"; }
{ url = "https://www.joshwcomeau.com/rss.xml"; title = "Josh Comeau"; }
{ url = "https://samwho.dev/rss.xml"; }
{ url = "https://ishadeed.com/feed.xml"; title = "Ahmad Shadeed"; }
];
}
{
type = "twitch-channels";
channels = [
"theprimeagen"
"j_blow"
"piratesoftware"
"cohhcarnage"
"christitustech"
"EJ_SA"
];
}
];
}
{
size = "full";
widgets = [
{
type = "group";
widgets = [
{ type = "hacker-news"; }
{ type = "lobsters"; }
];
}
{
type = "videos";
channels = [
"UCXuqSBlHAE6Xw-yeJA0Tunw" # Linus Tech Tips
"UCR-DXc1voovS8nhAvccRZhg" # Jeff Geerling
"UCsBjURrPoezykLs9EqgamOA" # Fireship
"UCBJycsmduvYEL83R_U4JriQ" # Marques Brownlee
"UCHnyfMqiRRG1u-2MsSQLbXA" # Veritasium
];
}
{
type = "group";
widgets = [
{
type = "reddit";
subreddit = "technology";
showThumbnails = true;
}
{
type = "reddit";
subreddit = "selfhosted";
showThumbnails = true;
}
];
}
];
}
{
size = "small";
widgets = [
{
type = "weather";
location = "San Jose, California, United States";
units = "metric";
hourFormat = "12h";
# hideLocation = true; # Uncomment if needed
}
{
type = "markets";
markets = [
{ symbol = "SPY"; name = "S&P 500"; }
{ symbol = "BTC-USD"; name = "Bitcoin"; }
{ symbol = "NVDA"; name = "NVIDIA"; }
{ symbol = "AAPL"; name = "Apple"; }
{ symbol = "MSFT"; name = "Microsoft"; }
];
}
{
type = "releases";
cache = "1d";
# token = "..."; # Uncomment and set if needed
repositories = [
"glanceapp/glance"
"go-gitea/gitea"
"immich-app/immich"
"syncthing/syncthing"
];
}
];
}
];
}
{
name = "Infrastructure";
columns = [
{
size = "small";
widgets = [
{
type = "server-stats";
servers = [
{
type = "local";
name = "Router";
mountpoints."/nix/store".hide = true;
}
];
}
];
}
{
size = "full";
widgets = [
{
type = "iframe";
title = "Grafana";
title-url = "/grafana/";
source = "/grafana/d-solo/rYdddlPWk/node-exporter-full?orgId=1&from=1747211119196&to=1747297519196&timezone=browser&var-datasource=PBFA97CFB590B2093&var-job=node&var-node=localhost:9100&var-diskdevices=%5Ba-z%5D%2B%7Cnvme%5B0-9%5D%2Bn%5B0-9%5D%2B%7Cmmcblk%5B0-9%5D%2B&refresh=1m&panelId=74&__feature.dashboardSceneSolo";
height = 400;
}
];
}
{
size = "small";
widgets = [
{
type = "dns-stats";
service = "adguard";
url = "http://localhost:${toString config.services.adguardhome.port}";
username = "";
password = "";
}
];
}
];
}
];
}
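Since services.glance.settings is plain Nix data, repeated widget entries could also be generated instead of written out by hand. A small sketch of that idea (not applied in this repo; mkFeed is a hypothetical helper), shown for the RSS widget:

    let
      mkFeed = url: { inherit url; };
    in {
      type = "rss";
      limit = 10;
      collapseAfter = 3;
      cache = "12h";
      # titles can still be attached where wanted, e.g. (mkFeed url) // { title = "..."; }
      feeds = map mkFeed [
        "https://rtk0c.pages.dev/index.xml"
        "https://samwho.dev/rss.xml"
      ];
    }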

187
hosts/router/ifconfig.nix Normal file

@@ -0,0 +1,187 @@
{ config, lib, pkgs, ... }:
let
vars = import ./vars.nix;
links = vars.links;
ifs = vars.ifs;
pdFromWan = vars.pdFromWan;
ulaPrefix = vars.ulaPrefix;
mkVlanDev = { id, name }: {
netdevConfig = {
Kind = "vlan";
Name = name;
};
vlanConfig.Id = id;
};
mkLanConfig = ifObj: {
matchConfig.Name = ifObj.name;
networkConfig = {
IPv4Forwarding = true;
IPv6SendRA = true;
Address = [ ifObj.addr4Sized ifObj.addr6Sized ifObj.ulaAddrSized ];
};
ipv6Prefixes = [
{
Prefix = ifObj.net6;
Assign = true;
# Token = [ "static::1" "eui64" ];
Token = [ "static:${ifObj.ip6Token}" ];
}
{
Prefix = ifObj.ulaNet;
Assign = true;
Token = [ "static:${ifObj.ulaToken}" ];
}
];
ipv6RoutePrefixes = [ { Route = "${ulaPrefix}::/48"; } ];
ipv6SendRAConfig = {
# don't manage the att box subnet
# should work fine either way though
Managed = (ifObj.p6 != "${pdFromWan}0");
OtherInformation = (ifObj.p6 != "${pdFromWan}0");
EmitDNS = true;
DNS = [ ifObj.ulaAddr ];
};
};
in
{
# By default, Linux will respond to ARP requests that belong to other interfaces.
# Normally this isn't a problem, but it causes issues
# since my WAN and LAN20 are technically bridged.
# https://networkengineering.stackexchange.com/questions/83071/why-linux-answers-arp-requests-for-ips-that-belong-to-different-network-interfac
boot.kernel.sysctl."net.ipv4.conf.default.arp_filter" = 1;
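# A quick, hypothetical sanity check after a rebuild (not part of this module):
#   sysctl net.ipv4.conf.default.arp_filter   # expected to report "= 1"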
# It is impossible to do multiple prefix requests with networkd,
# so I use dhcpcd for this
# https://github.com/systemd/systemd/issues/22571
# https://github.com/systemd/systemd/issues/22571#issuecomment-2094905496
# https://gist.github.com/csamsel/0f8cca3b2e64d7e4cc47819ec5ba9396
networking.dhcpcd.enable = true;
networking.dhcpcd.allowInterfaces = [ ifs.wan.name ];
networking.dhcpcd.extraConfig = ''
debug
nohook resolv.conf, yp, hostname, ntp
interface ${ifs.wan.name}
ipv6only
duid
ipv6rs
dhcp6
# option rapid_commit
# DHCPv6 addr
ia_na
# DHCPv6 Prefix Delegation
# request the leases just for routing (so that the att box knows we're here)
# actual ip assignments are static, based on $pdFromWan
ia_pd 1/${ifs.lan.net6} -
ia_pd 10/${ifs.lan10.net6} -
# ia_pd 20/${pdFromWan}d::/64 - # for opnsense (legacy services)
ia_pd 30/${ifs.lan30.net6} -
ia_pd 40/${ifs.lan40.net6} -
ia_pd 50/${ifs.lan50.net6} -
ia_pd 100/${pdFromWan}9::/64 - # for vpn stuff
# ia_pd 8 -
# the leases can be assigned to the interfaces,
# but this doesn't play well with networkd
# ia_pd 1 ${ifs.lan.name}/0
# ia_pd 2 ${ifs.lan10.name}/0
# ia_pd 3 ${ifs.lan20.name}/0
'';
networking.useNetworkd = true;
systemd.network.enable = true;
systemd.network = {
# Global options
config.networkConfig = {
IPv4Forwarding = true;
IPv6Forwarding = true;
};
# This is applied by udev, not networkd
# https://nixos.wiki/wiki/Systemd-networkd
# https://nixos.org/manual/nixos/stable/#sec-rename-ifs
links = {
"10-wan" = {
matchConfig.PermanentMACAddress = links.wanMAC;
linkConfig.Name = ifs.wan.name;
};
"10-lan" = {
matchConfig.PermanentMACAddress = links.lanMAC;
linkConfig.Name = ifs.lan.name;
};
};
netdevs = {
"10-vlan10" = mkVlanDev { id = 10; name = ifs.lan10.name; };
"10-vlan20" = mkVlanDev { id = 20; name = ifs.lan20.name; };
"10-vlan30" = mkVlanDev { id = 30; name = ifs.lan30.name; };
"10-vlan40" = mkVlanDev { id = 40; name = ifs.lan40.name; };
"10-vlan50" = mkVlanDev { id = 50; name = ifs.lan50.name; };
};
networks = {
"10-wan" = {
matchConfig.Name = ifs.wan.name;
# make routing on this interface a dependency for network-online.target
linkConfig.RequiredForOnline = "routable";
networkConfig = {
# start a DHCP Client for IPv4 Addressing/Routing
# DHCP = "ipv4";
# accept Router Advertisements for Stateless IPv6 Autoconfiguration (SLAAC)
# let dhcpcd handle this
Address = [ ifs.wan.addr4Sized ];
IPv6AcceptRA = false;
KeepConfiguration = true;
};
routes = [
{ Gateway = ifs.wan.gw4; }
];
};
"20-lan" = (mkLanConfig ifs.lan) // {
vlan = [
ifs.lan10.name
ifs.lan20.name
ifs.lan30.name
ifs.lan40.name
ifs.lan50.name
];
routes = vars.extra.opnsense.routes;
};
"30-vlan10" = mkLanConfig ifs.lan10;
"30-vlan20" = mkLanConfig ifs.lan20;
"30-vlan30" = mkLanConfig ifs.lan30;
"30-vlan40" = mkLanConfig ifs.lan40;
"30-vlan50" = mkLanConfig ifs.lan50;
};
};
# For some reason, the interfaces stop receiving router solicitations after a while.
# Regular router adverts still get sent out at intervals, but this breaks dhcp6 clients.
# Restarting networkd makes it work again; I have no clue why.
# This is jank af, but I've tried a bunch of other stuff with no success
# and I'm giving up (for now).
systemd.timers."restart-networkd" = {
wantedBy = [ "timers.target" ];
timerConfig = {
OnBootSec = "1m";
OnUnitActiveSec = "1m";
Unit = "restart-networkd.service";
};
};
systemd.services."restart-networkd" = {
script = ''
set -eu
${pkgs.systemd}/bin/systemctl restart systemd-networkd
'';
serviceConfig = {
Type = "oneshot";
User = "root";
};
};
}
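For reference, the two helpers at the top expand into ordinary systemd-networkd units. A sketch of what "10-vlan10" evaluates to, using the names from vars.nix (illustrative, not an extra definition):

    systemd.network.netdevs."10-vlan10" = {
      netdevConfig = {
        Kind = "vlan";
        Name = "lan.10";   # ifs.lan10.name
      };
      vlanConfig.Id = 10;
    };

The matching "30-vlan10" network from mkLanConfig then assigns the interface its IPv4, IPv6, and ULA addresses and announces the two prefixes via router advertisements, as written out above.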

183
hosts/router/kea.nix Normal file

@@ -0,0 +1,183 @@
{ config, lib, pkgs, ... }:
let
vars = import ./vars.nix;
ldomain = vars.ldomain;
ifs = vars.ifs;
mkDhcp4Subnet = id: ifObj: {
id = id;
subnet = ifObj.net4;
pools = [ { pool = "${ifObj.p4}.100 - ${ifObj.p4}.199"; } ];
ddns-qualifying-suffix = "4.${ifObj.domain}";
option-data = [
{ name = "routers"; data = ifObj.addr4; }
{ name = "domain-name-servers"; data = ifObj.addr4; }
{ name = "domain-name"; data = "4.${ifObj.domain}"; }
];
};
mkDhcp6Subnet = id: ifObj: {
id = id;
interface = ifObj.name;
subnet = ifObj.net6;
rapid-commit = true;
pools = [ { pool = "${ifObj.p6}::1:1000/116"; } ];
ddns-qualifying-suffix = "6.${ifObj.domain}";
option-data = [
{ name = "domain-search"; data = "6.${ifObj.domain}"; }
];
};
# Reservations added to Kea
reservations.lan.v4.reservations = [
{
hw-address = "64:66:b3:78:9c:09";
hostname = "openwrt";
ip-address = "${ifs.lan.p4}.2";
}
{
hw-address = "40:86:cb:19:9d:70";
hostname = "dlink-switchy";
ip-address = "${ifs.lan.p4}.3";
}
{
hw-address = "6c:cd:d6:af:4f:6f";
hostname = "netgear-switchy";
ip-address = "${ifs.lan.p4}.4";
}
{
hw-address = "74:d4:35:1d:0e:80";
hostname = "pve-1";
ip-address = "${ifs.lan.p4}.5";
}
{
hw-address = "00:25:90:f3:d0:e0";
hostname = "pve-2";
ip-address = "${ifs.lan.p4}.6";
}
{
hw-address = "a8:a1:59:d0:57:87";
hostname = "pve-3";
ip-address = "${ifs.lan.p4}.7";
}
{
hw-address = "22:d0:43:c6:31:92";
hostname = "truenas";
ip-address = "${ifs.lan.p4}.10";
}
{
hw-address = "1e:d5:56:ec:c7:4a";
hostname = "debbi";
ip-address = "${ifs.lan.p4}.11";
}
{
hw-address = "ee:42:75:2e:f1:a6";
hostname = "etappi";
ip-address = "${ifs.lan.p4}.12";
}
];
reservations.lan.v6.reservations = [
{
duid = "00:03:00:01:64:66:b3:78:9c:09";
hostname = "openwrt";
ip-addresses = [ "${ifs.lan.p6}::1:2" ];
}
{
duid = "00:01:00:01:2e:c0:63:23:22:d0:43:c6:31:92";
hostname = "truenas";
ip-addresses = [ "${ifs.lan.p6}::10:1" ];
}
{
duid = "00:02:00:00:ab:11:09:41:25:21:32:71:e3:77";
hostname = "debbi";
ip-addresses = [ "${ifs.lan.p6}::11:1" ];
}
{
duid = "00:02:00:00:ab:11:6b:56:93:72:0b:3c:84:11";
hostname = "etappi";
ip-addresses = [ "${ifs.lan.p6}::12:1" ];
}
];
reservations.lan20.v4.reservations = [
{
# Router
hw-address = "1c:3b:f3:da:5f:cc";
hostname = "archer-ax3000";
ip-address = "${ifs.lan20.p4}.2";
}
{
# Printer
hw-address = "30:cd:a7:c5:40:71";
hostname = "SEC30CDA7C54071";
ip-address = "${ifs.lan20.p4}.9";
}
{
# 3D Printer
hw-address = "20:f8:5e:ff:ae:5f";
hostname = "GS_ffae5f";
ip-address = "${ifs.lan20.p4}.11";
}
{
hw-address = "70:85:c2:d8:87:3f";
hostname = "Yura-PC";
ip-address = "${ifs.lan20.p4}.40";
}
];
in
{
services.kea.dhcp4.enable = true;
services.kea.dhcp4.settings = {
interfaces-config.interfaces = [
ifs.lan.name
ifs.lan10.name
ifs.lan20.name
ifs.lan30.name
ifs.lan40.name
ifs.lan50.name
];
dhcp-ddns.enable-updates = true;
ddns-qualifying-suffix = "4.default.${ldomain}";
subnet4 = [
((mkDhcp4Subnet 1 ifs.lan) // reservations.lan.v4)
(mkDhcp4Subnet 10 ifs.lan10)
((mkDhcp4Subnet 20 ifs.lan20) // reservations.lan20.v4)
(mkDhcp4Subnet 30 ifs.lan30)
(mkDhcp4Subnet 40 ifs.lan40)
(mkDhcp4Subnet 50 ifs.lan50)
];
};
services.kea.dhcp6.enable = true;
services.kea.dhcp6.settings = {
interfaces-config.interfaces = [
ifs.lan.name
ifs.lan10.name
# ifs.lan20.name # Managed by Att box
ifs.lan30.name
ifs.lan40.name
ifs.lan50.name
];
# TODO: https://kea.readthedocs.io/en/latest/arm/ddns.html#dual-stack-environments
dhcp-ddns.enable-updates = true;
ddns-qualifying-suffix = "6.default.${ldomain}";
subnet6 = [
((mkDhcp6Subnet 1 ifs.lan) // reservations.lan.v6)
(mkDhcp6Subnet 10 ifs.lan10)
(mkDhcp6Subnet 30 ifs.lan30)
(mkDhcp6Subnet 40 ifs.lan40)
(mkDhcp6Subnet 50 ifs.lan50)
];
};
services.kea.dhcp-ddns.enable = true;
services.kea.dhcp-ddns.settings = {
forward-ddns.ddns-domains = [
{
name = "${ldomain}.";
dns-servers = [ { ip-address = "::1"; port = 1053; } ];
}
];
};
}
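To make the subnet helpers concrete, mkDhcp4Subnet 10 ifs.lan10 evaluates to roughly the following, with the values coming from vars.nix (shown for illustration only):

    {
      id = 10;
      subnet = "10.17.10.0/24";
      pools = [ { pool = "10.17.10.100 - 10.17.10.199"; } ];
      ddns-qualifying-suffix = "4.lab.l.cazzzer.com";
      option-data = [
        { name = "routers"; data = "10.17.10.1"; }
        { name = "domain-name-servers"; data = "10.17.10.1"; }
        { name = "domain-name"; data = "4.lab.l.cazzzer.com"; }
      ];
    }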

2
hosts/router/private.nix Normal file

@@ -0,0 +1,2 @@
U2FsdGVkX1/98w32OE1ppwT0I5A3UOTKCLJfvk+TQdrbf0TLfYNZ9TC9n8cH2hC9
ObKVuFlOLwHlzeBy7MXaLg==


@@ -0,0 +1,5 @@
age-encryption.org/v1
-> ssh-ed25519 D2MY/A Kj69kavxx+ATNHP5pX0JtGggU76f9uRwkZp2HbjwiWc
SbU3jIcQzUzaQjRHzVSoW1WKiUj+1ijbkUKqVb406fY
--- vMV0TcchFvxw1xetQQZ0xVi2KwjLFRfZBM1gl7BGbGI
[binary age ciphertext omitted]


@@ -0,0 +1,5 @@
age-encryption.org/v1
-> ssh-ed25519 D2MY/A cRVo1AetNYKsb28kGpe6mVpoCyfNcRibeBYhJuXbbEY
k8XL4XEv4FM6sfU/TOFTg4vlKm61409No/TpCEjTnSk
--- mT9w1vnx2FrzWw+Zt1wV6UJ+mjHTizrUPVeaTisYQ74
[binary age ciphertext omitted]


@@ -0,0 +1,5 @@
age-encryption.org/v1
-> ssh-ed25519 D2MY/A Xg7XTl/qJqVqvXsHNKcoICq74DeOlquN1CEn1PwxlVY
FqmPdDgmuUrwZPLW56RhW8o1VXr5l2Xms6IVebpi7bA
--- nLT/bC55EvoXK6f7DYbMhD3I8Z122bxeGVw1PCds2IM
[binary age ciphertext omitted]

85
hosts/router/services.nix Normal file

@@ -0,0 +1,85 @@
{ config, lib, pkgs, ... }:
let
vars = import ./vars.nix;
domain = vars.domain;
in
{
# vnStat for tracking network interface stats
services.vnstat.enable = true;
# https://wiki.nixos.org/wiki/Prometheus
services.prometheus = {
enable = true;
exporters = {
# TODO: DNS, Kea, Knot, other exporters
node = {
enable = true;
enabledCollectors = [ "systemd" ];
};
};
scrapeConfigs = [
{
job_name = "node";
static_configs = [{
targets = [ "localhost:${toString config.services.prometheus.exporters.node.port}" ];
}];
}
];
};
# https://wiki.nixos.org/wiki/Grafana#Declarative_configuration
services.grafana = {
enable = true;
settings = {
security.allow_embedding = true;
server = {
http_port = 3001;
domain = "grouter.${domain}";
root_url = "https://%(domain)s/grafana/";
serve_from_sub_path = true;
};
};
provision = {
enable = true;
datasources.settings.datasources = [
{
name = "Prometheus";
type = "prometheus";
url = "http://localhost:${toString config.services.prometheus.port}";
}
];
};
};
secrix.system.secrets.cf-api-key.encrypted.file = ./secrets/cf-api-key.age;
systemd.services.caddy.serviceConfig.EnvironmentFile = config.secrix.system.secrets.cf-api-key.decrypted.path;
services.caddy = {
enable = true;
package = pkgs.caddy.withPlugins {
plugins = [ "github.com/caddy-dns/cloudflare@v0.2.1" ];
hash = "sha256-Gsuo+ripJSgKSYOM9/yl6Kt/6BFCA6BuTDvPdteinAI=";
};
virtualHosts."grouter.${domain}".extraConfig = ''
encode
tls {
dns cloudflare {env.CF_API_KEY}
resolvers 1.1.1.1
}
@grafana path /grafana /grafana/*
handle @grafana {
reverse_proxy localhost:${toString config.services.grafana.settings.server.http_port}
}
redir /adghome /adghome/
handle_path /adghome/* {
reverse_proxy localhost:${toString config.services.adguardhome.port}
basic_auth {
Bob $2a$14$HsWmmzQTN68K3vwiRAfiUuqIjKoXEXaj9TOLUtG2mO1vFpdovmyBy
}
}
handle /* {
reverse_proxy localhost:${toString config.services.glance.settings.server.port}
}
'';
};
}
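The decrypted cf-api-key secret is consumed as a systemd EnvironmentFile, so its plaintext is expected to be a single env-style assignment (CF_API_KEY=<cloudflare token>) that Caddy then reads via {env.CF_API_KEY}. A hedged sketch of how an additional env-style secret could be wired the same way (some-other-env is hypothetical):

    systemd.services.caddy.serviceConfig.EnvironmentFile = [
      config.secrix.system.secrets.cf-api-key.decrypted.path
      # config.secrix.system.secrets.some-other-env.decrypted.path   # hypothetical second secret
    ];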

139
hosts/router/vars.nix Normal file

@@ -0,0 +1,139 @@
let
private = import ./private.nix;
mkIfConfig = {
name_,
domain_,
p4_, # /24
p4Size_ ? 24,
p6_, # /64
p6Size_ ? 64,
ulaPrefix_, # /64
ulaSize_ ? 64,
token ? 1,
ip6Token_ ? "::${toString token}",
ulaToken_ ? "::${toString token}",
}: rec {
name = name_;
domain = domain_;
p4 = p4_;
p4Size = p4Size_;
net4 = "${p4}.0/${toString p4Size}";
addr4 = "${p4}.${toString token}";
addr4Sized = "${addr4}/${toString p4Size}";
p6 = p6_;
p6Size = p6Size_;
net6 = "${p6}::/${toString p6Size}";
ip6Token = ip6Token_;
addr6 = "${p6}${ip6Token}";
addr6Sized = "${addr6}/${toString p6Size}";
ulaPrefix = ulaPrefix_;
ulaSize = ulaSize_;
ulaNet = "${ulaPrefix}::/${toString ulaSize}";
ulaToken = ulaToken_;
ulaAddr = "${ulaPrefix}${ulaToken}";
ulaAddrSized = "${ulaAddr}/${toString ulaSize}";
};
in
rec {
pubkey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFobB87yYVwhuYrA+tfztLuks3s9jZOqEFktwGw1mo83 root@grouter";
domain = "cazzzer.com";
ldomain = "l.${domain}";
sysdomain = "sys.${domain}";
links = {
wanMAC = "bc:24:11:4f:c9:c4";
lanMAC = "bc:24:11:83:d8:de";
wanLL = "fe80::be24:11ff:fe4f:c9c4";
lanLL = "fe80::be24:11ff:fe83:d8de";
};
p4 = "10.17"; # .0.0/16
pdFromWan = private.pdFromWan; # ::/60
ulaPrefix = "fdab:07d3:581d"; # ::/48
ifs = rec {
wan = rec {
name = "wan";
addr4 = "192.168.1.61";
addr4Sized = "${addr4}/24";
gw4 = "192.168.1.254";
};
lan = mkIfConfig {
name_ = "lan";
domain_ = "lan.${ldomain}";
p4_ = "${p4}.1"; # .0/24
p6_ = "${pdFromWan}f"; # ::/64
ulaPrefix_ = "${ulaPrefix}:0001"; # ::/64
};
lan10 = mkIfConfig {
name_ = "${lan.name}.10";
domain_ = "lab.${ldomain}";
p4_ = "${p4}.10"; # .0/24
p6_ = "${pdFromWan}e"; # ::/64
ulaPrefix_ = "${ulaPrefix}:0010"; # ::/64
};
lan20 = mkIfConfig {
name_ = "${lan.name}.20";
domain_ = "life.${ldomain}";
p4_ = "${p4}.20"; # .0/24
p6_ = "${pdFromWan}0"; # ::/64 managed by Att box
ulaPrefix_ = "${ulaPrefix}:0020"; # ::/64
ip6Token_ = "::1:1"; # override ipv6 for lan20, since the Att box uses ::1 here
};
lan30 = mkIfConfig {
name_ = "${lan.name}.30";
domain_ = "iot.${ldomain}";
p4_ = "${p4}.30"; # .0/24
p6_ = "${pdFromWan}c"; # ::/64
ulaPrefix_ = "${ulaPrefix}:0030"; # ::/64
};
lan40 = mkIfConfig {
name_ = "${lan.name}.40";
domain_ = "kube.${ldomain}";
p4_ = "${p4}.40"; # .0/24
p6_ = "${pdFromWan}b"; # ::/64
ulaPrefix_ = "${ulaPrefix}:0040"; # ::/64
};
lan50 = mkIfConfig {
name_ = "${lan.name}.50";
domain_ = "prox.${ldomain}";
p4_ = "${p4}.50"; # .0/24
p6_ = "${pdFromWan}a"; # ::/64
ulaPrefix_ = "${ulaPrefix}:0050"; # ::/64
};
wg0 = mkIfConfig {
name_ = "wg0";
domain_ = "wg0.${ldomain}";
p4_ = "10.18.16"; # .0/24
p6_ = "${pdFromWan}9:0:6"; # ::/96
p6Size_ = 96;
ulaPrefix_ = "${ulaPrefix}:0100:0:6"; # ::/96
ulaSize_ = 96;
} // {
listenPort = 51944;
};
};
extra = {
opnsense = rec {
addr4 = "${ifs.lan.p4}.250";
ulaAddr = "${ifs.lan.ulaPrefix}::250";
p6 = "${pdFromWan}d";
net6 = "${p6}::/64";
# VPN routes on opnsense
routes = [
{
Destination = "10.6.0.0/24";
Gateway = addr4;
}
{
Destination = "10.18.0.0/20";
Gateway = addr4;
}
{
Destination = net6;
Gateway = ulaAddr;
}
];
};
};
}
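To make mkIfConfig concrete, ifs.lan evaluates to roughly the shape below; the delegated global prefix is shown symbolically since pdFromWan lives in the encrypted private.nix:

    {
      name = "lan";
      domain = "lan.l.cazzzer.com";
      net4 = "10.17.1.0/24";
      addr4Sized = "10.17.1.1/24";
      net6 = "<pdFromWan>f::/64";            # placeholder for the delegated /64
      addr6Sized = "<pdFromWan>f::1/64";
      ulaNet = "fdab:07d3:581d:0001::/64";
      ulaAddrSized = "fdab:07d3:581d:0001::1/64";
      # ...plus p4, p4Size, p6, p6Size, addr4, addr6, ip6Token, ulaToken, ulaAddr
    }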


@@ -0,0 +1,72 @@
{ config, lib, pkgs, ... }:
let
vars = import ./vars.nix;
wg0 = vars.ifs.wg0;
peerIps = ifObj: token: [
"${ifObj.p4}.${toString token}/32"
"${ifObj.p6}:${toString token}:0/112"
"${ifObj.ulaPrefix}:${toString token}:0/112"
];
mkWg0Peer = token: publicKey: {
allowedIPs = peerIps wg0 token;
inherit publicKey;
pskEnabled = true;
};
wg0Peers = {
"Yura-TPX13" = mkWg0Peer 100 "iFdsPYrpw7vsFYYJB4SOTa+wxxGVcmYp9CPxe0P9ewA=";
"Yura-Pixel7Pro" = mkWg0Peer 101 "GPdXxjvnhsyufd2QX/qsR02dinUtPnnxrE66oGt/KyA=";
};
peerSecretName = name: "wg0-peer-${name}-psk";
secrets = config.secrix.services.systemd-networkd.secrets;
in
{
secrix.services.systemd-networkd.secrets = let
pskPeers = lib.attrsets.filterAttrs (name: peer: peer.pskEnabled) wg0Peers;
mapPeer = name: peer: {
name = peerSecretName name;
value.encrypted.file = ./secrets/wireguard/${peerSecretName name}.age;
};
peerSecrets = lib.attrsets.mapAttrs' mapPeer pskPeers;
allSecrets = {
wg0-private-key.encrypted.file = ./secrets/wireguard/wg0-private-key.age;
} // peerSecrets;
setSecretOwnership = name: value: value // {
decrypted.user = "systemd-network";
decrypted.group = "systemd-network";
};
in lib.attrsets.mapAttrs setSecretOwnership allSecrets;
systemd.network.netdevs = {
"10-wg0" = {
netdevConfig = {
Kind = "wireguard";
Name = wg0.name;
};
wireguardConfig = {
PrivateKeyFile = secrets.wg0-private-key.decrypted.path;
ListenPort = wg0.listenPort;
};
wireguardPeers = map (peer: {
AllowedIPs = lib.strings.concatStringsSep "," peer.value.allowedIPs;
PublicKey = peer.value.publicKey;
PresharedKeyFile = if peer.value.pskEnabled then secrets."${peerSecretName peer.name}".decrypted.path else null;
}) (lib.attrsToList wg0Peers);
};
};
systemd.network.networks = {
"10-wg0" = {
matchConfig.Name = "wg0";
networkConfig = {
IPv4Forwarding = true;
IPv6SendRA = false;
Address = [ wg0.addr4Sized wg0.addr6Sized wg0.ulaAddrSized ];
};
};
};
}
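For reference, mkWg0Peer 100 "iFdsPYrp..." evaluates to the record below (global prefix again shown symbolically); each entry then becomes a wireguardPeers element, with the decrypted PSK path attached when pskEnabled is set:

    {
      allowedIPs = [
        "10.18.16.100/32"
        "<pdFromWan>9:0:6:100:0/112"
        "fdab:07d3:581d:0100:0:6:100:0/112"
      ];
      publicKey = "iFdsPYrpw7vsFYYJB4SOTa+wxxGVcmYp9CPxe0P9ewA=";
      pskEnabled = true;
    }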


@@ -7,9 +7,9 @@
 {
   imports =
     [ # Include the results of the hardware scan.
-      # ./hardware-configuration-vm.nix
+      # ./hardware-configuration.nix
     ];

-  mods.kb-input.enable = false;
+  opts.kb-input.enable = false;
   # Bootloader.
   boot.loader.systemd-boot.enable = true;
@@ -47,34 +47,13 @@
   services.flatpak.enable = true;

   # VM services
-  services.cloud-init.enable = true;
+  # services.cloud-init.enable = false;
   # services.cloud-init.network.enable = false;
   services.qemuGuest.enable = true;
   services.spice-vdagentd.enable = true;

-  services.openssh.enable = true;
-  services.openssh.settings.PasswordAuthentication = false;
-  services.openssh.settings.KbdInteractiveAuthentication = false;
-
   security.sudo.wheelNeedsPassword = false;

-  users.groups = {
-    cazzzer = {
-      gid = 1000;
-    };
-  };
-  users.users.cazzzer = {
-    password = "";
-    openssh.authorizedKeys.keys = [
-      "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPWgEzbEjbbu96MVQzkiuCrw+UGYAXN4sRe2zM6FVopq cazzzer@Yura-PC"
-      "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIApFeLVi3BOquL0Rt+gQK2CutNHaBDQ0m4PcGWf9Bc43 cazzzer@Yura-TPX13"
-    ];
-    isNormalUser = true;
-    description = "Yura";
-    uid = 1000;
-    group = "cazzzer";
-    extraGroups = [ "wheel" "docker" "wireshark" ];
-  };

   # Install firefox.
   programs.firefox.enable = true;
   programs.fish.enable = true;
@@ -113,9 +92,11 @@
     ldns
     micro
     mpv
+    openssl
     ripgrep
     starship
     tealdeer
+    transcrypt
     waypipe
     whois
     zfs


@@ -1,11 +0,0 @@
{ ... }:
{
# boot.kernelParams = [ "console=tty0" ];
proxmox.qemuConf.bios = "ovmf";
proxmox.qemuExtraConf = {
machine = "q35";
# efidisk0 = "local-lvm:vm-9999-disk-1";
cpu = "host";
};
}


@@ -1,6 +1,6 @@
 { ... }: {
   imports = [
-    ./mods
+    ./opts
     ./workarounds
   ];
 }


@@ -1,5 +1,5 @@
 { ... }: {
   imports = [
-    ./kb-input.nix
+    ./kb-input
   ];
 }


@@ -4,16 +4,26 @@
   lib,
   ...
 }: let
-  cfg = config.mods.kb-input;
+  cfg = config.opts.kb-input;
 in {
   options = {
-    mods.kb-input = {
+    opts.kb-input = {
       enable = lib.mkEnableOption "input method and custom keyboard layout";
+      enableMinimak = lib.mkOption {
+        type = lib.types.bool;
+        default = true;
+        description = "Enable Minimak keyboard layout";
+      };
+      enableFcitx = lib.mkOption {
+        type = lib.types.bool;
+        default = true;
+        description = "Enable Fcitx5 input method";
+      };
     };
   };

   config = lib.mkIf cfg.enable {
-    services.xserver.xkb.extraLayouts = {
+    services.xserver.xkb.extraLayouts = lib.mkIf cfg.enableMinimak {
       minimak-4 = {
         description = "English (US, Minimak-4)";
         languages = [ "eng" ];
@@ -31,9 +41,9 @@ in {
       };
     };

-    i18n.inputMethod = {
+    i18n.inputMethod = lib.mkIf cfg.enableFcitx {
+      type = "fcitx5";
       enable = true;
-      type = "fcitx5";
       fcitx5.waylandFrontend = true;
       fcitx5.plasma6Support = true;
       fcitx5.addons = [ pkgs.fcitx5-mozc ];
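With the two new toggles, a host can enable just the pieces it needs; a usage sketch (hypothetical host config):

    opts.kb-input.enable = true;
    opts.kb-input.enableMinimak = false;   # keep the stock layout, input method only
    opts.kb-input.enableFcitx = true;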

12
renovate.json Normal file

@@ -0,0 +1,12 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": [
"config:recommended"
],
"lockFileMaintenance": {
"enabled": true
},
"nix": {
"enabled": true
}
}

18
users/cazzzer/default.nix Normal file

@@ -0,0 +1,18 @@
{ config, lib, pkgs, ... }: {
users.groups.cazzzer.gid = 1000;
users.users.cazzzer = {
uid = 1000;
isNormalUser = true;
description = "Yura";
group = "cazzzer";
extraGroups = [ "wheel" ]
++ lib.optionals config.networking.networkmanager.enable [ "networkmanager" ]
++ lib.optionals config.virtualisation.docker.enable [ "docker" ]
++ lib.optionals config.programs.wireshark.enable [ "wireshark" ]
;
openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE02AhJIZtrtZ+5sZhna39LUUCEojQzmz2BDWguT9ZHG yuri@tati.sh"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHczlipzGWv8c6oYwt2/9ykes5ElfneywDXBTOYbfSfn Pixel7Pro"
];
};
}
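For reference, lib.optionals cond list returns list when cond is true and [ ] otherwise, so on a host with only Docker enabled the expression above reduces to (illustrative):

    extraGroups = [ "wheel" ] ++ [ ] ++ [ "docker" ] ++ [ ];   # == [ "wheel" "docker" ]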