The Importance of Backups: Why Failing to Create or Test Them Leads to Regret

Data loss is an inevitable reality in the digital age. It is not a matter of “if,” but “when.” Whether you’re managing critical infrastructure for a large enterprise or simply storing personal documents, photos, and legal files on your own computer, the risk of data loss is ever-present. Despite this, many people and organizations fail to prioritize backups until it is too late.

Throughout my career as an IT specialist, I have witnessed this problem repeatedly. The pattern is always the same: individuals and companies lose data, panic ensues, and attempts are made to recover what could have been easily protected with minimal cost and effort.

Data loss

On a daily basis, I encounter numerous instances of data loss across various sectors. Companies pay for bare-metal servers or cloud compute instances to run their production workloads, but when these systems crash, experience filesystem corruption, or are compromised by security breaches, the response is typically one of outrage. Support tickets are submitted with complaints that their data is lost, with no backups, no snapshots, and no contingency plan in place.

This issue is not exclusive to large enterprises. Consider personal devices. How many individuals store years of important tax records, legal documents, irreplaceable family photos, or critical work files on their laptops, with no backup strategy whatsoever? Hardware failure, malware attacks, accidental deletion, theft, and simple human error can result in irreversible data loss in an instant.

Some of the cases I handled…

The cases I handled professionally can be truly staggering. For example, I worked with a client paying tens of thousands of dollars per month for a bare-metal server to host highly sensitive data, yet they had no backup strategy in place. When hardware failure inevitably occurred, the result was outrage, threats of legal action, and attempts to assign blame. Unfortunately, without a backup policy, disaster recovery plan, and monitoring system in place, these situations are often beyond repair.

In this particular instance, I worked closely with the client to establish a comprehensive backup procedure. This involved selecting an appropriate backup solution adapted to their infrastructure, setting up regular backups, and ensuring the backups were stored securely and off-site. Additionally, we implemented a monitoring system to alert the client to any potential issues before they became critical. The result is a much more resilient system in which the risk of data loss is drastically reduced, and the client now has a clear disaster recovery plan in place.

To make matters worse, I’ve witnessed scenarios where the financial cost of data loss has reached tens of thousands of dollars per day. This is not a matter of insufficient resources. It’s a matter of misplaced priorities and negligence. For a fraction of the cost of their monthly expenses, these individuals and organizations could have secured a robust backup solution that would have prevented the catastrophe.

The reality is simple: backups are inexpensive. Data loss, on the other hand, is costly, stressful, and often irreversible. The barriers to implementing reliable, automated backup solutions today are negligible. Whether you choose cloud storage, external drives, or remote servers, the tools are readily available, and there is no excuse for neglecting to protect your data.
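
As a minimal illustration, an automated off-site backup can be as simple as a nightly rsync job. The script below is a sketch; the backup host and paths are placeholders to adapt to your setup:

#!/bin/sh
# Minimal off-site backup sketch: mirror a local directory to a remote
# host over SSH. 'backup-host' and both paths are placeholders.
rsync -a --delete /home/user/documents/ user@backup-host:backups/documents/

# Example crontab entry to run the script above every night at 02:00:
# 0 2 * * * /usr/local/bin/backup-documents.sh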

Restores have to be tested

It is a common misconception that having a backup in place means data is secure, simply because a tool reports, “Your backups were successful.” However, the true test of a backup’s reliability occurs when it is needed most, during a restore attempt. This is when many realize that despite the “successful” alert, the backup may be incomplete, corrupted, or otherwise unusable.
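
One practical way to make restore testing routine is to restore into a scratch directory and compare checksums against the live data. The following sketch assumes the rsync-based backup shown earlier; the host and paths are placeholders:

#!/bin/bash
# Restore-test sketch: pull the backup into a scratch directory, then
# compare checksums of every file against the live data.
rsync -a user@backup-host:backups/documents/ /tmp/restore-test/
diff <(cd /home/user/documents && find . -type f -exec md5sum {} + | sort) \
     <(cd /tmp/restore-test && find . -type f -exec md5sum {} + | sort) \
  && echo "Restore test passed: backup matches live data"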

It is concerning that many individuals in technical roles lack the necessary expertise to properly set up and manage infrastructure within an organization. While foundational knowledge is readily available through online resources, there is a tendency for some to set up systems with minimal effort, a few clicks, and the system is presumed to be functioning. However, the true measure of proficiency in IT infrastructure lies not in the ability to configure a system, but in the ability to resolve issues when things inevitably fail.

Conclusion

If you are reading this, I encourage you to take a moment to assess your backup strategy. If you do not have one, create one today. If you already have a backup system in place, ensure that it is functioning correctly. Whether you are managing a business with critical client data or simply safeguarding personal files on your laptop, the importance of backups cannot be overstated.

Feel free to share your own experiences with data loss, lessons learned, or helpful backup solutions in the comments.

Pathaction – Create .pathaction.yaml rule-set files for executing commands on any file

Introduction

The pathaction command-line tool enables the execution of specific commands on targeted files or directories. Its key advantage lies in its flexibility, allowing users to handle various types of files (such as source code, text files, images, videos, configuration files, and more) simply by passing the file or directory as an argument to the pathaction tool. The tool uses a .pathaction.yaml rule-set file to determine which command to execute. Additionally, Jinja2 templating can be employed in the rule-set file to further customize the commands.

(If you use Emacs, you can use the pathaction.el package to execute the pathaction command-line tool directly from within Emacs. There is also a Vim plugin: vim-pathaction @GitHub)

You can execute a file with the following commands:

pathaction -t main file.py

Or:

pathaction -t edit another-file.jpg

(Note: The -t option specifies the tag, allowing you to apply a tagged rule.)

Here’s an example of what a .pathaction.yaml rule-set file looks like:

---
actions:
  - path_match: "*.py"
    tags: main
    command:
      - "python"
      - "{{ file }}"

  - path_match: "*.jpg"
    tags:
      - edit
      - show
    command: "gimp {{ file|quote }}"

(Note: There are many ways to match paths, including using regex. See below for more details.)

The pathaction tool can be viewed as a type of Makefile but is applicable to any file or directory within the filesystem hierarchy (e.g., it can execute any file such as independent scripts, Ansible playbooks, Python scripts, configuration files, etc.). It executes specific actions (i.e., commands) using tags that allow the user to specify different commands for the same type of file (e.g., a tag for execution, another tag for debugging, another tag for installation, etc.).

By using predefined rules in a user-created rule-set file (.pathaction.yaml), pathaction enables the creation of various tagged actions (e.g., Install, Run, Debug, Compile) customized for different file types (e.g., C/C++ files, Python files, Ruby files, ini files, images, etc.).

Requirements

  • Python
  • pip

Installation

To install the pathaction executable locally in ~/.local/bin/pathaction using pip, run:

pip install --user pathaction

(Omitting the --user flag will install pathaction system-wide in /usr/local/bin/pathaction.)

The .pathaction.yaml rule-set file

Example 1

The pathaction command-line tool utilizes regular expressions or filename pattern matching found in the rule-set file named .pathaction.yaml to associate commands with file types.

First off, we are going to create the project directory and change into it:

mkdir ~/project
cd ~/project

After that, we are going to permanently allow pathaction to read rule-set files (.pathaction.yaml) from the current directory using the command:

$ pathaction --allow-dir ~/project

This is a security measure to ensure that arbitrary commands can be executed using the pathaction tool only from directories that have been explicitly allowed.

For instance, consider the following command:

$ pathaction file.py

The command above will load the .pathaction.yaml file not only from the directory where file.py is located but also from its parent directories. This loading behavior is similar to that of a .gitignore file. The rule sets from all these .pathaction.yaml files are combined. In case of conflicting rules or configurations, the priority is given to the rule set that is located in the directory closest to the specified file or directory passed as a parameter to the pathaction command.

Jinja2 templating can be used to dynamically replace parts of the commands defined in the rule-set file with information about the file being executed, such as its filename and path, among other details (more on this below). In the command "python {{ file|quote }}", the placeholder {{ file|quote }} will be dynamically substituted with the path to the source code passed as a parameter to the pathaction command-line tool.

Each rule defined in the rule set file .pathaction.yaml must include at least:

  • The matching rule (e.g. a file name pattern like *.py or a regex .*py$).
  • The command (either a list of arguments or a shell command string; the command and its arguments can be templated with Jinja2).

Example 2

This is what the rule-set file .pathaction.yaml contains:

---
actions:
  # *.py files
  - path_match: "*.py"
    tags: main
    command:
      - "python"
      - "{{ file }}"

  # *.sh files
  - path_match: "*.sh"
    tags:
      - main
    command: "bash {{ file|quote }}"

  - path_match: "*.sh"
    tags: install
    command: "cp {{ file|quote }} ~/.local/bin/"

Consider the following command:

$ pathaction source_code.py

The command above will:

  1. Load the source_code.py file,
  2. Attempt to locate .pathaction.yaml or .pathaction.yml in the directory where the source code is located or in its parent directories. The search for .pathaction.yaml follows the same approach as git uses to find .gitignore in the current and parent directories.
  3. Execute the command defined in .pathaction.yaml (e.g. PathAction will execute the command python {{ file }} on all *.py files).
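
Tags select which action runs for the same file type. For instance, given the rule set above:

$ pathaction -t main script.sh     # runs: bash script.sh
$ pathaction -t install script.sh  # runs: cp script.sh ~/.local/bin/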

Example 3

Here is another example of a rule-set file located at ~/.pathaction.yaml:

---
options:
  shell: /bin/bash
  verbose: false
  debug: false
  confirm_after_timeout: 120

actions:
  # A shell is used to run the following command:
  - path_match: "*.py"
    path_match_exclude: "*/not_this_one.py"    # optional
    tags:
      - main
    shell: true
    command: "python {{ file|quote }}"

  # The command is executed without a shell when shell=false
  - path_regex: '^.*ends_with_string$'
    path_regex_exclude: '^.*not_this_one$'   # optional
    tags: main
    cwd: "{{ file|dirname }}"          # optional
    shell: false                       # optional
    command:
      - "python"
      - "{{ file }}"

Jinja2 Variables and Filters

Jinja2 Variables

Variable        Description
{{ file }}      Replaced with the full path to the source code.
{{ cwd }}       Refers to the current working directory.
{{ env }}       Represents the operating system environment variables (dictionary).
{{ pathsep }}   Denotes the path separator.

Jinja2 Filters

Filter          Description
quote           Equivalent to the Python function shlex.quote
basename        Equivalent to the Python function os.path.basename
dirname         Equivalent to the Python function os.path.dirname
realpath        Equivalent to the Python function os.path.realpath
abspath         Equivalent to the Python function os.path.abspath
joinpath        Equivalent to the Python function os.path.join
joincmd         Equivalent to the Python function subprocess.list2cmdline
splitcmd        Equivalent to the Python function shlex.split
expanduser      Equivalent to the Python function os.path.expanduser
expandvars      Equivalent to the Python function os.path.expandvars
shebang         Loads the shebang from a file (e.g. the first line of a Python file, such as #!/usr/bin/env python)
shebang_list    Returns the shebang as a list (e.g. ["/usr/bin/env", "bash"])
shebang_quote   Returns the shebang as a quoted string (e.g. "/usr/bin/env '/usr/bin/command name'")
which           Locates a command (raises an error if the command is not found)
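
As an illustration, several of the filters above can be combined in a single rule. The following is a sketch, not an excerpt from the official documentation:

---
actions:
  # Run a Python file from its own directory, locating the interpreter
  # with the 'which' filter and quoting the file path with 'quote'
  - path_match: "*.py"
    tags: main
    cwd: "{{ file|dirname }}"
    shell: true
    command: "{{ 'python'|which }} {{ file|quote }}"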

Frequently Asked Questions

How to Integrate the pathaction tool with your favorite editor (e.g. Vim)

It is recommended to configure your source code editor to execute source code with the pathaction command when pressing a specific key combination, such as CTRL-E.

Integrate with Vim

If the preferred editor is Vim, the following line can be added to the ~/.vimrc:

nnoremap <silent> <C-e> :!pathaction -t main "%"<CR>

License

Copyright (c) 2021-2025 James Cherti

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see https://www.gnu.org/licenses/.

Links

Plugins for editors:

  • Emacs package: pathaction.el: Executing the pathaction command-line tool directly from Emacs.
  • Vim plugin: vim-pathaction: Executing the pathaction command-line tool directly from Vim.

13 Useful GNOME Shell Extensions for a Better Desktop Experience (Available in the official Debian repositories or on the GNOME Extensions website for other distributions)

GNOME Shell offers a clean and modern UI, but it often lacks functionality desired by power users and those coming from other desktop environments.

GNOME Shell extensions provide a way to restore or add features to your desktop. In this article, you’ll explore some of the most useful GNOME Shell extensions available directly from the official Debian repositories via apt (available in Debian Bookworm and newer).

For users running distributions other than Debian, this article provides a link to the GNOME Shell Extensions page for each extension, allowing installation on any supported distribution.

Extension Management

The GNOME Shell Extension Prefs tool offers a graphical interface for enabling, disabling, and configuring GNOME Shell extensions. It can be installed using the following command:

sudo apt install gnome-shell-extension-prefs

After installation, GNOME Shell extensions can be enabled and configured using the gnome-extensions-app command.
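
Extensions can also be managed from the command line using the gnome-extensions tool that ships with GNOME Shell. For example (the UUID below is the one commonly used by the Caffeine extension):

gnome-extensions list
gnome-extensions enable caffeine@patapon.info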

Productivity

gnome-shell-extension-caffeine

The Caffeine extension prevents the system from locking the screen or entering suspend mode while it is active or when an application switches to fullscreen mode. This extension is useful during presentations, video playback, or gaming sessions, where uninterrupted screen activity is required.

To install it on a Debian-based system, execute the following command:

sudo apt install gnome-shell-extension-caffeine

System Monitoring

gnome-shell-extension-system-monitor

The system-monitor extension displays CPU, memory, network, and other metrics in real time on the top bar. Useful for developers, system administrators, and performance-conscious users.

To install it on a Debian-based system, execute the following command:

sudo apt install gnome-shell-extension-system-monitor

gnome-shell-extension-impatience

The Impatience extension decreases the duration of animations in GNOME Shell, resulting in a more responsive user interface.

To install it on a Debian-based system, execute the following command:

sudo apt install gnome-shell-extension-impatience

Panel and Dash Customization

gnome-shell-extension-dash-to-panel

The Dash To Panel extension integrates the top panel and dash into a unified taskbar, emulating the interface conventions commonly found in Windows and KDE environments.

To install it on a Debian-based system, execute the following command:

sudo apt install gnome-shell-extension-dash-to-panel

gnome-shell-extension-dashtodock

The Dash To Dock extension moves the dash out of the Activities view and docks it on the screen. Ideal for users preferring a macOS-style dock or persistent application launcher.

To install it on a Debian-based system, execute the following command:

sudo apt install gnome-shell-extension-dashtodock

gnome-shell-extension-hide-activities

The Hide Activities Button extension removes the “Activities” button from the top bar. It is useful for cleaner interfaces.

To install it on a Debian-based system, execute the following command:

sudo apt install gnome-shell-extension-hide-activities

gnome-shell-extension-top-icons-plus

The TopIcons Plus extension restores support for tray icons (i.e., systray) by displaying them in the top panel.

To install it on a Debian-based system, execute the following command:

sudo apt install gnome-shell-extension-top-icons-plus

UI Clean-Up and Space Saving

gnome-shell-extension-pixelsaver

The Pixelsaver extension integrates window title bars into the top panel for maximized windows, saving vertical space.

To install it on a Debian-based system, execute the following command:

sudo apt install gnome-shell-extension-pixelsaver

gnome-shell-extension-no-annoyance

The NoAnnoyance extension disables “Window is ready” notifications, which can be distracting and interrupt focus when multitasking across multiple applications.

To install it on a Debian-based system, execute the following command:

sudo apt install gnome-shell-extension-no-annoyance

gnome-shell-extension-autohidetopbar

The Hide Top Bar extension automatically hides the top bar, helping reduce visual clutter and maximize vertical screen space.

To install it on a Debian-based system, execute the following command:

sudo apt install gnome-shell-extension-autohidetopbar

Menus and Application Launching

gnome-shell-extension-arc-menu

The ArcMenu extension replaces the default application overview with a highly customizable start menu. Suitable for users preferring hierarchical or categorized navigation.

To install it on a Debian-based system, execute the following command:

sudo apt install gnome-shell-extension-arc-menu

Desktop Icon Support

gnome-shell-extension-desktop-icons-ng

The Desktop Icons NG extension restores desktop icon support (files, folders, and shortcuts), which was removed in newer GNOME versions. Supports drag-and-drop and right-click menus.

To install it on a Debian-based system, execute the following command:

sudo apt install gnome-shell-extension-desktop-icons-ng

Conclusion

These GNOME Shell extensions enable the transformation of GNOME into an efficient and personalized environment. All GNOME Shell extensions mentioned in this article are available through the official Debian repositories.

Additional GNOME Shell extensions, not included in the Debian repositories, can be found on the official GNOME Shell Extensions website.

Installing Arch Linux onto a separate partition from an existing Debian-based distribution (Ubuntu, Debian, Linux Mint…), without using the Arch Linux installation media

Installing Arch Linux typically begins with booting from official installation media. However, it is also possible to bootstrap an Arch Linux installation from within a running Debian-based system (Ubuntu, Debian, Linux Mint, etc.). This method is advantageous in environments where rebooting into live media is impractical or when remote installation is desired.

This article outlines a workflow for installing Arch Linux from a Debian-based system using pacman, pacstrap, arch-chroot, and pacman-key.

Prerequisites

Ensure your Debian system has the necessary tools to begin the installation process:

apt-get install arch-install-scripts pacman-package-manager archlinux-keyring makepkg

This command installs the Arch Linux bootstrap tools, makepkg, the pacman package manager, and required keyrings.

Configure the pacman keyring

Initialize the pacman keyring:

pacman-key --init

Install the latest Arch Linux keyring using pacman without resolving dependencies (to avoid conflicts with Debian packages):

pacman -S --nodeps archlinux-keyring

Replace Debian's outdated pacman keyrings with Arch's:

cp /usr/share/pacman/keyrings/* /usr/share/keyrings/

Delete the archlinux-keyring pacman package:

pacman -Rsc archlinux-keyring

Populate the keyring again:

pacman-key --populate archlinux

Configure pacman

Modify the /etc/pacman.d/mirrorlist file to include a valid Arch Linux mirror:

Server = http://mirror.csclub.uwaterloo.ca/archlinux/$repo/os/$arch

Next, create the /etc/pacman.conf file with the following configuration:

[options]
HoldPkg = pacman glibc
Architecture = auto
CheckSpace
ParallelDownloads = 5
SigLevel = Required DatabaseOptional

[core]
Include = /etc/pacman.d/mirrorlist

[extra]
Include = /etc/pacman.d/mirrorlist

# [community]
# Include = /etc/pacman.d/mirrorlist

Prepare the installation target

Assuming you have an existing partition or logical volume prepared (e.g., /dev/vg1/arch), mount it:

mkdir -p /mnt/arch
mount /dev/vg1/arch /mnt/arch

mkdir -p /mnt/arch/boot
mount -o bind /boot /mnt/arch/boot

Install the base system

Use pacstrap to install the base Arch system:

pacstrap /mnt/arch base sudo nano

This command installs a minimal yet functional base system.

Chroot into the new environment

Finally, change root into the newly installed Arch system:

arch-chroot /mnt/arch

From this point, you may proceed with system configuration as per a standard Arch Linux installation (e.g., locale, hostname, users, packages, bootloader, etc.).
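
As a rough sketch, those first steps inside the chroot might look like the following (all values are placeholders; adapt them to your setup):

# Inside the chroot: minimal initial configuration
echo myhostname > /etc/hostname
ln -sf /usr/share/zoneinfo/UTC /etc/localtime
passwd                                # set the root password
pacman -S linux linux-firmware grub   # kernel, firmware, bootloader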

Follow the official Arch Linux installation guide.

Conclusion

Bootstrapping Arch Linux from a Debian system is an efficient method to deploy Arch without the need for traditional installation media. This workflow is suited for advanced users managing systems remotely or automating deployments.

Linux: Setting the default GDM login monitor in a multi-monitor setup using GNOME display settings

If you’re using a multi-monitor setup with GDM (GNOME Display Manager) and the login screen consistently appears on the wrong monitor, this article presents a tested solution that can be applied either manually or automatically, ensuring that the GDM monitor configuration matches that of your primary user setup.

The issue

Consider the following scenario: You have two monitors connected to an Nvidia graphics card, and despite setting the primary monitor correctly in GNOME, the GDM login screen still appears on the secondary monitor. Even if the secondary monitor is turned off, GDM continues to display the login prompt there, as it defaults to the wrong monitor. Additionally, the screen resolution and refresh rate are not configured as desired. For instance, if you have a 144Hz display and GDM uses a different refresh rate, you may experience annoying black flickering when logging in, as the refresh rate changes mid-session.

The solution

The login screen configuration for GDM can be influenced by copying the monitor layout from your GNOME user session.

This is done by copying the ~/.config/monitors.xml file from your user configuration to GDM’s configuration directory.

Step 1: Configure your display using GNOME

First, configure your display layout as desired using GNOME’s display settings:

gnome-control-center display

Step 2: Copy the resulting monitor configuration to GDM’s configuration directory

Copy the resulting monitor configuration ~/.config/monitors.xml to GDM’s configuration directory:

sudo install -o root -m 644 ~/.config/monitors.xml ~gdm/.config/

(the shell will automatically interpret ~gdm as the home directory of the gdm user)

Step 3: Restart GDM

Restart GDM with the following command:

sudo systemctl restart gdm

How to automatically copy monitors.xml to the GDM configuration directory?

The copying of the monitors.xml file can be automated by defining a systemd service override for the GDM service.

First, create the following directory using:

sudo mkdir -p /etc/systemd/system/gdm.service.d/

Then, create the file /etc/systemd/system/gdm.service.d/override.conf with the following contents:

[Service]
ExecStartPre=/bin/sh -c 'install -o root -m 644 /home/YOUR_USER/.config/monitors.xml ~gdm/.config/monitors.xml || true'

Ensure to:

  • Replace YOUR_USER with your actual desktop user name.
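
After creating or modifying the override file, reload systemd so the change takes effect, then restart GDM:

sudo systemctl daemon-reload
sudo systemctl restart gdm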

Conclusion

Copying the GNOME display settings to the GDM configuration directory ensures that GDM adopts the same monitor layout as your GNOME session, causing the login screen to appear on your preferred monitor.

Configuring Linux on a ThinkPad T420s Laptop (Debian, Ubuntu, Linux Mint…)

ThinkPad T420s laptops, despite their age, remain reliable for web browsing, word processing, and various other tasks. Linux can breathe new life into such dated computers, allowing them to perform efficiently.

I configured one of these laptops for my son, and he is now able to do his homework using it. With proper configuration, ThinkPad laptops can achieve optimal performance and power management. This article provides a guide to configuring X11/Xorg, kernel parameters, firmware, fan control, and power management settings to optimize the ThinkPad T420s for modern use.

Some instructions in this article are specific to Debian/Ubuntu-based distributions, but they can easily be adapted to other distributions such as Red Hat, Fedora, Arch Linux, Gentoo, and others.

X11/Xorg

To ensure proper functionality of Intel graphics and avoid issues such as black screens after waking from sleep, use the Intel driver instead of modesetting.

Create the configuration file /etc/X11/xorg.conf.d/30-intel.conf:

Section "Device"
  Identifier "Intel Graphics"
  Driver "intel"

  Option "Backlight" "intel_backlight"
EndSection

Ensure the intel driver is installed:

sudo apt-get install xserver-xorg-video-intel

IMPORTANT: Ensure that the integrated graphics are set as the default video card in the ThinkPad BIOS.

Kernel parameters

To ensure backlight control is working (increasing and decreasing brightness), add the following kernel parameter: acpi_backlight=native

On a Debian/Ubuntu based distribution, this can be appended to the kernel command line in the bootloader configuration, typically in /etc/default/grub (for GRUB users):

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_backlight=native"

Additional GRUB_CMDLINE_LINUX_DEFAULT parameters that may be useful (a combined example follows the list):

  • If you experience suspend/resume issues, a black screen, or system freezes, try adding the noapic kernel parameter.
  • If you experience CPU frequency scaling issues, append intel_pstate=disable to your kernel command line to revert to the acpi_cpufreq driver.
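
For instance, a kernel command line combining these workarounds might look like the following (only add the parameters your hardware actually needs):

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_backlight=native noapic intel_pstate=disable"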

After modifying this file, update GRUB with:

sudo update-grub

Fan control

Without software to control the fan, it may run at maximum speed. To enable fan control, create the file /etc/modprobe.d/thinkpad_acpi.conf:

options thinkpad_acpi fan_control=1

After that, install zcfan. On a Debian/Ubuntu based distribution:

sudo apt-get install lm-sensors zcfan

(An alternative to zcfan is thinkfan, which requires slightly more configuration. The advantage of zcfan is that it functions out of the box.)
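
Once the thinkpad_acpi module is loaded with fan_control=1, the fan state can be inspected through the standard thinkpad_acpi interface:

# Show the current fan level and whether fan control is available
cat /proc/acpi/ibm/fan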

Packages

Ensure that the required firmware for Atheros and Realtek network devices is installed, along with the tp-smapi-dkms package to enable access to ThinkPad-specific hardware features via the TP-SMAPI kernel modules. On Debian- and Ubuntu-based distributions, install the following packages:

sudo apt install firmware-atheros firmware-realtek tp-smapi-dkms

It is also recommended to install essential packages for hardware encoding and decoding, including intel-microcode for the latest processor updates, which improve system stability and performance:

sudo apt-get install intel-microcode intel-media-va-driver-non-free i965-va-driver

TLP

TLP is a power management tool that optimizes battery life. Install and configure it as follows:

sudo apt install tlp

Create the configuration file /etc/tlp.d/00-base.conf:

DEVICES_TO_DISABLE_ON_BAT="bluetooth wwan"
DEVICES_TO_ENABLE_ON_STARTUP="wifi"
DEVICES_TO_DISABLE_ON_LAN_CONNECT="wifi wwan"

TLP_DEFAULT_MODE=BAT

CPU_SCALING_GOVERNOR_ON_AC=performance
CPU_SCALING_GOVERNOR_ON_BAT=schedutil

# PCIe Active State Power Management (ASPM):
PCIE_ASPM_ON_AC=performance

# Set Intel CPU performance: 0..100 (%). Limit the
# max/min to control the power dissipation of the CPU.
# Values are stated as a percentage of the available
# performance.
CPU_MIN_PERF_ON_AC=70
CPU_MAX_PERF_ON_AC=100

# [default] performance powersave powersupersave. on=disable
RUNTIME_PM_ON_AC=on
RUNTIME_PM_ON_BAT=powersupersave

Conclusion

The ThinkPad T420s, though an older model, remains a reliable machine for everyday tasks. With the right configuration, these laptops can be revitalized, making them well-suited for modern use.

jc-dotfiles – A collection of configuration files for UNIX/Linux systems

The jc-dotfiles repository houses James Cherti’s dotfiles and configuration scripts:

  • Shell Configuration (.bashrc, .profile, and .bash_profile): Optimized Bash shell settings for efficient command execution and interactive sessions.
  • Terminal Multiplexer (.tmux.conf): Configuration for Tmux, enhancing terminal session management and productivity.
  • Readline configuration (.inputrc): Inputrc configuration that also allows using Alt-h, Alt-j, Alt-k, and Alt-l as a way to move the cursor.
  • Other: .gitconfig, ranger, .fdignore, .wcalcrc, mpv, picom, feh, and various scripts and configuration files for managing system settings, aliases, and more.

Here are additional dotfiles and configuration files maintained by the same author:

  • jc-dotfiles @GitHub: A collection of UNIX/Linux configuration files. You can either install them directly or use them as inspiration for your own dotfiles.
  • bash-stdops @GitHub: A collection of Bash helper shell scripts.
  • jc-gnome-settings: GNOME customizations that can be applied programmatically.
  • jc-firefox-settings @GitHub: Provides the user.js file, which holds settings to customize the Firefox web browser to enhance the user experience and security.
  • jc-gentoo-portage @GitHub: Provides configuration files for customizing Gentoo Linux Portage, including package management, USE flags, and system-wide settings.
  • jc-xfce-settings: XFCE customizations that can be applied programmatically.
  • watch-xfce-xfconf: A command-line tool that can be used to configure XFCE 4 programmatically using the xfconf-query commands displayed when XFCE 4 settings are modified.

Installation

Here’s how to install James Cherti’s dotfiles:

  1. Clone the Repository:

    git clone https://github.com/jamescherti/jc-dotfiles
  2. Navigate to the jc-dotfiles directory:

    cd jc-dotfiles
  3. Install:

    ./install.sh

Usage

.bashrc

  • Tmux/fzf auto complete: Pressing Ctrl-n calls a custom Bash autocomplete function that captures the current tmux scrollback buffer, extracts unique word-like tokens, and presents them via fzf for interactive fuzzy selection. The selected word is then inserted inline at the current cursor position using a readline binding.

  • The .bashrc file can be extended by adding configurations to ~/.bashrc.local.

  • The o alias calls a function that provides a cross-platform way to open files or URLs using the appropriate command for the system. This function opens files or URLs using the appropriate command (xdg-open on Linux, open on macOS, and start on Windows). If more than 7 arguments are passed, the user is prompted for confirmation before proceeding. Example usage:

    o file1.jpg file2.png file3.jpeg
  • Settings that customize the .bashrc behavior, to be added to ~/.profile.local:

    # Use trash-rm as a safer alternative to rm by moving files to the trash instead
    # of deleting them permanently.
    #
    # JC_TRASH_CLI=1 replaces the standard 'rm' command with a wrapper function
    # that:
    # - Provides a detailed summary of all specified files and directories,
    #   including total size and file count.
    # - Prompts the user for confirmation before proceeding with the deletion.
    # - Moves files to the trash using 'trash-put' instead of permanently deleting
    #   them with 'rm'.
    # - Reports the current size of the trash in megabytes after each deletion.
    # - Optionally wraps 'trash-empty' with an interactive prompt before purging the
    #   trash.
    #
    # This setup is only activated for non-root users when 'trash-put' is available
    # and 'JC_TRASH_CLI' is set to a non-zero value.
    #
    JC_TRASH_CLI=1
    
    # Enable Emacs integration for vterm and EAT, configuring shell-side support for
    # features such as prompt tracking and message passing
    JC_EMACS_INTEGRATION=1  # Default: 0
    
    # Display the current Git branch in the shell prompt (PS1)
    JC_PS1_GIT_BRANCH=1  # Default: 0
    
    # Display the count of unread mails in the shell prompt (PS1)
    JC_PS1_MAILDIR=1  # Default: 0
    
    # Directory containing the mail (e.g., "$HOME/Mail")
    JC_PS1_MAILDIR_PATH="$HOME/Mail"

License

Distributed under terms of the MIT license.

Copyright (C) 2004-2025 James Cherti.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Running Large Language Models locally with Ollama (compatible with Linux, macOS, and Windows)

Running Large Language Models on your machine can enhance your projects, but the setup is often complex. Ollama simplifies this by packaging everything needed to run a Large Language Model. Here’s a concise guide on using Ollama to run LLMs locally.

Requirements

  • CPU: Aim for a CPU that supports AVX512, which accelerates the matrix multiplication operations essential for LLM AI models. (If your CPU does not support AVX, see Ollama Issue #2187: Support GPU runners on CPUs without AVX.)
  • RAM: A minimum of 16GB is recommended for a decent experience when running models with 7 billion parameters.
  • Disk Space: A practical minimum of 40GB of disk space is advisable.
  • GPU: While a GPU is not mandatory, it is recommended for enhanced performance in model inference. Refer to the list of GPUs that are compatible with Ollama. For running quantized models, GPUs that support 4-bit quantized formats can handle large models more efficiently, with VRAM requirements approximately as follows: ~4 GB of VRAM for a 7B model, ~8 GB for a 13B model, ~16 GB for a 30B model, and ~32 GB for a 65B model.
  • For NVIDIA GPUs: Ollama requires CUDA, a parallel computing platform and API developed by NVIDIA. You can find the instructions to install CUDA on Debian here. Similar instructions can be found in your Linux distribution’s wiki.

Step 1: Install Ollama

Download and install Ollama for Linux using:

curl -fsSL https://ollama.com/install.sh | sh

Step 2: Download a Large Language Model

Download a specific large language model using the Ollama command:

ollama pull gemma2:2b

The command above downloads the Gemma2 model by Google DeepMind. You can find other models by visiting the Ollama Library.

(Downloading “gemma2:2b” actually downloads “gemma2:2b-instruct-q4_0”, indicating that it retrieves a quantized version of the 2 billion parameter model specifically optimized for instruction-following tasks such as chatbots. This quantization process reduces the model’s precision from the original 16- or 32-bit floating-point representation to a more compact 4-bit format (q4_0), thereby significantly lowering memory usage and enhancing inference speed. However, quantization can lead to a slight decrease in accuracy compared to the full-precision model.)

Step 3: Chat with the model

Run the large language model:

ollama run gemma2:2b

This launches an interactive REPL where you can interact with the model.
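
Ollama also exposes a local HTTP API (listening on port 11434 by default), which can be queried directly:

curl http://localhost:11434/api/generate \
  -d '{"model": "gemma2:2b", "prompt": "Why is the sky blue?"}'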

Step 4: Install open-webui (web interface)

Open-webui offers a user-friendly interface for interacting with large language models downloaded via Ollama. It enables users to run and customize models without requiring extensive programming knowledge.

It can be installed using pip within a Python virtual environment:

mkdir -p ~/.python-venv/open-webui
python -m venv ~/.python-venv/open-webui
source ~/.python-venv/open-webui/bin/activate
pip install open-webui

Finally, execute the following command to start the open-webui server:

~/.python-venv/open-webui/bin/open-webui serve

You will also have to execute Ollama as a server simultaneously with open-webui:

ollama serve
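
By default, open-webui listens on port 8080, so once both servers are running, the web interface should be reachable at http://localhost:8080.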

Conclusion

With Ollama, you can quickly run Large Language Models (LLMs) locally and integrate them into your projects. Additionally, open-webui provides a user-friendly interface for interacting with these models, making it easier to customize and deploy them without extensive programming knowledge.

Links

  • Ollama Library: A collection of language models available for download through Ollama.
  • Ollama @Github: The Ollama Git repository.
  • Compile Ollama: For users who prefer to compile Ollama instead of using the binary.

Emulating Cherry MX Blue Mechanical Keyboard Sounds on Linux

For those nostalgic for the era of tactile and auditory feedback from typing on a physical keyboard, Cherrybuckle can be utilized on Linux as a Cherry MX Blue Mechanical Keyboard Simulator.

Cherrybuckle runs as a background process and plays back the sound of each key pressed and released on your keyboard.

To temporarily silence Cherrybuckle, for example, to enter secrets, press Scroll Lock twice (but be aware that those Scroll Lock events are delivered to the application); do the same to unmute. The keycode for muting can be changed with the '-m' option. Use keycode 0 to disable the mute function.

Building

GNU/Linux

Dependencies

Dependencies: libalure, libopenal, libx11, libxtst.

The dependencies can be installed on a Debian or Ubuntu system using the following commands:

$ sudo apt-get install build-essential git
$ sudo apt-get install libalure-dev libx11-dev libxtst-dev pkg-config

Building on GNU/Linux

Option 1: X11 (Recommended)

This is the preferred method for building it on GNU/Linux:

$ make
$ ./cherrybuckle

This only works with X11/Xorg and does not yet support Wayland.

Option 2: Libinput

The default Linux build relies on X11 for capturing events. If you intend to use it on the Linux console or Wayland display server, you can configure it to read events from the raw input devices located in /dev/input. Keep in mind that this will require special permissions to access the devices. To make it use libinput, build with the following command:

$ make libinput=1

macOS

You can compile it on macOS using the following commands:

$ brew install alure pkg-config
$ git clone https://github.com/jamescherti/cherrybuckle
$ cd cherrybuckle
$ sed -i '' 's/-Wall -Werror/-Wall/' Makefile
$ make
$ ./cherrybuckle

Note that you need superuser privileges to create the event tap on macOS. Also grant your terminal Accessibility rights: System Preferences -> Security & Privacy -> Privacy -> Accessibility.

If you want to use Cherrybuckle while doing normal work, append an & to run it in the background:

$ sudo ./cherrybuckle &

Windows

The Windows version of Cherrybuckle is currently broken. It appears that switching from FreeAlut to Alure caused the issue: alureCreateBufferFromFile() seems to fail when called from another thread in the key capture callback. Assistance would be greatly appreciated.

Usage

usage: ./cherrybuckle [options]

options:

  -b, --bucklespring        use Bucklespring sounds instead
  -d, --device=DEVICE       use OpenAL audio device DEVICE
  -f, --fallback-sound      use a fallback sound for unknown keys
  -g, --gain=GAIN           set playback gain [0..100]
  -m, --mute-keycode=CODE   use CODE as mute key (default 0x46 for scroll lock)
  -M, --mute                start the program muted
  -c, --no-click            don't play a sound on mouse click
  -k, --no-keyboard         don't play a sound on keyboard press
  -h, --help                show help
  -l, --list-devices        list available OpenAL audio devices
  -p, --audio-path=PATH     load .wav files from directory PATH
  -s, --stereo-width=WIDTH  set stereo width [0..100]
  -v, --verbose             increase verbosity / debugging
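
For example, to start Cherrybuckle in the background with a lower volume and a narrower stereo image:

$ ./cherrybuckle --gain=70 --stereo-width=40 &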

OpenAL notes

Cherrybuckle uses the OpenAL library for mixing samples and providing a realistic 3D audio playback. This section contains some tips and tricks for properly tuning OpenAL for Cherrybuckle.

The default OpenAL settings can cause a slight delay in playback. Edit or create the OpenAL configuration file ~/.alsoftrc and add the following options:

 period_size = 32
 periods = 4

If you are using headphones, enable the head-related transfer functions (HRTF) in OpenAL for better 3D sound:

 hrtf = true

When starting an OpenAL application, the internal sound card is selected for output, and you might not be able to change the device using pavucontrol. The option to select an alternate device is present, but choosing the device has no effect. To solve this, add the following option to the OpenAL configuration file:

 allow-moves = true

Authors

  • Ico Doornekamp (Original author)
  • nofal (Cherry MX sounds)
  • James Cherti (The maintainer of cherrybuckle, which includes the version maintained by Ico Doornekamp and the pull request by nofal)
  • Egor
  • Jakub Wilk
  • Peter Hofmann
  • Marco Trevisan
  • Member1221
  • mirabilos
  • Alex Bertram
  • Alexander Willner
  • Anjan Momi
  • Anton Karmanov
  • Clipsey
  • Dominik George
  • Emanuel Haupt
  • Jan Chren (rindeal)
  • Jeroen Knoops
  • Nisker
  • Peter Tonoli
  • Sebastian Morr
  • Stephen Gelman
  • Vladislav Khvostov
  • jeromenerf
  • qu1gl3s
  • rabin-io
  • somini
  • tensorknower69
  • tnagorra

Creating and Restoring a Gzip Compressed Disk Image with dd on UNIX/Linux

Creating and restoring disk images are essential tasks for developers, system administrators, and users who want to safeguard their data or replicate systems efficiently. One useful tool for this purpose is dd, which allows for low-level copying of data. In this article, we will explore how to clone and restore a partition from a compressed disk image in a UNIX/Linux operating system.

IMPORTANT: There is a risk of data loss if a mistake is made. The dd command can be dangerous if not used carefully. Specifying the wrong input or output device can result in data loss. Users should exercise caution and double-check their commands before executing them.

Cloning a Partition into a Compressed Disk Image

To clone a partition into a compressed disk image, you can use the dd and gzip commands:

dd if=/dev/SOURCE conv=sync bs=64K | gzip --stdout > /path/to/file.gz

This command copies the content of the block device /dev/SOURCE to the compressed file /path/to/file.gz, 64 kilobytes at a time.

Restoring a Partition from a Compressed Disk Image

To restore a partition from a file containing a compressed disk image, use the following command:

gunzip --stdout /path/to/file.gz | dd of=/dev/DESTINATION bs=64K

This command decompresses the content of the compressed file located at /path/to/file.gz and copies it to the block device /dev/DESTINATION, 64 kilobytes at a time.

More information about the dd command options

Here are additional details about the dd command options:

  • The status=progress option makes dd display transfer statistics progressively.
  • The conv=noerror option instructs dd to continue despite read errors. However, ignoring errors might result in data corruption in the copied image: the image could be incomplete or corrupted, especially if errors occur in critical parts of the data. It is typically combined with sync, as in conv=noerror,sync.
  • The conv=sync option makes dd pad every input block with NUL bytes up to the block size, which keeps the output offsets aligned with the source. In situations where data integrity is less critical, combining it with noerror can help recover as much data as possible, even from a source with occasional read errors.
  • Finally, the bs=64K option instructs dd to read or write up to the specified number of bytes at a time (in this case, 64 kilobytes). The default value is 512 bytes, which is relatively small. It is advisable to consider using 64K or even the larger 128K. However, note that while a larger block size speeds up the transfer, a smaller block size enhances transfer reliability. A combined example is shown after this list.
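
Putting these options together, a more fault-tolerant variant of the cloning command might look like this (adjust the device and path to your setup):

dd if=/dev/SOURCE bs=64K conv=noerror,sync status=progress | gzip --stdout > /path/to/file.gz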

Ensuring Data Integrity

The dd command does not, by itself, verify the integrity of the data it copies, so it is prudent to confirm the integrity of the copied data after completing the dd operation.

To achieve this, follow these steps:

Generate the md5sum of the source block device:

dd if=/dev/SOURCE | md5sum

Next, generate the md5sum of the gzip-compressed file:

gunzip --stdout /path/to/file.gz | md5sum

Ensure that the two md5sum fingerprints are equal. This additional verification step adds an extra layer of assurance regarding the accuracy and integrity of the copied data.

Installing Debian onto a separate partition from an existing distribution, such as Arch Linux or Gentoo, without using the Debian installer

There are various scenarios in which one might need to install a Debian-based system (e.g., Debian, Ubuntu, etc.) from another distribution (e.g., Arch Linux, Gentoo, Debian/Ubuntu distributions, Fedora, etc.). One common reason is when a user wants to set up a Debian-based system alongside an existing distribution. This could be for the purpose of testing software compatibility, development, or simply to have a dual-boot.

A Debian-based distribution can be installed from any other distribution using debootstrap. The debootstrap command-line tool allows installing a Debian or Ubuntu base system within a subdirectory of an existing, installed system. Unlike traditional installation methods using a CD or a USB Key, debootstrap only requires access to a Debian repository.

There are several reasons why this approach is advantageous:

  • No need for a bootable USB/CD: Install Debian without external installation media.
  • Dual-boot without reinstalling the host OS: Easily add Debian alongside another Linux system.
  • Minimal and customizable installation: Install only essential packages for a lightweight system (the installer sometimes installs more than necessary).
  • Remote server installations: Install Debian on a remote machine without physical access.
  • System recovery: Reinstall or repair a broken Debian system from another Linux distribution.
  • Automated and scripted deployments: Useful for mass deployments in enterprise environments.
  • Maintaining a multi-distro workflow: Run both a stable Debian system and a rolling-release distribution.

Step 1: Create a new LVM partition, format it, and mount it

# Create the root LVM partition
lvcreate  -L 20G -n debian_root VOL_NAME

# Format the partition
mkfs.ext4 /dev/VOL_NAME/debian_root

# Mount the partition
mkdir /mnt/debian_root
mount /dev/VOL_NAME/debian_root /mnt/debian_root

Step 2: Install the debootstrap command-line tool

On Arch Linux, debootstrap can be installed using:

pacman -Sy debian-archive-keyring debootstrap

On Gentoo, it can be installed using:

emerge -a dev-util/debootstrap

On Debian/Ubuntu based distributions:

apt-get install debootstrap

Step 3: Install the Debian base system

Use the debootstrap command to install Debian into the target directory:

debootstrap --arch=amd64 stable /mnt/debian_root http://deb.debian.org/debian

You can replace stable with another Debian release like testing or unstable if desired. You can also add the flag --force-check-gpg to force checking Release file signatures.

In the above example, it will install the Debian-based system from the repository http://deb.debian.org/debian into the local directory /mnt/debian_root.

Step 4: Chroot into the Debian system

Since you are installing a Debian-based system inside another distribution (Arch Linux, Gentoo, etc.), you’ll need to ensure that the directory where the Debian system is mounted is ready. You can achieve this by mounting certain directories and chrooting into the Debian system:

sudo mount --bind /dev /mnt/debian_root/dev
sudo mount --bind /proc /mnt/debian_root/proc
sudo mount --bind /sys /mnt/debian_root/sys
sudo mount --bind /boot /mnt/debian_root/boot
sudo cp /etc/resolv.conf /mnt/debian_root/etc/resolv.conf
sudo cp /etc/fstab /mnt/debian_root/etc/fstab
sudo chroot /mnt/debian_root /bin/bash -l

The chroot command will open a new shell in the Debian environment.

Step 5: Configure the Debian-based system

Now that you’re inside the Debian-based system, you can configure it as desired. You can install packages, modify configurations, set up users, etc.

Here is an example:

apt-get update

# Install the Linux Kernel
apt-get install linux-image-amd64 firmware-linux-free firmware-misc-nonfree 

# Install cryptsetup if you are using a LUKS encrypted partition
apt-get install cryptsetup cryptsetup-initramfs

# Install misc packages
apt-get install console-setup vim lvm2 sudo

# Reconfigure locales
dpkg-reconfigure locales

# Configure the host name and the time zone
echo yourhostname > /etc/hostname
ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime

Do not forget to:

  • Modify /mnt/debian_root/etc/fstab (the mount point “/” has to point to the Debian system; see the example after this list)
  • Modify /mnt/debian_root/etc/crypttab (If you are using a LUKS encrypted partition)
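
As an illustration, the root entry in /mnt/debian_root/etc/fstab might look like the following (the device name is a placeholder):

# The mount point "/" must point to the Debian partition
/dev/VOL_NAME/debian_root  /  ext4  errors=remount-ro  0  1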

How to boot into the newly installed system?

You can, for example, configure GRUB to boot your newly configured operating system.

You can either use the new Debian-based system’s GRUB as the default (replace the existing one) or configure additional GRUB entries in an existing system. For instance, if your base system is Debian-based, you can add your entry using /etc/grub.d/40_custom:

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
menuentry 'Debian 2' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'debian_fallback' {
        insmod part_gpt
        insmod fat
        search --no-floppy --fs-uuid --set=root 00000000-000a-00a0-a0a0-000000a0000a
        linux /backup/vmlinuz-6.12.12+bpo-amd64 root=/dev/MY_LVM_VOLUME/debian ro fsck.mode=auto fsck.repair=yes nowatchdog apparmor=1 acpi_backlight=native
        initrd /initrd.img-6.12.12+bpo-amd64
}

(Replace /dev/MY_LVM_VOLUME/debian with your root partition and 00000000-000a-00a0-a0a0-000000a0000a with your actual UUID, which you can find using the command: lsblk -o +UUID)

In my case, I am using bootctl, which I installed using Gentoo. I simply added /boot/loader/entries/debian.conf with the following configuration:

title Debian
linux /vmlinuz-6.12.12+bpo-amd64
initrd /initrd.img-6.12.12+bpo-amd64
options rw root=/dev/volume1/debian

Congratulations! You have successfully installed a Debian-based system using debootstrap from another distribution such as Arch Linux, Gentoo, etc.

Gentoo: How to Speed Up emerge --sync

Synchronizing with the Gentoo Portage ebuild repository using emerge --sync can be slow when utilizing the rsync protocol. However, an effective solution exists that can greatly improve the synchronization speed: Configuring emerge --sync to synchronize using Git instead.

In this article, we will explore how to set up emerge to synchronize from the official Gentoo ebuild Git repository and save valuable time during the synchronizing process.

Step 1: Install Git using the following command:

sudo emerge -a dev-vcs/git

Step 2: Remove any file from the directory /etc/portage/repos.conf/ that configures the emerge command to use rsync.

Step 3: Create the file /etc/portage/repos.conf/gentoo.conf containing:

[DEFAULT]
main-repo = gentoo

[gentoo]

# The sync-depth=1 option speeds up initial pull by fetching 
# only the latest Git commit and its immediate ancestors, 
# reducing the amount of downloaded Git history.
sync-depth = 1

sync-type = git
auto-sync = yes
location = /var/db/repos/gentoo
sync-git-verify-commit-signature = yes
sync-openpgp-key-path = /usr/share/openpgp-keys/gentoo-release.asc
sync-uri = https://github.com/gentoo-mirror/gentoo.git

Step 4: Finally, run the following command to synchronize with the Gentoo ebuild repository using Git:

sudo emerge --sync

The initial download of the entire Git repository will cause the first emerge --sync command to take some time. However, subsequent synchronizations will be significantly quicker, taking only a few seconds.
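
Once the first synchronization completes, you can confirm that the tree is now a Git checkout:

cd /var/db/repos/gentoo && git log --oneline -1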

Using Git can be a great way to speed up synchronization with the Gentoo ebuild repository. By following the steps outlined in this article, you can clone the Portage repository to your local machine and keep it up-to-date with the latest changes using Git. This can save you a lot of time when syncing your local repository.

A Docker container for Oddmuse, a Wiki engine that does not require a database

Oddmuse is a wiki engine. Unlike other wiki engines that rely on local or remote databases to store and modify content, Oddmuse utilizes the local file system. This means that users can create and manage Wiki pages on their local machine and easily transfer them to other locations or servers. By leveraging the local file system, Oddmuse eliminates the need for complex and costly database setups. Oddmuse is used by many websites around the world, including the website emacswiki.org.

To make it even easier to use Oddmuse, I have created a Docker container that includes the wiki engine and all of its dependencies. This container can be downloaded and run on any machine that has Docker installed.

Pull and run the Oddmuse Docker container

The Oddmuse Docker container can be pulled from the Docker hub repository jamescherti/oddmuse using the following command:

docker pull jamescherti/oddmuse

And here is an example of how to run the Docker container:

docker run --rm \
  -v /local/path/oddmuse_data:/data \
  -p 8080:80 \
  --env ODDMUSE_URL_PATH=/wiki \
  jamescherti/oddmuse

Once the container is up and running, you can start using Oddmuse to create and manage your own wiki pages.
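
For example, with the port mapping and ODDMUSE_URL_PATH used above, the wiki should be reachable at http://localhost:8080/wiki, which can be checked with:

curl http://localhost:8080/wiki

Since all pages are stored as plain files in the mounted directory (/local/path/oddmuse_data in this example), backing up the wiki is as simple as archiving that directory:

tar czf oddmuse-backup.tar.gz /local/path/oddmuse_data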

Alternative method: Compile the Oddmuse Docker container

Alternatively, you can build the Oddmuse Docker container using the Dockerfile that is hosted in the GitHub repository jamescherti/docker-oddmuse:

git clone https://github.com/jamescherti/docker-oddmuse
docker build -t jamescherti/oddmuse docker-oddmuse

The Oddmuse Docker container is a convenient and efficient way to use the Oddmuse Wiki engine without the need for complex setups.

Arch Linux: Preserving the kernel modules of the currently running kernel during and after an upgrade

One potential issue when upgrading the Arch Linux kernel is that the modules of the currently running kernel may be deleted. This can lead to a number of problems, including unexpected behavior, system crashes, or the inability to mount certain file systems (e.g. the kernel fails to mount a vfat file system due to the unavailability of the vfat kernel module).

The Arch Linux package linux-keep-modules (also available on AUR: linux-keep-modules @AUR), written by James Cherti, provides a solution to ensure that the modules of the currently running Linux kernel remain available until the operating system is restarted. Additionally, after a system restart, the script automatically removes any unnecessary kernel modules that might have been left behind by previous upgrades (e.g. the kernel modules that are not owned by any Arch Linux package and are not required by the currently running kernel).

The linux-keep-modules package keeps your system running smoothly and maintains stability even during major Linux kernel upgrades.
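
To illustrate the kind of cleanup the package automates, here is a minimal sketch (not part of linux-keep-modules itself) that lists the directories under /lib/modules/ that are not owned by any pacman package:

for dir in /lib/modules/*; do
    pacman -Qqo "$dir" >/dev/null 2>&1 || echo "Not owned by any package: $dir"
done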

Make and install the linux-keep-modules package

Clone the repository and change the current directory to ‘archlinux-linux-keep-modules/’:

$ git clone https://github.com/jamescherti/archlinux-linux-keep-modules.git
$ cd archlinux-linux-keep-modules/

Use makepkg to make linux-keep-modules package:

$ makepkg -f

Install the linux-keep-modules package:

$ sudo pacman -U linux-keep-modules-*-any.pkg.tar.*

Finally, enable the cleanup-linux-modules service:

$ sudo systemctl enable cleanup-linux-modules

(At boot time, the cleanup-linux-modules service deletes the Linux kernel modules that are not owned by any package.)
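
After the next kernel upgrade, you can confirm that the modules of the currently running kernel are still available:

$ ls /lib/modules/$(uname -r)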

The linux-keep-modules Arch Linux package offers a solution to preserve kernel modules during and after upgrades, ensuring that the necessary modules for the currently running kernel remain present in the system even after the kernel is upgraded. This solution keeps your system running smoothly and maintains stability even during major upgrades.


Helper script to upgrade Arch Linux

In this article, we will be sharing a Python script, written by James Cherti, that can be used to upgrade Arch Linux. It is designed to make the process of upgrading the Arch Linux system as easy and efficient as possible.

The helper script to upgrade Arch Linux can:

  • delete the ‘/var/lib/pacman/db.lck’ lock file when pacman is not running,
  • upgrade archlinux-keyring,
  • upgrade specific packages,
  • download packages,
  • upgrade all packages,
  • remove from the cache the pacman packages that are no longer installed.

The script provides a variety of options and is perfect for those who want to automate the process of upgrading their Arch Linux system (e.g. execute it from cron) and ensure that their system is always up to date.
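
For example, a root crontab entry such as the following (the installation path /usr/local/bin/ is just an assumption) could pre-download the updated packages every night:

0 3 * * * /usr/local/bin/archlinux-update.py --wait-refresh --download-packages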

Requirements: psutil
Python script name: archlinux-update.py
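
On Arch Linux, the psutil requirement can be satisfied with the python-psutil package:

sudo pacman -S python-psutil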

#!/usr/bin/env python
# Author: James Cherti
# License: MIT
# URL: https://www.jamescherti.com/script-update-arch-linux/
"""Helper script to upgrade Arch Linux."""

import argparse
import logging
import os
import re
import subprocess
import sys
import time

import psutil


class ArchUpgrade:
    """Upgrade Arch Linux."""

    def __init__(self, no_refresh: bool):
        # When 'no_refresh' is True, the package database is treated as
        # already downloaded, which makes download_package_db() a no-op.
        self._download_package_db = no_refresh
        self._keyring_and_pacman_upgraded = False
        self._delete_pacman_db_lck()

    @staticmethod
    def _delete_pacman_db_lck():
        """Delete '/var/lib/pacman/db.lck' when pacman is not running."""
        pacman_running = False
        for pid in psutil.pids():
            try:
                process = psutil.Process(pid)
                if process.name() == "pacman":
                    pacman_running = True
                    break
            except psutil.Error:
                pass

        if pacman_running:
            print("Error: pacman is already running.", file=sys.stderr)
            sys.exit(1)

        lockfile = "/var/lib/pacman/db.lck"
        if os.path.isfile(lockfile):
            os.unlink(lockfile)

    def upgrade_specific_packages(self, package_list: list) -> list:
        """Upgrade the packages that are in 'package_list'."""
        outdated_packages = self._outdated_packages(package_list)
        if outdated_packages:
            cmd = ["pacman", "--noconfirm", "-S"] + outdated_packages
            self.run(cmd)

        return outdated_packages

    def _outdated_packages(self, package_list: list) -> list:
        """Return the 'package_list' packages that are outdated."""
        outdated_packages = []
        try:
            output = subprocess.check_output(["pacman", "-Qu"])
        except subprocess.CalledProcessError:
            output = b""

        for line in output.splitlines():
            line = line.strip()
            pkg_match = re.match(r"^([^\s]*)\s", line.decode())
            if not pkg_match:
                continue

            pkg_name = pkg_match.group(1)
            if pkg_name in package_list:
                outdated_packages += [pkg_name]

        return outdated_packages

    @staticmethod
    def upgrade_all_packages():
        """Upgrade all packages."""
        ArchUpgrade.run(["pacman", "--noconfirm", "-Su"])

    def download_all_packages(self):
        """Download all packages."""
        self.download_package_db()
        self.run(["pacman", "--noconfirm", "-Suw"])

    def download_package_db(self):
        """Download the package database."""
        if self._download_package_db:
            return

        print("[INFO] Download the package database...")
        ArchUpgrade.run(["pacman", "--noconfirm", "-Sy"])
        self._download_package_db = True

    def upgrade_keyring_and_pacman(self):
        """Upgrade 'archlinux-keyring' before any other package."""
        self.download_package_db()

        if not self._keyring_and_pacman_upgraded:
            self.upgrade_specific_packages(["archlinux-keyring"])
            self._keyring_and_pacman_upgraded = True

    def clean_package_cache(self):
        """Remove packages that are no longer installed from the cache."""
        # 'pacman -Sc' removes the cached packages that are no longer
        # installed ('-Scc' would instead prompt to clear the entire cache).
        self.run(["pacman", "--noconfirm", "-Sc"])

    @staticmethod
    def run(cmd, *args, print_command=True, **kwargs):
        """Execute the command 'cmd'."""
        if print_command:
            print()
            print("[RUN] " + subprocess.list2cmdline(cmd))

        subprocess.check_call(
            cmd,
            *args,
            **kwargs,
        )

    def wait_download_package_db(self):
        """Wait until the package database is downloaded."""
        successful = False
        minutes = 60
        hours = 60 * 60
        seconds_between_tests = 15 * minutes
        for _ in range(int((10 * hours) / seconds_between_tests)):
            try:
                self.download_package_db()
            except subprocess.CalledProcessError:
                minutes = int(seconds_between_tests / 60)
                print(
                    f"[INFO] Waiting {minutes} minutes before downloading "
                    "the package database...",
                    file=sys.stderr,
                )
                time.sleep(seconds_between_tests)
                continue
            else:
                successful = True
                break

        if not successful:
            print("Error: failed to download the package database...",
                  file=sys.stderr)
            sys.exit(1)


def parse_args():
    """Parse the command-line arguments."""
    usage = "%(prog)s [--option] [args]"
    parser = argparse.ArgumentParser(description=__doc__.splitlines()[0],
                                     usage=usage)
    parser.add_argument("packages",
                        metavar="N",
                        nargs="*",
                        help="Upgrade specific packages.")

    parser.add_argument(
        "-u",
        "--upgrade-packages",
        default=False,
        action="store_true",
        required=False,
        help="Upgrade all packages.",
    )

    parser.add_argument(
        "-d",
        "--download-packages",
        default=False,
        action="store_true",
        required=False,
        help="Download the packages that need to be upgraded.",
    )

    parser.add_argument(
        "-c",
        "--clean",
        default=False,
        action="store_true",
        required=False,
        help=("Remove packages that are no longer installed from "
              "the cache."),
    )

    parser.add_argument(
        "-n",
        "--no-refresh",
        default=False,
        action="store_true",
        required=False,
        help=("Do not download the package database (pacman -Sy)."),
    )

    parser.add_argument(
        "-w",
        "--wait-refresh",
        default=False,
        action="store_true",
        required=False,
        help=("Wait for a successful download of the package database "
              "(pacman -Sy)."),
    )

    return parser.parse_args()


def command_line_interface():
    """The command-line interface."""
    logging.basicConfig(level=logging.INFO, stream=sys.stdout,
                        format="%(asctime)s %(name)s: %(message)s")

    if os.getuid() != 0:
        print("Error: you cannot perform this operation unless you are root.",
              file=sys.stderr)
        sys.exit(1)

    nothing_to_do = True
    args = parse_args()
    upgrade = ArchUpgrade(no_refresh=args.no_refresh)

    if args.wait_refresh:
        upgrade.wait_download_package_db()
        nothing_to_do = False

    if args.packages:
        print("[INFO] Upgrade the packages:", ", ".join(args.packages))
        upgrade.upgrade_keyring_and_pacman()
        if not upgrade.upgrade_specific_packages(args.packages):
            print()
            print("[INFO] The following packages are already up-to-date:",
                  ", ".join(args.packages))
        nothing_to_do = False

    if args.download_packages:
        print("[INFO] Download all packages...")
        upgrade.download_all_packages()
        nothing_to_do = False

    if args.upgrade_packages:
        print("[INFO] Upgrade all packages...")
        upgrade.upgrade_keyring_and_pacman()
        upgrade.upgrade_all_packages()

        nothing_to_do = False

    if args.clean:
        print("[INFO] Remove packages that are no longer installed "
              "from the cache...")
        upgrade.clean_package_cache()
        nothing_to_do = False

    if nothing_to_do:
        print("Nothing to do.")
        print()

    sys.exit(0)


def main():
    try:
        command_line_interface()
    except subprocess.CalledProcessError as err:
        print(f"[ERROR] Error {err.returncode} returned by the command: "
              f"{subprocess.list2cmdline(err.cmd)}",
              file=sys.stderr)
        sys.exit(1)


if __name__ == '__main__':
    main()
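
Here are a few example invocations (run as root), based on the options defined in parse_args() above:

sudo ./archlinux-update.py -u            # upgrade all packages
sudo ./archlinux-update.py -u -c         # upgrade all packages, then clean the cache
sudo ./archlinux-update.py firefox       # upgrade only 'firefox' if it is outdated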