Copyright © 1999-2024, Gerard Beekmans.
All rights reserved.
This book is licensed under a Creative Commons License.
Computer instructions may be extracted from the book under the MIT License.
Linux® is a registered trademark of Linus Torvalds.
Published 2024-08-06
Revision History
Revision | Date | Notes |
---|---|---|
Revision 11.6 | 2024-08-06 | Third BTFS Release |
Revision 11.5 | 2023-10-08 | Second BTFS Release |
Revision 11.4 | 2023-08-02 | First BTFS Release |
Revision 11.3 | 2023-03-01 | Twenty-seventh Release |
Revision 11.2 | 2022-09-01 | Twenty-sixth Release |
Revision 11.1 | 2022-03-01 | Twenty-fifth Release |
Revision 11.0 | 2021-09-01 | Twenty-fourth Release |
Revision 10.1 | 2021-03-01 | Twenty-third Release |
Revision 10.0 | 2020-09-01 | Twenty-second Release |
Revision 9.1 | 2020-03-01 | Twenty-first Release |
Revision 9.0 | 2019-09-01 | Twentieth Release |
Revision 8.4 | 2019-03-01 | Nineteenth Release |
Revision 8.3 | 2018-09-01 | Eighteenth Release |
Revision 8.2 | 2018-03-02 | Seventeenth Release |
Revision 8.1 | 2017-09-01 | Sixteenth Release |
Revision 8.0 | 2017-02-25 | Fifteenth Release |
Revision 7.10 | 2016-09-07 | Fourteenth Release |
Revision 7.9 | 2016-03-08 | Thirteenth Release |
Revision 7.8 | 2015-10-01 | Twelfth Release |
Revision 7.7 | 2015-03-06 | Eleventh Release |
Revision 7.6 | 2014-09-23 | Tenth Release |
Revision 7.5 | 2014-03-05 | Ninth Release |
Revision 7.4 | 2013-09-14 | Eighth Release |
Revision 6.3 | 2008-08-24 | Seventh Release |
Revision 6.2 | 2007-02-14 | Sixth Release |
Revision 6.1 | 2005-08-14 | Fifth Release |
Revision 6.0 | 2005-04-02 | Fourth Release |
Revision 5.1 | 2004-06-05 | Third Release |
Revision 5.0 | 2003-11-06 | Second Release |
Revision 1.0 | 2003-04-25 | First Release |
Abstract
This book follows on from the Tinux From Scratch book. It introduces and guides the reader through additions to the Linux operating system including networking, graphical interfaces, sound support, and printer and scanner support.
BTFS 11.4 is a fork of BLFS at commit 64e4, between BLFS version 11.3 and the next BLFS release, 12.0. It accompanies the Tinux From Scratch book, which is itself a fork of the Linux From Scratch book.
Clemens Lahme
clemens.lahme@techinvest.li
BTFS Editor (August 2023–now)
Having helped out with Linux From Scratch for a short time, I noticed that we were getting many queries as to how to do things beyond the base LFS system. At the time, the only assistance specifically offered relating to LFS was the LFS hints (https://www.linuxfromscratch.org/hints). Most of the LFS hints are extremely good and well written, but I (and others) could still see a need for more comprehensive help to go Beyond LFS — hence BLFS.
BLFS aims to be more than the LFS-hints converted to XML although much of our work is based around the hints and indeed some authors write both hints and the relevant BLFS sections. We hope that we can provide you with enough information to not only manage to build your system up to what you want, whether it be a web server or a multimedia desktop system, but also that you will learn a lot about system configuration as you go.
Thanks as ever go to everyone in the LFS/BLFS community; especially those who have contributed instructions, written text, answered questions and generally shouted when things were wrong!
Finally, we encourage you to become involved in the community; ask questions on the mailing list or news gateway and join in the fun on #lfs and #lfs-support at Libera. You can find more details about all of these in the Introduction section of the book.
Enjoy using BLFS.
Mark Hymers
markh <at> linuxfromscratch.org
BLFS Editor (July 2001–March 2003)
I still remember how I found the BLFS project and started using the instructions that were completed at the time. I could not believe how wonderful it was to get an application up and running very quickly, with explanations as to why things were done a certain way. Unfortunately, for me, it wasn't long before I was opening applications that had nothing more than "To be done" on the page. I did what most would do, I waited for someone else to do it. It wasn't too long before I was looking through Bugzilla for something easy to do. As with any learning experience, the definition of what was easy kept changing.
We still encourage you to become involved as BLFS is never really finished. Contributing or just using, we hope you enjoy your BLFS experience.
Larry Lawrence
larry <at> linuxfromscratch.org
BLFS Editor (March 2003–June 2004)
The BLFS project is a natural progression of LFS. Together, these projects provide a unique resource for the Open Source Community. They take the mystery out of the process of building a complete, functional software system from the source code contributed by many talented individuals throughout the world. They truly allow users to implement the slogan “Your distro, your rules”.
Our goal is to continue to provide the best resource available that shows you how to integrate many significant Open Source applications. Since these applications are constantly updated and new applications are developed, this book will never be complete. Additionally, there is always room for improvement in explaining the nuances of how to install the different packages. To make these improvements, we need your feedback. I encourage you to participate on the different mailing lists, news groups, and IRC channels to help meet these goals.
Bruce Dubbs
bdubbs <at> linuxfromscratch.org
BLFS Editor (June 2004–December 2006 and February 2011–now)
My introduction to the [B]LFS project was actually by accident. I was trying to build a GNOME environment using some how-tos and other information I found on the web. A couple of times I ran into some build issues and Googling pulled up some old BLFS mailing list messages. Out of curiosity, I visited the Linux From Scratch web site and shortly thereafter was hooked. I've not used any other Linux distribution for personal use since.
I can't promise anyone will feel the sense of satisfaction I felt after building my first few systems using [B]LFS instructions, but I sincerely hope that your BLFS experience is as rewarding for you as it has been for me.
The BLFS project has grown significantly over the last couple of years. There are more package instructions and related dependencies than ever before. The project requires your input for continued success. If you discover that you enjoy building BLFS, please consider helping out in any way you can. BLFS requires hundreds of hours of maintenance to keep it even semi-current. If you feel confident enough in your editing skills, please consider joining the BLFS team. Simply contributing to the mailing list discussions with sound advice and/or providing patches to the book's XML will probably result in you receiving an invitation to join the team.
Randy McMurchy
randy <at> linuxfromscratch.org
BLFS Editor (December 2006–January 2011)
This version of the book is intended to be used when building on top of a system built using the TFS book. Every effort has been made to ensure accuracy and reliability of the instructions. Many people find that using the instructions in this book after building the current stable or development version of LFS provides a stable and very modern Linux system.
Enjoy!
Randy McMurchy
August 24th, 2008
This book is mainly aimed at those who have built a system based on the TFS book. It will also be useful for those who are using other distributions, and for one reason or another want to manually build software and need some assistance. Note that the material in this book, in particular the dependency listings, assumes that you are using a basic TFS system with every package listed in the TFS book already installed and configured. BTFS can be used to create a range of diverse systems and so the target audience is probably as wide as that of the TFS book. If you found TFS useful, you should also like this!
Since Release 7.4, the BLFS book version has matched the LFS book version. This book may be incompatible with a previous or later release of the LFS or TFS book.
This book is divided into the following fourteen parts.
This part contains essential information which is needed to understand the rest of the book.
Here we introduce basic configuration and security issues. We also discuss a range of text editors, file systems, and shells which aren't covered in the main TFS book.
In this section we cover libraries which are often needed throughout the book, as well as system utilities. Information on programming (including recompiling GCC to support its full range of languages) concludes this part.
Here we explain how to connect to a network when you aren't using the simple static IP setup presented in the main TFS book. Networking libraries and command-line networking tools are also covered here.
Here we show you how to set up mail and other servers (such as FTP, Apache, etc.).
This part explains how to set up a basic X Window System, along with some generic X libraries and Window managers.
This part is for those who want to use the K Desktop Environment, or parts of it.
GNOME is the main alternative to KDE in the Desktop Environment arena.
Xfce is a lightweight alternative to GNOME and KDE.
LXDE is another lightweight alternative to GNOME and KDE.
Office programs and graphical web browsers are important to most people. They, and some generic X software, can be found in this part of the book.
Here we cover multimedia libraries and drivers, along with some audio, video, and CD-writing programs.
This part covers document handling, from applications like Ghostscript, CUPS, and DocBook, all the way to texlive.
The Appendices present information which doesn't belong in the body of book; they are included as reference material. The glossary of acronyms is a handy feature.
The Beyond Tinux From Scratch book is designed to carry on from where the TFS book leaves off. But unlike the TFS book, it isn't designed to be followed straight through. Reading the Which sections of the book? part of this chapter should help guide you through the book.
Please read most of this part of the book carefully as it explains quite a few of the conventions used throughout the book.
Unlike the Tinux From Scratch book, BTFS isn't designed to be followed in a linear manner. TFS provides instructions on how to create a base system which can become anything from a web server to a multimedia desktop system. BTFS attempts to guide you in the process of going from the base system to your intended destination. Choice is very much involved.
Everyone who reads this book will want to read certain sections. The Introduction, which you are currently reading, contains generic information. Take special note of the information in Chapter 2, Important Information, as this contains comments about how to unpack software, issues related to the use of different locales, and various other considerations which apply throughout the book.
The part on Post LFS Configuration and Extra Software is where most people will want to turn next. This deals not only with configuration, but also with Security (Chapter 4, Security), File Systems (Chapter 5, File Systems and Disk Management -- including GRUB for UEFI), Text Editors (Chapter 6, Text Editors), and Shells (Chapter 7, Shells). Indeed, you may wish to reference some parts of this chapter (especially the sections on Text Editors and File Systems) while building your TFS system.
Following these basic items, most people will want to at least browse through the General Libraries and Utilities part of the book. This contains information on many items which are prerequisites for other sections of the book, as well as some items (such as Chapter 13, Programming) which are useful in their own right. You don't have to install all of the libraries and packages found in this part; each BLFS installation procedure tells you which other packages this one depends upon. You can choose the program you want to install, and see what it needs. (Don't forget to check for nested dependencies!)
Likewise, most people will probably want to look at the Networking section. It deals with connecting to the Internet or your LAN (Chapter 14, Connecting to a Network) using a variety of methods such as DHCP and PPP, and with items such as Networking Libraries (Chapter 17, Networking Libraries), plus various basic networking programs and utilities.
Once you have dealt with these basics, you may wish to configure more advanced network services. These are dealt with in the Servers part of the book. Those wanting to build servers should find a good starting point there. Note that this section also contains information on several database packages.
The next twelve chapters deal with desktop systems. This portion of the book starts with a part talking about Graphical Components. This part also deals with some generic X-based libraries (Chapter 25, Graphical Environment Libraries). After that, KDE, GNOME, Xfce, and LXDE are given their own parts, followed by one on X Software.
The book then moves on to deal with Multimedia packages. Note that many people may want to use the ALSA-1.2.7 instructions from this chapter when first starting their BTFS journey; the instructions are placed here because it is the most logical place for them.
The final part of the main BTFS book deals with Printing, Scanning and Typesetting. This is useful for most people with desktop systems, but even those who are creating dedicated server systems may find it useful.
We hope you enjoy using BTFS. May you realize your dream of building the perfectly personalized Linux system!
To make things easy to follow, a number of conventions are used throughout the book. Here are some examples:
./configure --prefix=/usr
This form of text should be typed exactly as shown unless otherwise noted in the surrounding text. It is also used to identify references to specific commands.
install-info: unknown option
`--dir-file=/mnt/lfs/usr/info/dir'
This form of text (fixed width font) shows screen output, probably the result of issuing a command. It is also used to show filenames such as /boot/grub/grub.conf.
Emphasis
This form of text is used for several purposes, but mainly to emphasize important points, or to give examples of what to type.
https://www.linuxfromscratch.org/
This form of text is used for hypertext links external to the book, such as HowTos, download locations, websites, etc.
This form of text is used for links internal to the book, such as another section describing a different package.
cat > $LFS/etc/group << "EOF"
root:x:0:
bin:x:1:
......
EOF
This style is mainly used when creating configuration files. The first command (in bold) tells the system to create the file $LFS/etc/group from whatever is typed on the following lines, until the sequence EOF is encountered. Therefore, this whole section is usually typed exactly as shown. Remember, copy and paste is your friend!
<REPLACED TEXT>
This form of text is used to encapsulate text that should be modified, and is not to be typed as shown, or copied and pasted. The angle brackets are not part of the literal text; they are part of the substitution.
root
This form of text is used to show a specific system user or group reference in the instructions.
When new packages are created, the software's authors depend on prior work. In order to build a package in BTFS, these dependencies must be built before the desired package can be compiled. For each package, prerequisites are listed in one or more separate sections: Required, Recommended, and Optional.
These dependencies are the bare minimum needed to build the package. Packages in TFS, and the required dependencies of these required packages, are omitted from this list. Always remember to check for nested dependencies. If a dependency is said to be “runtime”, then it is not needed for building the package, but only to use it after installation.
These are dependencies the BTFS editors have determined are important to give the package reasonable capabilities. If a recommended dependency is not said to be “runtime”, the package installation instructions assume it is installed. If it is not installed, the instructions may require modification to accommodate the missing package. A recommended “runtime” dependency does not need to be installed before building the package, but must be installed afterwards, before the package is run, to provide reasonable capabilities.
These are dependencies the package may use. Integration of optional dependencies may be handled automatically by the package, or additional steps not presented by BTFS may be necessary. Optional dependencies are sometimes listed without explicit BTFS instructions. In this case you must determine how to perform the installation yourself.
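As an illustration, enabling an optional dependency often comes down to an extra switch at configure time; the package and the --with-libfoo switch below are hypothetical, so always check the real package's instructions or the output of ./configure --help:

./configure --prefix=/usr --with-libfoo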
Some packages require specific kernel configuration options. The general layout for these looks like this:
Master section --->
Subsection --->
[*] Required parameter [CONFIG_REQU_PAR]
<*> Required parameter (not as module) [CONFIG_REQU_PAR_NMOD]
<*/M> Required parameter (could be a module) [CONFIG_REQU_PAR_MOD]
<*/M/ > Optional parameter [CONFIG_OPT_PAR]
[ ] Incompatible parameter [CONFIG_INCOMP_PAR]
< > Incompatible parameter (even as module) [CONFIG_INCOMP_PAR_MOD]
[CONFIG_...] on the right gives the name of the option, so you can easily check whether it is set in your .config file. The meaning of the various entries is:
Entry | Meaning |
---|---|
Master section | top level menu item |
Subsection | submenu item |
Required parameter | the option can either be built-in, or not selected: it must be selected |
Required parameter (not as module) | the option can be built-in, a module, or not selected (tri-state): it must be selected as built-in |
Required parameter (could be a module) | the option can be built-in, a module, or not selected: it must be selected, either as built-in or as a module |
Optional parameter | rarely used: the option can be built-in, a module, or not selected: it may be set any way you wish |
Incompatible parameter | the option can either be built-in or not selected: it must not be selected |
Incompatible parameter (even as module) | the option can be built-in, a module, or not selected: it must not be selected |
Note that, depending on other selections, the angle brackets (<>) in the configuration menu may appear as braces ({}) if the option cannot be unselected, or even as dashes (-*- or -M-), when the choice is imposed. The help text describing the option specifies the other selections on which this option relies, and how those other selections are set.
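For example, to check quickly whether one of these options is set, you can grep your kernel configuration from the top of the kernel source tree (CONFIG_REQU_PAR is the placeholder name used above):

grep CONFIG_REQU_PAR .config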
As in TFS, each package in BTFS has a build time listed in Standard Build Units (SBUs). These times are relative to the time it took to build binutils in TFS, and are intended to provide some insight into how long it will take to build a package. Most times listed are for a single processor or core to build the package. In some cases, large, long running builds tested on multi-core systems have SBU times listed with comments such as '(parallelism=4)'. These values indicate testing was done using multiple cores. Note that while this speeds up the build on systems with the appropriate hardware, the speedup is not linear and to some extent depends on the individual package and the specific hardware used.
For packages which use ninja (i.e., anything using meson) or rust, by default all cores are used; similar comments will be seen on such packages even when the build time is minimal.
Where even a parallel build takes more than 15 SBU, on certain machines the time may be considerably greater even when the build does not use swap. In particular, different micro-architectures will build some files at different relative speeds, and this can introduce delays when certain make targets wait for another file to be created. Where a large build uses a lot of C++ files, processors with Simultaneous Multi Threading will share the Floating Point Unit and can take 45% longer than when using four 'prime' cores (measured on an Intel i7 using taskset and keeping the other cores idle).
Some packages do not support parallel builds; for these, the make command must specify -j1. Packages that are known to impose such limits are so marked in the text.
The BLFS project has a number of mirrors set up world-wide to make it easier and more convenient for you to access the website. Please visit the https://www.linuxfromscratch.org/mirrors.html website for the list of current mirrors.
Within the BTFS instructions, each package has two references for finding the source files for the package—an HTTP link and an FTP link (some packages may only list one of these links). Every effort has been made to ensure that these links are accurate. However, the World Wide Web is in continuous flux. Packages are sometimes moved or updated and the exact URL specified is not always available.
To overcome this problem, the BLFS Team, with the assistance of Oregon State University Open Source Lab, has made an HTTP/FTP site available through world wide mirrors. See https://www.linuxfromscratch.org/blfs/download.html#sources for a list. These sites have all the sources of the exact versions of the packages used in BLFS. If you can't find the BTFS package you need at the listed addresses, get it from these sites.
We would like to ask a favor, however. Although this is a public resource for you to use, please do not abuse it. We have already had one unthinking individual download over 3 GB of data, including multiple copies of the same files that are placed at different locations (via symlinks) to make finding the right package easier. This person clearly did not know what files he needed and downloaded everything. The best place to download files is the site or sites set up by the source code developer. Please try there first.
Current release: 11.6 – August 6th, 2024
Changelog Entries:
August 6th, 2024
[clemens] - Added xdotool-3.20211022.1.
November 26th, 2023
[clemens] - Updated Node.js to 18.18.2.
August 21st, 2023
[clemens] - Added Toybox-0.8.10.
August 9th, 2023
[clemens] - Moved the Meson and Ninja-1.11.1 packages from TFS to BTFS.
August 9th, 2023
[clemens] - Removed OpenSSH-9.3p1.
The linuxfromscratch.org server is hosting a number of mailing lists that are used for the development of the BLFS book. These lists include, among others, the main development and support lists.
For more information regarding which lists are available, how to subscribe to them, archive locations, etc., visit https://www.linuxfromscratch.org/mail.html.
The BLFS Project has created a Wiki for users to comment on pages and instructions at https://wiki.linuxfromscratch.org/blfs/wiki. Comments are welcome from all users.
The following are the rules for posting:
Users must register and log in to edit a page.
Suggestions to change the book should be made by creating a new ticket, not by making comments in the Wiki.
Questions about your specific installation problems should be asked by subscribing and mailing to the BLFS Support Mailing List at mailto:blfs-support AT linuxfromscratch D0T org.
Build instructions should be discussed by subscribing and mailing to the BLFS Development List at mailto:blfs-dev AT linuxfromscratch D0T org.
Inappropriate material will be removed.
If you encounter a problem while using this book, and your problem is not listed in the FAQ (https://www.linuxfromscratch.org/faq), you will find that most of the people on Internet Relay Chat (IRC) and on the mailing lists are willing to help you. An overview of the LFS mailing lists can be found in Mailing lists. To assist us in diagnosing and solving your problem, include as much relevant information as possible in your request for help.
Before asking for help, you should review the following items:
Is the hardware support compiled into the kernel or available as a module to the kernel? If it is a module, is it configured properly in modprobe.conf and has it been loaded? You should use lsmod as the root user to see if it's loaded. Check the sys.log file or run modprobe <driver> to review any error message. If it loads properly, you may need to add the modprobe command to your boot scripts.
Are your permissions properly set, especially for devices? LFS uses groups to make these settings easier, but it also adds the step of adding users to groups to allow access. A simple usermod -G audio <user> may be all that's necessary for that user to have access to the sound system. Any question that starts out with “It works as root, but not as ...” requires a thorough review of permissions prior to asking.
BLFS liberally uses /opt/<package>. The main objection to this centers around the need to expand your environment variables for each package placed there (e.g., PATH=$PATH:/opt/kde/bin). In most cases, the package instructions will walk you through the changes, but some will not. The section called “Going Beyond BLFS” is available to help you check.
Apart from a brief explanation of the problem you're having, the essential things to include in your request are:
tell people that you are using BTFS, not BLFS, unless you are contacting the BTFS maintainer directly (clemens.lahme@techinvest.li),
the version of the book you are using (currently 11.6),
the package or section giving you problems,
the exact error message or symptom you are receiving,
whether you have deviated from the book or LFS at all,
if you are installing a BLFS package on a non-LFS system.
(Note that saying that you've deviated from the book doesn't mean that we won't help you. It'll just help us to see other possible causes of your problem.)
Expect guidance instead of specific instructions. If you are instructed to read something, please do so. It generally implies that the answer was way too obvious and that the question would not have been asked if a little research was done prior to asking. The volunteers in the mailing list prefer not to be used as an alternative to doing reasonable research on your end. In addition, the quality of your experience with BLFS is also greatly enhanced by this research, and the quality of volunteers is enhanced because they don't feel that their time has been abused, so they are far more likely to participate.
An excellent article on asking for help on the Internet in general has been written by Eric S. Raymond. It is available online at http://www.catb.org/~esr/faqs/smart-questions.html. Read and follow the hints in that document and you are much more likely to get a response to start with and also to get the help you actually need.
Many people have contributed both directly and indirectly to BLFS. This page lists all of those we can think of. We may well have left people out and if you feel this is the case, drop us a line. Many thanks to all of the LFS community for their assistance with this project.
Bruce Dubbs
Pierre Labastie
DJ Lucas
Ken Moffat
Douglas Reno
The list of contributors is far too large to provide detailed information about the contributions for each contributor. Over the years, the following individuals have provided significant inputs to the book:
Timothy Bauscher
Daniel Bauman
Jeff Bauman
Andy Benton
Wayne Blaszczyk
Paul Campbell
Nathan Coulson
Jeroen Coumans
Guy Dalziel
Robert Daniels
Richard Downing
Manuel Canales Esparcia
Jim Gifford
Manfred Glombowski
Ag Hatzimanikas
Mark Hymers
James Iwanek
David Jensen
Jeremy Jones
Seth Klein
Alex Kloss
Eric Konopka
Larry Lawrence
Chris Lynn
Andrew McMurry
Randy McMurchy
Denis Mugnier
Billy O'Connor
Fernando de Oliveira
Alexander Patrakov
Olivier Peres
Andreas Pedersen
Henning Rohde
Matt Rogers
James Robertson
Chris Staub
Jesse Tie-Ten-Quee
Ragnar Thomsen
Thomas Trepl
Tushar Teredesai
Jeremy Utley
Zack Winkles
Christian Wurst
Igor Živković
Fernando Arbeiza
Miguel Bazdresch
Gerard Beekmans
Oliver Brakmann
Jeremy Byron
Ian Chilton
David Ciecierski
Jim Harris
Lee Harris
Marc Heerdink
Steffen Knollmann
Eric Konopka
Scot McPherson
Ted Riley
Email: clemens.lahme@techinvest.li
IRC chat: Libera network #tinux channel.
Or, if you are aware that BTFS is derived from BLFS, use one of the BLFS mailing lists. See Mailing lists for more information on the available mailing lists.
This chapter is used to explain some of the policies used throughout the book, to introduce important concepts and to explain some issues you may see with some of the included packages.
Those people who have built a TFS system may be aware of the general principles of downloading and unpacking software. Some of that information is repeated here for those new to building their own software.
Each set of installation instructions contains a URL from which you can download the package. The patches, however, are stored on the LFS servers and are available via HTTP. These are referenced as needed in the installation instructions.
While you can keep the source files anywhere you like, we assume that you have unpacked the package and changed into the directory created by the unpacking process (the 'build' directory). We also assume you have uncompressed any required patches and they are in the directory immediately above the 'build' directory.
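As a sketch of the assumed layout (the package name foo-1.0 and the patch name are hypothetical):

tar -xf foo-1.0.tar.xz        # unpack the tarball in a working directory
cd foo-1.0                    # this is the 'build' directory
ls ../foo-1.0-fixes.patch     # uncompressed patches sit one level up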
We cannot emphasize strongly enough that you should start from a clean source tree each time. This means that if you have had an error during configuration or compilation, it's usually best to delete the source tree and re-unpack it before trying again. This obviously doesn't apply if you're an advanced user used to hacking Makefiles and C code, but if in doubt, start from a clean tree.
The golden rule of Unix System Administration is to use your superpowers only when necessary. Hence, BTFS recommends that you build software as an unprivileged user and only become the root user when installing the software. This philosophy is followed in all the packages in this book. Unless otherwise specified, all instructions should be executed as an unprivileged user. The book will advise you on instructions that need root privileges.
If a file is in .tar format and compressed, it is unpacked by running one of the following commands:

tar -xvf filename.tar.gz
tar -xvf filename.tgz
tar -xvf filename.tar.Z
tar -xvf filename.tar.bz2
You may omit using the v parameter in the commands shown above and below if you wish to suppress the verbose listing of all the files in the archive as they are extracted. This can help speed up the extraction as well as make any errors produced during the extraction more obvious to you.
You can also use a slightly different method:
bzcat filename.tar.bz2 | tar -xv
Finally, you sometimes need to be able to unpack patches which are generally not in .tar format. The best way to do this is to copy the patch file to the parent of the 'build' directory and then run one of the following commands depending on whether the file is a .gz or .bz2 file:

gunzip -v patchname.gz
bunzip2 -v patchname.bz2
Generally, to verify that the downloaded file is complete, many package maintainers also distribute md5sums of the files. To verify the md5sum of the downloaded files, download both the file and the corresponding md5sum file to the same directory (preferably from different on-line locations), and (assuming file.md5sum is the md5sum file downloaded) run the following command:
md5sum -c file.md5sum
If there are any errors, they will be reported. Note that the BTFS book also includes md5sums for all the source files. To use the BTFS supplied md5sums, you can create a file.md5sum (place the md5sum data and the exact name of the downloaded file on the same line of a file, separated by white space) and run the command shown above. Alternatively, simply run the command shown below and compare the output to the md5sum data shown in the BTFS book.
md5sum <name_of_downloaded_file>
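For instance, here is a minimal sketch of the file.md5sum approach; the hash and tarball name below are purely hypothetical placeholders:

echo "0123456789abcdef0123456789abcdef  foo-1.0.tar.xz" > file.md5sum
md5sum -c file.md5sum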
MD5 is not cryptographically secure, so the md5sums are only provided for detecting unmalicious changes to the file content. For example, an error or truncation introduced during network transfer, or a “stealth” update to the package from the upstream (updating the content of a released tarball instead of making a new release properly).
There is no “100%” secure way to verify the authenticity of the source files. Assuming the upstream is managing their website correctly (the private key is not leaked and the domain is not hijacked), and the trust anchors have been set up correctly using make-ca-1.13 on the BTFS system, we can reasonably trust download URLs pointing to the upstream official website over the https protocol. Note that the BLFS book itself is published on a website with https, so you should already have some confidence in the https protocol or you wouldn't trust the book content.
If the package is downloaded from an unofficial location (for example a local mirror), checksums generated by cryptographically secure digest algorithms (for example SHA256) can be used to verify the authenticity of the package. Download the checksum file from the upstream official website (or somewhere you can trust) and compare the checksum of the package from the unofficial location with it. For example, a SHA256 checksum can be checked with the command:
sha256sum -c file.sha256sum

If the checksum and the package are downloaded from the same untrusted location, you won't gain any security enhancement by verifying the package with the checksum. The attacker can fake the checksum as well as compromise the package itself.
If GnuPG-2.4.0 is installed, you can also verify the genuity of the package with a GPG signature. Import the upstream GPG public key with:
gpg --recv-key keyID
keyID should be replaced with the key ID from somewhere you can trust (for example, copy it from the upstream official website using https). Now you can verify the signature with:
gpg --verify file.sig file
The advantage of a GnuPG signature is that, once you have imported a public key which can be trusted, you can download both the package and its signature from the same unofficial location and verify them with the public key. So you won't need to connect to the official upstream website to retrieve a checksum for each new release. You only need to update the public key if it's expired or revoked.
For larger packages, it is convenient to create log files instead of staring at the screen hoping to catch a particular error or warning. Log files are also useful for debugging and keeping records. The following command allows you to create an installation log. Replace <command> with the command you intend to execute.

( <command> 2>&1 | tee compile.log && exit $PIPESTATUS )
2>&1 redirects error messages to the same location as standard output. The tee command allows viewing of the output while logging the results to a file. The parentheses around the command run the entire command in a subshell and finally the exit $PIPESTATUS command ensures the result of the <command> is returned as the result and not the result of the tee command.
For many modern systems with multiple processors (or cores) the compilation time for a package can be reduced by performing a "parallel make" by either setting an environment variable or telling the make program how many processors are available. For instance, a Core2Duo can support two simultaneous processes with:
export MAKEFLAGS='-j2'
or just building with:
make -j2
If you have applied the optional sed when building ninja in TFS, you can use:
export NINJAJOBS=2
when a package uses ninja, or just:
ninja -j2
but for ninja, the default number of jobs is <N>+2, where <N> is the number of processors available, so that using the above commands is rather for limiting the number of jobs (see below for why this could be necessary).
Generally the number of processes should not exceed the number of cores supported by the CPU. To list the processors on your system, issue: grep processor /proc/cpuinfo.
In some cases, using multiple processes may result in a 'race' condition where the success of the build depends on the order of the commands run by the make program. For instance, if an executable needs File A and File B, attempting to link the program before one of the dependent components is available will result in a failure. This condition usually arises because the upstream developer has not properly designated all the prerequisites needed to accomplish a step in the Makefile.
If this occurs, the best way to proceed is to drop back to a single processor build. Adding '-j1' to a make command will override the similar setting in the MAKEFLAGS environment variable.
When running the package tests or the install portion of the package build process, we do not recommend using an option greater than '-j1' unless specified otherwise. The installation procedures or checks have not been validated using parallel procedures and may fail with issues that are difficult to debug.
Another problem may occur with modern CPUs, which have a lot of cores. Each job started consumes memory, and if the sum of the needed memory for each job exceeds the available memory, you may encounter either an OOM (Out of Memory) kernel interrupt or intense swapping that will slow the build beyond reasonable limits.
Some compilations with g++ may consume up to 2.5 GB of memory, so to be safe, you should restrict the number of jobs to (Total Memory in GB)/2.5, at least for big packages such as LLVM, WebKitGtk, QtWebEngine, or libreoffice.
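As a sketch of that rule of thumb, the following derives a job count from /proc/meminfo (where MemTotal is reported in kB) and caps it at the number of available processors:

JOBS=$(awk '/^MemTotal/ { j = int($2 / (2.5 * 1024 * 1024)); if (j < 1) j = 1; print j }' /proc/meminfo)
test "$JOBS" -gt "$(nproc)" && JOBS=$(nproc)
export MAKEFLAGS="-j$JOBS"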
There are times when automating the building of a package can come in handy. Everyone has their own reasons for wanting to automate building, and everyone goes about it in their own way. Creating Makefiles, Bash scripts, Perl scripts or simply a list of commands used to cut and paste are just some of the methods you can use to automate building BTFS packages. Detailing how and providing examples of the many ways you can automate the building of packages is beyond the scope of this section. This section will expose you to using file redirection and the yes command to help provide ideas on how to automate your builds.
You will find times throughout your BTFS journey when you will come across a package that has a command prompting you for information. This information might be configuration details, a directory path, or a response to a license agreement. This can present a challenge to automate the building of that package. Occasionally, you will be prompted for different information in a series of questions. One method to automate this type of scenario requires putting the desired responses in a file and using redirection so that the program uses the data in the file as the answers to the questions.
Building the CUPS package is a good example of how redirecting a file as input to prompts can help you automate the build. If you run the test suite, you are asked to respond to a series of questions regarding the type of test to run and if you have any auxiliary programs the test can use. You can create a file with your responses, one response per line, and use a command similar to the one shown below to automate running the test suite:
make check < ../cups-1.1.23-testsuite_parms
This effectively makes the test suite use the responses in the file as the input to the questions. Occasionally you may end up doing a bit of trial and error determining the exact format of your input file for some things, but once figured out and documented you can use this to automate building the package.
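The responses file itself is just plain text, one answer per line, in the order the prompts appear. A hypothetical sketch (the number and content of the answers depend entirely on the package, so run the prompts interactively once to determine them):

cat > ../cups-1.1.23-testsuite_parms << "EOF"
1
0
n
EOF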
Sometimes you will only need to provide one response, or provide the same response to many prompts. For these instances, the yes command works really well. The yes command can be used to provide a response (the same one) to one or more instances of questions. It can be used to simulate pressing just the Enter key, entering the Y key or entering a string of text. Perhaps the easiest way to show its use is in an example.
First, create a short Bash script by entering the following commands:
cat > blfs-yes-test1 << "EOF"
#!/bin/bash
echo -n -e "\n\nPlease type something (or nothing) and press Enter ---> "
read A_STRING
if test "$A_STRING" = ""; then A_STRING="Just the Enter key was pressed"
else A_STRING="You entered '$A_STRING'"
fi
echo -e "\n\n$A_STRING\n\n"
EOF
chmod 755 blfs-yes-test1
Now run the script by issuing ./blfs-yes-test1 from the command line. It will wait for a response, which can be anything (or nothing) followed by the Enter key. After entering something, the result will be echoed to the screen. Now use the yes command to automate the entering of a response:
yes | ./blfs-yes-test1
Notice that piping yes by itself to the script results in y being passed to the script. Now try it with a string of text:
yes 'This is some text' | ./blfs-yes-test1
The exact string was used as the response to the script. Finally, try it using an empty (null) string:
yes '' | ./blfs-yes-test1
Notice this results in passing just the press of the Enter key to the script. This is useful for times when the default answer to the prompt is sufficient. This syntax is used in the Net-tools instructions to accept all the defaults to the many prompts during the configuration step. You may now remove the test script, if desired.
Automating the building of some packages, especially those that require you to read a license agreement one page at a time, requires using a method that avoids having to press a key to display each page. Redirecting the output to a file can be used in these instances to assist with the automation. The previous section on this page touched on creating log files of the build output. The redirection method shown there used the tee command to redirect output to a file while also displaying the output to the screen. Here, the output will only be sent to a file.
Again, the easiest way to demonstrate the technique is to show an example. First, issue the command:
ls -l /usr/bin | more
Of course, you'll be required to view the output one page at a time because the more filter was used. Now try the same command, but this time redirect the output to a file. The special file /dev/null can be used instead of the filename shown, but you will have no log file to examine:
ls -l /usr/bin | more > redirect_test.log 2>&1
Notice that this time the command immediately returned to the shell prompt without having to page through the output. You may now remove the log file.
The last example will use the yes command in combination with output redirection to bypass having to page through the output and then provide a y to a prompt. This technique could be used in instances when otherwise you would have to page through the output of a file (such as a license agreement) and then answer the question of “do you accept the above?”. For this example, another short Bash script is required:
cat > blfs-yes-test2 << "EOF"
#!/bin/bash
ls -l /usr/bin | more
echo -n -e "\n\nDid you enjoy reading this? (y,n) "
read A_STRING
if test "$A_STRING" = "y"; then A_STRING="You entered the 'y' key"
else A_STRING="You did NOT enter the 'y' key"
fi
echo -e "\n\n$A_STRING\n\n"
EOF
chmod 755 blfs-yes-test2
This script can be used to simulate a program that requires you to read a license agreement, then respond appropriately to accept the agreement before the program will install anything. First, run the script without any automation techniques by issuing ./blfs-yes-test2.
Now issue the following command which uses two automation techniques, making it suitable for use in an automated build script:
yes | ./blfs-yes-test2 > blfs-yes-test2.log 2>&1
If desired, issue tail blfs-yes-test2.log to see the end of the paged output, and confirmation that y was passed through to the script. Once satisfied that it works as it should, you may remove the script and log file.
Finally, keep in mind that there are many ways to automate and/or script the build commands. There is not a single “correct” way to do it. Your imagination is the only limit.
For each package described, BTFS lists the known dependencies. These are listed under several headings, whose meaning is as follows:
Required means that the target package cannot be correctly built without the dependency having first been installed, except if the dependency is said to be “runtime”, which means the target package can be built but cannot function without it.
Note that a target package can start to “function” in many subtle ways: an installed configuration file can make the init system, cron daemon, or bus daemon run a program automatically; another package using the target package as a dependency can run a program from the target package during its own build; and the configuration sections in the BTFS book may also run a program from a just-installed package. So if you are installing the target package without a Required (runtime) dependency installed, you should install the dependency as soon as possible after the installation of the target package.
Recommended means that BTFS strongly suggests this package is installed first (except if said to be “runtime”, see below) for a clean and trouble-free build that won't have issues either during the build process or at run-time. The instructions in the book assume these packages are installed. Some changes or workarounds may be required if these packages are not installed. If a recommended dependency is said to be “runtime”, it means that BTFS strongly suggests that this dependency is installed before using the package, for full functionality.
Optional means that this package might be installed for added functionality. Often BTFS will describe the dependency to explain the added functionality that will result. An optional dependency may be picked up automatically by the target package if the dependency is installed, but some optional dependencies may also need additional configuration options to be enabled when the target package is built. Such additional options are often documented in the BTFS book. If an optional dependency is said to be “runtime”, it means you may install the dependency after installing the target package to support some optional features of the target package if you need these features.
An optional dependency may be outside of BTFS. If you need such an external optional dependency for some features, read Going Beyond BLFS for general hints about installing an out-of-BTFS package.
On occasion you may run into a situation in the book when a package will not build or work properly. Though the Editors attempt to ensure that every package in the book builds and works properly, sometimes a package has been overlooked or was not tested with this particular version of BTFS.
If you discover that a package will not build or work properly, you should see if there is a more current version of the package. Typically this means you go to the maintainer's web site and download the most current tarball and attempt to build the package. If you cannot determine the maintainer's web site by looking at the download URLs, use Google and query the package's name. For example, in the Google search bar type: 'package_name download' (omit the quotes) or something similar. Sometimes typing: 'package_name home page' will result in you finding the maintainer's web site.
In TFS, stripping of debugging symbols and unneeded symbol table entries was discussed a couple of times. When building BTFS packages, there are generally no special instructions that discuss stripping again. Stripping can be done while installing a package, or afterwards.
There are several ways to strip executables installed by a package. They depend on the build system used (see below the section about build systems), so only some generalities can be listed here:
The following methods, which use the features of a build system (autotools, meson, or cmake), will not strip static libraries if any are installed. Fortunately there are not too many static libraries in BTFS, and a static library can always be stripped safely by running strip --strip-unneeded on it manually.
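For example, to strip an already-installed static library by hand (libfoo.a is a hypothetical name):

strip --strip-unneeded /usr/lib/libfoo.a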
The packages using autotools usually have an install-strip target in their generated Makefile files. So installing stripped executables is just a matter of using make install-strip instead of make install.
The packages using the meson build system can accept -Dstrip=true when running meson. If you've forgotten to add this option when running meson, you can also run meson install --strip instead of ninja install.
cmake generates install/strip targets for both the Unix Makefiles and Ninja generators (the default is Unix Makefiles on linux). So just run make install/strip or ninja install/strip instead of the install counterparts.
Removing (or not generating) debug symbols can also be achieved by removing the -g<something> options in C/C++ calls. How to do that is very specific for each package. And, it does not remove unneeded symbol table entries. So it will not be explained in detail here. See also below the paragraphs about optimization.
The strip utility changes files in place, which may break anything using it if it is loaded in memory. Note that if a file is in use but just removed from the disk (i.e. not overwritten nor modified), this is not a problem since the kernel can use “deleted” files. Look at /proc/*/maps and it is likely that you'll see some (deleted) entries. The mv just removes the destination file from the directory but does not touch its content, so that it satisfies the condition for the kernel to use the old (deleted) file. But this approach can detach hard links into duplicated copies, causing a bloat which is obviously unwanted as we are stripping to reduce system size. If two files in a same file system share the same inode number, they are hard links to each other and we should reconstruct the link. The script below is just an example. It should be run as the root user:
cat > /usr/sbin/strip-all.sh << "EOF"
#!/usr/bin/bash

if [ $EUID -ne 0 ]; then
   echo "Need to be root"
   exit 1
fi

last_fs_inode=
last_file=

# List shared libraries, static libraries, and executables, prefix each
# entry with "<filesystem mount point> <inode>", and sort so that hard
# links (same filesystem and inode) end up on adjacent lines.
{ find /usr/lib -type f -name '*.so*' ! -name '*dbg'
  find /usr/lib -type f -name '*.a'
  find /usr/{bin,sbin,libexec} -type f
} | xargs stat -c '%m %i %n' | sort | while read fs inode file; do
    # Skip anything that is not an ELF file, or is already stripped.
    if ! readelf -h $file >/dev/null 2>&1; then continue; fi
    if file $file | grep --quiet --invert-match 'not stripped'; then continue; fi

    # A hard link to the file just stripped: recreate the link instead
    # of leaving a duplicated stripped copy.
    if [ "$fs $inode" = "$last_fs_inode" ]; then
        ln -f $last_file $file
        continue
    fi

    # Strip a copy, then mv it over the original so that a file in use
    # keeps its old (deleted) content available to the kernel.
    cp --preserve $file ${file}.tmp
    strip --strip-unneeded ${file}.tmp
    mv ${file}.tmp $file

    last_fs_inode="$fs $inode"
    last_file=$file
done
EOF
chmod 744 /usr/sbin/strip-all.sh
If you install programs in other directories such as /opt or /usr/local, you may want to strip the files there too. Just add other directories to scan in the compound list of find commands between the braces.
For more information on stripping, see https://www.technovelty.org/linux/stripping-shared-libraries.html.
There are now three different build systems in common use for converting C or C++ source code into compiled programs or libraries, and their details (particularly, finding out about available options and their default values) differ. It may be easiest to understand the issues caused by some choices (typically slow execution or unexpected use of, or omission of, optimizations) by starting with the CFLAGS and CXXFLAGS environment variables. There are also some programs which use rust.
Most TFS and BTFS builders are probably aware of the basics of CFLAGS and CXXFLAGS for altering how a program is compiled. Typically, some form of optimization is used by upstream developers (-O2 or -O3), sometimes with the creation of debug symbols (-g), as defaults.
If there are contradictory flags (e.g. multiple different -O values), the last value will be used. Sometimes this means that flags specified in environment variables will be picked up before values hardcoded in the Makefile, and therefore ignored. For example, where a user specifies '-O2' and that is followed by '-O3' the build will use '-O3'.
There are various other things which can be passed in CFLAGS or CXXFLAGS, such as forcing compilation for a specific microarchitecture (e.g. -march=amdfam10, -march=native) or specifying a specific standard for C or C++ (-std=c++17 for example). But one thing which has now come to light is that programmers might include debug assertions in their code, expecting them to be disabled in releases by using -DNDEBUG. Specifically, if Mesa-23.1.8 is built with these assertions enabled, some activities such as loading levels of games can take extremely long times, even on high-class video cards.
This combination is often described as 'CMMI' (configure, make, make install) and is used here to also cover the few packages which have a configure script that is not generated by autotools.
Sometimes running ./configure --help will produce useful options about switches which might be used. At other times, after looking at the output from configure you may need to look at the details of the script to find out what it was actually searching for.
Many configure scripts will pick up any CFLAGS or CXXFLAGS from the environment, but CMMI packages vary about how these will be mixed with any flags which would otherwise be used (variously: ignored, used to replace the programmer's suggestion, used before the programmer's suggestion, or used after the programmer's suggestion).
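As an illustration, one way to pass such flags to a single CMMI build without exporting them into your environment is to set them on the configure command line; the flag values here are only an example, not a recommendation:

CFLAGS="-O2 -DNDEBUG" CXXFLAGS="-O2 -DNDEBUG" ./configure --prefix=/usr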
In most CMMI packages, running 'make' will list each command and run it, interspersed with any warnings. But some packages try to be 'silent' and only show which file they are compiling or linking instead of showing the command line. If you need to inspect the command, either because of an error, or just to see what options and flags are being used, adding 'V=1' to the make invocation may help.
CMake works in a very different way, and it has two backends which can be used on BTFS: 'make' and 'ninja'. The default backend is make, but ninja can be faster on large packages with multiple processors. To use ninja, specify '-G Ninja' in the cmake command. However, there are some packages which create fatal errors in their ninja files but build successfully using the default of Unix Makefiles.
The hardest part of using CMake is knowing what options you might wish to specify. The only way to get a list of what the package knows about is to run cmake -LAH and look at the output for that default configuration.
Perhaps the most-important thing about CMake is that it has a variety of CMAKE_BUILD_TYPE values, and these affect the flags. The default is that this is not set and no flags are generated. Any CFLAGS or CXXFLAGS in the environment will be used. If the programmer has coded any debug assertions, those will be enabled unless -DNDEBUG is used. The following CMAKE_BUILD_TYPE values will generate the flags shown, and these will come after any flags in the environment and therefore take precedence.
Value | Flags |
---|---|
Debug | -g |
Release | -O3 -DNDEBUG |
RelWithDebInfo | -O2 -g -DNDEBUG |
MinSizeRel | -Os -DNDEBUG |
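Putting this together, a typical invocation selecting one of these build types might look like the following; the out-of-tree build directory and the Ninja generator are choices rather than requirements, and the -S/-B options assume a reasonably recent CMake:

cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr -G Ninja -S . -B build
ninja -C build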
CMake tries to produce quiet builds. To see the details of the commands which are being run, use make VERBOSE=1 or ninja -v.
By default, CMake treats file installation differently from the other build systems: if a file already exists and is not newer than a file that would overwrite it, then the file is not installed. This may be a problem if a user wants to record which file belongs to a package, either using LD_PRELOAD, or by listing files newer than a timestamp. The default can be changed by setting the variable CMAKE_INSTALL_ALWAYS to 1 in the environment, for example by export'ing it.
Meson has some similarities to CMake, but many differences. To get details of the defines that you may wish to change, you can look at meson_options.txt which is usually in the top-level directory.
If you have already configured the package by running meson and now wish to change one or more settings, you can either remove the build directory, recreate it, and use the altered options, or within the build directory run meson configure, e.g. to set an option:
meson configure -D<some_option>=true
If you do that, the file meson-private/cmd_line.txt will show the last commands which were used.
Meson provides the following buildtype values, and the flags they enable come after any flags supplied in the environment and therefore take precedence.
plain : no added flags. This is for distributors to supply their own CFLAGS, CXXFLAGS and LDFLAGS. There is no obvious reason to use this in BTFS.
debug : '-g' - this is the default if nothing is specified in either meson.build or the command line. However, it results in large and slow binaries, so we should override it in BTFS.
debugoptimized : '-O2 -g' : this is the default specified in meson.build of some packages.
release : '-O3 -DNDEBUG' (but occasionally a package will force -O2 here)
Although the 'release' buildtype is described as enabling -DNDEBUG, and all CMake Release builds pass that, it has so far only been observed (in verbose builds) for Mesa-23.1.8. That suggests that it might only be used when there are debug assertions present.
The -DNDEBUG flag can also be provided by passing -Db_ndebug=true.
To see the details of the commands which are being run in a package using meson, use 'ninja -v'.
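Putting this together, a minimal sketch of a meson build in BTFS style (the 'build' directory name is only a convention):

meson setup --prefix=/usr --buildtype=release build &&
ninja -C build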
Most released rustc programs are provided as crates (source tarballs) which will query a server to check current versions of dependencies and then download them as necessary. These packages are built using cargo build --release. In theory, you can manipulate the RUSTFLAGS to change the optimize-level (the default is 3, like -O3, e.g. -Copt-level=3) or to force it to build for the machine it is being compiled on, using -Ctarget-cpu=native, but in practice this seems to make no significant difference.
If you find an interesting rustc program which is only provided as unpackaged source, you should at least specify RUSTFLAGS=-Copt-level=2, otherwise it will do an unoptimized compile with debug info and run much slower.
The rust developers seem to assume that everyone will compile on a machine dedicated to producing builds, so by default all CPUs are used. This can often be worked around, either by exporting CARGO_BUILD_JOBS=<N> or by passing --jobs <N> to cargo. For compiling rustc itself, specifying --jobs <N> on invocations of x.py (together with the CARGO_BUILD_JOBS environment variable, which looks like a "belt and braces" approach but seems to be necessary) mostly works. The exception is running the tests when building rustc; some of them will nevertheless use all online CPUs, at least as of rustc-1.42.0.
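As an illustration only (the job count of 4 is an arbitrary assumption), limiting CPU usage while building a crate with a sensible optimize-level might look like:

export CARGO_BUILD_JOBS=4
RUSTFLAGS=-Copt-level=2 cargo build --release --jobs 4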
Many people will prefer to optimize compiles as they see fit, by providing CFLAGS or CXXFLAGS. For an introduction to the options available with gcc and g++ see https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html and https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html and info gcc.
Some packages default to '-O2 -g', others to '-O3 -g', and if CFLAGS or CXXFLAGS are supplied they might be added to the package's defaults, replace the package's defaults, or even be ignored. There are details on some desktop packages which were mostly current in April 2019 at https://www.linuxfromscratch.org/~ken/tuning/ - in particular, README.txt, tuning-1-packages-and-notes.txt, and tuning-notes-2B.txt. The particular thing to remember is that if you want to try some of the more interesting flags you may need to force verbose builds to confirm what is being used.
Clearly, if you are optimizing your own program you can spend time to profile it and perhaps recode some of it if it is too slow. But for building a whole system that approach is impractical. In general, -O3 usually produces faster programs than -O2. Specifying -march=native is also beneficial, but it means that you cannot move the binaries to an incompatible machine - this can also apply to newer machines, not just to older ones. For example, programs compiled for 'amdfam10' run on old Phenoms, Kaveris, and Ryzens, but programs compiled for a Kaveri will not run on a Ryzen because certain op-codes are not present. Similarly, if you build for a Haswell not everything will run on a SandyBridge.
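As a sketch, flags for binaries that will only ever run on the machine building them might be set like this (whether the result is measurably faster is something you will have to test):

export CFLAGS="-O3 -march=native"
export CXXFLAGS="$CFLAGS"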
There are also various other options which some people claim are beneficial. At worst, you get to recompile and test, and then discover that in your usage the options do not provide a benefit.
If building Perl or Python modules, or Qt packages which use qmake, in general the CFLAGS and CXXFLAGS used are those which were used by those 'parent' packages.
Even on desktop systems, there are still a lot of exploitable vulnerabilities. For many of these, the attack comes via javascript in a browser. Often, a series of vulnerabilities are used to gain access to data (or sometimes to pwn, i.e. own, the machine and install rootkits). Most commercial distros will apply various hardening measures.
In the past, there was Hardened LFS where gcc (a much older version) was forced to use hardening (with options to turn some of it off on a per-package basis). The current TFS and BTFS books are carrying forward a part of its spirit by enabling PIE (-fPIE -pie) and SSP (-fstack-protector-strong) as the defaults for GCC and clang. What is being covered here is different - first you have to make sure that the package is indeed using your added flags and not overriding them.
For hardening options which are reasonably cheap, there is some discussion in the 'tuning' link above (occasionally, one or more of these options might be inappropriate for a package). These options are -D_FORTIFY_SOURCE=2 and (for C++) -D_GLIBCXX_ASSERTIONS. On modern machines these should only have a little impact on how fast things run, and often they will not be noticeable.
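As a sketch, these can be appended to whatever flags you already use:

export CFLAGS="$CFLAGS -D_FORTIFY_SOURCE=2"
export CXXFLAGS="$CXXFLAGS -D_FORTIFY_SOURCE=2 -D_GLIBCXX_ASSERTIONS"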
The main distros use much more, such as RELRO (Relocation Read Only) and perhaps -fstack-clash-protection. You may also encounter the so-called “userspace retpoline” (-mindirect-branch=thunk etc.) which is the equivalent of the spectre mitigations applied to the linux kernel in late 2018. The kernel mitigations caused a lot of complaints about lost performance; if you have a production server, you might wish to consider testing that, along with the other available options, to see if performance is still sufficient.
Whilst gcc has many hardening options, clang/LLVM's strengths lie elsewhere. Some options which gcc provides are said to be less effective in clang/LLVM.
Should I install XXX in /usr or /usr/local?
This is a question without an obvious answer for a TFS-based system.
In traditional Unix systems, /usr usually contains files that come with the system distribution, and the /usr/local tree is free for the local administrator to manage. The only really hard and fast rule is that Unix distributions should not touch /usr/local, except perhaps to create the basic directories within it.
With Linux distributions like Red Hat, Debian, etc., a possible rule is that /usr is managed by the distribution's package system and /usr/local is not. This way the package manager's database knows about every file within /usr.
TFS users build their own system and so deciding where the system ends and local files begin is not straightforward. So the choice should be made in order to make things easier to administer. There are several reasons for dividing files between /usr and /usr/local.
On a network of several machines all running TFS, or mixed TFS and other Linux distributions, /usr/local could be used to hold packages that are common between all the computers in the network. It can be NFS mounted or mirrored from a single server. Here local indicates local to the site.
On a network of several computers all running an identical TFS system, /usr/local could hold packages that are different between the machines. In this case local refers to the individual computers.
Even on a single computer, /usr/local can be useful if you have several distributions installed simultaneously, and want a place to put packages that will be the same on all of them.
Or you might regularly rebuild your TFS, but want a place to put files that you don't want to rebuild each time. This way you can wipe the TFS file system and start from a clean partition every time without losing everything.
Some people ask why not use your own directory tree, e.g., /usr/site, rather than /usr/local?
There is nothing stopping you; many sites do make their own trees. However, it makes installing new software more difficult. Automatic installers often look for dependencies in /usr and /usr/local, and if the file they are looking for is in /usr/site instead, the installer will probably fail unless you specifically tell it where to look.
What is the BTFS position on this?
All of the BTFS instructions install programs in /usr, with optional instructions to install into /opt for some specific packages.
As you follow the various sections in the book, you will observe that the book occasionally includes patches that are required for a successful and secure installation of the packages. The general policy of the book is to include patches that fall in one of the following criteria:
Fixes a compilation problem.
Fixes a security problem.
Fixes a broken functionality.
In short, the book only includes patches that are either required or recommended. There is a Patches subproject which hosts various patches (including the patches referenced in the books) to enable you to configure your TFS the way you like it.
The BTFS Bootscripts package contains the init scripts that are used throughout the book. It is assumed that you will be using the BTFS Bootscripts package in conjunction with a compatible TFS-Bootscripts package. Refer to ../../../../lfs/view/development/chapter09/bootscripts.html for more information on the TFS-Bootscripts package.
Package Information
The BTFS Bootscripts package will be used throughout the BTFS book for startup scripts. Unlike TFS, each init script has a separate install target in the BTFS Bootscripts package. It is recommended you keep the package source directory around until completion of your BTFS system. When a script is requested from BTFS Bootscripts, simply change to the directory and, as the root user, execute the given make install-<init-script> command. This command installs the init script to its proper location (along with any auxiliary configuration scripts) and also creates the appropriate symlinks to start and stop the service at the appropriate run-level.
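For example, to install a hypothetical sshd bootscript (the directory name depends on the version of the package you unpacked), as the root user:

cd blfs-bootscripts-<version> &&
make install-sshd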
You should review each bootscript before installation to ascertain that it satisfies your needs. Also verify that the start and stop symlinks it creates match your preferences.
From time to time the bootscripts are updated to accommodate new packages or to make minor corrections. All versions of the bootscripts are located at https://anduin.linuxfromscratch.org/BLFS/blfs-bootscripts/.
In TFS and BTFS, many packages use an internally shipped copy of libtool to build on a variety of Unix platforms. This includes platforms such as AIX, Solaris, IRIX, HP-UX, and Cygwin as well as Linux. The origins of this tool are quite dated. It was intended to manage libraries on systems with less advanced capabilities than a modern Linux system.
On a Linux system, libtool specific files are generally unneeded. Normally libraries are specified in the build process during the link phase. Since a Linux system uses the Executable and Linkable Format (ELF) for executables and dynamic libraries, the information needed to complete the task is embedded in the files. Both the linker and the program loader can query the appropriate files and properly link or execute the program.
Static libraries are rarely used in TFS and BTFS. And, nowadays most packages store the information needed for linking against a static library in a .pc file, instead of relying on libtool. A pkg-config --static --libs command will output the flags the linker needs to link against a static library, without any libtool magic.
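For example, for a hypothetical static library libfoo.a shipping a foo.pc file:

pkg-config --static --libs foo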
The problem is that libtool usually creates one or more text files for package libraries called libtool archives. These small files have a ".la" extension and contain information that is similar to that embedded in the libraries or pkg-config files. When building a package that uses libtool, the process automatically looks for these files. Sometimes a .la file contains the name or path of a static library that was used during the build but not installed; the build process will then break because the .la file refers to something nonexistent on the system. Similarly, if a package is updated and no longer uses the .la file, then stale .la files left behind can break the build process.
The solution is to remove the .la files. However, there is a catch. Some packages, such as ImageMagick-7.1.0-61, use a libtool function, lt_dlopen, to load libraries as needed during execution and resolve their dependencies at run time. In this case, the .la files should remain.
The script below removes all unneeded .la files and saves them in a directory (/var/local/la-files by default) that is not in the normal library path. It also searches all pkg-config files (.pc) for embedded references to .la files and fixes them to be the conventional library references needed when an application or library is built. It can be run as needed to clean up the directories that may be causing problems.
cat > /usr/sbin/remove-la-files.sh << "EOF"
#!/bin/bash
# /usr/sbin/remove-la-files.sh
# Written for Beyond Linux From Scratch
# by Bruce Dubbs <bdubbs@linuxfromscratch.org>
# Make sure we are running with root privs
if test "${EUID}" -ne 0; then
echo "Error: $(basename ${0}) must be run as the root user! Exiting..."
exit 1
fi
# Make sure PKG_CONFIG_PATH is set if discarded by sudo
source /etc/profile
OLD_LA_DIR=/var/local/la-files
mkdir -p $OLD_LA_DIR
# Only search directories in /opt, but not symlinks to directories
OPTDIRS=$(find /opt -mindepth 1 -maxdepth 1 -type d)
# Move any found .la files to a directory out of the way
find /usr/lib $OPTDIRS -name "*.la" ! -path "/usr/lib/ImageMagick*" \
-exec mv -fv {} $OLD_LA_DIR \;
###############
# Fix any .pc files that may have .la references
STD_PC_PATH='/usr/lib/pkgconfig
/usr/share/pkgconfig
/usr/local/lib/pkgconfig
/usr/local/share/pkgconfig'
# For each directory that can have .pc files
for d in $(echo $PKG_CONFIG_PATH | tr : ' ') $STD_PC_PATH; do
# For each pc file
for pc in $d/*.pc ; do
if [ $pc == "$d/*.pc" ]; then continue; fi
# Check each word in a line with a .la reference
for word in $(grep '\.la' $pc); do
if echo $word | grep -q '\.la$'; then
mkdir -p $d/la-backup
cp -fv $pc $d/la-backup
basename=$(basename $word )
libref=$(echo $basename|sed -e 's/^lib/-l/' -e 's/\.la$//')
# Fix the .pc file
sed -i "s:$word:$libref:" $pc
fi
done
done
done
EOF
chmod +x /usr/sbin/remove-la-files.sh
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/la-files
The original libraries were simply an archive of routines from which the required routines were extracted and linked into the executable program. These are described as static libraries, with names of the form libfoo.a on UNIX-like operating systems. On some old operating systems they are the only type available.

On almost all Linux platforms there are also “shared” (or equivalently “dynamic”) libraries (with names of the form libfoo.so) – one copy of the library is loaded into virtual memory, and shared by all the programs which call any of its functions. This is space efficient.
In the past, essential programs such as a shell were often linked statically so that some form of minimal recovery system would exist even if shared libraries, such as libc.so, became damaged (e.g. moved to lost+found after fsck following an unclean shutdown). Nowadays, most people use an alternative system install or a USB stick if they have to recover. Journaling filesystems also reduce the likelihood of this sort of problem.
Within the book, there are various places where configure switches such as --disable-static are employed, and other places where the possibility of using system versions of libraries instead of the versions included within another package is discussed. The main reason for this is to simplify updates of libraries.
If a package is linked to a dynamic library, updating to a newer library version is automatic once the newer library is installed and the program is (re)started, provided the library major version is unchanged, e.g. going from libfoo.so.2.0 to libfoo.so.2.1; going to libfoo.so.3 will require recompilation, and ldd can be used to find which programs use the old version. If a program is linked to a static library, the program always has to be recompiled. If you know which programs are linked to a particular static library, this is merely an annoyance. But usually you will not know which programs to recompile.
One way to identify when a static library is used is to deal with it at the end of the installation of every package. Write a script to find all the static libraries in /usr/lib or wherever you are installing to, and either move them to another directory so that they are no longer found by the linker, or rename them so that libfoo.a becomes e.g. libfoo.a.hidden. The static library can then be temporarily restored if it is ever needed, and the package needing it can be identified. This shouldn't be done blindly since many libraries only exist in a static version. For example, some libraries from the glibc and gcc packages should always be present on the system (libc_nonshared.a, libg.a, libpthread_nonshared.a, libssp_nonshared.a, libsupc++.a as of glibc-2.36 and gcc-12.2).
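A minimal sketch of such a script, assuming you only want to hide libraries in /usr/lib and keep the glibc/gcc exceptions listed above:

#!/bin/bash
# Sketch: hide static libraries so packages that need them can be identified.
for lib in /usr/lib/*.a; do
    case $(basename $lib) in
        # These static-only libraries must remain visible (glibc/gcc).
        libc_nonshared.a|libg.a|libpthread_nonshared.a|libssp_nonshared.a|libsupc++.a)
            continue ;;
    esac
    mv -v "$lib" "$lib.hidden"
done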
If you use this approach, you may discover that more packages than you were expecting use a static library. That was the case with nettle-2.4 in its default static-only configuration: it was required by GnuTLS-3.0.19, but also linked into package(s) which used GnuTLS, such as glib-networking-2.32.3.
Many packages put some of their common functions into a static library which is only used by the programs within the package and, crucially, the library is not installed as a standalone library. These internal libraries are not a problem – if the package has to be rebuilt to fix a bug or vulnerability, nothing else is linked to them.
When BTFS mentions system libraries, it means shared versions of libraries. Some packages such as Firefox-115.4.0 and ghostscript-10.01.1 bundle many other libraries in their build tree. The version they ship is often older than the version used in the system, so it may contain bugs – sometimes developers go to the trouble of fixing bugs in their included libraries, other times they do not.
Sometimes, deciding to use system libraries is an easy decision. Other times it may require you to alter the system version (e.g. for libpng-1.6.39 if used for Firefox-115.4.0). Occasionally, a package ships an old library and can no longer link to the current version, but can link to an older version. In this case, BTFS will usually just use the shipped version. Sometimes the included library is no longer developed separately, or its upstream is now the same as the package's upstream and you have no other packages which will use it. In those cases, you'll be led to use the included library even if you usually prefer to use system libraries.
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/libraries
This page contains information about locale related problems and issues. In the following paragraphs you'll find a generic overview of things that can come up when configuring your system for various locales. Many (but not all) existing locale related problems can be classified and fall under one of the headings below. The severity ratings below use the following criteria:
Critical: The program doesn't perform its main function. The fix would be very intrusive; it's better to search for a replacement.
High: Part of the functionality that the program provides is not usable. If that functionality is required, it's better to search for a replacement.
Low: The program works in all typical use cases, but lacks some functionality normally provided by its equivalents.
If there is a known workaround for a specific package, it will appear on that package's page. For the most recent information about locale related issues for individual packages, check the User Notes in the BLFS Wiki.
Severity: Critical
Some programs require the user to specify the character encoding for their input or output data and present only a limited choice of encodings. This is the case for the -X option in Enscript-1.6.6, the -input-charset option in unpatched Cdrtools-3.02a09, and the character sets offered for display in the menu of Links-2.29. If the required encoding is not in the list, the program usually becomes completely unusable. For non-interactive programs, it may be possible to work around this by converting the document to a supported input character set before submitting it to the program.
A solution to this type of problem is to implement the necessary support for the missing encoding as a patch to the original program or to find a replacement.
Severity: High for non-text documents, low for text documents
Some programs, nano-7.2 or JOE-4.6 for example, assume that documents are always in the encoding implied by the current locale. While this assumption may be valid for the user-created documents, it is not safe for external ones. When this assumption fails, non-ASCII characters are displayed incorrectly, and the document may become unreadable.
If the external document is entirely text based, it can be converted to the current locale encoding using the iconv program.
For documents that are not text-based, this is not possible. In fact, the assumption made in the program may be completely invalid for documents where the Microsoft Windows operating system has set de facto standards. An example of this problem is ID3v1 tags in MP3 files (see the BLFS Wiki ID3v1Coding page for more details). For these cases, the only solution is to find a replacement program that doesn't have the issue (e.g., one that will allow you to specify the assumed document encoding).
Among BTFS packages, this problem applies to nano-7.2, JOE-4.6, and all media players except Audacious-4.3.
Another problem in this category is when someone cannot read the documents you've sent them because their operating system is set up to handle character encodings differently. This often happens when the other person is using Microsoft Windows, which only provides one character encoding for a given country. For example, this causes problems with UTF-8 encoded TeX documents created in Linux. On Windows, most applications will assume that these documents have been created using the default Windows 8-bit encoding.
In extreme cases, Windows encoding compatibility issues may be solved only by running Windows programs under Wine.
Severity: Critical
The POSIX standard mandates that the filename encoding is the encoding implied by the current LC_CTYPE locale category. This information is well-hidden on the page which specifies the behavior of the Tar and Cpio programs. Some programs get it wrong by default (or simply don't have enough information to get it right). The result is that they create filenames which are not subsequently shown correctly by ls, or they refuse to accept filenames that ls shows properly. For the GLib-2.76.2 library, the problem can be corrected by setting the G_FILENAME_ENCODING environment variable to the special "@locale" value. Glib2 based programs that don't respect that environment variable are buggy.
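For example:

export G_FILENAME_ENCODING=@locale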
Zip-3.0 and UnZip-6.0 have this problem because they hard-code the expected filename encoding. UnZip contains a hard-coded conversion table between the CP850 (DOS) and ISO-8859-1 (UNIX) encodings and uses this table when extracting archives created under DOS or Microsoft Windows. However, this assumption only works for those in the US and not for anyone using a UTF-8 locale. Non-ASCII characters will be mangled in the extracted filenames.
The general rule for avoiding this class of problems is to avoid installing broken programs. If this is impossible, the convmv command-line tool can be used to fix filenames created by these broken programs, or intentionally mangle the existing filenames to meet the broken expectations of such programs.
In other cases, a similar problem is caused by importing filenames from a system using a different locale with a tool that is not locale-aware (e.g. OpenSSH). In order to avoid mangling non-ASCII characters when transferring files to a system with a different locale, any of the following methods can be used:
Transfer anyway, fix the damage with convmv.
On the sending side, create a tar archive with the --format=posix switch passed to tar (this will be the default in a future version of tar).
Mail the files as attachments. Mail clients specify the encoding of attached filenames.
Write the files to a removable disk formatted with a FAT or FAT32 filesystem.
Transfer the files using Samba.
Transfer the files via FTP using RFC2640-aware server (this currently means only wu-ftpd, which has bad security history) and client (e.g., lftp).
The last four methods work because the filenames are automatically converted from the sender's locale to UNICODE and stored or sent in this form. They are then transparently converted from UNICODE to the recipient's locale encoding.
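For the first of these methods, a hypothetical repair of filenames which arrived as ISO-8859-1 on a UTF-8 system might be (convmv only previews the changes by default; --notest makes it actually rename):

convmv -f iso-8859-1 -t utf-8 -r --notest /path/to/files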
Severity: High or critical
Many programs were written in an older era where multibyte locales were not common. Such programs assume that the C "char" data type, which is one byte, can be used to store single characters. Further, they assume that any sequence of characters is a valid string and that every character occupies a single character cell. Such assumptions completely break in UTF-8 locales. The visible manifestation is that the program truncates strings prematurely (i.e., at 80 bytes instead of 80 characters). Terminal-based programs don't place the cursor correctly on the screen, don't react to the "Backspace" key by erasing one character, and leave junk characters around when updating the screen, usually turning the screen into a complete mess.
Fixing this kind of problem is a tedious task from a programmer's point of view, like all other cases of retrofitting new concepts into an old flawed design. In this case, one has to redesign all data structures in order to accommodate the fact that a complete character may span a variable number of "char"s (or switch to wchar_t and convert as needed). Also, for every call to "strlen" and similar functions, find out whether a number of bytes, a number of characters, or the width of the string was really meant. Sometimes it is faster to write a program with the same functionality from scratch.
Among BTFS packages, this problem applies to xine-ui-0.99.14 and all the shells.
Severity: Low
TFS expects that manual pages are in the language-specific (usually 8-bit) encoding, as specified on the TFS Man DB page. However, some packages install translated manual pages in UTF-8 encoding (e.g., Shadow, already dealt with), or manual pages in languages not in the table. Not all BTFS packages have been audited for conformance with the requirements set in TFS (the large majority have been checked, and fixes placed in the book for packages known to install non-conforming manual pages). If you find a manual page installed by any BTFS package that is obviously in the wrong encoding, please remove or convert it as needed, and report this to the BTFS team as a bug.
You can easily check your system for any non-conforming manual pages by copying the following short shell script to some accessible location,
#!/bin/sh
# Begin checkman.sh
# Usage: find /usr/share/man -type f | xargs checkman.sh
for a in "$@"
do
# echo "Checking $a..."
# Pure-ASCII manual page (possibly except comments) is OK
grep -v '.\\"' "$a" | iconv -f US-ASCII -t US-ASCII >/dev/null 2>&1 \
&& continue
# Non-UTF-8 manual page is OK
iconv -f UTF-8 -t UTF-8 "$a" >/dev/null 2>&1 || continue
# Found a UTF-8 manual page, bad.
echo "UTF-8 manual page: $a" >&2
done
# End checkman.sh
and then issuing the following command (modify the command below if the checkman.sh script is not in a directory in your PATH environment variable):
find /usr/share/man -type f | xargs checkman.sh
Note that if you have manual pages installed in any location other than /usr/share/man (e.g., /usr/local/share/man), you must modify the above command to include this additional location.
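For example:

find /usr/share/man /usr/local/share/man -type f | xargs checkman.sh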
The packages that are installed in this book are only the tip of the iceberg. We hope that the experience you gained with the TFS book and the BTFS book will give you the background needed to compile, install and configure packages that are not included in this book.
When you want to install a package to a location other than / or /usr, you are installing outside the default environment settings on most machines. The following examples should assist you in determining how to correct this situation. The examples cover the complete range of settings that may need updating, but they are not all needed in every situation.
Expand the PATH to include $PREFIX/bin.

Expand the PATH for root to include $PREFIX/sbin.

Add $PREFIX/lib to /etc/ld.so.conf or expand LD_LIBRARY_PATH to include it. Before using the latter option, check out http://xahlee.info/UnixResource_dir/_/ldpath.html. If you modify /etc/ld.so.conf, remember to update /etc/ld.so.cache by executing ldconfig as the root user.

Add $PREFIX/man to /etc/man_db.conf or expand MANPATH.

Add $PREFIX/info to INFOPATH.

Add $PREFIX/lib/pkgconfig to PKG_CONFIG_PATH. Some packages are now installing .pc files in $PREFIX/share/pkgconfig, so you may have to include this directory also.

Add $PREFIX/include to CPPFLAGS when compiling packages that depend on the package you installed.

Add $PREFIX/lib to LDFLAGS when compiling packages that depend on a library installed by the package.
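As a combined sketch, for a hypothetical package installed with a prefix of /opt/foo (the ld.so.conf and ldconfig steps must be done as the root user):

export PATH=$PATH:/opt/foo/bin:/opt/foo/sbin
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/opt/foo/lib/pkgconfig
echo /opt/foo/lib >> /etc/ld.so.conf &&
ldconfig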
If you are in search of a package that is not in the book, the following are different ways you can search for the desired package.
If you know the name of the package, then search SourceForge for it at https://sourceforge.net/directory/, and search GitHub for it at https://github.com/. Also search Google at https://google.com/. Sometimes a search for the rpm at https://rpmfind.net/ or the deb at https://www.debian.org/distrib/packages#search_packages can also lead to a link to the package.
If you know the name of the executable, but not the package that the executable belongs to, first try a Google search with the name of the executable. If the results are overwhelming, try searching for the given executable in the Debian repository at https://www.debian.org/distrib/packages#search_contents.
Some general hints on handling new packages:
Many of the newer packages follow the ./configure && make && make install process. Help on the options accepted by configure can be obtained via the command ./configure --help.
Most of the packages contain documentation on compiling and installing the package. Some of the documents are excellent, some not so excellent. Check out the homepage of the package for any additional and updated hints for compiling and configuring the package.
If you are having a problem compiling the package, try searching the LFS archives at https://www.linuxfromscratch.org/search.html for the error or if that fails, try searching Google. Often, a distribution will have already solved the problem (many of them use development versions of packages, so they see the changes sooner than those of us who normally use stable released versions). But be cautious - all builders tend to carry patches which are no longer necessary, and to have fixes which are only required because of their particular choices in how they build a package. You may have to search deeply to find a fix for the package version you are trying to use, or even to find the package (names are sometimes not what you might expect, e.g. ghostscript often has a prefix or a suffix in its name), but the following notes might help, particularly for those who, like the editors, are trying to build the latest versions and encountering problems:
Arch https://www.archlinux.org/packages/ - enter the package name in the 'Keywords' box, select the package name, select the 'Source Files' field, and then select the PKGBUILD entry to see how they build this package.
Debian https://ftp.debian.org/debian/pool (use your country's version if there is one) - the source will be in .tar.gz tarballs (either the original upstream .orig source, or else a dfsg containing those parts which comply with debian's free software guidelines) accompanied by versioned .diff.gz or .tar.gz additions. These additions often show how the package is built, and may contain patches. In the .diff.gz versions, any patches create files in debian/patches.
Fedora package source gets reorganized from time to time. At the moment the package source for rpms is at https://src.fedoraproject.org/projects/rpms/%2A and from there you can try putting a package name in the search box. If the package is found, you can look at the files (specfile to control the build, various patches) or the commits. If that fails, you can download an srpm (source rpm) and use rpm2cpio on it (see the Tip at the bottom of the page). For rpms go to https://dl.fedoraproject.org/pub/fedora/linux/ and then choose which repo you wish to look at - development/rawhide is the latest development, or choose releases for what was shipped in a release, updates for updates to a release, or updates/testing for the latest updates which might work or might have problems.
Gentoo - First use a search engine to find an ebuild which looks as if it will fix the problem, or search at https://packages.gentoo.org/ - use the search field. Note where the package lives in the portage hierarchy, e.g. app-something/. In general you can treat the ebuild as a sort of pseudo-code / shell combination with some functions you can hazard a guess at, such as dodoc. If the fix is just a sed, try it. However, in most cases the fix will use a patch. To find the patch, use a gentoo-portage mirror: two links to mirrors in the U.S.A. which seem to usually be up to date are https://mirror.rackspace.com/gentoo-portage/ and https://mirror.steadfast.net/gentoo-portage/. Navigate down the tree to the package, then to the files/ directory to look for the patch. Sometimes a portage mirror has not yet been updated, particularly for a recent new patch. In a few cases, gentoo batches the patches into a tarball and the ebuild will have a link of the form https://dev.gentoo.org/~${PATCH_DEV}/distfiles/${P}-patches-${PATCH_VER}.tar.xz : here, look for PATCH_DEV and PATCH_VER in the ebuild and form the full URL in your browser or for wget; remember the '~' before the developer's ID and note that trying to search the earlier levels of the URL in a browser may drop you at www.gentoo.org or return 403 (forbidden).
openSUSE provide a rolling release, some package versions are in https://download.opensuse.org/source/tumbleweed/repo/oss/src/ but others are in ../update/openSUSE-current/src - the source only seems to be available in source rpms.
Slackware - the official package browser is currently broken. The site at https://slackbuilds.org/ has current and previous versions in their unofficial repository with links to homepages, downloads, and some individual files, particularly the .SlackBuild files.
Ubuntu ftp://ftp.ubuntu.com/ubuntu/pool/ - see the debian notes above.
If everything else fails, try the blfs-support mailing-list.
If you have found a package that is only available in .deb or .rpm format, there are two small scripts, rpm2targz and deb2targz, available at https://anduin.linuxfromscratch.org/BLFS/extras/deb2targz.tar.bz2 and https://anduin.linuxfromscratch.org/BLFS/extras/rpm2targz.tar.bz2, to convert the archives into a simple tar.gz format.
You may also find an rpm2cpio script useful. The Perl version in the linux kernel archives at https://lore.kernel.org/all/20021016121842.GA2292@ncsu.edu/2-rpm2cpio works for most source rpms. The rpm2targz script will use an rpm2cpio script or binary if one is on your path. Note that rpm2cpio will unpack a source rpm in the current directory, giving a tarball, a spec file, and perhaps patches or other files.
The intention of TFS is to provide a basic system which you can build upon. There are several things about tidying up the system which many people wonder about once they have done the base install. We hope to cover these issues in this chapter.
Most people coming from non-Unix like backgrounds to Linux find the concept of text-only configuration files slightly strange. In Linux, just about all configuration is done via the manipulation of text files. The majority of these files can be found in the /etc hierarchy. There are often graphical configuration programs available for different subsystems but most are simply pretty front ends to the process of editing a text file. The advantage of text-only configuration is that you can edit parameters using your favorite text editor, whether that be vim, emacs, or any other editor.
The first task is making a recovery boot device in Creating a Custom Boot Device because it's the most critical need. Hardware issues relevant to firmware and other devices are addressed next. The system is then configured to ease addition of new users, because this can affect the choices you make in the two subsequent topics—The Bash Shell Startup Files and The vimrc Files.
The remaining topics, Customizing your Logon with /etc/issue and Random number generation are then addressed, in that order. They don't have much interaction with the other topics in this chapter.
This section is really about creating a rescue device. As the name rescue implies, the host system has a problem, often lost partition information or corrupted file systems, that prevents it from booting and/or operating normally. For this reason, you must not depend on resources from the host being "rescued". To presume that any given partition or hard drive will be available is a risky presumption.
In a modern system, there are many devices that can be used as a rescue device: floppy, cdrom, usb drive, or even a network card. Which one you use depends on your hardware and your BIOS. In the past, a rescue device was thought to be a floppy disk. Today, many systems do not even have a floppy drive.
Building a complete rescue device is a challenging task. In many ways, it is equivalent to building an entire LFS system. In addition, it would be a repetition of information already available. For these reasons, the procedures for a rescue device image are not presented here.
The software of today's systems has grown large. Linux 2.6 no longer supports booting directly from a floppy. In spite of this, there are solutions available using older versions of Linux. One of the best is Tom's Root/Boot Disk available at http://www.toms.net/rb/. This will provide a minimal Linux system on a single floppy disk and provides the ability to customize the contents of your disk if necessary.
There are several sources that can be used for a rescue CD-ROM. Just about any commercial distribution's installation CD-ROMs or DVDs will work. These include RedHat, Ubuntu, and SuSE. One very popular option is Knoppix.
Also, the LFS Community has developed its own LiveCD available at https://www.linuxfromscratch.org/livecd/. This LiveCD is no longer capable of building an entire LFS/BLFS system, but it is still a good rescue CD-ROM. If you download the ISO image, use xorriso to copy the image to a CD-ROM.
The instructions for using GRUB2 to make a custom rescue CD-ROM are also available in LFS Chapter 10.
A USB Pen drive, sometimes called a Thumb drive, is recognized by Linux as a SCSI device. Using one of these devices as a rescue device has the advantage that it is usually large enough to hold more than a minimal boot image. You can save critical data to the drive as well as use it to diagnose and recover a damaged system. Booting such a drive requires BIOS support, but building the system consists of formatting the drive, adding GRUB as well as the Linux kernel and supporting files.
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/CreatingaCustomBootDevice
A TFS system can be used without a graphical desktop, and unless or until you install a graphical environment you will have to work in the console. Most, if not all, PCs boot with an 8x16 font - whatever the actual screen size. There are a few things you can do to alter the display on the console. Most of them involve changing the font, but the first alters the command line used by grub.
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/aboutconsolefonts
Modern screens often have a lot more pixels than the screens used in the past. If your screen is 1600 pixels wide, an 8x16 font will give you 200 columns of text - unless your monitor is enormous, the text will be tiny. One of the ways to work around this is to tell grub to use a smaller size, such as 1024x768 or 800x600 or even 640x480. Even if your screen does not have a 4:3 aspect ratio, this should work.
To try this, you can reboot and edit grub's command-line to insert a 'video=' parameter between the 'root=/dev/sdXn' and 'ro', for example root=/dev/sda2 video=1024x768 ro, based on the example in TFS section 10.4.4: ../../../../lfs/view/development/chapter10/grub.html. If you decide that you wish to do this, you can then (as the root user) edit /boot/grub/grub.cfg.
In TFS the kbd package is used. The fonts it provides are PC Screen Fonts, usually called PSF, and they were installed into /usr/share/consolefonts. Where these include a unicode mapping table, the file suffix is often changed to .psfu, although packages such as terminus-font (see below) do not add the 'u'. These fonts are usually compressed with gzip to save space, but that is not essential.
The initial PC text screens had 8 colours, or 16 colours if the bright versions of the original 8 colours were used. A PSF font can include up to 256 characters (technically, glyphs) while allowing 16 colours, or up to 512 characters (in which case, the bright colours will not be available). Clearly, these console fonts cannot be used to display CJK text - that would need thousands of available glyphs.
Some fonts in kbd can cover more than 512 codepoints ('characters'), with varying degrees of fidelity: unicode contains several whitespace codepoints which can all be mapped to a space, varieties of dashes can be mapped to a minus sign, smart quotes can map to the regular ASCII quotes rather than to whatever is used for "codepoint not present or invalid", and those cyrillic or greek letters which look like latin letters can be mapped onto them, so 'A' can also do duty for cyrillic A and greek Alpha, and 'P' can also do duty for cyrillic ER and greek RHO. Unfortunately, where a font has been created from a BDF file (the method in terminus and debian's console-setup ) such mapping of additional codepoints onto an existing glyph is not always done, although the terminus ter-vXXn fonts do this well.
There are over 120 combinations of font and size in kbd: often a font is provided at several character sizes, and sometimes varieties cover different subsets of unicode. Most are 8 pixels wide, in heights from 8 to 16 pixels, but there are a few which are 9 pixels wide, some others which are 12x22, and even one (latarcyrheb-sun32.psfu) which has been scaled up to 16x32. Using a bigger font is another way of making text on a large screen easier to read.
You can test fonts as a normal user. If you have a font which has not been installed, you can load it with:
setfont /path/to/yourfont.ext
For the fonts already installed you only need the name, so using gr737a-9x16.psfu.gz as an example:
setfont gr737a-9x16
To see the glyphs in the font, use:
showconsolefont
If the font looks as if it might be useful, you can then go on to test it more thoroughly.
When you find a font which you wish to use, edit /etc/sysconfig/console (as the root user) as described in TFS section 9.6.5: ../../../../lfs/view/development/chapter09/usage.html.
For fonts not supplied with the kbd package, optionally compress the font file(s) with gzip and then install them as the root user.
Although some console fonts are created from BDF files, which is a text format with hex values for the pixels in each row of the character, there are more-modern tools available for editing psf fonts. The psftools package allows you to dump a font to a text representation with a dash for a pixel which is off (black) and a hash for a pixel which is on (white). You can then edit the text file to add more characters, or reshape them, or map extra codepoints onto them, and then create a new psf font with your changes.
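As a sketch, assuming psftools is installed and provides the psf2txt and txt2psf converters, and that yourfont.psf is a font you wish to modify:

psf2txt yourfont.psf yourfont.txt
# edit yourfont.txt in a text editor (dashes = pixel off, hashes = pixel on), then rebuild:
txt2psf yourfont.txt yourfont-new.psf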
The Terminus Font package provides fixed-width bitmap fonts designed for long (8 hours and more per day) work with computers. Under 'Character variants' on that page is a list of patches (in the alt/ directory). If you are using a graphical browser to look at that page, you can see what the patches do, e.g. 'll2' makes 'l' more visibly different from 'i' and '1'.
By default terminus-fonts will try to create several types of font, and it will fail if bdftopcf from Xorg Applications has not been installed. The configure script is only really useful if you go on to install all the fonts (console and X11 bitmap) to the correct directories, as in a distro. To build only the PSF fonts and their dependencies, run:
make psf
This will create more than 240 ter-*.psf fonts. The 'b' suffix indicates bright, 'n' indicates normal. You can then test them to see if any fit your requirements. Unless you are creating a distro, there seems little point in installing them all.
As an example, to install the last of these fonts, you can gzip it and then, as the root user:
install -v -m644 ter-v32n.psf.gz /usr/share/consolefonts
On some recent PCs it can be necessary, or desirable, to load firmware to make them work at their best. There is a directory, /lib/firmware, where the kernel or kernel drivers look for firmware images.

Currently, most firmware can be found at a git repository: https://git.kernel.org/cgit/linux/kernel/git/firmware/linux-firmware.git/tree/. For convenience, the LFS Project has created a mirror, updated daily, where these firmware files can be accessed via wget or a web browser at https://anduin.linuxfromscratch.org/BLFS/linux-firmware/.
To get the firmware, either point a browser to one of the above repositories and then download the item(s) which you need, or install git-2.40.1 and clone that repository.
For some other firmware, particularly for Intel microcode and certain wifi devices, the needed firmware is not available in the above repository. Some of this will be addressed below, but a search of the Internet for needed firmware is sometimes necessary.
Firmware files are conventionally referred to as blobs because you cannot determine what they will do. Note that firmware is distributed under various licenses which do not permit disassembly or reverse-engineering.
Firmware for PCs falls into four categories:
Updates to the CPU to work around errata, usually referred to as microcode.
Firmware for video controllers. On x86 machines this is required for ATI devices (Radeon and AMDGPU chips) and may be useful for Intel (Skylake and later) and Nvidia (Kepler and later) GPUs.
ATI Radeon and AMDGPU devices all require firmware to be able to use KMS (kernel modesetting - the preferred option) as well as for Xorg. For old radeon chips (before the R600), the firmware is still in the kernel source.
Intel integrated GPUs from Skylake onwards can use firmware for GuC (the Graphics microcontroller), and also for the HuC (HEVC/H265 microcontroller which offloads to the GPU) and the DMC (Display Microcontroller) to provide additional low-power states. The GuC and HuC have had a chequered history in the kernel and updated firmware may be disabled by default, depending on your kernel version. Further details may be found at 01.org and Arch linux.
Nvidia GPUs from Kepler onwards require signed firmware, otherwise the nouveau driver is unable to provide hardware acceleration. Nvidia has now released firmware up to Ampere (GeForce30 series) to linux-firmware. Note that faster clocks than the default are not enabled by the released firmware.
Firmware updates for wired network ports. Mostly they work even without the updates, but probably they will work better with the updated firmware. For some modern laptops, firmware for both wired ethernet (e.g. rtl_nic) and also for bluetooth devices (e.g. qca) is required before the wired network can be used.
Firmware for other devices, such as wifi. These devices are not required for the PC to boot, but need the firmware before these devices can be used.
Although not needed to load a firmware blob, the following tools may be useful for determining, obtaining, or preparing the needed firmware in order to load it into the system: cpio-2.13, git-2.40.1, pciutils-3.9.0, and Wget-1.21.3.
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/aboutfirmware
In general, microcode can be loaded by the BIOS or UEFI, and it might be updated by upgrading to a newer version of those. On linux, you can also load the microcode from the kernel if you are using an AMD family 10h or later processor (first introduced late 2007), or an Intel processor from 1998 and later (Pentium4, Core, etc), if updated microcode has been released. These updates only last until the machine is powered off, so they need to be applied on every boot.
Intel provide updates of their microcode for Skylake and later processors as new vulnerabilities come to light, and have in the past provided updates for processors from SandyBridge onwards, although those are no longer supported for new fixes. New versions of AMD firmware are rare and usually only apply to a few models, although motherboard manufacturers get AGESA (AMD Generic Encapsulated Software Architecture) updates to change BIOS values, e.g. to support more memory variants, new vulnerability fixes or newer CPUs.
There were two ways of loading the microcode, described as 'early' and 'late'. Early loading happens before userspace has been started, late loading happens after userspace has started. However, late loading is known to be problematic and not supported anymore (see the kernel commit x86/microcode: Taint and warn on late loading.) Indeed, early loading is needed to work around one particular erratum in early Intel Haswell processors which had TSX enabled. (See Intel Disables TSX Instructions: Erratum Found in Haswell, Haswell-E/EP, Broadwell-Y.) Without this update glibc can do the wrong thing in uncommon situations.
In previous versions of this book, late loading of microcode to see if it gets applied was recommended, followed by using an initrd to force early loading. But now that the contents of the Intel microcode tarball are documented, and AMD microcode can be read by a Python script to determine which machines it covers, there is no real reason to use late loading.
It might still be possible to manually force late loading of microcode, but it may cause a kernel malfunction, so you do this at your own risk. You will need to reconfigure your kernel for either method. The instructions here will show you how to create an initrd for early loading. It is also possible to build the same microcode bin file into the kernel, which allows early loading but requires the kernel to be recompiled to update the microcode.
To confirm what processor(s) you have (if more than one, they will be identical) look in /proc/cpuinfo. Determine the decimal values of the cpu family, model and stepping by running the following command (it will also report the current microcode version):
head -n7 /proc/cpuinfo
Convert the cpu family, model and stepping to pairs of hexadecimal digits, and remember the value of the “microcode” field. You can now check if there is any microcode available.
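For example, the conversion to pairs of hexadecimal digits can be done with printf; the values here are those of the Icelake example used below:

printf "%02x-%02x-%02x\n" 6 126 5    # prints 06-7e-05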
If you are creating an initrd to update firmware for different machines, as a distro would do, go down to 'Early loading of microcode' and cat all the Intel blobs to GenuineIntel.bin or cat all the AMD blobs to AuthenticAMD.bin. This creates a larger initrd - for all Intel machines in the 20200609 update the size was 3.0 MB compared to typically 24 KB for one machine.
The first step is to get the most recent version of the Intel microcode. This must be done by navigating to https://github.com/intel/Intel-Linux-Processor-Microcode-Data-Files/releases/ and downloading the latest file there. As of this writing the most secure version of the microcode is microcode-20230214. Extract this file in the normal way; the microcode is in the intel-ucode directory, containing various blobs with names in the form XX-YY-ZZ. There are also various other files, and a releasenote.

In the past, Intel did not provide any details of which blobs had changed versions, but now the releasenote details this. You can compare the microcode version in /proc/cpuinfo with the version for your CPU model in the releasenote to know if there is an update.
The recent firmware for older processors is provided to deal with vulnerabilities which have now been made public, and for some of these such as Microarchitectural Data Sampling (MDS) you might wish to increase the protection by disabling hyperthreading, or alternatively to disable the kernel's default mitigation because of its impact on compile times. Please read the online documentation at https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/index.html.
For an Icelake mobile (described as Intel(R) Core(TM) i7-1065G7 CPU) the relevant values are cpu family 6, model 126, stepping 5, so in this case the required identification is 06-7e-05. The releasenote says the latest microcode for it is versioned 0xb8. If the value of the “microcode” field in /proc/cpuinfo is 0xb8 or greater, it indicates the microcode update is already applied by the BIOS. Otherwise, configure the kernel to support loading Intel microcode, and then proceed to the section called “Early loading of microcode”:
General Setup --->
[*] Initial RAM filesystem and RAM disk (initramfs/initrd) support [CONFIG_BLK_DEV_INITRD]
Processor type and features --->
[*] CPU microcode loading support [CONFIG_MICROCODE]
[*] Intel microcode loading support [CONFIG_MICROCODE_INTEL]
Begin by downloading a container of firmware for your CPU family from https://anduin.linuxfromscratch.org/BLFS/linux-firmware/amd-ucode/. The family is always specified in hex. Families 10h to 14h (16 to 20) are in microcode_amd.bin. Families 15h, 16h, 17h (Zen, Zen+, Zen2) and 19h (Zen3) have their own containers, but very few machines are likely to get updated microcode. Instead, AMD provide an updated AGESA to the motherboard makers, who may provide an updated BIOS using this. There is a Python3 script at https://github.com/AMDESE/amd_ucode_info/blob/master/amd_ucode_info.py. Download that script and run it against the bin file to check which processors have updates.
For the very old Athlon(tm) II X2 in these examples the values were cpu family 16, model 5, stepping 3 giving an identification of Family=0x10 Model=0x05 Stepping=0x03. One line of the amd_ucode_info.py script output describes the microcode version for it:
Family=0x10 Model=0x05 Stepping=0x03: Patch=0x010000c8 Length=960 bytes
If the value of the “microcode” field in /proc/cpuinfo is 0x10000c8 or greater, it indicates the BIOS has already applied the microcode update. Otherwise, configure the kernel to support loading AMD microcode, and then proceed to the section called “Early loading of microcode”:
General Setup --->
[*] Initial RAM filesystem and RAM disk (initramfs/initrd) support [CONFIG_BLK_DEV_INITRD]
Processor type and features --->
[*] CPU microcode loading support [CONFIG_MICROCODE]
[*] AMD microcode loading support [CONFIG_MICROCODE_AMD]
If you have established that updated microcode is available for your system, it is time to prepare it for early loading. This requires an additional package, cpio-2.13 and the creation of an initrd which will need to be added to grub.cfg.
It does not matter where you prepare the initrd, and once it is working you can apply the same initrd to later LFS systems or newer kernels on this same machine, at least until any newer microcode is released. Use the following commands:
mkdir -p initrd/kernel/x86/microcode
cd initrd
For an AMD machine, use the following command (replace <MYCONTAINER> with the name of the container for your CPU's family):
cp -v ../<MYCONTAINER> kernel/x86/microcode/AuthenticAMD.bin
Or for an Intel machine copy the appropriate blob using this command:
cp -v ../intel-ucode/<XX-YY-ZZ> kernel/x86/microcode/GenuineIntel.bin
Now prepare the initrd:
find . | cpio -o -H newc > /boot/microcode.img
You now need to add a new entry to /boot/grub/grub.cfg and here you should add a new line after the linux line within the stanza. If /boot is a separate mountpoint:
initrd /microcode.img
or this if it is not:
initrd /boot/microcode.img
If you are already booting with an initrd (see the section called “About initramfs”), you should run mkinitramfs again after putting the appropriate blob or container into /lib/firmware. More precisely, put an intel blob in a /lib/firmware/intel-ucode directory or an AMD container in a /lib/firmware/amd-ucode directory before running mkinitramfs. Alternatively, you can have both initrd on the same line, such as initrd /microcode.img /other-initrd.img (adapt that as above if /boot is not a separate mountpoint).
You can now reboot with the added initrd, and then use the following command to check that the early load worked:
dmesg | grep -e 'microcode' -e 'Linux version' -e 'Command line'
If you updated to address vulnerabilities, you can look at the output of the lscpu command to see what is now reported.
The places and times where early loading happens are very different in AMD and Intel machines. First, an example of an Intel (Icelake mobile) with early loading:
[ 0.000000] microcode: microcode updated early to revision 0xb8, date = 2022-08-31
[ 0.000000] Linux version 6.1.11 (xry111@stargazer) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #2 SMP PREEMPT_DYNAMIC Tue Feb 14 23:23:31 CST 2023
[ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-6.1.11-lfs-11.3-rc1 root=PARTUUID=<CLASSIFIED> ro
[ 0.452924] microcode: sig=0x706e5, pf=0x80, revision=0xb8
[ 0.453197] microcode: Microcode Update Driver: v2.2.
A historic AMD example:
[ 0.000000] Linux version 4.15.3 (ken@testserver) (gcc version 7.3.0 (GCC)) #2 SMP Sun Feb 18 02:32:03 GMT 2018
[ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-4.15.3-sda5 root=/dev/sda5 ro
[ 0.307619] microcode: microcode updated early to new patch_level=0x010000c8
[ 0.307678] microcode: CPU0: patch_level=0x010000c8
[ 0.307723] microcode: CPU1: patch_level=0x010000c8
[ 0.307795] microcode: Microcode Update Driver: v2.2.
These instructions do NOT apply to old radeons before the R600 family. For those, the firmware is in the kernel's /lib/firmware/ directory. Nor do they apply if you intend to avoid a graphical setup such as Xorg and are content to use the default 80x25 display rather than a framebuffer.
Early radeon devices only needed a single 2K blob of firmware. Recent devices need several different blobs, and some of them are much bigger. The total size of the radeon firmware directory is over 500K — on a large modern system you can probably spare the space, but it is still redundant to install all the unused files each time you build a system.
A better approach is to install pciutils-3.9.0 and then use lspci to identify which VGA controller is installed.
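For example, to show just the relevant line of the lspci output:
lspci | grep -i vga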
With that information, check the RadeonFeature page of the Xorg wiki ("Decoder ring for engineering vs marketing names") to identify the family and the specific model. You may need to know the family for the Xorg driver in BLFS: Southern Islands and Sea Islands use the radeonsi driver.
Now that you know which controller you are using, consult the Radeon page of the Gentoo wiki which has a table listing the required firmware blobs for the various chipsets. Note that Southern Islands and Sea Islands chips use different firmware for kernel 3.17 and later compared to earlier kernels. Identify and download the required blobs then install them:
mkdir -pv /lib/firmware/radeon &&
cp -v <YOUR_BLOBS> /lib/firmware/radeon
There are actually two ways of installing this firmware. BLFS, in the 'Kernel Configuration for additional firmware' part of the Xorg ATI Driver-22.0.0 section, gives an example of compiling the firmware into the kernel. That is slightly faster to load, but uses more kernel memory. Here we will use the alternative method of making the radeon driver a module. In your kernel config set the following:
Device Drivers --->
Graphics support --->
Direct Rendering Manager --->
[*] Direct Rendering Manager (XFree86 ... support) [CONFIG_DRM]
[M] ATI Radeon [CONFIG_DRM_RADEON]
Loading several large blobs from /lib/firmware takes a noticeable time, during which the screen will be blank. If you do not enable the penguin framebuffer logo, or change the console size by using a bigger font, that probably does not matter. If desired, you can slightly reduce the time if you follow the alternate method of specifying 'y' for CONFIG_DRM_RADEON covered in BLFS at the link above — you must specify each needed radeon blob if you do that.
All video controllers using the amdgpu kernel driver require firmware, whether you will be using the xorg amdgpu driver, the xserver's modesetting driver, or just kernel modesetting to get a console framebuffer larger than 80x25.
Install pciutils-3.9.0 and use that to check the model name (look for 'VGA compatible controller:'). If you have an APU (Accelerated Processing Unit, i.e. CPU and video on the same chip) that will probably tell you the name. If you have a separate amdgpu video card you will need to search to determine which name it uses (e.g. a card described as Advanced Micro Devices, Inc. [AMD/ATI] Baffin [Radeon RX 550 640SP / RX 560/560X] needs Polaris11 firmware). There is a table of "Family, Chipset name, Product name and Firmware" at the end of the Kernel sections on the AMDGPU page of the Gentoo wiki.
Once you have identified the firmware name, install all the relevant files for it. For example, the Baffin card mentioned above has 21 different polaris11* files, APUs such as renoir and picasso have at least 12 files and might gain more in future updates (e.g. the raven APU now has a 13th file, raven_ta.bin).
mkdir -pv /lib/firmware/amdgpu &&
cp -v <YOUR_BLOBS> /lib/firmware/amdgpu
If disk space is not a problem, you could install all the current amdgpu firmware files and not worry about exactly which chipset is installed.
Building the kernel amdgpu driver as a module is recommended. In your kernel .config set at least the following options and review the other AMDGPU options according to your target hardware, for example "ACP (Audio Co-Processor) Configuration":
Device Drivers --->
Graphics support --->
Direct Rendering Manager --->
[*] Direct Rendering Manager (XFree86 ... support) [CONFIG_DRM]
[M] AMD GPU [CONFIG_DRM_AMDGPU]
Display Engine Configuration --->
[*] AMD DC - Enable new display engine (NEW) [CONFIG_DRM_AMD_DC]
As written above at the end of the section on 'Firmware for ATI video chips', loading large blobs from /lib/firmware can take a noticeable time during which the screen will be blank. On a slow machine you might wish to refer to the 'Kernel Configuration for additional firmware' part of Xorg AMDGPU Driver-23.0.0 and compile all the required modules into the kernel to reduce this time, at the cost of using more kernel memory.
Nvidia has released basic signed firmware for recent graphics chips, but significantly after the chips, and its own binary drivers, were first available. For other chips it has been necessary to extract the firmware from the binary driver.
For more exact information about which chips need extracted firmware, see https://nouveau.freedesktop.org/wiki/VideoAcceleration/#firmware.
First, the kernel Nvidia driver must be activated:
Device Drivers --->
Graphics support --->
Direct Rendering Manager --->
<*> Direct Rendering Manager (XFree86 ... support) [CONFIG_DRM]
<*/M> Nouveau (NVIDIA) cards [CONFIG_DRM_NOUVEAU]
If the necessary firmware is available in the nvidia/ directory of linux-firmware, copy it to /lib/firmware/nouveau.
If the firmware has not been made available in linux-firmware, for the old chips mentioned in the nouveau wiki link above ensure you have installed Python-2.7.18 and run the following commands:
wget https://raw.github.com/imirkin/re-vp2/master/extract_firmware.py &&
wget https://us.download.nvidia.com/XFree86/Linux-x86/325.15/NVIDIA-Linux-x86-325.15.run &&
sh NVIDIA-Linux-x86-325.15.run --extract-only &&
python2 extract_firmware.py &&
mkdir -p /lib/firmware/nouveau &&
cp -d nv* vuc-* /lib/firmware/nouveau/
The kernel likes to load firmware for some network drivers, particularly those from Realtek (the /lib/linux-firmware/rtl_nic/ directory), but they generally appear to work without it. Therefore, you can boot the kernel, check dmesg for messages about this missing firmware, and if necessary download the firmware and put it in the specified directory in /lib/firmware so that it will be found on subsequent boots. Note that with current kernels this works whether the driver is compiled in or built as a module; there is no need to build this firmware into the kernel. Here is an example where the R8169 driver has been compiled in but the firmware was not made available. Once the firmware had been provided, there was no mention of it on later boots.
dmesg | grep firmware | grep r8169
[ 7.018028] r8169 0000:01:00.0: Direct firmware load for rtl_nic/rtl8168g-2.fw failed with error -2
[ 7.018036] r8169 0000:01:00.0 eth0: unable to load firmware patch rtl_nic/rtl8168g-2.fw (-2)
Different countries have different regulations on the radio spectrum usage of wireless devices. You can install a firmware to make the wireless devices obey local spectrum regulations, so you won't be questioned by the local authorities or find your wireless NIC jamming the frequencies of other devices (for example, remote controllers). The regulatory database firmware can be downloaded from https://kernel.org/pub/software/network/wireless-regdb/.
To install it, simply extract regulatory.db and regulatory.db.p7s from the tarball into /lib/firmware. Note that either the cfg80211 driver needs to be selected as a module for the regulatory.* files to be loaded, or those files need to be included as firmware into the kernel, as explained above in the section called “Firmware for Video Cards”.
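As a sketch, assuming a downloaded tarball named wireless-regdb-<version>.tar.xz (substitute the actual version), the installation could be:
tar -xf wireless-regdb-<version>.tar.xz &&
cp -v wireless-regdb-<version>/regulatory.db{,.p7s} /lib/firmware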
The access point (AP) would send a country code to your wireless NIC, and wpa_supplicant-2.10 would tell the kernel to load the regulations for this country from regulatory.db, and enforce them. Note that several APs don't send this country code, so you may be locked to a rather restricted usage (especially if you want to use your interface as an AP).
Identifying the correct firmware will typically require you to install pciutils-3.9.0, and then use lspci to identify the device. You should then search online to check which module it uses, which firmware, and where to obtain the firmware; not all of it is in linux-firmware.
If possible, you should begin by using a wired connection when you first boot your LFS system. To use a wireless connection you will need network tools such as iw-5.19, Wireless Tools-29, or wpa_supplicant-2.10.
Firmware may also be needed for other devices such as some SCSI controllers, bluetooth adaptors, or TV recorders. The same principles apply.
Although most devices needed by packages in BLFS and beyond are set up properly by udev using the default rules installed by LFS in /etc/udev/rules.d, there are cases where the rules must be modified or augmented.
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/aboutdevices
If there are multiple sound cards in a system, the "default" sound card becomes random. The method to establish sound card order depends on whether the drivers are modules or not. If the sound card drivers are compiled into the kernel, control is via kernel command line parameters in /boot/grub/grub.cfg. For example, if a system has both an FM801 card and a SoundBlaster PCI card, the following can be appended to the command line:
snd-fm801.index=0 snd-ens1371.index=1
If the sound card drivers are built as modules, the order can be established in the /etc/modprobe.conf file with:
options snd-fm801 index=0
options snd-ens1371 index=1
USB devices usually have two kinds of device nodes associated with them.
The first kind is created by device-specific drivers (e.g., usb_storage/sd_mod or usblp) in the kernel. For example, a USB mass storage device would be /dev/sdb, and a USB printer would be /dev/usb/lp0. These device nodes exist only when the device-specific driver is loaded.
The second kind of device nodes (/dev/bus/usb/BBB/DDD, where BBB is the bus number and DDD is the device number) are created even if the device doesn't have a kernel driver. By using these "raw" USB device nodes, an application can exchange arbitrary USB packets with the device, i.e., bypass the possibly-existing kernel driver.
Access to raw USB device nodes is needed when a userspace program is acting as a device driver. However, for the program to open the device successfully, the permissions have to be set correctly. By default, due to security concerns, all raw USB devices are owned by user root and group usb, and have 0664 permissions (the read access is needed, e.g., for lsusb to work and for programs to access USB hubs). Packages (such as SANE and libgphoto2) containing userspace USB device drivers also ship udev rules that change the permissions of the controlled raw USB devices. That is, rules installed by SANE change permissions for known scanners, but not printers. If a package maintainer forgot to write a rule for your device, report a bug to both BLFS (if the package is there) and upstream, and you will need to write your own rule.
There is one situation when such fine-grained access control with pre-generated udev rules doesn't work. Namely, PC emulators such as KVM, QEMU and VirtualBox use raw USB device nodes to present arbitrary USB devices to the guest operating system (note: patches are needed in order to get this to work without the obsolete /proc/bus/usb mount point described below). Obviously, maintainers of these packages cannot know which USB devices are going to be connected to the guest operating system. You can either write separate udev rules for all needed USB devices yourself, or use the default catch-all "usb" group, members of which can send arbitrary commands to all USB devices.
Before Linux-2.6.15, raw USB device access was performed not with /dev/bus/usb/BBB/DDD device nodes, but with /proc/bus/usb/BBB/DDD pseudofiles. Some applications (e.g., VMware Workstation) still use only this deprecated technique and can't use the new device nodes. For them to work, use the "usb" group, but remember that members will have unrestricted access to all USB devices. To create the fstab entry for the obsolete usbfs filesystem:
usbfs /proc/bus/usb usbfs devgid=14,devmode=0660 0 0
Adding users to the "usb" group is inherently insecure, as they can bypass access restrictions imposed through the driver-specific USB device nodes. For instance, they can read sensitive data from USB hard drives without being in the "disk" group. Avoid adding users to this group, if you can.
Fine-tuning of device attributes such as group name and permissions is possible by creating extra udev rules, matching on something like this. The vendor and product can be found by searching the /sys/devices directory entries or using udevadm info after the device has been attached. See the documentation in the current udev directory of /usr/share/doc for details.
SUBSYSTEM=="usb_device", SYSFS{idVendor}=="05d8", SYSFS{idProduct}=="4002", \
GROUP:="scanner", MODE:="0660"
The above line is used for descriptive purposes only. The scanner udev rules are put into place when installing SANE-1.0.32.
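As a sketch, once the device is attached you can list its attributes with udevadm; the device node here (bus 001, device 003) is just an example:
udevadm info -a -n /dev/bus/usb/001/003 | grep -E 'idVendor|idProduct'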
In some cases, it makes sense to disable udev completely and create static devices. Servers are one example of this situation. Does a server need the capability of handling dynamic devices? Only the system administrator can answer that question, but in many cases the answer will be no.
If dynamic devices are not desired, then static devices must be created on the system. In the default configuration, the /etc/rc.d/rcS.d/S10udev boot script mounts a tmpfs partition over the /dev directory. This problem can be overcome by mounting the root partition temporarily:
If the instructions below are not followed carefully, your system could become unbootable.
mount --bind / /mnt
cp -a /dev/* /mnt/dev
rm /etc/rc.d/rcS.d/{S10udev,S50udev_retry}
umount /mnt
At this point, the system will use static devices upon the next reboot. Create any desired additional devices using mknod.
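For example, to recreate the static node for the first SCSI/SATA disk (a block device with major number 8 and minor number 0) and give it conventional ownership:
mknod -m 660 /dev/sda b 8 0
chown root:disk /dev/sda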
If you want to restore the dynamic devices, recreate the /etc/rc.d/rcS.d/{S10udev,S50udev_retry} symbolic links and reboot again. Static devices do not need to be removed (console and null are always needed) because they are covered by the tmpfs partition. Disk usage for devices is negligible (about 20–30 bytes per entry).
If the initial boot process does not set up the /dev/dvd device properly, it can be installed using the following modification to the default udev rules. As the root user, run:
sed '1d;/SYMLINK.*cdrom/ a\
KERNEL=="sr0", ENV{ID_CDROM_DVD}=="1", SYMLINK+="dvd", OPTIONS+="link_priority=-100"' \
/lib/udev/rules.d/60-cdrom_id.rules > /etc/udev/rules.d/60-cdrom_id.rules
Together, the /usr/sbin/useradd command and /etc/skel directory (both are easy to set up and use) provide a way to assure new users are added to your LFS system with the same beginning settings for things such as the PATH, keyboard processing and other environment variables. Using these two facilities makes it easier to assure this initial state for each new user added to the system.
The /etc/skel directory holds copies of various initialization and other files that may be copied to the new user's home directory when the /usr/sbin/useradd program adds the new user.
The useradd program uses a collection of default values kept in /etc/default/useradd. This file is created in a base LFS installation by the Shadow package. If it has been removed or renamed, the useradd program uses some internal defaults. You can see the default values by running /usr/sbin/useradd -D.
To change these values, simply modify the /etc/default/useradd file as the root user. An alternative to directly modifying the file is to run useradd as the root user while supplying the desired modifications on the command line. Information on how to do this can be found in the useradd man page.
To get started, create an /etc/skel directory and make sure it is writable only by the system administrator, usually root. Creating the directory as root is the best way to go.
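For example, as the root user:
install -v -d -m755 -o root -g root /etc/skel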
The mode of any files from this part of the book that you put in /etc/skel should be writable only by the owner. Also, since there is no telling what kind of sensitive information a user may eventually place in their copy of these files, you should make them unreadable by "group" and "other".
You can also put other files in /etc/skel and different permissions may be needed for them.
Decide which initialization files should be provided in every (or most) new user's home directory. The decisions you make will affect what you do in the next two sections, The Bash Shell Startup Files and The vimrc Files. Some or all of those files will be useful for root, any already-existing users, and new users.
The files from those sections that you might want to place in /etc/skel include .inputrc, .bash_profile, .bashrc, .bash_logout, .dircolors, and .vimrc. If you are unsure which of these should be placed there, just continue to the following sections, read each section and any references provided, and then make your decision.
You will run a slightly modified set of commands for files which are placed in /etc/skel. Each section will remind you of this. In brief, the book's commands have been written for files not added to /etc/skel, and instead just send the results to the user's home directory. If the file is going to be in /etc/skel, change the book's command(s) to send output there instead, and then just copy the file from /etc/skel to the appropriate directories, like /etc, ~, or the home directory of any other user already in the system.
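As an illustrative sketch (the file and user names are placeholders), once a file has been created in /etc/skel you might lock down its mode and distribute it to an existing user like this:
chmod 600 /etc/skel/.bash_profile &&
cp -v /etc/skel/.bash_profile /home/<user> &&
chown <user>:<user> /home/<user>/.bash_profile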
When adding a new user with useradd, use the -m parameter, which tells useradd to create the user's home directory and copy files from /etc/skel (can be overridden) to the new user's home directory. For example (perform as the root user):
useradd -m <newuser>
If you are sharing a /home or /usr/src with another Linux distro (for example, the host distro used for building LFS), you can create a user with the same UID (and the same primary group GID) to keep the file ownership consistent across the systems. First, on the other distro, get the UID of the user and the GID of the user's primary group:
getent passwd <username> | cut -d ':' -f 3,4
The command should output the UID and GID, separated by a colon. Now on the BLFS system, create the primary group and the user:
groupadd -g <GID> <username> &&
useradd -u <UID> -g <username> <username>
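You can then confirm that the IDs match on both systems with:
id <username>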
Throughout BLFS, many packages install programs that run as daemons or in some way should have a user or group name assigned. Generally these names are used to map a user ID (uid) or group ID (gid) for system use, and the specific uid or gid numbers used by these applications are not significant. The exception, of course, is that root has a uid and gid of 0 (zero) that is indeed special. The uid values are stored in /etc/passwd and the gid values are found in /etc/group.
Customarily, Unix systems classify users and groups into two categories: system users and regular users. The system users and groups are given low numbers and regular users and groups have numeric values greater than all the system values. The cutoff for these numbers is found in two parameters in the /etc/login.defs configuration file. The default UID_MIN value is 1000 and the default GID_MIN value is 1000. If a specific uid or gid value is not specified when creating a user with useradd or a group with groupadd, the values assigned will always be above these cutoff values.
Additionally, the Linux Standard Base recommends that system uid and gid values should be below 100.
Below is a table of suggested uid/gid values used in BLFS beyond those defined in a base LFS installation. These can be changed as desired, but provide a suggested set of consistent values.
Table 3.1. UID/GID Suggested Values
Name | uid | gid |
---|---|---|
bin | 1 | |
lp | 9 | |
adm | 16 | |
atd | 17 | 17 |
messagebus | 18 | 18 |
lpadmin | 19 | |
named | 20 | 20 |
gdm | 21 | 21 |
fcron | 22 | 22 |
systemd-journal | 23 | 23 |
apache | 25 | 25 |
smmsp | 26 | 26 |
polkitd | 27 | 27 |
rpc | 28 | 28 |
exim | 31 | 31 |
postfix | 32 | 32 |
postdrop | 33 | |
sendmail | 34 | 34 |
vmailman | 35 | 35 |
news | 36 | 36 |
kdm | 37 | 37 |
fetchmail | 38 | |
mysql | 40 | 40 |
postgres | 41 | 41 |
dovecot | 42 | 42 |
dovenull | 43 | 43 |
ftp | 45 | 45 |
proftpd | 46 | 46 |
vsftpd | 47 | 47 |
rsyncd | 48 | 48 |
sshd | 50 | 50 |
stunnel | 51 | 51 |
dhcpcd | 52 | 52 |
svn | 56 | 56 |
svntest | 57 | |
git | 58 | 58 |
games | 60 | 60 |
kvm | 61 | |
wireshark | 62 | |
lightdm | 63 | 63 |
sddm | 64 | 64 |
lightdm | 65 | 65 |
scanner | 70 | |
colord | 71 | 71 |
systemd-journal-gateway | 73 | 73 |
systemd-journal-remote | 74 | 74 |
systemd-journal-upload | 75 | 75 |
systemd-network | 76 | 76 |
systemd-resolve | 77 | 77 |
systemd-timesync | 78 | 78 |
systemd-coredump | 79 | 79 |
uuidd | 80 | 80 |
systemd-oom | 81 | 81 |
ldap | 83 | 83 |
avahi | 84 | 84 |
avahi-autoipd | 85 | 85 |
netdev | 86 | |
ntp | 87 | 87 |
unbound | 88 | 88 |
plugdev | 90 | |
wheel | 97 | |
anonymous | 98 | |
nobody | 65534 | |
nogroup | 65534 |
The shell program /bin/bash (hereafter referred to as just "the shell") uses a collection of startup files to help create an environment. Each file has a specific use and may affect login and interactive environments differently. The files in the /etc directory generally provide global settings. If an equivalent file exists in your home directory it may override the global settings.
An interactive login shell is started after a successful login, using /bin/login, by reading the /etc/passwd file. This shell invocation normally reads /etc/profile and its private equivalent ~/.bash_profile (or ~/.profile if called as /bin/sh) upon startup.
An interactive non-login shell is normally started at the command-line using a shell program (e.g., [prompt]$ /bin/bash) or by the /bin/su command. An interactive non-login shell is also started with a terminal program such as xterm or konsole from within a graphical environment. This type of shell invocation normally copies the parent environment and then reads the user's ~/.bashrc file for additional startup configuration instructions.
A non-interactive shell is usually present when a shell script is running. It is non-interactive because it is processing a script and not waiting for user input between commands. For these shell invocations, only the environment inherited from the parent shell is used.
The file ~/.bash_logout is not used for an invocation of the shell. It is read and executed when a user exits from an interactive login shell.
Many distributions use /etc/bashrc for system wide initialization of non-login shells. This file is usually called from the user's ~/.bashrc file and is not built directly into bash itself. This convention is followed in this section.
For more information see info bash -- Nodes: Bash Startup Files and Interactive Shells.
Most of the instructions below are used to create files located in the /etc directory structure, which requires you to execute the commands as the root user. If you elect to create the files in users' home directories instead, you should run the commands as an unprivileged user.
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/bash-shell-startup-files
Here is a base /etc/profile. This file starts by setting up some helper functions and some basic parameters. It specifies some bash history parameters and, for security purposes, disables keeping a permanent history file for the root user. It also sets a default user prompt. It then calls small, single purpose scripts in the /etc/profile.d directory to provide most of the initialization.
For more information on the escape sequences you can use for your prompt (i.e., the PS1 environment variable) see info bash -- Node: Printing a Prompt.
cat > /etc/profile << "EOF"
# Begin /etc/profile
# Written for Beyond Linux From Scratch
# by James Robertson <jameswrobertson@earthlink.net>
# modifications by Dagmar d'Surreal <rivyqntzne@pbzpnfg.arg>
# System wide environment variables and startup programs.
# System wide aliases and functions should go in /etc/bashrc. Personal
# environment variables and startup programs should go into
# ~/.bash_profile. Personal aliases and functions should go into
# ~/.bashrc.
# Functions to help us manage paths. Second argument is the name of the
# path variable to be modified (default: PATH)
pathremove () {
local IFS=':'
local NEWPATH
local DIR
local PATHVARIABLE=${2:-PATH}
for DIR in ${!PATHVARIABLE} ; do
if [ "$DIR" != "$1" ] ; then
NEWPATH=${NEWPATH:+$NEWPATH:}$DIR
fi
done
export $PATHVARIABLE="$NEWPATH"
}
pathprepend () {
pathremove $1 $2
local PATHVARIABLE=${2:-PATH}
export $PATHVARIABLE="$1${!PATHVARIABLE:+:${!PATHVARIABLE}}"
}
pathappend () {
pathremove $1 $2
local PATHVARIABLE=${2:-PATH}
export $PATHVARIABLE="${!PATHVARIABLE:+${!PATHVARIABLE}:}$1"
}
export -f pathremove pathprepend pathappend
# Set the initial path
export PATH=/usr/bin
# Attempt to provide backward compatibility with LFS earlier than 11
if [ ! -L /bin ]; then
pathappend /bin
fi
if [ $EUID -eq 0 ] ; then
pathappend /usr/sbin
if [ ! -L /sbin ]; then
pathappend /sbin
fi
unset HISTFILE
fi
# Set up some environment variables.
export HISTSIZE=1000
export HISTIGNORE="&:[bf]g:exit"
# Set some defaults for graphical systems
export XDG_DATA_DIRS=${XDG_DATA_DIRS:-/usr/share/}
export XDG_CONFIG_DIRS=${XDG_CONFIG_DIRS:-/etc/xdg/}
export XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR:-/tmp/xdg-$USER}
# Set up a red prompt for root and a green one for users.
NORMAL="\[\e[0m\]"
RED="\[\e[1;31m\]"
GREEN="\[\e[1;32m\]"
if [[ $EUID == 0 ]] ; then
PS1="$RED\u [ $NORMAL\w$RED ]# $NORMAL"
else
PS1="$GREEN\u [ $NORMAL\w$GREEN ]\$ $NORMAL"
fi
for script in /etc/profile.d/*.sh ; do
if [ -r $script ] ; then
. $script
fi
done
unset script RED GREEN NORMAL
# End /etc/profile
EOF
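The helper functions defined above can also be used interactively or in other scripts. For example (the paths here are hypothetical):
pathappend /opt/somepkg/bin            # add a directory to the end of PATH
pathprepend /opt/somepkg/bin           # add a directory to the front of PATH
pathremove /opt/somepkg/bin            # remove a directory from PATH again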
Now create the /etc/profile.d directory, where the individual initialization scripts are placed:
install --directory --mode=0755 --owner=root --group=root /etc/profile.d
Using the bash completion script below is controversial. Not all users like it. It adds many (usually over 1000) lines to the bash environment and makes it difficult to use the 'set' command to examine simple environment variables. Omitting this script does not interfere with the ability of bash to use the tab key for file name completion.
This script imports bash completion scripts, installed by many other BLFS packages, to allow TAB command line completion.
cat > /etc/profile.d/bash_completion.sh << "EOF"
# Begin /etc/profile.d/bash_completion.sh
# Import bash completion scripts
# If the bash-completion package is installed, use its configuration instead
if [ -f /usr/share/bash-completion/bash_completion ]; then
# Check for interactive bash and that we haven't already been sourced.
if [ -n "${BASH_VERSION-}" -a -n "${PS1-}" -a -z "${BASH_COMPLETION_VERSINFO-}" ]; then
# Check for recent enough version of bash.
if [ ${BASH_VERSINFO[0]} -gt 4 ] || \
[ ${BASH_VERSINFO[0]} -eq 4 -a ${BASH_VERSINFO[1]} -ge 1 ]; then
[ -r "${XDG_CONFIG_HOME:-$HOME/.config}/bash_completion" ] && \
. "${XDG_CONFIG_HOME:-$HOME/.config}/bash_completion"
if shopt -q progcomp && [ -r /usr/share/bash-completion/bash_completion ]; then
# Source completion code.
. /usr/share/bash-completion/bash_completion
fi
fi
fi
else
# bash-completions are not installed, use only bash completion directory
if shopt -q progcomp; then
for script in /etc/bash_completion.d/* ; do
if [ -r $script ] ; then
. $script
fi
done
fi
fi
# End /etc/profile.d/bash_completion.sh
EOF
Make sure that the directory exists:
install --directory --mode=0755 --owner=root --group=root /etc/bash_completion.d
For a more complete installation, see https://wiki.linuxfromscratch.org/blfs/wiki/bash-shell-startup-files#bash-completions.
This script uses the ~/.dircolors and /etc/dircolors files to control the colors of file names in a directory listing. They control colorized output of things like ls --color. The explanation of how to initialize these files is at the end of this section.
cat > /etc/profile.d/dircolors.sh << "EOF"
# Setup for /bin/ls and /bin/grep to support color, the alias is in /etc/bashrc.
if [ -f "/etc/dircolors" ] ; then
eval $(dircolors -b /etc/dircolors)
fi
if [ -f "$HOME/.dircolors" ] ; then
eval $(dircolors -b $HOME/.dircolors)
fi
alias ls='ls --color=auto'
alias grep='grep --color=auto'
EOF
This script adds some useful paths to the PATH and can be used to customize other PATH related environment variables (e.g. LD_LIBRARY_PATH, etc.) that may be needed for all users.
cat > /etc/profile.d/extrapaths.sh << "EOF"
if [ -d /usr/local/lib/pkgconfig ] ; then
pathappend /usr/local/lib/pkgconfig PKG_CONFIG_PATH
fi
if [ -d /usr/local/bin ]; then
pathprepend /usr/local/bin
fi
if [ -d /usr/local/sbin -a $EUID -eq 0 ]; then
pathprepend /usr/local/sbin
fi
if [ -d /usr/local/share ]; then
pathprepend /usr/local/share XDG_DATA_DIRS
fi
# Set some defaults before other applications add to these paths.
pathappend /usr/share/man MANPATH
pathappend /usr/share/info INFOPATH
EOF
This script sets up the default inputrc configuration file. If the user does not have individual settings, it uses the global file.
cat > /etc/profile.d/readline.sh << "EOF"
# Set up the INPUTRC environment variable.
if [ -z "$INPUTRC" -a ! -f "$HOME/.inputrc" ] ; then
INPUTRC=/etc/inputrc
fi
export INPUTRC
EOF
Setting the umask value is important for security. Here the default group write permissions are turned off for system users and when the user name and group name are not the same.
cat > /etc/profile.d/umask.sh << "EOF"
# By default, the umask should be set.
if [ "$(id -gn)" = "$(id -un)" -a $EUID -gt 99 ] ; then
umask 002
else
umask 022
fi
EOF
This script sets an environment variable necessary for native language support. A full discussion on determining this variable can be found on the LFS Bash Shell Startup Files page.
cat > /etc/profile.d/i18n.sh << "EOF"
# Set up i18n variables
export LANG=<ll>_<CC>.<charmap><@modifiers>
EOF
Here is a base /etc/bashrc. Comments in the file should explain everything you need.
cat > /etc/bashrc << "EOF"
# Begin /etc/bashrc
# Written for Beyond Linux From Scratch
# by James Robertson <jameswrobertson@earthlink.net>
# updated by Bruce Dubbs <bdubbs@linuxfromscratch.org>
# System wide aliases and functions.
# System wide environment variables and startup programs should go into
# /etc/profile. Personal environment variables and startup programs
# should go into ~/.bash_profile. Personal aliases and functions should
# go into ~/.bashrc
# Provides colored /bin/ls and /bin/grep commands. Used in conjunction
# with code in /etc/profile.
alias ls='ls --color=auto'
alias grep='grep --color=auto'
# Provides prompt for non-login shells, specifically shells started
# in the X environment. [Review the LFS archive thread titled
# PS1 Environment Variable for a great case study behind this script
# addendum.]
NORMAL="\[\e[0m\]"
RED="\[\e[1;31m\]"
GREEN="\[\e[1;32m\]"
if [[ $EUID == 0 ]] ; then
PS1="$RED\u [ $NORMAL\w$RED ]# $NORMAL"
else
PS1="$GREEN\u [ $NORMAL\w$GREEN ]\$ $NORMAL"
fi
unset RED GREEN NORMAL
# End /etc/bashrc
EOF
Here is a base ~/.bash_profile. If you want each new user to have this file automatically, just change the output of the command to /etc/skel/.bash_profile and check the permissions after the command is run. You can then copy /etc/skel/.bash_profile to the home directories of already existing users, including root, and set the owner and group appropriately.
cat > ~/.bash_profile << "EOF"
# Begin ~/.bash_profile
# Written for Beyond Linux From Scratch
# by James Robertson <jameswrobertson@earthlink.net>
# updated by Bruce Dubbs <bdubbs@linuxfromscratch.org>
# Personal environment variables and startup programs.
# Personal aliases and functions should go in ~/.bashrc. System wide
# environment variables and startup programs are in /etc/profile.
# System wide aliases and functions are in /etc/bashrc.
if [ -f "$HOME/.bashrc" ] ; then
source $HOME/.bashrc
fi
if [ -d "$HOME/bin" ] ; then
pathprepend $HOME/bin
fi
# Having . in the PATH is dangerous
#if [ $EUID -gt 99 ]; then
# pathappend .
#fi
# End ~/.bash_profile
EOF
Here is a base ~/.profile. The comments and instructions for using /etc/skel for .bash_profile above also apply here. Only the target file names are different.
cat > ~/.profile << "EOF"
# Begin ~/.profile
# Personal environment variables and startup programs.
if [ -d "$HOME/bin" ] ; then
pathprepend $HOME/bin
fi
# Set up user specific i18n variables
#export LANG=<ll>_<CC>.<charmap><@modifiers>
# End ~/.profile
EOF
Here is a base ~/.bashrc.
cat > ~/.bashrc << "EOF"
# Begin ~/.bashrc
# Written for Beyond Linux From Scratch
# by James Robertson <jameswrobertson@earthlink.net>
# Personal aliases and functions.
# Personal environment variables and startup programs should go in
# ~/.bash_profile. System wide environment variables and startup
# programs are in /etc/profile. System wide aliases and functions are
# in /etc/bashrc.
if [ -f "/etc/bashrc" ] ; then
source /etc/bashrc
fi
# Set up user specific i18n variables
#export LANG=<ll>_<CC>.<charmap><@modifiers>
# End ~/.bashrc
EOF
This is an empty ~/.bash_logout that can be used as a template. You will notice that the base ~/.bash_logout does not include a clear command. This is because the clear is handled in the /etc/issue file.
cat > ~/.bash_logout << "EOF"
# Begin ~/.bash_logout
# Written for Beyond Linux From Scratch
# by James Robertson <jameswrobertson@earthlink.net>
# Personal items to perform on logout.
# End ~/.bash_logout
EOF
If you want to use the dircolors capability, then run the following command. The /etc/skel setup steps shown above also can be used here to provide a ~/.dircolors file when a new user is set up. As before, just change the output file name on the following command and assure the permissions, owner, and group are correct on the files created and/or copied.
dircolors -p > /etc/dircolors
If you wish to customize the colors used for different file types, you can edit the /etc/dircolors file. The instructions for setting the colors are embedded in the file.
Finally, Ian Macdonald has written an excellent collection of tips and tricks to enhance your shell environment. You can read it online at https://www.caliban.org/bash/index.shtml.
The LFS book installs Vim as its text editor. At this point it should be noted that there are a lot of different editing applications out there including Emacs, nano, Joe and many more. Anyone who has been around the Internet (especially usenet) for a short time will certainly have observed at least one flame war, usually involving Vim and Emacs users!
The LFS book creates a basic vimrc file. In this section you'll find an attempt to enhance this file. At startup, vim reads the global configuration file (/etc/vimrc) as well as a user-specific file (~/.vimrc). Either or both can be tailored to suit the needs of your particular system.
Here is a slightly expanded .vimrc that you can put in ~/.vimrc to provide user specific effects. Of course, if you put it into /etc/skel/.vimrc instead, it will be made available to users you add to the system later. You can also copy the file from /etc/skel/.vimrc to the home directory of users already on the system, such as root. Be sure to set permissions, owner, and group if you do copy anything directly from /etc/skel.
" Begin .vimrc
set columns=80
set wrapmargin=8
set ruler
" End .vimrc
Note that the comment tags are " instead of the more usual # or //. This is correct; the syntax for vimrc is slightly unusual. Below you'll find a quick explanation of what each of the options in this example file means:
set columns=80: This simply sets the number of columns used on the screen.
set wrapmargin=8: This is the number of characters from the right window border where wrapping starts.
set ruler: This makes vim show the current row and column at the bottom right of the screen.
More information on the many vim options can be found by reading the help inside vim itself. Do this by typing :help in vim to get the general help, or by typing :help usr_toc.txt to view the User Manual Table of Contents.
When you first boot up your new LFS system, the logon screen will be nice and plain (as it should be in a bare-bones system). Many people, however, will want their system to display some information in the logon message. This can be accomplished using the file /etc/issue.
The /etc/issue file is a plain text file which will also accept certain escape sequences (see below) in order to insert information about the system. There is also the file issue.net which can be used when logging on remotely. ssh, however, will only use it if you set the option in the configuration file, and it will not interpret the escape sequences shown below.
One of the most common things which people want to do is clear the screen at each logon. The easiest way of doing that is to put a "clear" escape sequence into /etc/issue. A simple way of doing this is to issue the command clear > /etc/issue. This will insert the relevant escape code into the start of the /etc/issue file. Note that if you do this, when you edit the file, you should leave the characters (normally '^[[H^[[2J') on the first line alone.
Terminal escape sequences are special codes recognized by the terminal. The ^[ represents an ASCII ESC character. The sequence ESC [ H puts the cursor in the upper left hand corner of the screen and ESC [ 2 J erases the screen. For more information on terminal escape sequences see http://rtfm.etla.org/xterm/ctlseq.html.
The following sequences are recognized by agetty (the program which usually parses /etc/issue). This information is from man agetty, where you can find extra information about the logon process.
The issue file can contain certain character sequences to display various information. All issue sequences consist of a backslash (\) immediately followed by one of the letters explained below (so \d in /etc/issue would insert the current date).
b Insert the baudrate of the current line.
d Insert the current date.
s Insert the system name, the name of the operating system.
l Insert the name of the current tty line.
m Insert the architecture identifier of the machine, e.g., i686.
n Insert the nodename of the machine, also known as the hostname.
o Insert the domainname of the machine.
r Insert the release number of the kernel, e.g., 2.6.11.12.
t Insert the current time.
u Insert the number of current users logged in.
U Insert the string "1 user" or "<n> users" where <n> is the
number of current users logged in.
v Insert the version of the OS, e.g., the build-date etc.
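For example, the following would display the hostname, the OS name, and the kernel release at each logon. This is only an illustration, so adjust the text to taste; run it as the root user, and note that it overwrites any existing /etc/issue, including a clear sequence:
cat > /etc/issue << "EOF"
Welcome to \n (\s \r) on \l

EOF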
The Linux kernel supplies a random number generator which is accessed through /dev/random and /dev/urandom. Programs that utilize the random and urandom devices, such as OpenSSH, will benefit from these instructions.
When a Linux system starts up without much operator interaction, the entropy pool (data used to compute a random number) may be in a fairly predictable state. This creates the real possibility that the number generated at startup may always be the same. In order to counteract this effect, you should carry the entropy pool information across your shut-downs and start-ups.
Install the /etc/rc.d/init.d/random init script included with the blfs-bootscripts-20230101 package.
make install-random
Security takes many forms in a computing environment. After some initial discussion, this chapter gives examples of three different types of security: access, prevention and detection.
Access for users is usually handled by login or an application designed to handle the login function. In this chapter, we show how to enhance login by setting policies with PAM modules. Access via networks can also be secured by policies set by iptables, commonly referred to as a firewall. The Network Security Services (NSS) and Netscape Portable Runtime (NSPR) libraries can be installed and shared among the many applications requiring them. For applications that don't offer the best security, you can use the Stunnel package to wrap an application daemon inside an SSL tunnel.
Prevention of breaches, like a trojan, is assisted by applications like GnuPG, specifically the ability to confirm signed packages, which recognizes modifications of the tarball after the packager creates it.
Finally, we touch on detection with a package that stores "signatures" of critical files (defined by the administrator) and then regenerates those "signatures" and compares for files that have been changed.
All software has bugs. Sometimes, a bug can be exploited, for example to allow users to gain enhanced privileges (perhaps gaining a root shell, or simply accessing or deleting other users' files), or to allow a remote site to crash an application (denial of service), or for theft of data. These bugs are labelled as vulnerabilities.
The main place where vulnerabilities get logged is cve.mitre.org. Unfortunately, many vulnerability numbers (CVE-yyyy-nnnn) are initially only labelled as "reserved" when distributions start issuing fixes. Also, some vulnerabilities apply to particular combinations of configure options, or only apply to old versions of packages which have long since been updated in BLFS.
BLFS differs from distributions—there is no BLFS security team, and the editors only become aware of vulnerabilities after they are public knowledge. Sometimes, a package with a vulnerability will not be updated in the book for a long time. Issues can be logged in the Trac system, which might speed up resolution.
The normal way for BLFS to fix a vulnerability is, ideally, to update the book to a new fixed release of the package. Sometimes that happens even before the vulnerability is public knowledge, so there is no guarantee that it will be shown as a vulnerability fix in the Changelog. Alternatively, a sed command, or a patch taken from a distribution, may be appropriate.
The bottom line is that you are responsible for your own security, and for assessing the potential impact of any problems.
The editors now issue Security Advisories for packages in BLFS (and LFS), which can be found at BLFS Security Advisories, and grade the severity according to what upstream reports, or to what is shown at nvd.nist.gov if that has details.
To keep track of what is being discovered, you may wish to follow the security announcements of one or more distributions. For example, Debian has Debian security. Fedora's links on security are at the Fedora wiki. Details of Gentoo linux security announcements are discussed at Gentoo security. Finally, the Slackware archives of security announcements are at Slackware security.
The most general English source is perhaps the Full Disclosure Mailing List, but please read the comment on that page. If you use other languages you may prefer other sites such as heise.de (German) or cert.hr (Croatian). These are not linux-specific. There is also a daily update at lwn.net for subscribers (free access to the data after 2 weeks, but their vulnerabilities database at lwn.net/Alerts is unrestricted).
For some packages, subscribing to their 'announce' lists will provide prompt news of newer versions.
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/vulnerabilities
Public Key Infrastructure (PKI) is a method to validate the authenticity of an otherwise unknown entity across untrusted networks. PKI works by establishing a chain of trust, rather than trusting each individual host or entity explicitly. In order for a certificate presented by a remote entity to be trusted, that certificate must present a complete chain of certificates that can be validated using the root certificate of a Certificate Authority (CA) that is trusted by the local machine.
Establishing trust with a CA involves validating things like company address, ownership, contact information, etc., and ensuring that the CA has followed best practices, such as undergoing periodic security audits by independent investigators and maintaining an always available certificate revocation list. This is well outside the scope of BLFS (as it is for most Linux distributions). The certificate store provided here is taken from the Mozilla Foundation, who have established very strict inclusion policies described here.
Download (HTTP): https://github.com/lfs-book/make-ca/releases/download/v1.13/make-ca-1.13.tar.xz
Download size: 32 KB
Download MD5 Sum: 04bd86fe2eb299788439c3466782ce45
Estimated disk space required: 6.9 MB (with all runtime deps)
Estimated build time: 0.1 SBU (with all runtime deps)
This package ships a CA certificate for validating the identity of https://hg.mozilla.org/. If the trust chain of this website has been changed after the release of make-ca-1.13, it may fail to get the revision of certdata.txt from the server. Use an updated make-ca release from the release page if this issue happens.
p11-kit-0.25.0 (runtime, built after libtasn1-4.19.0, required in the following instructions to generate certificate stores from trust anchors, and each time make-ca is run)
nss-3.89 (to generate a shared NSSDB)
The make-ca script will download and process the certificates included in the certdata.txt file for use as trust anchors for the p11-kit-0.25.0 trust module. Additionally, it will generate system certificate stores used by BLFS applications (if the recommended and optional applications are present on the system). Any local certificates stored in /etc/ssl/local will be imported to both the trust anchors and the generated certificate stores (overriding Mozilla's trust). Additionally, any modified trust values will be copied from the trust anchors to /etc/ssl/local prior to any updates, preserving custom trust values that differ from Mozilla when using the trust utility from p11-kit to operate on the trust store.
To install the various certificate stores, first install the make-ca script into the correct location. As the root user:
make install &&
install -vdm755 /etc/ssl/local
Technically, this package is already installed at this point. But most packages listing make-ca as a dependency actually require the system certificate store set up by this package, rather than the make-ca program itself. So the instructions for using make-ca to set up the system certificate store are included in this section. You should make sure the required runtime dependency for make-ca is satisfied now, and continue to follow the instructions.
As the root user, download the certificate source and prepare it for system use with the following command:
If running the script a second time with the same version of certdata.txt, for instance, to update the stores when make-ca is upgraded, or to add additional stores as the requisite software is installed, replace the -g switch with the -r switch in the command line. If packaging, run make-ca --help to see all available command line options.
/usr/sbin/make-ca -g
You should periodically update the store with the above command, either manually, or via a cron job. If you've installed Fcron-3.2.1 and completed the section on periodic jobs, execute the following commands, as the root user, to create a weekly cron job:
cat > /etc/cron.weekly/update-pki.sh << "EOF" &&
#!/bin/bash
/usr/sbin/make-ca -g
EOF
chmod 754 /etc/cron.weekly/update-pki.sh
For most users, no additional configuration is necessary; however, the default certdata.txt file provided by make-ca is obtained from the mozilla-release branch, and is modified to provide a Mercurial revision. This will be the correct version for most systems. There are several other variants of the file available for use that might be preferred for one reason or another, including the files shipped with Mozilla products in this book. RedHat and OpenSUSE, for instance, use the version included in nss-3.89. Additional upstream downloads are available at the links included in /etc/make-ca/make-ca.conf.dist. Simply copy the file to /etc/make-ca.conf and edit as appropriate.
There are three trust types that are recognized by the make-ca script: SSL/TLS, S/Mime, and code signing. For OpenSSL, these are serverAuth, emailProtection, and codeSigning respectively. If one of the three trust arguments is omitted, the certificate is neither trusted, nor rejected for that role. Clients that use OpenSSL or NSS encountering this certificate will present a warning to the user. Clients using GnuTLS without p11-kit support are not aware of trusted certificates. To include this CA into the ca-bundle.crt, email-ca-bundle.crt, or objsign-ca-bundle.crt files (the GnuTLS legacy bundles), it must have the appropriate trust arguments.
The /etc/ssl/local directory is available to add additional CA certificates to the system trust store. This directory is also used to store certificates that were added to or modified in the system trust store by p11-kit-0.25.0 so that trust values are maintained across upgrades. Files in this directory must be in the OpenSSL trusted certificate format. Certificates imported using the trust utility from p11-kit-0.25.0 will utilize the x509 Extended Key Usage values to assign default trust values for the system anchors.
If you need to override trust values, or otherwise need to create an OpenSSL trusted certificate manually from a regular PEM encoded file, you need to add trust arguments to the openssl command, and create a new certificate. For example, using the CAcert roots, if you want to trust both for all three roles, the following commands will create appropriate OpenSSL trusted certificates (run as the root user after Wget-1.21.3 is installed):
wget http://www.cacert.org/certs/root.crt &&
wget http://www.cacert.org/certs/class3.crt &&
openssl x509 -in root.crt -text -fingerprint -setalias "CAcert Class 1 root" \
    -addtrust serverAuth -addtrust emailProtection -addtrust codeSigning \
    > /etc/ssl/local/CAcert_Class_1_root.pem &&
openssl x509 -in class3.crt -text -fingerprint -setalias "CAcert Class 3 root" \
    -addtrust serverAuth -addtrust emailProtection -addtrust codeSigning \
    > /etc/ssl/local/CAcert_Class_3_root.pem &&
/usr/sbin/make-ca -r
Occasionally, there may be instances where you don't agree with Mozilla's inclusion of a particular certificate authority. If you'd like to override the default trust of a particular CA, simply create a copy of the existing certificate in /etc/ssl/local with different trust arguments. For example, if you'd like to distrust the "Makebelieve_CA_Root" file, run the following commands:
openssl x509 -in /etc/ssl/certs/Makebelieve_CA_Root.pem \
    -text \
    -fingerprint \
    -setalias "Disabled Makebelieve CA Root" \
    -addreject serverAuth \
    -addreject emailProtection \
    -addreject codeSigning \
    > /etc/ssl/local/Disabled_Makebelieve_CA_Root.pem &&
/usr/sbin/make-ca -r
When Python3 was installed in LFS it included the pip3 module with vendored certificates from the Certifi module. That was necessary, but it means that whenever pip3 is used it can reference those certificates, primarily when creating a virtual environment or when installing a module with all its wheel dependencies in one go.
It is generally considered that the System Administrator should be in charge of which certificates are available. Now that make-ca-1.13 and p11-kit-0.25.0 have been installed and make-ca has been configured, it is possible to make pip3 use the system certificates.
The vendored certificates installed in LFS are a snapshot from when the pulled-in version of Certifi was created. If you regularly update the system certificates, the vendored version will become out of date.
To use the system certificates in Python3 you should set _PIP_STANDALONE_CERT to point to them, e.g. for the bash shell:
export _PIP_STANDALONE_CERT=/etc/pki/tls/certs/ca-bundle.crt
If you have created virtual environments, for example when testing modules, and those include the Requests and Certifi modules in ~/.local/lib/python3.11/, then those local modules will be used instead of the system certificates unless you remove the local modules.
To use the system certificates in Python3 with the BLFS profiles add the following variable to your system or personal profiles:
mkdir -pv /etc/profile.d &&
cat > /etc/profile.d/pythoncerts.sh << "EOF"
# Begin /etc/profile.d/pythoncerts.sh
export _PIP_STANDALONE_CERT=/etc/pki/tls/certs/ca-bundle.crt
# End /etc/profile.d/pythoncerts.sh
EOF
The CrackLib package contains a library used to enforce strong passwords by comparing user selected passwords to words in chosen word lists.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://github.com/cracklib/cracklib/releases/download/v2.9.11/cracklib-2.9.11.tar.xz
Download MD5 sum: a6dfb1766aab43a54e1cbd78abf0a20a
Download size: 452 KB
Estimated disk space required: 6.8 MB
Estimated build time: 0.1 SBU
Recommended word list for English-speaking countries:
Download (HTTP): https://github.com/cracklib/cracklib/releases/download/v2.9.11/cracklib-words-2.9.11.xz
Download MD5 sum: f27804022dbf2682a7f7c353317f9a53
Download size: 4.0 MB
There are additional word lists available for download, e.g., from https://wiki.skullsecurity.org/index.php/Passwords. CrackLib can utilize as many, or as few, word lists as you choose to install.
Users tend to base their passwords on regular words of the spoken language, and crackers know that. CrackLib is intended to filter out such bad passwords at the source using a dictionary created from word lists. To accomplish this, the word list(s) for use with CrackLib must be an exhaustive list of words and word-based keystroke combinations likely to be chosen by users of the system as (guessable) passwords.
The default word list recommended above for downloading mostly satisfies this role in English-speaking countries. In other situations, it may be necessary to download (or even create) additional word lists.
Note that word lists suitable for spell-checking are not usable as CrackLib word lists in countries with non-Latin based alphabets, because of “word-based keystroke combinations” that make bad passwords.
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/cracklib
Install CrackLib by running the following commands:
autoreconf -fiv &&
PYTHON=python3               \
./configure --prefix=/usr    \
            --disable-static \
            --with-default-dict=/usr/lib/cracklib/pw_dict &&
make
Now, as the root user:
make install
Issue the following commands as the root user to install the recommended word list and create the CrackLib dictionary. Other word lists (text based, one word per line) can also be used by simply installing them into /usr/share/dict and adding them to the create-cracklib-dict command.
install -v -m644 -D ../cracklib-words-2.9.11.xz \
    /usr/share/dict/cracklib-words.xz &&
unxz -v /usr/share/dict/cracklib-words.xz &&
ln -v -sf cracklib-words /usr/share/dict/words &&
echo $(hostname) >> /usr/share/dict/cracklib-extra-words &&
install -v -m755 -d /usr/lib/cracklib &&
create-cracklib-dict /usr/share/dict/cracklib-words \
    /usr/share/dict/cracklib-extra-words
If desired, check the proper operation of the library as an unprivileged user by issuing the following command:
make test
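As a quick manual check that the library and dictionary work together, you can pipe an obviously weak candidate password to cracklib-check, which reads passwords from standard input and reports whether each one is acceptable:
echo foo | cracklib-check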
If you are installing CrackLib after your LFS system has been completed and you have the Shadow package installed, you must reinstall Shadow-4.13 if you wish to provide strong password support on your system. If you are now going to install the Linux-PAM-1.5.2 package, you may disregard this note as Shadow will be reinstalled after the Linux-PAM installation.
autoreconf -fiv: The configure script shipped with the package is too old to get the right version string of Python 3.10 or later. This command regenerates it with a more recent version of autotools, which fixes the issue.
PYTHON=python3: This forces the installation of python bindings for Python 3, even if Python 2 is installed.
--with-default-dict=/usr/lib/cracklib/pw_dict: This parameter forces the installation of the CrackLib dictionary to the /usr/lib hierarchy.
--disable-static: This switch prevents installation of static versions of the libraries.
install -v -m644 -D ...: This command creates the /usr/share/dict directory (if it doesn't already exist) and installs the compressed word list there.
ln -v -sf cracklib-words /usr/share/dict/words: The word list is linked to /usr/share/dict/words because, historically, words is the primary word list in the /usr/share/dict directory. Omit this command if you already have a /usr/share/dict/words file installed on your system.
echo $(hostname) >>...: The value of hostname is echoed to a file called cracklib-extra-words. This extra file is intended to be a site specific list which includes easy to guess passwords such as company or department names, user names, product names, computer names, domain names, etc.
create-cracklib-dict ...: This command creates the CrackLib dictionary from the word lists. Modify the command to add any additional word lists you have installed.
cracklib-check | is used to determine if a password is strong
cracklib-format | is used to format text files (lowercases all words, removes control characters and sorts the lists)
cracklib-packer | creates a database with words read from standard input
cracklib-unpacker | displays on standard output the database specified
create-cracklib-dict | is used to create the CrackLib dictionary from the given word list(s)
libcrack.so | provides a fast dictionary lookup method for strong password enforcement
cryptsetup is used to set up transparent encryption of block devices using the kernel crypto API.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://www.kernel.org/pub/linux/utils/cryptsetup/v2.4/cryptsetup-2.4.3.tar.xz
Download MD5 sum: 2303d57e78d4977344188a46e125095c
Download size: 11 MB
Estimated disk space required: 29 MB (add 5 MB for tests)
Estimated build time: 0.2 SBU (add 19 SBU for tests)
Required: JSON-C-0.16, LVM2-2.03.21, and popt-1.19
Optional: libpwquality-1.4.5, argon2, libssh, and passwdqc
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/cryptsetup
Encrypted block devices require kernel support. To use it, the appropriate kernel configuration parameters need to be set:
Device Drivers --->
[*] Multiple devices driver support (RAID and LVM) ---> [CONFIG_MD]
<*/M> Device mapper support [CONFIG_BLK_DEV_DM]
<*/M> Crypt target support [CONFIG_DM_CRYPT]
-*- Cryptographic API ---> [CONFIG_CRYPTO]
Length-preserving ciphers and modes --->
<*/M> XTS support [CONFIG_CRYPTO_XTS]
Hashes, digests, and MACs --->
-*- SHA224 and SHA256 digest algorithm [CONFIG_CRYPTO_SHA256]
Block ciphers --->
-*- AES cipher algorithms [CONFIG_CRYPTO_AES]
Userspace interface --->
<*/M> Symmetric key cipher algorithms [CONFIG_CRYPTO_USER_API_SKCIPHER]
For tests:
Block ciphers --->
<*/M> Twofish [CONFIG_CRYPTO_TWOFISH]
Install cryptsetup by running the following commands:
./configure --prefix=/usr --disable-ssh-token &&
make
To test the result, issue as the root user: make check. Some tests will fail if appropriate kernel configuration options are not set.
Some additional options that may be needed for tests are:
CONFIG_SCSI_LOWLEVEL, CONFIG_SCSI_DEBUG,
CONFIG_BLK_DEV_DM_BUILTIN, CONFIG_CRYPTO_USER,
CONFIG_CRYPTO_CRYPTD, CONFIG_CRYPTO_LRW, CONFIG_CRYPTO_XTS,
CONFIG_CRYPTO_ESSIV, CONFIG_CRYPTO_CRCT10DIF,
CONFIG_CRYPTO_AES_TI, CONFIG_CRYPTO_AES_NI_INTEL,
CONFIG_CRYPTO_BLOWFISH, CONFIG_CRYPTO_CAST5,
CONFIG_CRYPTO_SERPENT, CONFIG_CRYPTO_SERPENT_SSE2_X86_64,
CONFIG_CRYPTO_SERPENT_AVX_X86_64,
CONFIG_CRYPTO_SERPENT_AVX2_X86_64, and
CONFIG_CRYPTO_TWOFISH_X86_64.
Now, as the root user:
make install
--disable-ssh-token: This option is required if the optional libssh dependency is not installed.
Because of the number of possible configurations, setup of encrypted volumes is beyond the scope of the BLFS book. Please see the configuration guide in the cryptsetup FAQ.
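That said, a minimal sketch of a LUKS workflow may help orient first-time users. The following is illustrative only: /dev/sdX2 is a placeholder for a real partition, and luksFormat destroys any data on it:

cryptsetup luksFormat /dev/sdX2
cryptsetup open /dev/sdX2 cryptroot
mkfs.ext4 /dev/mapper/cryptroot
cryptsetup close cryptroot

The open command creates the /dev/mapper/cryptroot mapping; everything written through it is transparently encrypted on its way to the underlying partition.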
cryptsetup | is used to setup dm-crypt managed device-mapper mappings
cryptsetup-reencrypt | is a tool for offline LUKS device re-encryption
integritysetup | is a tool to manage dm-integrity (block level integrity) volumes
veritysetup | is used to configure dm-verity managed device-mapper mappings. Device-mapper verity target provides read-only transparent integrity checking of block devices using kernel crypto API
The Cyrus SASL package contains a Simple Authentication and Security Layer implementation, a method for adding authentication support to connection-based protocols. To use SASL, a protocol includes a command for identifying and authenticating a user to a server and for optionally negotiating protection of subsequent protocol interactions. If its use is negotiated, a security layer is inserted between the protocol and the connection.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://github.com/cyrusimap/cyrus-sasl/releases/download/cyrus-sasl-2.1.28/cyrus-sasl-2.1.28.tar.gz
Download MD5 sum: 6f228a692516f5318a64505b46966cfa
Download size: 3.9 MB
Estimated disk space required: 28 MB
Estimated build time: 0.2 SBU
Optional: Linux-PAM-1.5.2, MIT Kerberos V5-1.20.1, MariaDB-10.6.12 or MySQL, OpenLDAP-2.6.4, PostgreSQL-15.2, sphinx-6.2.1, SQLite-3.41.2, krb4, Dmalloc, and Pod::POM::View::Restructured
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/cyrus-sasl
This package does not support parallel build.
Install Cyrus SASL by running the following commands:
./configure --prefix=/usr \
            --sysconfdir=/etc \
            --enable-auth-sasldb \
            --with-dbpath=/var/lib/sasl/sasldb2 \
            --with-sphinx-build=no \
            --with-saslauthd=/var/run/saslauthd &&
make -j1
This package does not come with a test suite. If you are planning on using the GSSAPI authentication mechanism, test it after installing the package using the sample server and client programs which were built in the preceding step. Instructions for performing the tests can be found at https://www.linuxfromscratch.org/hints/downloads/files/cyrus-sasl.txt.
Now, as the root user:
make install &&
install -v -dm755 /usr/share/doc/cyrus-sasl-2.1.28/html &&
install -v -m644  saslauthd/LDAP_SASLAUTHD /usr/share/doc/cyrus-sasl-2.1.28 &&
install -v -m644  doc/legacy/*.html /usr/share/doc/cyrus-sasl-2.1.28/html &&
install -v -dm700 /var/lib/sasl
--with-dbpath=/var/lib/sasl/sasldb2: This switch forces the sasldb database to be created in /var/lib/sasl instead of /etc.

--with-saslauthd=/var/run/saslauthd: This switch forces saslauthd to use the FHS compliant directory /var/run/saslauthd for variable run-time data.

--enable-auth-sasldb: This switch enables the SASLDB authentication backend.

--with-dblib=gdbm: This switch forces GDBM to be used instead of Berkeley DB.

--with-ldap: This switch enables OpenLDAP support.

--enable-ldapdb: This switch enables the LDAPDB authentication backend.

--enable-login: This option enables unsupported LOGIN authentication.

--enable-ntlm: This option enables unsupported NTLM authentication.
install -v -m644 ...: These commands install documentation which is not installed by the make install command.
install -v -m700 -d /var/lib/sasl: This directory must exist when starting saslauthd or using the sasldb plugin. If you're not going to be running the daemon or using the plugins, you may omit the creation of this directory.
Config files: /etc/saslauthd.conf (for saslauthd LDAP configuration) and /etc/sasl2/Appname.conf (where "Appname" is the application-defined name of the application)
See https://www.cyrusimap.org/sasl/sasl/sysadmin.html for information on what to include in the application configuration files.
See file:///usr/share/doc/cyrus-sasl-2.1.28/LDAP_SASLAUTHD for configuring saslauthd with OpenLDAP.
See https://www.cyrusimap.org/sasl/sasl/gssapi.html#gssapi for configuring saslauthd with Kerberos.
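As an illustration (the exact file name and option values depend on the application; Sendmail, for instance, reads /etc/sasl2/Sendmail.conf), an application that defers plaintext password checks to the saslauthd daemon might be configured like this:

cat > /etc/sasl2/Sendmail.conf << "EOF"
pwcheck_method: saslauthd
mech_list: plain login
EOF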
If you need to run the saslauthd daemon at system startup, install the /etc/rc.d/init.d/saslauthd init script included in the blfs-bootscripts-20230101 package using the following command:
make install-saslauthd
You'll need to modify /etc/sysconfig/saslauthd and set the AUTHMECH parameter to your desired authentication mechanism.
pluginviewer | is used to list loadable SASL plugins and their properties
saslauthd | is the SASL authentication server
sasldblistusers2 | is used to list the users in the SASL password database
saslpasswd2 | is used to set and delete a user's SASL password and mechanism specific secrets in the SASL password database
testsaslauthd | is a test utility for the SASL authentication server
libsasl2.so | is a general purpose authentication library for server and client applications
The GnuPG package is GNU's tool for secure communication and data storage. It can be used to encrypt data and to create digital signatures. It includes an advanced key management facility and is compliant with the proposed OpenPGP Internet standard as described in RFC2440 and the S/MIME standard as described by several RFCs. GnuPG 2 is the stable version of GnuPG integrating support for OpenPGP and S/MIME.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://www.gnupg.org/ftp/gcrypt/gnupg/gnupg-2.4.0.tar.bz2
Download (FTP): ftp://ftp.gnupg.org/gcrypt/gnupg/gnupg-2.4.0.tar.bz2
Download MD5 sum: 1a9dd55be7a9d0a6ef34ec3ba0d674a5
Download size: 7.3 MB
Estimated disk space required: 164 MB (with tests)
Estimated build time: 0.5 SBU (using parallelism=4; add 0.4 SBU for tests)
Required: libassuan-2.5.5, libgcrypt-1.10.2, libksba-1.6.3, and npth-1.6
Recommended: GnuTLS-3.8.0 (required to communicate with keyservers using https or hkps protocol) and pinentry-1.2.1 (run-time requirement for most of the package's functionality)
Optional: cURL-8.4.0, Fuse-3.14.1, ImageMagick-7.1.0-61 (for the convert utility, used for generating the documentation), libusb-1.0.26, an MTA, OpenLDAP-2.6.4, SQLite-3.41.2, texlive-20230313 (or install-tl-unx), fig2dev (for generating documentation), and GNU adns
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/gnupg2
Install GnuPG by running the following commands:
mkdir build &&
cd    build &&
../configure --prefix=/usr \
             --localstatedir=/var \
             --sysconfdir=/etc \
             --docdir=/usr/share/doc/gnupg-2.4.0 &&
make &&
makeinfo --html --no-split -I doc -o doc/gnupg_nochunks.html ../doc/gnupg.texi &&
makeinfo --plaintext       -I doc -o doc/gnupg.txt           ../doc/gnupg.texi &&
make -C doc html
If you have texlive-20230313 installed and you wish to create documentation in alternate formats, issue the following commands (fig2dev is needed for the ps format):
make -C doc pdf ps
To test the results, issue: make check.
Now, as the root user:
make install &&
install -v -m755 -d /usr/share/doc/gnupg-2.4.0/html &&
install -v -m644    doc/gnupg_nochunks.html \
                    /usr/share/doc/gnupg-2.4.0/html/gnupg.html &&
install -v -m644    ../doc/*.texi doc/gnupg.txt \
                    /usr/share/doc/gnupg-2.4.0 &&
install -v -m644    doc/gnupg.html/* \
                    /usr/share/doc/gnupg-2.4.0/html
If you created alternate formats of the documentation, install them using the following command as the root user:

install -v -m644 doc/gnupg.{pdf,dvi,ps} \
                 /usr/share/doc/gnupg-2.4.0
mkdir build && cd build: The Gnupg2 developers recommend building the package in a dedicated directory.
--docdir=/usr/share/doc/gnupg-2.4.0: This switch changes the default docdir to /usr/share/doc/gnupg-2.4.0.

--enable-all-tests: This switch allows more tests to be run with make check.

--enable-g13: This switch enables building the g13 program.
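As a quick, purely illustrative smoke test of the installation (the user ID below is a placeholder), you can generate a key pair and list it:

gpg --quick-generate-key "Test User <test@example.org>"
gpg --list-keys

Note that gpg invokes pinentry to ask for a passphrase, so this check needs pinentry-1.2.1 installed.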
addgnupghome | is used to create and populate a user's ~/.gnupg directories
applygnupgdefaults | is a wrapper script used to run gpgconf with the --apply-defaults parameter on all user's GnuPG home directories
dirmngr | is a tool that takes care of accessing the OpenPGP keyservers
dirmngr-client | is a tool to contact a running dirmngr and test whether a certificate has been revoked
g13 | is a tool to create, mount or unmount an encrypted file system container (optional)
gpg-agent | is a daemon used to manage secret (private) keys independently from any protocol. It is used as a backend for gpg and gpgsm as well as for a couple of other utilities
gpg-card | is a tool to manage smart cards and tokens
gpg-connect-agent | is a utility used to communicate with a running gpg-agent
gpg | is the OpenPGP part of the GNU Privacy Guard (GnuPG). It is a tool used to provide digital encryption and signing services using the OpenPGP standard
gpgconf | is a utility used to automatically and reasonably safely query and modify configuration files in the ~/.gnupg home directory
gpgparsemail | is a utility currently only useful for debugging. Run it with --help for usage information
gpgscm | executes the given scheme program or spawns an interactive shell
gpgsm | is a tool similar to gpg used to provide digital encryption and signing services on X.509 certificates and the CMS protocol. It is mainly used as a backend for S/MIME mail processing
gpgsplit | splits an OpenPGP message into packets
gpgtar | is a tool to encrypt or sign files into an archive
gpgv | is a verify only version of gpg
gpg-wks-client | is a client for the Web Key Service protocol
gpg-wks-server | provides a server for the Web Key Service protocol
kbxutil | is used to list, export and import Keybox data
watchgnupg | is used to listen to a Unix Domain socket created by any of the GnuPG tools
The GnuTLS package contains libraries and userspace tools which provide a secure layer over a reliable transport layer. Currently the GnuTLS library implements the proposed standards by the IETF's TLS working group. Quoting from the TLS 1.3 protocol specification:
“TLS allows client/server applications to communicate over the Internet in a way that is designed to prevent eavesdropping, tampering, and message forgery.”
GnuTLS provides support for TLS 1.3, TLS 1.2, TLS 1.1, TLS 1.0, and (optionally) SSL 3.0 protocols. It also supports TLS extensions, including server name and max record size. Additionally, the library supports authentication using the SRP protocol, X.509 certificates, and OpenPGP keys, along with support for the TLS Pre-Shared-Keys (PSK) extension, the Inner Application (TLS/IA) extension, and X.509 and OpenPGP certificate handling.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://www.gnupg.org/ftp/gcrypt/gnutls/v3.8/gnutls-3.8.0.tar.xz
Download (FTP): ftp://ftp.gnupg.org/gcrypt/gnutls/v3.8/gnutls-3.8.0.tar.xz
Download MD5 sum: 20a662caf20112b6b9ad1f4a64db3a97
Download size: 6.1 MB
Estimated disk space required: 165 MB (add 113 MB for tests)
Estimated build time: 0.8 SBU (add 2.3 SBU for tests; both using parallelism=4)
Recommended: make-ca-1.13, libunistring-1.1, libtasn1-4.19.0, and p11-kit-0.25.0
Optional: Brotli-1.0.9, Doxygen-1.9.6, GTK-Doc-1.33.2, libidn-1.41 or libidn2-2.3.4, libseccomp-2.5.4, Net-tools-2.10 (used during the test suite), texlive-20230313 or install-tl-unx, Unbound-1.17.1 (to build the DANE library), Valgrind-3.20.0 (used during the test suite), autogen, cmocka and datefudge (used during the test suite if the DANE library is built), and Trousers (Trusted Platform Module support)
Note that if you do not install libtasn1-4.19.0, a version shipped in the GnuTLS tarball will be used instead.
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/gnutls
Install GnuTLS by running the following commands:
./configure --prefix=/usr \
            --docdir=/usr/share/doc/gnutls-3.8.0 \
            --with-default-trust-store-pkcs11="pkcs11:" &&
make
To test the results, issue: make check.
Now, as the root user:
make install
--with-default-trust-store-pkcs11="pkcs11:": This switch tells gnutls to use the PKCS #11 trust store as the default trust. Omit this switch if p11-kit-0.25.0 is not installed.

--with-default-trust-store-file=/etc/pki/tls/certs/ca-bundle.crt: This switch tells configure where to find the legacy CA certificate bundle and to use it instead of the PKCS #11 module by default. Use this if p11-kit-0.25.0 is not installed.

--enable-gtk-doc: Use this parameter if GTK-Doc is installed and you wish to rebuild and install the API documentation.

--enable-openssl-compatibility: Use this switch if you wish to build the OpenSSL compatibility library.

--without-p11-kit: Use this switch if you have not installed p11-kit.

--with-included-unistring: This switch uses the bundled version of libunistring instead of the system one. Use it if you have not installed libunistring-1.1.
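A simple way to confirm that the library, the trust store, and name resolution all cooperate (an illustrative check; the host name is a placeholder) is to open a TLS session with the bundled client:

gnutls-cli --port 443 www.example.org

The handshake output shows the negotiated protocol version and whether the server's certificate chain validated against the configured trust store.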
certtool | is used to generate X.509 certificates, certificate requests, and private keys
danetool | is a tool used to generate and check DNS resource records for the DANE protocol
gnutls-cli | is a simple client program to set up a TLS connection to some other computer
gnutls-cli-debug | is a simple client program to set up a TLS connection to some other computer and produces very verbose progress results
gnutls-serv | is a simple server program that listens to incoming TLS connections
ocsptool | is a program that can parse and print information about OCSP requests/responses, generate requests and verify responses
p11tool | is a program that allows handling data from PKCS #11 smart cards and security modules
psktool | is a simple program that generates random keys for use with TLS-PSK
srptool | is a simple program that emulates the programs in the Stanford SRP (Secure Remote Password) libraries using GnuTLS
libgnutls.so | contains the core API functions and X.509 certificate API functions
The GPGME package is a C library that allows cryptography support to be added to a program. It is designed to make access to public key crypto engines like GnuPG or GpgSM easier for applications. GPGME provides a high-level crypto API for encryption, decryption, signing, signature verification and key management.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://www.gnupg.org/ftp/gcrypt/gpgme/gpgme-1.20.0.tar.bz2
Download (FTP): ftp://ftp.gnupg.org/gcrypt/gpgme/gpgme-1.20.0.tar.bz2
Download MD5 sum: 526949233610f46655741cafd09e66a7
Download size: 1.7 MB
Estimated disk space required: 189 MB (Add 34 MB for tests)
Estimated build time: 0.6 SBU (with all bindings, add 0.8 SBU for tests; both with parallelism=4)
Optional: Doxygen-1.9.6 and Graphviz-8.0.3 (for API documentation), GnuPG-2.4.0 (required if Qt or SWIG are installed; used during the test suite), Clisp-2.49, Qt-5.15.11, and/or SWIG-4.1.1 (for language bindings)
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/gpgme
Install GPGME by running the following commands:
./configure --prefix=/usr --disable-gpg-test &&
make
To test the results, you should have GnuPG-2.4.0 installed and remove --disable-gpg-test above. Issue: make -k check. One test, TestRemarks, is known to fail.
Now, as the root user:
make install
--disable-gpg-test: If this parameter is not passed to configure, the test programs are built during the make stage, which requires GnuPG-2.4.0. This parameter is not needed if GnuPG-2.4.0 is installed.
gpgme-json | outputs GPGME commands in JSON format
gpgme-tool | is an assuan server exposing GPGME operations, such as printing fingerprints and keyids with keyservers
libgpgme.so | contains the GPGME API functions
libgpgmepp.so | contains the C++ GPGME API functions
libqgpgme.so | contains API functions for handling GPG operations in Qt applications
iptables is a userspace command line program used to configure the Linux 2.4 and later kernel packet filtering ruleset.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://www.netfilter.org/projects/iptables/files/iptables-1.8.9.tar.xz
Download (FTP): ftp://ftp.netfilter.org/pub/iptables/iptables-1.8.9.tar.xz
Download MD5 sum: ffa00f68d63e723c21b8a091c5c0271b
Download size: 633 KB
Estimated disk space required: 16 MB
Estimated build time: 0.1 SBU
Optional: libpcap-1.10.4 (required for BPF compiler or nfsynproxy support), bpf-utils (required for Berkeley Packet Filter support), libnfnetlink (required for connlabel support), libnetfilter_conntrack (required for connlabel support), and nftables
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/iptables
A firewall in Linux is accomplished through the netfilter interface. To use iptables to configure netfilter, the following kernel configuration parameters are required:
[*] Networking support ---> [CONFIG_NET]
Networking Options --->
[*] Network packet filtering framework (Netfilter) ---> [CONFIG_NETFILTER]
[*] Advanced netfilter configuration [CONFIG_NETFILTER_ADVANCED]
Core Netfilter Configuration --->
<*/M> Netfilter connection tracking support [CONFIG_NF_CONNTRACK]
<*/M> Netfilter Xtables support (required for ip_tables) [CONFIG_NETFILTER_XTABLES]
<*/M> LOG target support [CONFIG_NETFILTER_XT_TARGET_LOG]
IP: Netfilter Configuration --->
<*/M> IP tables support (required for filtering/masq/NAT) [CONFIG_IP_NF_IPTABLES]
Include any connection tracking protocols that will be used, as well as any protocols that you wish to use for match support under the "Core Netfilter Configuration" section. The above options are enough for running Creating a Personal Firewall With iptables below.
The installation below does not include building some specialized extension libraries which require the raw headers in the Linux source code. If you wish to build the additional extensions (if you aren't sure, then you probably don't), you can look at the INSTALL file to see an example of how to change the KERNEL_DIR= parameter to point at the Linux source code. Note that if you upgrade the kernel version, you may also need to recompile iptables, and that the BLFS team has not tested using the raw kernel headers.
Install iptables by running the following commands:
./configure --prefix=/usr \
            --disable-nftables \
            --enable-libipq &&
make
This package does not come with a test suite.
Now, as the root user:
make install
--disable-nftables: This switch disables building nftables compatibility.

--enable-libipq: This switch enables building of libipq.so, which can be used by some packages outside of BLFS.

--enable-nfsynproxy: This switch enables installation of the nfsynproxy SYNPROXY configuration tool.
In the following example configurations, LAN1 is used for the internal LAN interface, and WAN1 is used for the external interface connected to the Internet. You will need to replace these values with appropriate interface names for your system.
A Personal Firewall is designed to let you access all the services offered on the Internet while keeping your computer secure and your data private.
Below is a slightly modified version of Rusty Russell's recommendation from the Linux 2.4 Packet Filtering HOWTO. It is still applicable to the Linux 5.x kernels.
cat > /etc/rc.d/rc.iptables << "EOF"
#!/bin/sh
# Begin rc.iptables
# Insert connection-tracking modules
# (not needed if built into the kernel)
modprobe nf_conntrack
modprobe xt_LOG
# Enable broadcast echo Protection
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
# Disable Source Routed Packets
echo 0 > /proc/sys/net/ipv4/conf/all/accept_source_route
echo 0 > /proc/sys/net/ipv4/conf/default/accept_source_route
# Enable TCP SYN Cookie Protection
echo 1 > /proc/sys/net/ipv4/tcp_syncookies
# Disable ICMP Redirect Acceptance
echo 0 > /proc/sys/net/ipv4/conf/default/accept_redirects
# Do not send Redirect Messages
echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
# Drop Spoofed Packets coming in on an interface, where responses
# would result in the reply going out a different interface.
echo 1 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 1 > /proc/sys/net/ipv4/conf/default/rp_filter
# Log packets with impossible addresses.
echo 1 > /proc/sys/net/ipv4/conf/all/log_martians
echo 1 > /proc/sys/net/ipv4/conf/default/log_martians
# be verbose on dynamic ip-addresses (not needed in case of static IP)
echo 2 > /proc/sys/net/ipv4/ip_dynaddr
# disable Explicit Congestion Notification
# too many routers are still ignorant
echo 0 > /proc/sys/net/ipv4/tcp_ecn
# Set a known state
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP
# These lines are here in case rules are already in place and the
# script is ever rerun on the fly. We want to remove all rules and
# pre-existing user defined chains before we implement new rules.
iptables -F
iptables -X
iptables -Z
iptables -t nat -F
# Allow local-only connections
iptables -A INPUT -i lo -j ACCEPT
# Free output on any interface to any ip for any service
# (equal to -P ACCEPT)
iptables -A OUTPUT -j ACCEPT
# Permit answers on already established connections
# and permit new connections related to established ones
# (e.g. port mode ftp)
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Log everything else.
iptables -A INPUT -j LOG --log-prefix "FIREWALL:INPUT "
# End rc.iptables
EOF
chmod 700 /etc/rc.d/rc.iptables
This script is quite simple: it drops all traffic coming into your computer that wasn't initiated from your computer, but as long as you are simply surfing the Internet you are unlikely to exceed its limits.
If you frequently encounter certain delays when accessing FTP servers, take a look at BusyBox with iptables example number 4.
Even if you have daemons or services running on your system, these will be inaccessible everywhere but from your computer itself. If you want to allow access to services on your machine, such as ssh or ping, take a look at Creating a BusyBox With iptables.
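For example, to accept incoming SSH connections through the personal firewall above, a rule like the following could be added before the logging rule (shown as an illustration; port 22 is the standard SSH port):

iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT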
A Network Firewall has two interfaces, one connected to an intranet, in this example LAN1, and one connected to the Internet, here WAN1. To provide the maximum security for the firewall itself, make sure that there are no unnecessary servers running on it such as X11. As a general principle, the firewall itself should not access any untrusted service (think of a remote server giving answers that makes a daemon on your system crash, or even worse, that implements a worm via a buffer-overflow).
cat > /etc/rc.d/rc.iptables << "EOF"
#!/bin/sh
# Begin rc.iptables
echo
echo "You're using the example configuration for a setup of a firewall"
echo "from Beyond Linux From Scratch."
echo "This example is far from being complete, it is only meant"
echo "to be a reference."
echo "Firewall security is a complex issue, that exceeds the scope"
echo "of the configuration rules below."
echo "You can find additional information"
echo "about firewalls in Chapter 4 of the BLFS book."
echo "https://www.linuxfromscratch.org/blfs"
echo
# Insert iptables modules (not needed if built into the kernel).
modprobe nf_conntrack
modprobe nf_conntrack_ftp
modprobe xt_conntrack
modprobe xt_LOG
modprobe xt_state
# Enable broadcast echo Protection
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
# Disable Source Routed Packets
echo 0 > /proc/sys/net/ipv4/conf/all/accept_source_route
# Enable TCP SYN Cookie Protection
echo 1 > /proc/sys/net/ipv4/tcp_syncookies
# Disable ICMP Redirect Acceptance
echo 0 > /proc/sys/net/ipv4/conf/all/accept_redirects
# Don't send Redirect Messages
echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
# Drop Spoofed Packets coming in on an interface where responses
# would result in the reply going out a different interface.
echo 1 > /proc/sys/net/ipv4/conf/default/rp_filter
# Log packets with impossible addresses.
echo 1 > /proc/sys/net/ipv4/conf/all/log_martians
# Be verbose on dynamic ip-addresses (not needed in case of static IP)
echo 2 > /proc/sys/net/ipv4/ip_dynaddr
# Disable Explicit Congestion Notification
# Too many routers are still ignorant
echo 0 > /proc/sys/net/ipv4/tcp_ecn
# Set a known state
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP
# These lines are here in case rules are already in place and the
# script is ever rerun on the fly. We want to remove all rules and
# pre-existing user defined chains before we implement new rules.
iptables -F
iptables -X
iptables -Z
iptables -t nat -F
# Allow local connections
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# Allow forwarding if initiated on the intranet
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD ! -i WAN1 -m conntrack --ctstate NEW -j ACCEPT
# Do masquerading
# (not needed if intranet is not using private ip-addresses)
iptables -t nat -A POSTROUTING -o WAN1 -j MASQUERADE
# Log everything for debugging
# (last of all rules, but before policy rules)
iptables -A INPUT -j LOG --log-prefix "FIREWALL:INPUT "
iptables -A FORWARD -j LOG --log-prefix "FIREWALL:FORWARD "
iptables -A OUTPUT -j LOG --log-prefix "FIREWALL:OUTPUT "
# Enable IP Forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
EOF
chmod 700 /etc/rc.d/rc.iptables
With this script your intranet should be reasonably secure against external attacks. No one should be able to set up a new connection to any internal service and, if it's masqueraded, your intranet is invisible to the Internet. Furthermore, your firewall should be relatively safe because there are no services running that a cracker could attack.
This scenario isn't too different from Creating a Masquerading Router With iptables, but additionally offers some services to your intranet. Examples of this can be when you want to administer your firewall from another host on your intranet or use it as a proxy or a name server.
Outlining specifically how to protect a server that offers services on the Internet goes far beyond the scope of this document. See the references in the section called “Extra Information” for more information.
Be cautious. Every service you have enabled makes your setup more complex and your firewall less secure. You are exposed to the risks of misconfigured services or running a service with an exploitable bug. A firewall should generally not run any extra services. See the introduction to Creating a Masquerading Router With iptables for some more details.
If you want to add services such as internal Samba or name servers that do not need to access the Internet themselves, the additional statements are quite simple and should still be acceptable from a security standpoint. Just add the following lines into the script before the logging rules.
iptables -A INPUT ! -i WAN1 -j ACCEPT
iptables -A OUTPUT ! -o WAN1 -j ACCEPT
If daemons, such as squid, have to access the Internet themselves, you could open OUTPUT generally and restrict INPUT.
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -j ACCEPT
However, it is generally not advisable to leave OUTPUT unrestricted. You lose any control over trojans that would like to "call home", and you lose a bit of redundancy in case you've (mis)configured a service so that it broadcasts its existence to the world.
To accomplish this, you should restrict INPUT and OUTPUT on all ports except those that it's absolutely necessary to have open. Which ports you have to open depends on your needs: mostly you will find them by looking for failed accesses in your log files.
Have a Look at the Following Examples:
Squid is caching the web:
iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --sport 80 -m conntrack --ctstate ESTABLISHED \
-j ACCEPT
Your caching name server (e.g., named) does its lookups via UDP:
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
You want to be able to ping your computer to ensure it's still alive:
iptables -A INPUT -p icmp -m icmp --icmp-type echo-request -j ACCEPT
iptables -A OUTPUT -p icmp -m icmp --icmp-type echo-reply -j ACCEPT
If you are frequently accessing FTP servers or enjoy chatting, you might notice delays because some implementations of these daemons query an identd daemon on your system to obtain usernames. Although there's really little harm in this, having an identd running is not recommended because many security experts feel the service gives out too much additional information.
To avoid these delays you could reject the requests with a 'tcp-reset' response:
iptables -A INPUT -p tcp --dport 113 -j REJECT --reject-with tcp-reset
To log and drop invalid packets (packets that came in after netfilter's timeout or some types of network scans) insert these rules at the top of the chain:
iptables -I INPUT 1 -p tcp -m conntrack --ctstate INVALID \
         -j LOG --log-prefix "FIREWALL:INVALID "
iptables -I INPUT 2 -p tcp -m conntrack --ctstate INVALID -j DROP
Anything coming from the outside should not have a private address; forging such source addresses is a common attack called IP spoofing:
iptables -A INPUT -i WAN1 -s 10.0.0.0/8 -j DROP
iptables -A INPUT -i WAN1 -s 172.16.0.0/12 -j DROP
iptables -A INPUT -i WAN1 -s 192.168.0.0/16 -j DROP
There are other addresses that you may also want to drop: 0.0.0.0/8, 127.0.0.0/8, 224.0.0.0/3 (multicast and experimental), 169.254.0.0/16 (Link Local Networks), and 192.0.2.0/24 (IANA defined test network).
If your firewall is a DHCP client, you need to allow those packets:
iptables -A INPUT -i WAN1 -p udp -s 0.0.0.0 --sport 67 \
-d 255.255.255.255 --dport 68 -j ACCEPT
To simplify debugging and be fair to anyone who'd like to access a service you have disabled, purposely or by mistake, you could REJECT those packets that are dropped.
Obviously this must be done directly after logging as the very last lines before the packets are dropped by policy:
iptables -A INPUT -j REJECT
These are only examples to show you some of the capabilities of the firewall code in Linux. Have a look at the man page of iptables; there you will find much more information. The port numbers needed for this can be found in /etc/services, in case you didn't find them by trial and error in your log file.
To set up the iptables firewall at boot, install the /etc/rc.d/init.d/iptables init script included in the blfs-bootscripts-20230101 package.
make install-iptables
iptables | is used to set up, maintain, and inspect the tables of IP packet filter rules in the Linux kernel
iptables-apply | is a safer way to update iptables remotely
iptables-legacy | is used to interact with iptables using the legacy command set
iptables-legacy-restore | is used to restore a set of legacy iptables rules
iptables-legacy-save | is used to save a set of legacy iptables rules
iptables-restore | is used to restore IP Tables from data specified on STDIN. Use I/O redirection provided by your shell to read from a file
iptables-save | is used to dump the contents of an IP Table in easily parseable format to STDOUT. Use I/O-redirection provided by your shell to write to a file
iptables-xml | is used to convert the output of iptables-save to an XML format. Using the iptables.xslt stylesheet converts the XML back to the format of iptables-restore
ip6tables* | are a set of commands for IPV6 that parallel the iptables commands above
nfsynproxy | (optional) configuration tool. SYNPROXY target makes handling of large SYN floods possible without the large performance penalties imposed by the connection tracking in such cases
xtables-multi | is a binary that behaves according to the name it is called by
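Incidentally, the save/restore pair described above is convenient for snapshotting a ruleset that works. As an illustration (the file name is arbitrary):

iptables-save > /root/firewall.rules
iptables-restore < /root/firewall.rules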
The purpose of a firewall is to protect a computer or a network against malicious access. In a perfect world every daemon or service, on every machine, is perfectly configured and immune to security flaws, and all users are trusted implicitly to use the equipment as intended. However, this is rarely, if ever, the case. Daemons may be misconfigured, or updates may not have been applied for known exploits against essential services. Additionally, you may wish to choose which services are accessible by certain machines or users, or you may wish to limit which machines or applications are allowed external access. Alternatively, you simply may not trust some of your applications or users. For these reasons, a carefully designed firewall should be an essential part of system security.
While a firewall can greatly limit the scope of the above issues, do not assume that having a firewall makes careful configuration redundant, or that any negligent misconfiguration is harmless. A firewall does not prevent the exploitation of any service you offer outside of it. Despite having a firewall, you need to keep applications and daemons properly configured and up to date.
The word firewall can have several different meanings.
Personal Firewall: This is a hardware device or software program intended to secure a home or desktop computer connected to the Internet. This type of firewall is highly relevant for users who do not know how their computers might be accessed via the Internet or how to disable that access, especially if they are always online and connected via broadband links.
An example configuration for a personal firewall is provided at Creating a Personal Firewall With iptables.
Masquerading Router: This is a system placed between the Internet and an intranet. To minimize the risk of compromising the firewall itself, it should generally have only one role, that of protecting the intranet. Although not completely risk-free, the tasks of doing the routing and IP masquerading (rewriting IP headers of the packets it routes from clients with private IP addresses onto the Internet so that they seem to come from the firewall itself) are commonly considered relatively secure.
An example configuration for a masquerading firewall is provided at Creating a Masquerading Router With iptables.
BusyBox: This is often an old computer you may have retired and nearly forgotten, performing masquerading or routing functions, but offering non-firewall services such as a web-cache or mail. This may be used for home networks, but is not to be considered as secure as a firewall only machine because the combination of server and router/firewall on one machine raises the complexity of the setup.
An example configuration for a BusyBox is provided at Creating a BusyBox With iptables.
Firewall with a Demilitarized Zone: This type of firewall performs masquerading or routing, but grants public access to some branch of your network that is physically separated from your regular intranet and is essentially a separate network with direct Internet access. The servers on this network are those which must be easily accessible from both the Internet and intranet. The firewall protects both networks. This type of firewall has a minimum of three network interfaces.
The example configurations provided for iptables-1.8.9 are not intended to be a complete guide to securing systems. Firewalling is a complex issue that requires careful configuration. The configurations provided by BLFS are intended only to give examples of how a firewall works. They are not intended to fit any particular configuration and may not provide complete protection from an attack.
BLFS provides a utility to manage the kernel Netfilter interface, iptables-1.8.9. It has been around since the early 2.4 kernels and has been the standard ever since. This is likely the set of tools that will be most familiar to existing admins. Other tools have been developed more recently; see the list of further readings below for more details. There you will find a list of URLs that contain comprehensive information about building firewalls and further securing your system.
www.netfilter.org - Homepage of the netfilter/iptables/nftables projects
Netfilter related FAQ
Netfilter related HOWTO's
nftables HOWTO
tldp.org/LDP/nag2/x-087-2-firewall.html
tldp.org/HOWTO/Security-HOWTO.html
tldp.org/HOWTO/Firewall-HOWTO.html
linuxsecurity.com/howtos
www.e-infomax.com/ipmasq
www.circlemud.org/jelson/writings/security/index.htm
insecure.org/reading.html
The libcap package was installed in LFS, but if Linux-PAM support is desired, the PAM module must be built (after installation of Linux-PAM).
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://www.kernel.org/pub/linux/libs/security/linux-privs/libcap2/libcap-2.68.tar.xz
Download MD5 sum: ffb9e9c87704f92ac75201327841e753
Download size: 188 KB
Estimated disk space required: 2.2 MB
Estimated build time: less than 0.1 SBU
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/libcap
If you are upgrading libcap from a previous version, use the instructions in the LFS libcap page to upgrade libcap. If Linux-PAM-1.5.2 has been built, the PAM module will automatically be built too.
Install libcap by running the following commands:
make -C pam_cap
This package does not come with a test suite.
Now, as the root user:
install -v -m755 pam_cap/pam_cap.so /usr/lib/security &&
install -v -m644 pam_cap/capability.conf /etc/security
In order to allow Linux-PAM to grant privileges based on POSIX capabilities, you need to add the libcap module to the beginning of the /etc/pam.d/system-auth file. Make the required edits with the following commands:
mv -v /etc/pam.d/system-auth{,.bak} &&
cat > /etc/pam.d/system-auth << "EOF" &&
# Begin /etc/pam.d/system-auth
auth optional pam_cap.so
EOF
tail -n +3 /etc/pam.d/system-auth.bak >> /etc/pam.d/system-auth
Additionally, you'll need to modify the /etc/security/capability.conf file to grant necessary privileges to users, and utilize the setcap utility to set capabilities on specific utilities as needed. See man 8 setcap and man 3 cap_from_text for additional information.
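As a sketch of how the two pieces fit together (the user name and binary below are placeholders, not part of the book's instructions): capability.conf names the capability and the user it is granted to, and the binary itself needs an inheritable capability so pam_cap can activate it at login.

echo "cap_net_raw   joe" >> /etc/security/capability.conf
setcap cap_net_raw+i /usr/bin/ping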
The Linux PAM package contains Pluggable Authentication Modules used by the local system administrator to control how application programs authenticate users.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://github.com/linux-pam/linux-pam/releases/download/v1.5.2/Linux-PAM-1.5.2.tar.xz
Download MD5 sum: 895e8adfa14af334f679bbeb28503f66
Download size: 966 KB
Estimated disk space required: 39 MB (with tests)
Estimated build time: 0.4 SBU (with tests)
Optional Documentation
Download (HTTP): https://github.com/linux-pam/linux-pam/releases/download/v1.5.2/Linux-PAM-1.5.2-docs.tar.xz
Download MD5 sum: ceb3dc248cb2f49a40904b93cb91db1b
Download size: 433 KB
Optional: Berkeley DB-5.3.28, libnsl-2.0.0, libtirpc-1.3.3, libaudit, and Prelude
Optional (to rebuild the documentation): docbook-xml-4.5, docbook-xsl-nons-1.79.2, fop-2.8, libxslt-1.1.37 and either Lynx-2.8.9rel.1 or W3m
Shadow-4.13 must be reinstalled and reconfigured after installing and configuring Linux PAM.
With Linux-PAM-1.4.0 and higher, the pam_cracklib module is not installed by default. Use libpwquality-1.4.5 to enforce strong passwords.
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/linux-pam
First, prevent the installation of an unneeded systemd file:
sed -e /service_DATA/d \
    -i modules/pam_namespace/Makefile.am &&
autoreconf
If you downloaded the documentation, unpack the tarball by issuing the following command.
tar -xf ../Linux-PAM-1.5.2-docs.tar.xz --strip-components=1
If you want to regenerate the documentation yourself, fix the configure script so it will detect lynx:
sed -e 's/dummy elinks/dummy lynx/' \
    -e 's/-no-numbering -no-references/-force-html -nonumbers -stdin/' \
    -i configure
Compile and link Linux PAM by running the following commands:
./configure --prefix=/usr \
            --sbindir=/usr/sbin \
            --sysconfdir=/etc \
            --libdir=/usr/lib \
            --enable-securedir=/usr/lib/security \
            --docdir=/usr/share/doc/Linux-PAM-1.5.2 &&
make
To test the results, a suitable /etc/pam.d/other configuration file must exist.
If you have a system with Linux PAM installed and working, be careful when modifying the files in /etc/pam.d, since your system may become totally unusable. If you want to run the tests, you do not need to create another /etc/pam.d/other file. The existing file can be used for the tests.

You should also be aware that make install overwrites the configuration files in /etc/security as well as /etc/environment. If you have modified those files, be sure to back them up.
For a first-time installation, create a configuration file by issuing the following commands as the root user:
install -v -m755 -d /etc/pam.d &&
cat > /etc/pam.d/other << "EOF"
auth required pam_deny.so
account required pam_deny.so
password required pam_deny.so
session required pam_deny.so
EOF
Now run the tests by issuing make check. Be sure the tests produced no errors before continuing the installation. Note that the tests are very long. Redirect the output to a log file, so you can inspect it thoroughly.
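For example (the log file name is arbitrary):

make check > make-check.log 2>&1

You can then search the log, e.g. with grep FAIL make-check.log, to spot any problems.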
For a first-time installation, remove the configuration file created earlier by issuing the following command as the root user:
rm -fv /etc/pam.d/other
Now, as the root user:
make install &&
chmod -v 4755 /usr/sbin/unix_chkpwd
--enable-securedir=/usr/lib/security: This switch sets the installation location for the PAM modules.

--disable-regenerate-docu: If the needed dependencies (docbook-xml-4.5, docbook-xsl-nons-1.79.2, libxslt-1.1.37, and Lynx-2.8.9rel.1 or W3m) are installed, the manual pages and the HTML and text documentation files are generated and installed. Furthermore, if fop-2.8 is installed, the PDF documentation is generated and installed. Use this switch if you do not want to rebuild the documentation.

chmod -v 4755 /usr/sbin/unix_chkpwd: The setuid bit for the unix_chkpwd helper program must be turned on, so that non-root processes can access the shadow file.
Configuration information is placed in /etc/pam.d/. Here is a sample file:
# Begin /etc/pam.d/other
auth required pam_unix.so nullok
account required pam_unix.so
session required pam_unix.so
password required pam_unix.so nullok
# End /etc/pam.d/other
Now create some generic configuration files. As the root user:
install -vdm755 /etc/pam.d &&

cat > /etc/pam.d/system-account << "EOF" &&
# Begin /etc/pam.d/system-account

account   required    pam_unix.so

# End /etc/pam.d/system-account
EOF

cat > /etc/pam.d/system-auth << "EOF" &&
# Begin /etc/pam.d/system-auth

auth      required    pam_unix.so

# End /etc/pam.d/system-auth
EOF

cat > /etc/pam.d/system-session << "EOF" &&
# Begin /etc/pam.d/system-session

session   required    pam_unix.so

# End /etc/pam.d/system-session
EOF

cat > /etc/pam.d/system-password << "EOF"
# Begin /etc/pam.d/system-password

# use sha512 hash for encryption, use shadow, and try to use any previously
# defined authentication token (chosen password) set by any prior module.
# Use the same number of rounds as shadow.
password  required    pam_unix.so       sha512 shadow try_first_pass \
                                        rounds=500000

# End /etc/pam.d/system-password
EOF
If you wish to enable strong password support, install libpwquality-1.4.5, and follow the instructions on that page to configure the pam_pwquality PAM module with strong password support.
Next, add a restrictive /etc/pam.d/other configuration file. With this file, programs that are PAM aware will not run unless a configuration file specifically for that application exists.
cat > /etc/pam.d/other << "EOF"
# Begin /etc/pam.d/other
auth required pam_warn.so
auth required pam_deny.so
account required pam_warn.so
account required pam_deny.so
password required pam_warn.so
password required pam_deny.so
session required pam_warn.so
session required pam_deny.so
# End /etc/pam.d/other
EOF
The PAM man page (man pam) provides a good starting point to learn about the several fields, and allowable entries. The Linux-PAM System Administrators' Guide is recommended for additional information.
You should now reinstall the Shadow-4.13 package.
faillock | displays and modifies the authentication failure record files
mkhomedir_helper | is a helper binary that creates home directories
pam_namespace_helper | is a helper program used to configure a private namespace for a user session
pwhistory_helper | is a helper program that transfers password hashes from passwd or shadow to opasswd
pam_timestamp_check | is used to check if the default timestamp is valid
unix_chkpwd | is a helper binary that verifies the password of the current user
unix_update | is a helper binary that updates the password of a given user
libpam.so | provides the interfaces between applications and the PAM modules
liboauth is a collection of POSIX-C functions implementing the OAuth Core RFC 5849 standard. Liboauth provides functions to escape and encode parameters according to the OAuth specification and offers high-level functionality to sign requests or verify OAuth signatures, as well as perform HTTP requests.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://downloads.sourceforge.net/liboauth/liboauth-1.0.3.tar.gz
Download MD5 sum: 689b46c2b3ab1a39735ac33f714c4f7f
Download size: 496 KB
Estimated disk space required: 3.5 MB
Estimated build time: less than 0.1 SBU
Required patch for use with openssl: https://www.linuxfromscratch.org/patches/blfs/svn/liboauth-1.0.3-openssl-1.1.0-3.patch
Optional: nss-3.89 and Doxygen-1.9.6 (to build documentation)
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/liboauth
Apply a patch for the current version of openssl:
patch -Np1 -i ../liboauth-1.0.3-openssl-1.1.0-3.patch
Install liboauth by running the following commands:
./configure --prefix=/usr --disable-static &&
make
If you wish to build the documentation (needs Doxygen-1.9.6), issue:
make dox
To test the results, issue: make check.
Now, as the root user:
make install
If you have previously built the documentation, install it by running the following commands as the root user:

install -v -dm755 /usr/share/doc/liboauth-1.0.3 &&
cp -rv doc/html/* /usr/share/doc/liboauth-1.0.3
--disable-static: This switch prevents installation of static versions of the libraries.

--enable-nss: Use this switch if you want to use Mozilla NSS instead of OpenSSL.
The libpwquality package provides common functions for password quality checking, and also for scoring passwords based on their apparent randomness. The library also provides a function for generating random passwords with good pronounceability.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://github.com/libpwquality/libpwquality/releases/download/libpwquality-1.4.5/libpwquality-1.4.5.tar.bz2
Download MD5 sum: 6b70e355269aef0b9ddb2b9d17936f21
Download size: 424 KB
Estimated disk space required: 5.4 MB
Estimated build time: 0.1 SBU
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/libpwquality
Install libpwquality by running the following commands:
./configure --prefix=/usr \
            --disable-static \
            --with-securedir=/usr/lib/security \
            --with-python-binary=python3 &&
make
This package does not come with a test suite.
Now, as the root user:
make install
--with-python-binary=python3: This parameter gives the location of the Python binary. The default is python, which requires Python-2.7.18.
libpwquality is intended to be a functional replacement for the now-obsolete pam_cracklib.so PAM module. To configure the system to use the pam_pwquality module, execute the following commands as the root user:
mv /etc/pam.d/system-password{,.orig} &&
cat > /etc/pam.d/system-password << "EOF"
# Begin /etc/pam.d/system-password
# check new passwords for strength (man pam_pwquality)
password required pam_pwquality.so authtok_type=UNIX retry=1 difok=1 \
minlen=8 dcredit=0 ucredit=0 \
lcredit=0 ocredit=0 minclass=1 \
maxrepeat=0 maxsequence=0 \
maxclassrepeat=0 gecoscheck=0 \
dictcheck=1 usercheck=1 \
enforcing=1 badwords="" \
dictpath=/usr/lib/cracklib/pw_dict
# use sha512 hash for encryption, use shadow, and use the
# authentication token (chosen password) set by pam_pwquality
# above (or any previous modules). Also set the number of crypt rounds
# to the value used in shadow.
password required pam_unix.so sha512 shadow use_authtok \
rounds=500000
# End /etc/pam.d/system-password
EOF
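You can preview how candidate passwords fare with the pwscore utility installed by libpwquality, which reads a password on standard input and prints a quality score from 0 to 100, or the reason the password is rejected. For example:

echo "top5ecret" | pwscore

Note that pwscore takes its settings from /etc/security/pwquality.conf rather than from the PAM stack above, so it approximates, rather than reproduces, the module's verdict.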
MIT Kerberos V5 is a free implementation of Kerberos 5. Kerberos is a network authentication protocol. It centralizes the authentication database and uses kerberized applications to work with servers or services that support Kerberos, allowing single logins and encrypted communication over internal networks or the Internet.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://kerberos.org/dist/krb5/1.20/krb5-1.20.1.tar.gz
Download MD5 sum: 73f5780e7b587ccd8b8cfc10c965a686
Download size: 8.3 MB
Estimated disk space required: 94 MB (add 14 MB for tests)
Estimated build time: 0.3 SBU (Using parallelism=4; add 5.8 SBU for tests)
Optional: BIND Utilities-9.18.14, GnuPG-2.4.0 (to authenticate the package), keyutils-1.6.1, OpenLDAP-2.6.4, Valgrind-3.20.0 (used during the test suite), yasm-1.3.0, libedit, cmocka, kdcproxy, pyrad, and resolv_wrapper
Some sort of time synchronization facility on your system (like ntp-4.2.8p15) is required since Kerberos won't authenticate if there is a time difference between a kerberized client and the KDC server.
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/mitkrb
Build MIT Kerberos V5 by running the following commands:
cd src &&

sed -i -e '/eq 0/{N;s/12 //}' plugins/kdb/db2/libdb2/test/run.test &&
sed -i '/t_kadm5.py/d' lib/kadm5/Makefile.in &&

./configure --prefix=/usr \
            --sysconfdir=/etc \
            --localstatedir=/var/lib \
            --runstatedir=/run \
            --with-system-et \
            --with-system-ss \
            --with-system-verto=no \
            --enable-dns-for-realm &&
make
To test the build, issue as the root user: make -k -j1 check. Some tests may fail with the latest version of dejagnu and glibc. Some tests may hang for a long time and fail if the system is not connected to a network.
Now, as the root user:
make install &&
install -v -dm755 /usr/share/doc/krb5-1.20.1 &&
cp -vfr ../doc/*  /usr/share/doc/krb5-1.20.1
The two sed commands remove tests that are known to fail.
--localstatedir=/var/lib: This option is used so that the Kerberos variable runtime data is located in /var/lib instead of /usr/var.

--runstatedir=/run: This option is used so that the Kerberos runtime state information is located in /run instead of the deprecated /var/run.

--with-system-et: This switch causes the build to use the system-installed versions of the error-table support software.

--with-system-ss: This switch causes the build to use the system-installed versions of the subsystem command-line interface software.

--with-system-verto=no: This switch fixes a bug in the package: it does not recognize its own verto library installed previously. This is not a problem if reinstalling the same version, but if you are updating, the old library would be used as the system one instead of installing the new version.

--enable-dns-for-realm: This switch allows realms to be resolved using the DNS server.

--with-ldap: Use this switch if you want to compile the OpenLDAP database backend module.
You should consider installing some sort of password checking dictionary so that you can configure the installation to only accept strong passwords. A suitable dictionary to use is shown in the CrackLib-2.9.11 instructions. Note that only one file can be used, but you can concatenate many files into one. The configuration file shown below assumes you have installed a dictionary to /usr/share/dict/words.
Create the Kerberos configuration file with the following commands issued by the root user:
cat > /etc/krb5.conf << "EOF"
# Begin /etc/krb5.conf
[libdefaults]
default_realm = <EXAMPLE.ORG>
encrypt = true
[realms]
<EXAMPLE.ORG> = {
kdc = <belgarath.example.org>
admin_server = <belgarath.example.org>
dict_file = /usr/share/dict/words
}
[domain_realm]
.<example.org> = <EXAMPLE.ORG>
[logging]
kdc = SYSLOG:INFO:AUTH
admin_server = SYSLOG:INFO:AUTH
default = SYSLOG:DEBUG:DAEMON
# End /etc/krb5.conf
EOF
You will need to substitute your domain and proper hostname for the occurrences of the <belgarath> and <example.org> names.
default_realm should be the name of your domain changed to ALL CAPS. This isn't required, but both Heimdal and MIT recommend it.
encrypt = true provides encryption of all traffic between kerberized clients and servers. It's not necessary and can be left off. If you leave it off, you can encrypt all traffic from the client to the server using a switch on the client program instead.
The [realms] parameters tell the client programs where to look for the KDC authentication services.

The [domain_realm] section maps a domain to a realm.
Create the KDC database:
kdb5_util create -r <EXAMPLE.ORG> -s
Now you should populate the database with principals (users). For now, just use your regular login name or root.
kadmin.local
kadmin.local: add_policy dict-only
kadmin.local: addprinc -policy dict-only <loginname>
The KDC server and any machine running kerberized server daemons must have a host key installed:
kadmin.local: addprinc -randkey host/<belgarath.example.org>
After choosing the defaults when prompted, you will have to export the data to a keytab file:
kadmin.local: ktadd host/<belgarath.example.org>
This should have created a file in /etc named krb5.keytab (Kerberos 5). This file should have 600 (root rw only) permissions. Keeping the keytab files from public access is crucial to the overall security of the Kerberos installation.
Exit the kadmin program (use quit or exit) and return to the shell prompt. Start the KDC daemon manually, just to test out the installation:
/usr/sbin/krb5kdc
Attempt to get a ticket with the following command:
kinit <loginname>
You will be prompted for the password you created. After you get your ticket, you can list it with the following command:
klist
Information about the ticket should be displayed on the screen.
To test the functionality of the keytab file, issue the following command as the root user:
ktutil
ktutil: rkt /etc/krb5.keytab
ktutil: l
This should dump a list of the host principal, along with the encryption methods used to access the principal.
Create an empty ACL file that can be modified later:
touch /var/lib/krb5kdc/kadm5.acl
At this point, if everything has been successful so far, you can feel fairly confident in the installation and configuration of the package.
For additional information consult the documentation for krb5-1.20.1 on which the above instructions are based.
If you want to start Kerberos services at boot, install the /etc/rc.d/init.d/krb5 init script included in the blfs-bootscripts-20230101 package using the following command:
make install-krb5
gss-client: is a GSSAPI test client

gss-server: is a GSSAPI test server

k5srvutil: is a host keytable manipulation utility

kadmin: is a utility used to make modifications to the Kerberos database

kadmin.local: is a utility similar to kadmin, but if the database is db2, the local client kadmin.local is intended to run directly on the master KDC without Kerberos authentication

kadmind: is a server for administrative access to a Kerberos database

kdb5_ldap_util: allows an administrator to manage realms, Kerberos services, and ticket policies

kdb5_util: is the KDC database utility

kdestroy: removes the current set of tickets

kinit: is used to authenticate to the Kerberos server as a principal and acquire a ticket granting ticket that can later be used to obtain tickets for other services

klist: reads and displays the current tickets in the credential cache

kpasswd: is a program for changing Kerberos 5 passwords

kprop: takes a principal database in a specified format and converts it into a stream of database records

kpropd: receives a database sent by kprop and writes it as a local database

kproplog: displays the contents of the KDC database update log to standard output

krb5-config: gives information on how to link programs against libraries

krb5kdc: is the Kerberos 5 server

krb5-send-pr: sends a problem report (PR) to a central support site

ksu: is the super user program using the Kerberos protocol; it requires a properly configured […]

kswitch: makes the specified credential cache the primary cache for the collection, if a cache collection is available

ktutil: is a program for managing Kerberos keytabs

kvno: prints keyversion numbers of Kerberos principals

sclient: is used to contact a sample server and authenticate to it using Kerberos 5 tickets, then display the server's response

sim_client: is a simple UDP-based sample client program, for demonstration

sim_server: is a simple UDP-based server application, for demonstration

sserver: is the sample Kerberos 5 server

uclient: is another sample client

userver: is another sample server

libgssapi_krb5.so: contains the Generic Security Service Application Programming Interface (GSSAPI) functions, which provide security services to callers in a generic fashion, supportable with a range of underlying mechanisms and technologies, allowing source-level portability of applications to different environments

libkadm5clnt.so: contains the administrative authentication and password checking functions required by Kerberos 5 client-side programs

libkadm5srv.so: contains the administrative authentication and password checking functions required by Kerberos 5 servers

libkdb5.so: is a Kerberos 5 authentication/authorization database access library

libkrad.so: contains the internal support library for RADIUS functionality

libkrb5.so: is an all-purpose Kerberos 5 library
The Nettle package contains a low-level cryptographic library that is designed to fit easily in many contexts.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://ftp.gnu.org/gnu/nettle/nettle-3.8.1.tar.gz
Download (FTP): ftp://ftp.gnu.org/gnu/nettle/nettle-3.8.1.tar.gz
Download MD5 sum: e15c5fd5cc901f5dde6a271d7f2320d1
Download size: 2.3 MB
Estimated disk space required: 90 MB (with tests)
Estimated build time: 0.1 SBU (with tests; both using parallelism=4)
Valgrind-3.20.0 (optional for the tests)
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/nettle
Install Nettle by running the following commands:
./configure --prefix=/usr --disable-static &&
make
To test the results, issue: make check.
Now, as the root user:

make install &&
chmod   -v   755 /usr/lib/lib{hogweed,nettle}.so &&
install -v -m755 -d /usr/share/doc/nettle-3.8.1 &&
install -v -m644 nettle.html /usr/share/doc/nettle-3.8.1
--disable-static: This switch prevents installation of static versions of the libraries.
nettle-hash: calculates a hash value using a specified algorithm

nettle-lfib-stream: outputs a sequence of pseudorandom (non-cryptographic) bytes, using Knuth's lagged Fibonacci generator. The stream is useful for testing, but should not be used to generate cryptographic keys or anything else that needs real randomness

nettle-pbkdf2: is a password-based key derivation function that takes a password or a passphrase as input and returns a strengthened password, which is protected against pre-computation attacks by using salting and other expensive computations

pkcs1-conv: converts private and public RSA keys from PKCS #1 format to sexp format

sexp-conv: converts an s-expression to a different encoding
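As a quick sanity check of the installed tools, the commands below list the supported algorithms and hash an arbitrary file; they assume the --list and --algorithm options of nettle-hash:

# List available hash algorithms, then hash a file with SHA-256
nettle-hash --list
nettle-hash --algorithm=sha256 /etc/hosts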
The Network Security Services (NSS) package is a set of libraries designed to support cross-platform development of security-enabled client and server applications. Applications built with NSS can support SSL v2 and v3, TLS, PKCS #5, PKCS #7, PKCS #11, PKCS #12, S/MIME, X.509 v3 certificates, and other security standards. This is useful for implementing SSL and S/MIME or other Internet security standards into an application.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://archive.mozilla.org/pub/security/nss/releases/NSS_3_89_RTM/src/nss-3.89.tar.gz
Download MD5 sum: 9d5b910dcec7ef8aeb88b53803454f63
Download size: 69 MB
Estimated disk space required: 297 MB (add 139 MB for tests)
Estimated build time: 1.1 SBU (with parallelism=4; add less than 20 SBU for tests on AMD Ryzen machines, or at least 30 SBU on Intel machines)
SQLite-3.41.2 and p11-kit-0.25.0 (runtime)
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/nss
Install NSS by running the following commands:
patch -Np1 -i ../nss-3.89-standalone-1.patch &&
cd nss &&
make BUILD_OPT=1 \
  NSPR_INCLUDE_DIR=/usr/include/nspr \
  USE_SYSTEM_ZLIB=1 \
  ZLIB_LIBS=-lz \
  NSS_ENABLE_WERROR=0 \
  $([ $(uname -m) = x86_64 ] && echo USE_64=1) \
  $([ -f /usr/include/sqlite3.h ] && echo NSS_USE_SYSTEM_SQLITE=1)
To run the tests, execute the following commands:
cd tests &&
HOST=localhost DOMSUF=localdomain ./all.sh
cd ../
Some information about the tests:
HOST=localhost and DOMSUF=localdomain are required. Without these variables, a FQDN is required to be specified, and this generic way should work for everyone, provided localhost.localdomain is defined in /etc/hosts, as done in the LFS book.
The tests take an extremely long time to run. If desired there is information in the all.sh script about running subsets of the total test suite.
When interrupting the tests, the test suite fails to spin down test servers that are run. This leads to an infinite loop in the tests where the test suite tries to kill a server that doesn't exist anymore because it pulls the wrong PID.
Test suite results (in HTML format!) can be found at ../../test_results/security/localhost.1/results.html
A few tests might fail on some Intel machines for unknown reasons.
Now, as the root user:

cd ../dist &&
install -v -m755 Linux*/lib/*.so /usr/lib &&
install -v -m644 Linux*/lib/{*.chk,libcrmf.a} /usr/lib &&
install -v -m755 -d /usr/include/nss &&
cp -v -RL {public,private}/nss/* /usr/include/nss &&
install -v -m755 Linux*/bin/{certutil,nss-config,pk12util} /usr/bin &&
install -v -m644 Linux*/lib/pkgconfig/nss.pc /usr/lib/pkgconfig
BUILD_OPT=1: This option is passed to make so that the build is performed with no debugging symbols built into the binaries and the default compiler optimizations are used.
NSPR_INCLUDE_DIR=/usr/include/nspr: This option sets the location of the NSPR headers.
USE_SYSTEM_ZLIB=1: This option is passed to make to ensure that the libssl3.so library is linked to the system-installed zlib instead of the in-tree version.
ZLIB_LIBS=-lz: This option provides the linker flags needed to link to the system zlib.
$([ $(uname -m) = x86_64 ] && echo USE_64=1): The USE_64=1 option is required on x86_64, otherwise make will try (and fail) to create 32-bit objects. The [ $(uname -m) = x86_64 ] test ensures it has no effect on a 32-bit system.
$([ -f /usr/include/sqlite3.h ] && echo NSS_USE_SYSTEM_SQLITE=1): This tests if sqlite is installed, and if so, it echoes the option NSS_USE_SYSTEM_SQLITE=1 to make so that libsoftokn3.so will link against the system version of sqlite.
NSS_DISABLE_GTESTS=1: If you don't need to run the NSS test suite, append this option to the make command to prevent the compilation of tests and save some build time.
If p11-kit-0.25.0 is installed, the p11-kit trust module (/usr/lib/pkcs11/p11-kit-trust.so) can be used as a drop-in replacement for /usr/lib/libnssckbi.so to transparently make the system CAs available to NSS-aware applications, rather than the static list provided by /usr/lib/libnssckbi.so. As the root user, execute the following command:
ln -sfv ./pkcs11/p11-kit-trust.so /usr/lib/libnssckbi.so
Additionally, for dependent applications that do not use the internal database (/usr/lib/libnssckbi.so), the /usr/sbin/make-ca script included on the make-ca-1.13 page can generate a system-wide NSS DB with the -n switch, or by modifying the /etc/make-ca/make-ca.conf file.
certutil: is the Mozilla Certificate Database Tool. It is a command-line utility that can create and modify the Netscape Communicator cert8.db and key3.db database files. It can also list, generate, modify, or delete certificates within the cert8.db file and create or change the password, generate new public and private key pairs, display the contents of the key database, or delete key pairs within the key3.db file

nss-config: is used to determine the NSS library settings of the installed NSS libraries

pk12util: is a tool for importing certificates and keys from PKCS #12 files into NSS or exporting them. It can also list certificates and keys in such files
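As a brief illustration of certutil, the sketch below creates a new user-level NSS database in the SQLite format and lists its (initially empty) certificate store; the database location is a hypothetical choice:

# Hypothetical per-user NSS database
mkdir -p ~/.pki/nssdb &&
certutil -N -d sql:$HOME/.pki/nssdb &&
certutil -L -d sql:$HOME/.pki/nssdb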
The p11-kit package provides a way to load and enumerate PKCS #11 (a Cryptographic Token Interface Standard) modules.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://github.com/p11-glue/p11-kit/releases/download/0.25.0/p11-kit-0.25.0.tar.xz
Download MD5 sum: 67b2539bdca6b4bedaeecc12864d2796
Download size: 820 KB
Estimated disk space required: 44 MB (with tests)
Estimated build time: 0.5 SBU (with tests)
GTK-Doc-1.33.2, libxslt-1.1.37, and nss-3.89 (runtime)
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/p11-kit
Prepare the distribution-specific anchor hook:
sed '20,$ d' -i trust/trust-extract-compat &&
cat >> trust/trust-extract-compat << "EOF"
# Copy existing anchor modifications to /etc/ssl/local
/usr/libexec/make-ca/copy-trust-modifications
# Update trust stores
/usr/sbin/make-ca -r
EOF
Install p11-kit by running the following commands:
mkdir p11-build &&
cd p11-build &&
meson setup .. \
  --prefix=/usr \
  --buildtype=release \
  -Dtrust_paths=/etc/pki/anchors &&
ninja
To test the results, issue: ninja test.
Now, as the root user:

ninja install &&
ln -sfv /usr/libexec/p11-kit/trust-extract-compat \
        /usr/bin/update-ca-certificates
--buildtype=release: Specify a buildtype suitable for stable releases of the package, as the default may produce unoptimized binaries.
-Dtrust_paths=/etc/pki/anchors: This switch sets the location of trusted certificates used by libp11-kit.so.
-Dhash_impl=freebl: Use this switch if you want to use the Freebl library from NSS for SHA1 and MD5 hashing.
-Dgtk_doc=true: Use this switch if you have installed GTK-Doc-1.33.2 and libxslt-1.1.37 and wish to rebuild the documentation and generate manual pages.
The p11-kit trust module (/usr/lib/pkcs11/p11-kit-trust.so) can be used as a drop-in replacement for /usr/lib/libnssckbi.so to transparently make the system CAs available to NSS-aware applications, rather than the static list provided by /usr/lib/libnssckbi.so. As the root user, execute the following command:
ln -sfv ./pkcs11/p11-kit-trust.so /usr/lib/libnssckbi.so
p11-kit: is a command-line tool that can be used to perform operations on PKCS#11 modules configured on the system

trust: is a command-line tool to examine and modify the shared trust policy store

update-ca-certificates: is a command-line tool to both extract local certificates from an updated anchor store, and regenerate all anchors and certificate stores on the system. This is done unconditionally on BLFS using the […]

libp11-kit.so: contains functions used to coordinate initialization and finalization of any PKCS#11 module

p11-kit-proxy.so: is the PKCS#11 proxy module
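As a short illustration of the trust utility, the first command below lists the shared trust policy store, and the second (with a hypothetical certificate path) registers a locally generated CA certificate as a new anchor:

# List the contents of the shared trust policy store
trust list

# Hypothetical: store a local CA certificate as a trust anchor
trust anchor /etc/ssl/local/myCA.crt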
Polkit is a toolkit for defining and handling authorizations. It is used for allowing unprivileged processes to communicate with privileged processes.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://gitlab.freedesktop.org/polkit/polkit/-/archive/122/polkit-122.tar.gz
Download MD5 sum: bbe3e745fc5bc1a41f1b5044f09a0f26
Download size: 728 KB
Estimated disk space required: 7.0 MB (with tests)
Estimated build time: 0.3 SBU (with tests, using parallelism=4)
gobject-introspection-1.76.1, libxslt-1.1.37, Linux-PAM-1.5.2, and elogind-246.10
Since elogind uses PAM to register user sessions, it is a good idea to build Polkit with PAM support so elogind can track Polkit sessions.
GTK-Doc-1.33.2, JS-102.10.0 (can be used in place of duktape), and dbusmock-0.29.0 (for tests)
One polkit authentication agent for using polkit in the graphical environment: polkit-kde-agent in Plasma-5.26.5 for KDE, the agent built in gnome-shell-43.3 for GNOME3, polkit-gnome-0.105 for XFCE, and lxpolkit in LXSession-0.5.5 for LXDE
If libxslt-1.1.37 is installed, then docbook-xml-4.5 and docbook-xsl-nons-1.79.2 are required. If you have installed libxslt-1.1.37, but you do not want to install any of the DocBook packages mentioned, you will need to use -Dman=false in the instructions below.
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/polkit
There should be a dedicated user and group to take control of the polkitd daemon after it is started. Issue the following commands as the root user:

groupadd -fg 27 polkitd &&
useradd -c "PolicyKit Daemon Owner" -d /etc/polkit-1 -u 27 \
        -g polkitd -s /bin/false polkitd
Install Polkit by running the following commands:
mkdir build &&
cd build &&
meson setup .. \
  --prefix=/usr \
  --buildtype=release \
  -Dman=true \
  -Dsession_tracking=libelogind \
  -Dsystemdsystemunitdir=/tmp \
  -Dtests=true &&
ninja
To test the results, first ensure that the system D-Bus daemon is running, and both D-Bus Python-1.3.2 and dbusmock-0.29.0 are installed. Then run ninja test.
Now, as the root user:

ninja install &&
rm -v /tmp/*.service
--buildtype=release: Specify a buildtype suitable for stable releases of the package, as the default may produce unoptimized binaries.
-Dtests=true: This switch allows running the test suite of this package. As Polkit is used for authorizations, its integrity can affect system security, so it's recommended to run the test suite when building this package.
-Djs_engine=mozjs: This switch allows using the JS-102.10.0 JavaScript engine instead of the duktape-2.7.0 JavaScript engine.
-Dos_type=lfs: Use this switch if you did not create the /etc/lfs-release file; otherwise distribution auto-detection will fail and you will be unable to use Polkit.
-Dauthfw=shadow: This switch enables the package to use the Shadow rather than the Linux PAM authentication framework. Use it if you have not installed Linux PAM.
-Dintrospection=false: Use this option if you are certain that you do not need gobject-introspection files for polkit, or do not have gobject-introspection installed.
-Dman=false: Use this option to disable generating and installing manual pages. This is useful if libxslt is not installed.
-Dexamples=true: Use this option to build the example programs.
-Dgtk_doc=true: Use this option to enable building and installing the API documentation.
pkaction: is used to obtain information about registered PolicyKit actions

pkcheck: is used to check whether a process is authorized for an action

pkexec: allows an authorized user to execute a command as another user

pkttyagent: is used to start a textual authentication agent for the subject

polkitd: provides the org.freedesktop.PolicyKit1 D-Bus service on the system message bus

libpolkit-agent-1.so: contains the Polkit authentication agent API functions

libpolkit-gobject-1.so: contains the Polkit authorization API functions
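Polkit authorizations can be adjusted with JavaScript rules files placed in /etc/polkit-1/rules.d/. The following is a minimal sketch; the action id and group used here are assumptions to adapt to your needs. As the root user:

cat > /etc/polkit-1/rules.d/40-wheel-mount.rules << "EOF"
// Hypothetical rule: allow members of the wheel group to mount
// filesystems via udisks2 without a password prompt
polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.udisks2.filesystem-mount" &&
        subject.isInGroup("wheel")) {
        return polkit.Result.YES;
    }
});
EOF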
The Polkit GNOME package provides an Authentication Agent for Polkit that integrates well with the GNOME Desktop environment.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://download.gnome.org/sources/polkit-gnome/0.105/polkit-gnome-0.105.tar.xz
Download (FTP): ftp://ftp.acc.umu.se/pub/gnome/sources/polkit-gnome/0.105/polkit-gnome-0.105.tar.xz
Download MD5 sum: 50ecad37c8342fb4a52f590db7530621
Download size: 305 KB
Estimated disk space required: 5.0 MB
Estimated build time: 0.1 SBU
AccountsService-23.13.9, GTK+-3.24.38, and Polkit-122
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/polkit-gnome
First, apply some fixes that allow for the proper user icon to be used, as well as some security fixes:
patch -Np1 -i ../polkit-gnome-0.105-consolidated_fixes-1.patch
Install Polkit GNOME by running the following commands:
./configure --prefix=/usr && make
This package does not come with a test suite.
Now, as the root
user:
make install
For the authentication framework to work, polkit-gnome-authentication-agent-1 needs to be started. However, make install did not install a startup file for Polkit GNOME, so you have to create it yourself.
Issue the following commands as the root user to create a startup file for Polkit GNOME:
mkdir -p /etc/xdg/autostart &&
cat > /etc/xdg/autostart/polkit-gnome-authentication-agent-1.desktop << "EOF"
[Desktop Entry]
Name=PolicyKit Authentication Agent
Comment=PolicyKit Authentication Agent
Exec=/usr/libexec/polkit-gnome-authentication-agent-1
Terminal=false
Type=Application
Categories=
NoDisplay=true
OnlyShowIn=GNOME;XFCE;Unity;
AutostartCondition=GNOME3 unless-session gnome
EOF
Shadow was indeed installed in LFS and there is no reason to reinstall it unless you installed CrackLib or Linux-PAM after your LFS system was completed. If you have installed CrackLib after LFS, then reinstalling Shadow will enable strong password support. If you have installed Linux-PAM, reinstalling Shadow will allow programs such as login and su to utilize PAM.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://github.com/shadow-maint/shadow/releases/download/4.13/shadow-4.13.tar.xz
Download MD5 sum: b1ab01b5462ddcf43588374d57bec123
Download size: 1.7 MB
Estimated disk space required: 45 MB
Estimated build time: 0.2 SBU
Linux-PAM-1.5.2 or CrackLib-2.9.11
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/shadow
The installation commands shown below are for installations where Linux-PAM has been installed and Shadow is being reinstalled to support the Linux-PAM installation.
If you are reinstalling Shadow to provide strong password support using the CrackLib library without using Linux-PAM, ensure you add the --with-libcrack parameter to the configure script below and also issue the following command:
sed -i 's@DICTPATH.*@DICTPATH\t/lib/cracklib/pw_dict@' etc/login.defs
Reinstall Shadow by running the following commands:
sed -i 's/groups$(EXEEXT) //' src/Makefile.in &&
find man -name Makefile.in -exec sed -i 's/groups\.1 / /'   {} \; &&
find man -name Makefile.in -exec sed -i 's/getspnam\.3 / /' {} \; &&
find man -name Makefile.in -exec sed -i 's/passwd\.5 / /'   {} \; &&

sed -e 's@#ENCRYPT_METHOD DES@ENCRYPT_METHOD SHA512@' \
    -e 's@#\(SHA_CRYPT_..._ROUNDS 5000\)@\100@'       \
    -e 's@/var/spool/mail@/var/mail@'                 \
    -e '/PATH=/{s@/sbin:@@;s@/bin:@@}'                \
    -i etc/login.defs &&

./configure --sysconfdir=/etc \
            --disable-static  \
            --with-group-name-max-length=32 &&
make
This package does not come with a test suite.
Now, as the root user:

make exec_prefix=/usr install
The man pages were installed in LFS, but if reinstallation is desired, run (as the root user):
make -C man install-man
sed -i 's/groups$(EXEEXT) //' src/Makefile.in: This sed is used to suppress the installation of the groups program as the version from the Coreutils package installed during LFS is preferred.
find man -name Makefile.in -exec ... {} \;: The first command is used to suppress the installation of the groups man pages so the existing ones installed from the Coreutils package are not replaced. The two other commands prevent installation of manual pages that are already installed by Man-pages in LFS.
sed -e 's@#ENCRYPT_METHOD DES@ENCRYPT_METHOD SHA512@' -e 's@#\(SHA_CRYPT_..._ROUNDS 5000\)@\100@' -e 's@/var/spool/mail@/var/mail@' -e '/PATH=/{s@/sbin:@@;s@/bin:@@}' -i etc/login.defs: Instead of using the default 'DES' method, this command modifies the installation to use the more secure 'SHA512' method of hashing passwords, which also allows passwords longer than eight characters. The number of rounds is also increased to prevent brute force password attacks. The command also changes the obsolete /var/spool/mail location for user mailboxes that Shadow uses by default to the /var/mail location. It also changes the default path to be consistent with that set in LFS.
--with-group-name-max-length=32: The maximum user name length is 32 characters. Make the maximum group name length the same.
The rest of this page is devoted to configuring Shadow to work properly with Linux-PAM. If you do not have Linux-PAM installed, and you reinstalled Shadow to support strong passwords via the CrackLib library, no further configuration is required.
Configuring your system to use Linux-PAM can be a complex task. The information below will provide a basic setup so that Shadow's login and password functionality will work effectively with Linux-PAM. Review the information and links on the Linux-PAM-1.5.2 page for further configuration information. For information specific to integrating Shadow, Linux-PAM and libpwquality, you can visit the following link:
The login program currently performs many functions which Linux-PAM modules should now handle. The following sed command will comment out the appropriate lines in /etc/login.defs, and stop login from performing these functions (a backup file named /etc/login.defs.orig is also created to preserve the original file's contents). Issue the following commands as the root user:
install -v -m644 /etc/login.defs /etc/login.defs.orig &&
for FUNCTION in FAIL_DELAY               \
                FAILLOG_ENAB             \
                LASTLOG_ENAB             \
                MAIL_CHECK_ENAB          \
                OBSCURE_CHECKS_ENAB      \
                PORTTIME_CHECKS_ENAB     \
                QUOTAS_ENAB              \
                CONSOLE MOTD_FILE        \
                FTMP_FILE NOLOGINS_FILE  \
                ENV_HZ PASS_MIN_LEN      \
                SU_WHEEL_ONLY            \
                CRACKLIB_DICTPATH        \
                PASS_CHANGE_TRIES        \
                PASS_ALWAYS_WARN         \
                CHFN_AUTH ENCRYPT_METHOD \
                ENVIRON_FILE
do
    sed -i "s/^${FUNCTION}/# &/" /etc/login.defs
done
As mentioned previously in the Linux-PAM instructions, Linux-PAM has two supported methods for configuration. The commands below assume that you've chosen to use a directory based configuration, where each program has its own configuration file. You can optionally use a single /etc/pam.conf configuration file by using the text from the files below, and supplying the program name as an additional first field for each line.
As the root user, create the following Linux-PAM configuration files in the /etc/pam.d/ directory (or add the contents to the /etc/pam.conf file) using the following commands:
cat > /etc/pam.d/login << "EOF"
# Begin /etc/pam.d/login
# Set failure delay before next prompt to 3 seconds
auth optional pam_faildelay.so delay=3000000
# Check to make sure that the user is allowed to login
auth requisite pam_nologin.so
# Check to make sure that root is allowed to login
# Disabled by default. You will need to create /etc/securetty
# file for this module to function. See man 5 securetty.
#auth required pam_securetty.so
# Additional group memberships - disabled by default
#auth optional pam_group.so
# include system auth settings
auth include system-auth
# check access for the user
account required pam_access.so
# include system account settings
account include system-account
# Set default environment variables for the user
session required pam_env.so
# Set resource limits for the user
session required pam_limits.so
# Display date of last login - Disabled by default
#session optional pam_lastlog.so
# Display the message of the day - Disabled by default
#session optional pam_motd.so
# Check user's mail - Disabled by default
#session optional pam_mail.so standard quiet
# include system session and password settings
session include system-session
password include system-password
# End /etc/pam.d/login
EOF
cat > /etc/pam.d/passwd << "EOF"
# Begin /etc/pam.d/passwd
password include system-password
# End /etc/pam.d/passwd
EOF
cat > /etc/pam.d/su << "EOF"
# Begin /etc/pam.d/su
# always allow root
auth sufficient pam_rootok.so
# Allow users in the wheel group to execute su without a password
# disabled by default
#auth sufficient pam_wheel.so trust use_uid
# include system auth settings
auth include system-auth
# limit su to users in the wheel group
# disabled by default
#auth required pam_wheel.so use_uid
# include system account settings
account include system-account
# Set default environment variables for the service user
session required pam_env.so
# include system session settings
session include system-session
# End /etc/pam.d/su
EOF
cat > /etc/pam.d/chpasswd << "EOF"
# Begin /etc/pam.d/chpasswd
# always allow root
auth sufficient pam_rootok.so
# include system auth and account settings
auth include system-auth
account include system-account
password include system-password
# End /etc/pam.d/chpasswd
EOF
sed -e s/chpasswd/newusers/ /etc/pam.d/chpasswd >/etc/pam.d/newusers
cat > /etc/pam.d/chage << "EOF"
# Begin /etc/pam.d/chage
# always allow root
auth sufficient pam_rootok.so
# include system auth and account settings
auth include system-auth
account include system-account
# End /etc/pam.d/chage
EOF
for PROGRAM in chfn chgpasswd chsh groupadd groupdel \
               groupmems groupmod useradd userdel usermod
do
    install -v -m644 /etc/pam.d/chage /etc/pam.d/${PROGRAM}
    sed -i "s/chage/$PROGRAM/" /etc/pam.d/${PROGRAM}
done
At this point, you should do a simple test to see if Shadow is working as expected. Open another terminal, log in as root, and then run login and log in as another user. If you do not see any errors, then all is well and you should proceed with the rest of the configuration. If you did receive errors, stop now and double check the above configuration files manually. Any error is the sign of an error in the above procedure. You can also run the test suite from the Linux-PAM package to assist you in determining the problem. If you cannot find and fix the error, you should recompile Shadow, adding the --without-libpam switch to the configure command in the above instructions (also move the /etc/login.defs.orig backup file to /etc/login.defs). If you fail to do this and the errors remain, you will be unable to log into your system.
Instead of using the /etc/login.access file for controlling access to the system, Linux-PAM uses the pam_access.so module along with the /etc/security/access.conf file. Rename the /etc/login.access file using the following command:
if [ -f /etc/login.access ]; then mv -v /etc/login.access{,.NOUSE}; fi
Instead of using the /etc/limits file for limiting usage of system resources, Linux-PAM uses the pam_limits.so module along with the /etc/security/limits.conf file. Rename the /etc/limits file using the following command:
if [ -f /etc/limits ]; then mv -v /etc/limits{,.NOUSE}; fi
Be sure to test the login capabilities of the system before logging out. Errors in the configuration can cause a permanent lockout requiring a boot from an external source to correct the problem.
A list of the installed files, along with their short descriptions can be found at ../../../../lfs/view/development/chapter08/shadow.html#contents-shadow.
ssh-askpass is a generic executable name for many packages, with similar names, that provide an interactive X service to grab a password for packages requiring administrative privileges to run. It prompts the user with a window where the necessary password can be entered. Here, we choose Damien Miller's package, distributed in the OpenSSH tarball.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://ftp.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-9.3p1.tar.gz
Download (FTP): ftp://ftp.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-9.3p1.tar.gz
Download MD5 sum: 3430d5e6e71419e28f440a42563cb553
Download size: 1.8 MB
Estimated disk space required: 10 MB
Estimated build time: less than 0.1 SBU
GTK+-3.24.38, Sudo-1.9.13p3 (runtime), Xorg Libraries, and a graphical environment (runtime)
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/ssh-askpass
Install ssh-askpass by running the following commands:
cd contrib && make gnome-ssh-askpass3
Now, as the root user:

install -v -d -m755 /usr/libexec/openssh/contrib &&
install -v -m755 gnome-ssh-askpass3 /usr/libexec/openssh/contrib &&
ln -sv -f contrib/gnome-ssh-askpass3 /usr/libexec/openssh/ssh-askpass
The use of /usr/libexec/openssh/contrib and a symlink is justified by the possibility that a different program may eventually be needed to provide this service.
As the root user, configure Sudo-1.9.13p3 to use ssh-askpass:
cat >> /etc/sudo.conf << "EOF" &&
# Path to askpass helper program
Path askpass /usr/libexec/openssh/ssh-askpass
EOF
chmod -v 0644 /etc/sudo.conf
If a given graphical <application> requires administrative privileges, use sudo -A <application> from an x-terminal or from a Window Manager menu, and/or replace "Exec=<application> ..." with "Exec=sudo -A <application> ..." in the <application>.desktop file.
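For instance, a desktop entry for a hypothetical application named foo could be modified as follows (the application name is only an illustration):

# Excerpt of a hypothetical foo.desktop file
Exec=sudo -A foo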
Installed programs: gnome-ssh-askpass3 and ssh-askpass (a symlink to gnome-ssh-askpass3)
The stunnel package contains a program that allows you to encrypt arbitrary TCP connections inside SSL (Secure Sockets Layer) so you can easily communicate with clients over secure channels. stunnel can also be used to tunnel PPP over network sockets without changes to the server package source code.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (FTP): ftp://ftp.stunnel.org/stunnel/archive/5.x/stunnel-5.69.tar.gz
Download MD5 sum: 0a153cb9a24913c10d870bdfabcb555b
Download size: 860 KB
Estimated disk space required: 6.9 MB
Estimated build time: less than 0.1 SBU
libnsl-2.0.0, netcat (required for tests), tcpwrappers, and TOR
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/stunnel
The stunnel daemon will be run in a chroot jail by an unprivileged user. Create the new user and group using the following commands as the root user:
groupadd -g 51 stunnel &&
useradd -c "stunnel Daemon" -d /var/lib/stunnel \
        -g stunnel -s /bin/false -u 51 stunnel
A signed SSL Certificate and a Private Key are necessary to run the stunnel daemon. After the package is installed, there are instructions to generate them. However, if you own or have already created a signed SSL Certificate you wish to use, copy it to /etc/stunnel/stunnel.pem before starting the build (ensure only root has read and write access). The .pem file must be formatted as shown below:
-----BEGIN PRIVATE KEY-----
<many encrypted lines of private key>
-----END PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
<many encrypted lines of certificate>
-----END CERTIFICATE-----
-----BEGIN DH PARAMETERS-----
<encrypted lines of dh parms>
-----END DH PARAMETERS-----
Install stunnel by running the following commands:
./configure --prefix=/usr        \
            --sysconfdir=/etc    \
            --localstatedir=/var \
            --disable-systemd &&
make
If you have installed the optional netcat application, the regression tests can be run with make check.
Now, as the root user:

make docdir=/usr/share/doc/stunnel-5.69 install
If you do not already have a signed SSL Certificate and Private Key, create the stunnel.pem file in the /etc/stunnel directory using the command below. You will be prompted to enter the necessary information. Ensure you reply to the Common Name (FQDN of your server) [localhost]: prompt with the name or IP address you will be using to access the service(s).
To generate a certificate, as the root user, issue:
make cert
--disable-systemd: This switch disables systemd socket activation support, which is not available in BLFS.
make docdir=... install: This command installs the package and changes the documentation installation directory to standard naming conventions.
As the root user, create the directory used for the .pid file created when the stunnel daemon starts:
install -v -m750 -o stunnel -g stunnel -d /var/lib/stunnel/run &&
chown stunnel:stunnel /var/lib/stunnel
Next, create a basic /etc/stunnel/stunnel.conf configuration file using the following commands as the root user:
cat > /etc/stunnel/stunnel.conf << "EOF"
; File: /etc/stunnel/stunnel.conf
; Note: The pid and output locations are relative to the chroot location.
pid = /run/stunnel.pid
chroot = /var/lib/stunnel
client = no
setuid = stunnel
setgid = stunnel
cert = /etc/stunnel/stunnel.pem
;debug = 7
;output = stunnel.log
;[https]
;accept = 443
;connect = 80
;; "TIMEOUTclose = 0" is a workaround for a design flaw in Microsoft SSL
;; Microsoft implementations do not use SSL close-notify alert and thus
;; they are vulnerable to truncation attacks
;TIMEOUTclose = 0
EOF
Finally, add the service(s) you wish to encrypt to the configuration file. The format is as follows:
[<service>]
accept  = <hostname:portnumber>
connect = <hostname:portnumber>
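As a hypothetical example, the stanza below would wrap a local IMAP server listening on the plain-text port 143 in SSL on the standard imaps port 993; adapt the service name and ports to your setup:

[imaps]
accept  = 993
connect = 143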
For a full explanation of the commands and syntax used in the configuration file, issue man stunnel.
To automatically start the stunnel daemon when the system is booted, install the /etc/rc.d/init.d/stunnel bootscript from the blfs-bootscripts-20230101 package.
make install-stunnel
The Sudo package allows a system administrator to give certain users (or groups of users) the ability to run some (or all) commands as root or another user while logging the commands and arguments.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://www.sudo.ws/dist/sudo-1.9.13p3.tar.gz
Download (FTP): ftp://ftp.sudo.ws/pub/sudo/sudo-1.9.13p3.tar.gz
Download MD5 sum: be560d914b60376dab3449c99b9f19ef
Download size: 4.8 MB
Estimated disk space required: 50 MB (add 16 MB for tests)
Estimated build time: 0.4 SBU (add 0.1 SBU for tests)
Linux-PAM-1.5.2, MIT Kerberos V5-1.20.1, OpenLDAP-2.6.4, MTA (that provides a sendmail command), AFS, FWTK, and Opie
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/sudo
Install Sudo by running the following commands:
./configure --prefix=/usr         \
            --libexecdir=/usr/lib \
            --with-secure-path    \
            --with-all-insults    \
            --with-env-editor     \
            --docdir=/usr/share/doc/sudo-1.9.13p3 \
            --with-passprompt="[sudo] password for %p: " &&
make
To test the results, issue: env LC_ALL=C make check 2>&1 | tee make-check.log. Check the results with grep failed make-check.log.
Now, as the root user:

make install &&
ln -sfv libsudo_util.so.0.0.0 /usr/lib/sudo/libsudo_util.so.0
--libexecdir=/usr/lib: This switch controls where private programs are installed. Everything in that directory is a library, so they belong under /usr/lib instead of /usr/libexec.
--with-secure-path: This switch transparently adds the /sbin and /usr/sbin directories to the PATH environment variable.
--with-all-insults: This switch includes all the sudo insult sets.
--with-env-editor: This switch enables use of the environment variable EDITOR for visudo.
--with-passprompt: This switch sets the password prompt. The %p will be expanded to the name of the user whose password is being requested.
--without-pam: This switch avoids building Linux-PAM support when Linux-PAM is installed on the system.
There are many options to sudo's configure command. Check the configure --help output for a complete list.
ln -sfv libsudo_util...: Works around a bug in the installation process, which links to the previously installed version (if there is one) instead of the new one.
The sudoers file can be quite complicated. It is composed of two types of entries: aliases (basically variables) and user specifications (which specify who may run what). The installation installs a default configuration that grants no privileges to any user.
A couple of common configuration changes are to set the path for the super user and to allow members of the wheel group to execute all commands after providing their own credentials. Use the following commands to create the /etc/sudoers.d/00-sudo configuration file as the root user:
cat > /etc/sudoers.d/00-sudo << "EOF"
Defaults secure_path="/usr/sbin:/usr/bin"
%wheel ALL=(ALL) ALL
EOF
In very simple installations where there is only one user, it may be easier to just edit the /etc/sudoers file directly. In that case, the secure_path entry may not be needed, and using sudo -E ... can import the non-privileged user's full environment into the privileged session.
The files in the /etc/sudoers.d directory are parsed in sorted lexical order. Be careful that entries in an added file do not overwrite previous entries.
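As a sketch of a narrower drop-in file, the following would let a hypothetical user named backup run a single command as root without a password; the user name and command are assumptions:

cat > /etc/sudoers.d/10-backup << "EOF"
# Hypothetical: allow user "backup" to run rsync as root, without a password
backup ALL=(root) NOPASSWD: /usr/bin/rsync
EOF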
For details, see man sudoers.
The Sudo developers highly recommend using the visudo program to edit the sudoers file. This provides basic sanity checking like syntax parsing and file permission checks, to avoid some possible mistakes that could lead to a vulnerable configuration.
If PAM is installed on the system, Sudo is built with PAM support. In that case, issue the following command as the root user to create the PAM configuration file:
cat > /etc/pam.d/sudo << "EOF"
# Begin /etc/pam.d/sudo
# include the default auth settings
auth include system-auth
# include the default account settings
account include system-account
# Set default environment variables for the service user
session required pam_env.so
# include system session defaults
session include system-session
# End /etc/pam.d/sudo
EOF
chmod 644 /etc/pam.d/sudo
cvtsudoers: converts between sudoers file formats

sudo: executes a command as another user as permitted by the sudoers configuration file

sudo_logsrvd: is a sudo event and I/O log server

sudo_sendlog: sends sudo I/O logs to the log server

sudoedit: is a symlink to sudo that implies the -e option

sudoreplay: is used to play back or list the output logs created by sudo

visudo: allows for safer editing of the sudoers file
The Tripwire package contains programs used to verify the integrity of the files on a given system.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://github.com/Tripwire/tripwire-open-source/releases/download/2.4.3.7/tripwire-open-source-2.4.3.7.tar.gz
Download MD5 sum: a5cf1bc2f235f5d8ca458f00548db6ee
Download size: 980 KB
Estimated disk space required: 29 MB
Estimated build time: 1.6 SBU (scripting install)
An MTA
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/tripwire
Compile Tripwire by running the following commands:
sed -e '/^CLOBBER/s/false/true/'         \
    -e 's|TWDB="${prefix}|TWDB="/var|'   \
    -e '/TWMAN/ s|${prefix}|/usr/share|' \
    -e '/TWDOCS/s|${prefix}/doc/tripwire|/usr/share/doc/tripwire-2.4.3.7|' \
    -i installer/install.cfg &&

find . -name Makefile.am | xargs \
    sed -i 's/^[[:alpha:]_]*_HEADERS.*=/noinst_HEADERS =/' &&

sed '/dist/d' -i man/man?/Makefile.am &&

autoreconf -fi &&
./configure --prefix=/usr --sysconfdir=/etc/tripwire &&
make CPPFLAGS=-std=c++11
The default configuration is to use a local MTA. If you don't have an MTA installed and have no wish to install one, modify install/install.cfg to use an SMTP server instead. Otherwise the install will fail.
This package does not come with a test suite.
Now, as the root user:

make install &&
cp -v policy/*.txt /usr/share/doc/tripwire-2.4.3.7
During make install, several questions are asked, including passwords. If you want to script the installation, you have to apply a sed before running make install:

sed -i -e 's@installer/install.sh@& -n -s<site-password> -l<local-password>@' Makefile
Of course, you should do this with dummy passwords and change them later.
Another issue when scripting is that the installer exits when the standard input is not a terminal. You may disable this behavior with the following sed:
sed '/-t 0/,+3d' -i installer/install.sh
sed ... installer/install.cfg: This command tells the package to install the program database and reports in /var/lib/tripwire, and sets the proper location for man pages and documentation.
find ..., sed ..., and autoreconf -fi: The build system is unusable as is, and has to be modified for the build to succeed.
CPPFLAGS=-std=c++11: Setting the C++ preprocessor flags to version 11 is necessary to prevent a conflict with the default version, which is C++17 in recent versions of gcc.
make install: This command creates the Tripwire security keys as well as installing the binaries. There are two keys: a site key and a local key, which are stored in /etc/tripwire/.
cp -v policy/*.txt /usr/share/doc/tripwire-2.4.3.7: This command installs the tripwire sample policy files with the other tripwire documentation.
Tripwire uses a policy file to determine which files are integrity checked. The default policy file (/etc/tripwire/twpol.txt) is for a default installation and will need to be updated for your system. Policy files should be tailored to each individual distribution and/or installation. Some example policy files can be found in /usr/share/doc/tripwire/.
If desired, copy the policy file you'd like to try into /etc/tripwire/ instead of using the default policy file, twpol.txt. It is, however, recommended that you edit your policy file. Get ideas from the examples above and read /usr/share/doc/tripwire/policyguide.txt for additional information. twpol.txt is a good policy file for learning about Tripwire as it will note any changes to the file system and can even be used as an annoying way of keeping track of changes for uninstallation of software.
After your policy file has been edited to your satisfaction, you may begin the configuration steps (performed as the root user):

twadmin --create-polfile --site-keyfile /etc/tripwire/site.key \
        /etc/tripwire/twpol.txt &&
tripwire --init
Depending on your system and the contents of the policy file, the initialization phase above can take a relatively long time.
Tripwire will identify file changes in the critical system files specified in the policy file. Using Tripwire while making frequent changes to these directories will flag all these changes. It is most useful after a system has reached a configuration that the user considers stable.
To use Tripwire after creating a policy file to run a report, use the following command:
tripwire --check > /etc/tripwire/report.txt
View the output to check the integrity of your files. An automatic integrity report can be produced by using a cron facility to schedule the runs.
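As a sketch, a root crontab entry along the following lines would schedule a nightly check; the time chosen is an assumption:

# Hypothetical crontab entry: run an integrity check daily at 3:00 AM
0 3 * * * /usr/sbin/tripwire --check > /etc/tripwire/report.txt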
Reports are stored in binary and, if desired, encrypted. View reports, as the root user, with:
twprint --print-report -r /var/lib/tripwire/report/<report-name.twr>
After you run an integrity check, you should examine the report (or email) and then modify the Tripwire database to reflect the changed files on your system. This is so that Tripwire will not continually notify you that files you intentionally changed are a security violation. To do this you must first run ls -l /var/lib/tripwire/report/ and note the name of the newest file which starts with your system name as presented by the command uname -n and ends in .twr. These files were created during report creation, and the most current one is needed to update the Tripwire database of your system. As the root user, type in the following command, making the appropriate report name:
tripwire --update --twrfile /var/lib/tripwire/report/<report-name.twr>
You will be placed into Vim with a copy of the report in front of you. If all the changes were good, then just type :wq and after entering your local key, the database will be updated. If there are files which you still want to be warned about, remove the 'x' before the filename in the report and type :wq.
siggen: is a signature gathering utility that displays the hash function values for the specified files

tripwire: is the main file integrity checking program

twadmin: is an administrative and utility tool used to perform certain administrative functions related to Tripwire files and configuration options

twprint: prints Tripwire database and report files in clear text format
The volume_key package provides a library for manipulating storage volume encryption keys and storing them separately from volumes to handle forgotten passphrases.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://github.com/felixonmars/volume_key/archive/volume_key-0.3.12.tar.gz
Download MD5 sum: d1c76f24e08ddd8c1787687d0af5a814
Download size: 196 KB
Estimated disk space required: 11 MB
Estimated build time: 0.2 SBU
cryptsetup-2.4.3, GLib-2.76.2, GnuPG-2.4.0, GPGME-1.20.0, and nss-3.89
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/volume_key
This package expands to the directory volume_key-volume_key-0.3.12.
Tell the build system how to locate GPGME and GnuPG correctly:

sed -e '/AM_PATH_GPGME/iAM_PATH_GPG_ERROR' \
    -e 's/gpg2/gpg/' -i configure.ac
Install volume_key by running the following commands:
autoreconf -fiv &&
./configure --prefix=/usr \
            --without-python &&
make
To test the results, issue: make check.
Now, as the root user:

make install
--without-python: This parameter prevents building the Python 2 bindings if Python-2.7.18 is installed.
--without-python3: Use this option if you do not want to build the Python 3 bindings. In this case, SWIG-4.1.1 is not needed.
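As a usage sketch (the device name is hypothetical), an escrow packet for an encrypted volume can be saved and later used to restore access if the passphrase is forgotten:

# Hypothetical LUKS volume /dev/sda2: save an escrow packet, then restore from it
volume_key --save    /dev/sda2 -o escrow-packet
volume_key --restore /dev/sda2 escrow-packet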
Journaling file systems reduce the time needed to recover a file system that was not unmounted properly. While this can be extremely important in reducing downtime for servers, it has also become popular for desktop environments. This chapter contains other journaling file systems you can use instead of the default TFS extended file system (ext2/3/4). It also provides introductory material on managing disk arrays.
The only purpose of an initramfs is to mount the root filesystem. The initramfs is a complete set of directories that you would find on a normal root filesystem. It is bundled into a single cpio archive and compressed with one of several compression algorithms.
At boot time, the boot loader loads the kernel and the initramfs image into memory and starts the kernel. The kernel checks for the presence of the initramfs and, if found, mounts it as / and runs /init. The init program is typically a shell script. Note that the boot process takes longer, possibly significantly longer, if an initramfs is used.
For most distributions, kernel modules are the biggest reason to have an initramfs. In a general distribution, there are many unknowns such as file system types and disk layouts. In a way, this is the opposite of LFS where the system capabilities and layout are known and a custom kernel is normally built. In this situation, an initramfs is rarely needed.
There are only four primary reasons to have an initramfs in the LFS environment: loading the rootfs from a network, loading it from an LVM logical volume, having an encrypted rootfs where a password is required, or for the convenience of specifying the rootfs as a LABEL or UUID. Anything else usually means that the kernel was not configured properly.
If you do decide to build an initramfs, the following scripts will provide a basis to do it. The scripts will allow specifying a rootfs via partition UUID or partition LABEL or a rootfs on an LVM logical volume. They do not support an encrypted root file system or mounting the rootfs over a network card. For a more complete capability see the LFS Hints or dracut.
To install these scripts, run the following commands as the root user:
cat > /usr/sbin/mkinitramfs << "EOF"
#!/bin/bash
# This file based in part on the mkinitramfs script for the LFS LiveCD
# written by Alexander E. Patrakov and Jeremy Huntwork.
copy()
{
local file
if [ "$2" = "lib" ]; then
file=$(PATH=/usr/lib type -p $1)
else
file=$(type -p $1)
fi
if [ -n "$file" ] ; then
cp $file $WDIR/usr/$2
else
echo "Missing required file: $1 for directory $2"
rm -rf $WDIR
exit 1
fi
}
if [ -z $1 ] ; then
INITRAMFS_FILE=initrd.img-no-kmods
else
KERNEL_VERSION=$1
INITRAMFS_FILE=initrd.img-$KERNEL_VERSION
fi
if [ -n "$KERNEL_VERSION" ] && [ ! -d "/usr/lib/modules/$1" ] ; then
echo "No modules directory named $1"
exit 1
fi
printf "Creating $INITRAMFS_FILE... "
binfiles="sh cat cp dd killall ls mkdir mknod mount "
binfiles="$binfiles umount sed sleep ln rm uname"
binfiles="$binfiles readlink basename"
# Systemd installs udevadm in /bin. Other udev implementations have it in /sbin
if [ -x /usr/bin/udevadm ] ; then binfiles="$binfiles udevadm"; fi
sbinfiles="modprobe blkid switch_root"
# Optional files and locations
for f in mdadm mdmon udevd udevadm; do
if [ -x /usr/sbin/$f ] ; then sbinfiles="$sbinfiles $f"; fi
done
# Add lvm if present (cannot be done with the others because it
# also needs dmsetup)
if [ -x /usr/sbin/lvm ] ; then sbinfiles="$sbinfiles lvm dmsetup"; fi
unsorted=$(mktemp /tmp/unsorted.XXXXXXXXXX)
DATADIR=/usr/share/mkinitramfs
INITIN=init.in
# Create a temporary working directory
WDIR=$(mktemp -d /tmp/initrd-work.XXXXXXXXXX)
# Create base directory structure
mkdir -p $WDIR/{dev,run,sys,proc,usr/{bin,lib/{firmware,modules},sbin}}
mkdir -p $WDIR/etc/{modprobe.d,udev/rules.d}
touch $WDIR/etc/modprobe.d/modprobe.conf
ln -s usr/bin $WDIR/bin
ln -s usr/lib $WDIR/lib
ln -s usr/sbin $WDIR/sbin
ln -s lib $WDIR/lib64
# Create necessary device nodes
mknod -m 640 $WDIR/dev/console c 5 1
mknod -m 664 $WDIR/dev/null c 1 3
# Install the udev configuration files
if [ -f /etc/udev/udev.conf ]; then
cp /etc/udev/udev.conf $WDIR/etc/udev/udev.conf
fi
for file in $(find /etc/udev/rules.d/ -type f) ; do
cp $file $WDIR/etc/udev/rules.d
done
# Install any firmware present
cp -a /usr/lib/firmware $WDIR/usr/lib
# Copy the RAID configuration file if present
if [ -f /etc/mdadm.conf ] ; then
cp /etc/mdadm.conf $WDIR/etc
fi
# Install the init file
install -m0755 $DATADIR/$INITIN $WDIR/init
if [ -n "$KERNEL_VERSION" ] ; then
if [ -x /usr/bin/kmod ] ; then
binfiles="$binfiles kmod"
else
binfiles="$binfiles lsmod"
sbinfiles="$sbinfiles insmod"
fi
fi
# Install basic binaries
for f in $binfiles ; do
ldd /usr/bin/$f | sed "s/\t//" | cut -d " " -f1 >> $unsorted
copy /usr/bin/$f bin
done
for f in $sbinfiles ; do
ldd /usr/sbin/$f | sed "s/\t//" | cut -d " " -f1 >> $unsorted
copy $f sbin
done
# Add udevd libraries if not in /usr/sbin
if [ -x /usr/lib/udev/udevd ] ; then
ldd /usr/lib/udev/udevd | sed "s/\t//" | cut -d " " -f1 >> $unsorted
elif [ -x /usr/lib/systemd/systemd-udevd ] ; then
ldd /usr/lib/systemd/systemd-udevd | sed "s/\t//" | cut -d " " -f1 >> $unsorted
fi
# Add module symlinks if appropriate
if [ -n "$KERNEL_VERSION" ] && [ -x /usr/bin/kmod ] ; then
ln -s kmod $WDIR/usr/bin/lsmod
ln -s kmod $WDIR/usr/bin/insmod
fi
# Add lvm symlinks if appropriate
# Also copy the lvm.conf file
if [ -x /usr/sbin/lvm ] ; then
ln -s lvm $WDIR/usr/sbin/lvchange
ln -s lvm $WDIR/usr/sbin/lvrename
ln -s lvm $WDIR/usr/sbin/lvextend
ln -s lvm $WDIR/usr/sbin/lvcreate
ln -s lvm $WDIR/usr/sbin/lvdisplay
ln -s lvm $WDIR/usr/sbin/lvscan
ln -s lvm $WDIR/usr/sbin/pvchange
ln -s lvm $WDIR/usr/sbin/pvck
ln -s lvm $WDIR/usr/sbin/pvcreate
ln -s lvm $WDIR/usr/sbin/pvdisplay
ln -s lvm $WDIR/usr/sbin/pvscan
ln -s lvm $WDIR/usr/sbin/vgchange
ln -s lvm $WDIR/usr/sbin/vgcreate
ln -s lvm $WDIR/usr/sbin/vgscan
ln -s lvm $WDIR/usr/sbin/vgrename
ln -s lvm $WDIR/usr/sbin/vgck
# Conf file(s)
cp -a /etc/lvm $WDIR/etc
fi
# Install libraries
sort $unsorted | uniq | while read library ; do
# linux-vdso and linux-gate are pseudo libraries and do not correspond to a file
# libsystemd-shared is in /lib/systemd, so it is not found by copy, and
# it is copied below anyway
if [[ "$library" == linux-vdso.so.1 ]] ||
[[ "$library" == linux-gate.so.1 ]] ||
[[ "$library" == libsystemd-shared* ]]; then
continue
fi
copy $library lib
done
if [ -d /usr/lib/udev ]; then
cp -a /usr/lib/udev $WDIR/usr/lib
fi
if [ -d /usr/lib/systemd ]; then
cp -a /usr/lib/systemd $WDIR/usr/lib
fi
if [ -d /usr/lib/elogind ]; then
cp -a /usr/lib/elogind $WDIR/usr/lib
fi
# Install the kernel modules if requested
if [ -n "$KERNEL_VERSION" ]; then
find \
/usr/lib/modules/$KERNEL_VERSION/kernel/{crypto,fs,lib} \
/usr/lib/modules/$KERNEL_VERSION/kernel/drivers/{block,ata,nvme,md,firewire} \
/usr/lib/modules/$KERNEL_VERSION/kernel/drivers/{scsi,message,pcmcia,virtio} \
/usr/lib/modules/$KERNEL_VERSION/kernel/drivers/usb/{host,storage} \
-type f 2> /dev/null | cpio --make-directories -p --quiet $WDIR
cp /usr/lib/modules/$KERNEL_VERSION/modules.{builtin,order} \
$WDIR/usr/lib/modules/$KERNEL_VERSION
if [ -f /usr/lib/modules/$KERNEL_VERSION/modules.builtin.modinfo ]; then
cp /usr/lib/modules/$KERNEL_VERSION/modules.builtin.modinfo \
$WDIR/usr/lib/modules/$KERNEL_VERSION
fi
depmod -b $WDIR $KERNEL_VERSION
fi
( cd $WDIR ; find . | cpio -o -H newc --quiet | gzip -9 ) > $INITRAMFS_FILE
# Prepare early loading of microcode if available
if ls /usr/lib/firmware/intel-ucode/* >/dev/null 2>&1 ||
ls /usr/lib/firmware/amd-ucode/* >/dev/null 2>&1; then
# first empty WDIR to reuse it
rm -r $WDIR/*
DSTDIR=$WDIR/kernel/x86/microcode
mkdir -p $DSTDIR
if [ -d /usr/lib/firmware/amd-ucode ]; then
cat /usr/lib/firmware/amd-ucode/microcode_amd*.bin > $DSTDIR/AuthenticAMD.bin
fi
if [ -d /usr/lib/firmware/intel-ucode ]; then
cat /usr/lib/firmware/intel-ucode/* > $DSTDIR/GenuineIntel.bin
fi
( cd $WDIR; find . | cpio -o -H newc --quiet ) > microcode.img
cat microcode.img $INITRAMFS_FILE > tmpfile
mv tmpfile $INITRAMFS_FILE
rm microcode.img
fi
# Remove the temporary directories and files
rm -rf $WDIR $unsorted
printf "done.\n"
EOF
chmod 0755 /usr/sbin/mkinitramfs
mkdir -p /usr/share/mkinitramfs &&
cat > /usr/share/mkinitramfs/init.in << "EOF"
#!/bin/sh
PATH=/usr/bin:/usr/sbin
export PATH
problem()
{
printf "Encountered a problem!\n\nDropping you to a shell.\n\n"
sh
}
no_device()
{
printf "The device %s, which is supposed to contain the\n" $1
printf "root file system, does not exist.\n"
printf "Please fix this problem and exit this shell.\n\n"
}
no_mount()
{
printf "Could not mount device %s\n" $1
printf "Sleeping forever. Please reboot and fix the kernel command line.\n\n"
printf "Maybe the device is formatted with an unsupported file system?\n\n"
printf "Or maybe filesystem type autodetection went wrong, in which case\n"
printf "you should add the rootfstype=... parameter to the kernel command line.\n\n"
printf "Available partitions:\n"
}
do_mount_root()
{
mkdir /.root
[ -n "$rootflags" ] && rootflags="$rootflags,"
rootflags="$rootflags$ro"
case "$root" in
/dev/* ) device=$root ;;
UUID=* ) eval $root; device="/dev/disk/by-uuid/$UUID" ;;
PARTUUID=*) eval $root; device="/dev/disk/by-partuuid/$PARTUUID" ;;
LABEL=* ) eval $root; device="/dev/disk/by-label/$LABEL" ;;
"" ) echo "No root device specified." ; problem ;;
esac
while [ ! -b "$device" ] ; do
no_device $device
problem
done
if ! mount -n -t "$rootfstype" -o "$rootflags" "$device" /.root ; then
no_mount $device
cat /proc/partitions
while true ; do sleep 10000 ; done
else
echo "Successfully mounted device $root"
fi
}
do_try_resume()
{
case "$resume" in
UUID=* ) eval $resume; resume="/dev/disk/by-uuid/$UUID" ;;
LABEL=*) eval $resume; resume="/dev/disk/by-label/$LABEL" ;;
esac
if $noresume || ! [ -b "$resume" ]; then return; fi
ls -lH "$resume" | ( read x x x x maj min x
echo -n ${maj%,}:$min > /sys/power/resume )
}
init=/sbin/init
root=
rootdelay=
rootfstype=auto
ro="ro"
rootflags=
device=
resume=
noresume=false
mount -n -t devtmpfs devtmpfs /dev
mount -n -t proc proc /proc
mount -n -t sysfs sysfs /sys
mount -n -t tmpfs tmpfs /run
read -r cmdline < /proc/cmdline
for param in $cmdline ; do
case $param in
init=* ) init=${param#init=} ;;
root=* ) root=${param#root=} ;;
rootdelay=* ) rootdelay=${param#rootdelay=} ;;
rootfstype=*) rootfstype=${param#rootfstype=} ;;
rootflags=* ) rootflags=${param#rootflags=} ;;
resume=* ) resume=${param#resume=} ;;
noresume ) noresume=true ;;
ro ) ro="ro" ;;
rw ) ro="rw" ;;
esac
done
# udevd location depends on version
if [ -x /sbin/udevd ]; then
UDEVD=/sbin/udevd
elif [ -x /lib/udev/udevd ]; then
UDEVD=/lib/udev/udevd
elif [ -x /lib/systemd/systemd-udevd ]; then
UDEVD=/lib/systemd/systemd-udevd
else
echo "Cannot find udevd nor systemd-udevd"
problem
fi
${UDEVD} --daemon --resolve-names=never
udevadm trigger
udevadm settle
if [ -f /etc/mdadm.conf ] ; then mdadm -As ; fi
if [ -x /sbin/vgchange ] ; then /sbin/vgchange -a y > /dev/null ; fi
if [ -n "$rootdelay" ] ; then sleep "$rootdelay" ; fi
do_try_resume # This function will not return if resuming from disk
do_mount_root
killall -w ${UDEVD##*/}
exec switch_root /.root "$init" "$@"
EOF
LVM2-2.03.21 and/or mdadm-4.2 must be installed before generating the initramfs, if the system partition uses them.
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/initramfs
To build an initramfs, run the following as the root user:
mkinitramfs [KERNEL VERSION]
The optional argument is the directory where the appropriate kernel modules are located. This must be a subdirectory of /lib/modules. If no modules are specified, then the initramfs is named initrd.img-no-kmods. If a kernel version is specified, the initrd is named initrd.img-$KERNEL_VERSION and is only appropriate for the specific kernel specified. The output file will be placed in the current directory.
If early loading of microcode is needed (see the section called “Microcode updates for CPUs”), you can install the appropriate blob or container in /lib/firmware. It will be automatically added to the initrd when running mkinitramfs.
After generating the initrd, copy it to the /boot directory.
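As a concrete illustration (the kernel version 6.1.11 here is a placeholder; substitute a directory name actually present under /lib/modules on your system):
mkinitramfs 6.1.11
cp initrd.img-6.1.11 /boot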
Now edit /boot/grub/grub.cfg
and add a new menuentry. Below are several examples.
# Generic initramfs and root fs identified by UUID
menuentry "LFS Dev (LFS-7.0-Feb14) initrd, Linux 3.0.4" {
  linux  /vmlinuz-3.0.4-lfs-20120214 root=UUID=54b934a9-302d-415e-ac11-4988408eb0a8 ro
  initrd /initrd.img-no-kmods
}

# Generic initramfs and root fs on LVM partition
menuentry "LFS Dev (LFS-7.0-Feb18) initrd lvm, Linux 3.0.4" {
  linux  /vmlinuz-3.0.4-lfs-20120218 root=/dev/mapper/myroot ro
  initrd /initrd.img-no-kmods
}

# Specific initramfs and root fs identified by LABEL
menuentry "LFS Dev (LFS-7.1-Feb20) initrd label, Linux 3.2.6" {
  linux  /vmlinuz-3.2.6-lfs71-120220 root=LABEL=lfs71 ro
  initrd /initrd.img-3.2.6-lfs71-120220
}
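The init script shown above also accepts root=PARTUUID=...; a sketch of such an entry follows (the PARTUUID value is a placeholder you must replace with the output of blkid for your root partition):

# Generic initramfs and root fs identified by PARTUUID (hypothetical entry)
menuentry "LFS initrd partuuid, Linux 3.0.4" {
  linux  /vmlinuz-3.0.4-lfs-20120214 root=PARTUUID=<partuuid-of-root> ro
  initrd /initrd.img-no-kmods
}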
Finally, reboot the system and select the desired system.
The btrfs-progs package contains administration and debugging tools for the B-tree file system (btrfs).
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://www.kernel.org/pub/linux/kernel/people/kdave/btrfs-progs/btrfs-progs-v6.2.2.tar.xz
Download MD5 sum: 877637aaccba7151ab91e991fcf69cb9
Download size: 2.3 MB
Estimated disk space required: 84 MB (transient files created during tests need up to 10 GB)
Estimated build time: 0.2 SBU (add 5.0 SBU for tests, up to 80 SBU on slow disks if reiserfsprogs is installed)
LVM2-2.03.21 (dmsetup is used in tests), reiserfsprogs-3.6.27 (for tests), and sphinx-6.2.1 (required to build documentation)
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/btrfs-progs
Enable the following option in the kernel configuration and recompile the kernel:
File systems --->
<*/M> Btrfs filesystem support [CONFIG_BTRFS_FS]
In addition to the above and to the options required for LVM2-2.03.21 and reiserfsprogs-3.6.27, the following options must be enabled for running tests:
File systems --->
[*] Btrfs POSIX Access Control Lists [CONFIG_BTRFS_FS_POSIX_ACL]
[*] ReiserFS extended attributes [CONFIG_REISERFS_FS_XATTR]
[*] ReiserFS POSIX Access Control Lists [CONFIG_REISERFS_FS_POSIX_ACL]
Install btrfs-progs by running the following commands:
./configure --prefix=/usr \
            --disable-static \
            --disable-documentation &&
make
Some tests require grep built with perl regular expressions. To obtain this, rebuild grep with the LFS Chapter 8 instructions after installing pcre2-10.42.
Before running tests, build a support program:
make fssum
To test the results, issue (as the root user):
pushd tests
./fsck-tests.sh
./mkfs-tests.sh
./cli-tests.sh
sed 's/,orphan_file//' /etc/mke2fs.conf > ./custom_mke2fs.conf &&
export MKE2FS_CONFIG=$PWD/custom_mke2fs.conf &&
./convert-tests.sh
unset MKE2FS_CONFIG &&
rm custom_mke2fs.conf
./misc-tests.sh
./fuzz-tests.sh
popd
If the above mentioned kernel options are not enabled, some tests fail and prevent all the remaining tests from running, because the test disk image is not cleanly unmounted.
The mkfs test 025-zoned-parallel is known to fail.
Install the package as the root user:
make install
If you have passed --disable-documentation to configure and you need the manual pages, install them by running, as the root user:
for i in 5 8; do
  install Documentation/*.$i /usr/share/man/man$i
done
--disable-static: This switch prevents installation of static versions of the libraries.
--disable-documentation: This switch disables rebuilding the manual pages, because it requires sphinx-6.2.1.
sed 's/,orphan_file//' ...: In this version of btrfs-progs, the btrfs-convert program produces a btrfs filesystem containing errors if converting from an ext4 filesystem created with the “orphan_file” feature. This command creates a custom configuration file that prevents creating a filesystem with this feature.
This version of btrfs-progs does not correctly convert ext4 filesystems to btrfs if the ext4 orphan_file feature is turned on. If you happen to convert such a filesystem, you need to first run:
tune2fs -O ^orphan_file /dev/sdxx
where /dev/sdxx is the partition of the filesystem you want to convert.
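The conversion itself is then performed with the btrfs-convert program; a minimal sketch, using the same placeholder partition:
btrfs-convert /dev/sdxx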
btrfs: is the main interface into btrfs filesystem operations
btrfs-convert: converts from an ext2/3/4 or reiserfs filesystem to btrfs (see the section called “Using the btrfs-convert Program” above)
btrfs-find-root: is a filter to find btrfs root
btrfs-map-logical: maps btrfs logical extent to physical extent
btrfs-select-super: overwrites the primary superblock with a backup copy
btrfstune: tunes various filesystem parameters
fsck.btrfs: does nothing, but is present for consistency with fstab
mkfs.btrfs: creates a btrfs file system
The dosfstools package contains various utilities for use with the FAT family of file systems.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://github.com/dosfstools/dosfstools/releases/download/v4.2/dosfstools-4.2.tar.gz
Download MD5 sum: 49c8e457327dc61efab5b115a27b087a
Download size: 314 KB
Estimated disk space required: 3.5 MB
Estimated build time: less than 0.1 SBU
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/dosfstools
Enable the following option in the kernel configuration and recompile the kernel:
File systems --->
DOS/FAT/EXFAT/NT Filesystems --->
<*/M> MSDOS fs support [CONFIG_MSDOS_FS]
<*/M> VFAT (Windows-95) fs support [CONFIG_VFAT_FS]
Install dosfstools by running the following commands:
./configure --prefix=/usr \
            --enable-compat-symlinks \
            --mandir=/usr/share/man \
            --docdir=/usr/share/doc/dosfstools-4.2 &&
make
This package does not come with a test suite.
Now, as the root user:
make install
--enable-compat-symlinks: This switch creates the dosfsck, dosfslabel, fsck.msdos, fsck.vfat, mkdosfs, mkfs.msdos, and mkfs.vfat symlinks required by some programs.
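As a usage illustration (not part of the installation; /dev/sdx1 is a placeholder for a real partition), a FAT32 filesystem could then be created with:
mkfs.vfat -F 32 /dev/sdx1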
FUSE (Filesystem in Userspace) is a simple interface for userspace programs to export a virtual filesystem to the Linux kernel. FUSE also aims to provide a secure method for non-privileged users to create and mount their own filesystem implementations.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://github.com/libfuse/libfuse/releases/download/fuse-3.14.1/fuse-3.14.1.tar.xz
Download MD5 sum: d5fec7879ffcbc57b789113e6c61c10b
Download size: 4.2 MB
Estimated disk space required: 108 MB (with tests and documentation)
Estimated build time: 0.2 SBU (add 0.3 SBU for tests)
Doxygen-1.9.6 (to rebuild the API documentation) and pytest-7.3.1 (required for tests)
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/fuse
Enable the following options in the kernel configuration and recompile the kernel if necessary:
File systems --->
<*/M> FUSE (Filesystem in Userspace) support [CONFIG_FUSE_FS]
Character device in userspace support should also be enabled for running the tests:
File systems --->
<*/M> FUSE (Filesystem in Userspace) support [CONFIG_FUSE_FS]
<*/M> Character device in Userspace support [CONFIG_CUSE]
Install Fuse by running the following commands:
sed -i '/^udev/,$ s/^/#/' util/meson.build &&
mkdir build &&
cd build &&
meson setup --prefix=/usr --buildtype=release .. &&
ninja
The API documentation is included in the package, but if you have Doxygen-1.9.6 installed and wish to rebuild it, issue:
pushd .. && doxygen doc/Doxyfile && popd
To test the results, run (as the root user):
python3 -m pytest test/
The pytest-7.3.1 Python module is required for the tests. One test named test_cuse will fail if the CONFIG_CUSE configuration item was not enabled when the kernel was built. Two tests, test_ctests.py and test_examples.py, will produce a warning because a deprecated Python module is used.
Now, as the root user:
ninja install &&
chmod u+s /usr/bin/fusermount3 &&
cd .. &&
install -v -m755 -d /usr/share/doc/fuse-3.14.1 &&
install -v -m644 doc/{README.NFS,kernel.txt} \
                 /usr/share/doc/fuse-3.14.1 &&
cp -Rv doc/html /usr/share/doc/fuse-3.14.1
sed ... util/meson.build: This command disables the installation of a boot script and udev rule that are not needed.
--buildtype=release: Specify a buildtype suitable for stable releases of the package, as the default may produce unoptimized binaries.
Some options regarding mount policy can be set in the file /etc/fuse.conf. To install the file, run the following command as the root user:
cat > /etc/fuse.conf << "EOF"
# Set the maximum number of FUSE mounts allowed to non-root users.
# The default is 1000.
#
#mount_max = 1000
# Allow non-root users to specify the 'allow_other' or 'allow_root'
# mount options.
#
#user_allow_other
EOF
Additional information about the meaning of the configuration options is found in the man page.
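If you later decide to let unprivileged users pass the allow_other mount option, the commented directive in the file above can be enabled in place; a one-line sketch (run as the root user):
sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf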
The jfsutils package contains administration and debugging tools for the jfs file system.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://jfs.sourceforge.net/project/pub/jfsutils-1.1.15.tar.gz
Download MD5 sum: 8809465cd48a202895bc2a12e1923b5d
Download size: 532 KB
Estimated disk space required: 8.9 MB
Estimated build time: 0.1 SBU
Required patch to fix issues exposed by GCC 10 and later: https://www.linuxfromscratch.org/patches/blfs/svn/jfsutils-1.1.15-gcc10_fix-1.patch
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/jfs
Enable the following option in the kernel configuration and recompile the kernel:
File systems --->
<*/M> JFS filesystem support [CONFIG_JFS_FS]
First, fix some issues exposed by GCC 10 and later:
patch -Np1 -i ../jfsutils-1.1.15-gcc10_fix-1.patch
Install jfsutils by running the following commands:
sed -i "/unistd.h/a#include <sys/types.h>" fscklog/extract.c && sed -i "/ioctl.h/a#include <sys/sysmacros.h>" libfs/devices.c && ./configure && make
This package does not come with a test suite.
Now, as the root user:
make install
sed ...: These two commands fix building with glibc 2.28 and later by adding missing header inclusions.
fsck.jfs: is used to replay the JFS transaction log, check a JFS formatted device for errors, and fix any errors found
jfs_fsck: is a hard link to fsck.jfs
mkfs.jfs: constructs a JFS file system
jfs_mkfs: is a hard link to mkfs.jfs
jfs_debugfs: is a program which can be used to perform various low-level actions on a JFS formatted device
jfs_fscklog: extracts a JFS fsck service log into a file and/or formats and displays the extracted file
jfs_logdump: dumps the contents of the journal log from the specified JFS formatted device into output file ./jfslog.dmp
jfs_tune: adjusts tunable file system parameters on JFS file systems
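As a quick usage illustration (hedged; /dev/sdxx is a placeholder for a JFS-formatted device), a volume can be checked and repaired in autocheck mode with:
fsck.jfs -a /dev/sdxx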
The LVM2 package is a set of tools that manage logical partitions. It allows spanning of file systems across multiple physical disks and disk partitions and provides for dynamic growing or shrinking of logical partitions, mirroring and low storage footprint snapshots.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://sourceware.org/ftp/lvm2/LVM2.2.03.21.tgz
Download (FTP): ftp://sourceware.org/pub/lvm2/LVM2.2.03.21.tgz
Download MD5 sum: 1730b322321bed204487ba241105e005
Download size: 2.6 MB
Estimated disk space required: 38 MB (add 25 MB for tests; transient files can grow up to around 800 MB in the /tmp directory during tests)
Estimated build time: 0.1 SBU (using parallelism=4; add 9 to 48 SBU for tests, depending on disk speed and whether ram block device is enabled in the kernel)
mdadm-4.2, reiserfsprogs-3.6.27, Valgrind-3.20.0, Which-2.21, and xfsprogs-6.2.0 (all five may be used, but are not required, for tests), thin-provisioning-tools, and vdo
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/lvm2
Enable the following options in the kernel configuration and recompile the kernel:
There are several other Device Mapper options in the kernel beyond those listed below. In order to get reasonable results if running the regression tests, all must be enabled either internally or as a module. The tests will all time out if Magic SysRq key is not enabled.
Device Drivers --->
[*] Multiple devices driver support (RAID and LVM) ---> [CONFIG_MD]
<*/M> Device mapper support [CONFIG_BLK_DEV_DM]
<*/M> Crypt target support [CONFIG_DM_CRYPT]
<*/M> Snapshot target [CONFIG_DM_SNAPSHOT]
<*/M> Thin provisioning target [CONFIG_DM_THIN_PROVISIONING]
<*/M> Cache target (EXPERIMENTAL) [CONFIG_DM_CACHE]
<*/M> Mirror target [CONFIG_DM_MIRROR]
<*/M> Zero target [CONFIG_DM_ZERO]
<*/M> I/O delaying target [CONFIG_DM_DELAY]
[*] Block devices --->
<*/M> RAM block device support [CONFIG_BLK_DEV_RAM]
Kernel hacking --->
Generic Kernel Debugging Instruments --->
[*] Magic SysRq key [CONFIG_MAGIC_SYSRQ]
Install LVM2 by running the following commands:
PATH+=:/usr/sbin \
./configure --prefix=/usr \
            --enable-cmdlib \
            --enable-pkgconfig \
            --enable-udev_sync &&
make
The tests use udev for logical volume synchronization, so the LVM udev rules and some utilities need to be installed before running the tests. If you are installing LVM2 for the first time, and do not want to install the full package before running the tests, the minimal set of utilities can be installed by running the following instructions as the root user:
make -C tools install_tools_dynamic &&
make -C udev install &&
make -C libdm install
To test the results, issue, as the root user:
LC_ALL=en_US.UTF-8 make check_local
Some tests may hang. In this case they can be skipped by adding S=<testname> to the make command. Other targets are available and can be listed with make -C test help. The test timings are very dependent on the speed of the disk(s), and on the number of enabled kernel options.
The tests do not implement the “expected fail” possibility, and a small number of test failures is expected by upstream. More failures may happen because some kernel options are missing. For example, the lack of the dm-delay device mapper target explains some failures. Some tests may fail if there is insufficient free space available in the partition with the /tmp directory. At least one test fails if 16 TB is not available. Some tests are flagged “warned” if thin-provisioning-tools are not installed. A workaround is to add the following flags to configure:
--with-thin-check=    \
--with-thin-dump=     \
--with-thin-repair=   \
--with-thin-restore=  \
--with-cache-check=   \
--with-cache-dump=    \
--with-cache-repair=  \
--with-cache-restore=
Some tests may hang. They can be removed if necessary, for example: rm test/shell/lvconvert-raid-reshape.sh. The tests generate a lot of kernel messages, which may clutter your terminal. You can disable them by issuing dmesg -D before running the tests (do not forget to issue dmesg -E when tests are done).
The checks create device nodes in the /tmp directory. The tests will fail if /tmp is mounted with the nodev option.
Now, as the root user:
make install
rm -fv /usr/lib/udev/rules.d/69-dm-lvm.rules
PATH+=:/usr/sbin: The path must contain /usr/sbin for proper system tool detection by the configure script. This instruction ensures that PATH is properly set even if you build as an unprivileged user.
--enable-cmdlib: This switch enables building of the shared command library. It is required when building the event daemon.
--enable-pkgconfig: This switch enables installation of pkg-config support files.
--enable-udev_sync: This switch enables synchronisation with Udev processing.
--enable-dmeventd: This switch enables building of the Device Mapper event daemon.
rm .../69-dm-lvm.rules: Under certain circumstances, this udev rule calls systemd-run, which is not available on sysv. It performs actions that are done by another boot script anyway, so it is not needed.
blkdeactivate: is a utility to deactivate block devices
dmeventd: (optional) is the Device Mapper event daemon
dmsetup: is a low level logical volume management tool
fsadm: is a utility used to resize or check filesystem on a device
lvm: provides the command-line tools for LVM2. Commands are implemented via symbolic links to this program to manage physical devices (pv*), volume groups (vg*) and logical volumes (lv*)
lvmdump: is a tool used to dump various information concerning LVM2
vgimportclone: is used to import a duplicated VG (e.g. hardware snapshot)
libdevmapper.so: contains the Device Mapper API functions
LVM manages disk drives. It allows multiple drives and partitions to be combined into larger volume groups, assists in making backups through a snapshot, and allows for dynamic volume resizing. It can also provide mirroring similar to a RAID 1 array.
A complete discussion of LVM is beyond the scope of this introduction, but basic concepts are presented below.
To run any of the commands presented here, the LVM2-2.03.21 package must be installed. All commands must be run as the root user.
Management of disks with lvm is accomplished using the following concepts:
Physical Volumes: These are physical disks or partitions, such as /dev/sda3 or /dev/sdb.
Volume Groups: These are named groups of physical volumes that can be manipulated by the administrator. The number of physical volumes that make up a volume group is arbitrary. Physical volumes can be dynamically added or removed from a volume group.
Logical Volumes: Volume groups may be subdivided into logical volumes. Each logical volume can then be individually formatted as if it were a regular Linux partition. Logical volumes may be dynamically resized by the administrator according to need.
To give a concrete example, suppose that you have two 2 TB disks. Also suppose a really large amount of space is required for a very large database, mounted on /srv/mysql. This is what the initial set of partitions would look like:
Partition Use Size Partition Type
/dev/sda1 /boot 100MB 83 (Linux)
/dev/sda2 / 10GB 83 (Linux)
/dev/sda3 swap 2GB 82 (Swap)
/dev/sda4 LVM remainder 8e (LVM)
/dev/sdb1 swap 2GB 82 (Swap)
/dev/sdb2 LVM remainder 8e (LVM)
First initialize the physical volumes:
pvcreate /dev/sda4 /dev/sdb2
A full disk can be used as part of a physical volume, but beware that the pvcreate command will destroy any partition information on that disk.
Next create a volume group named lfs-lvm:
vgcreate lfs-lvm /dev/sda4 /dev/sdb2
The status of the volume group can be checked by running the command vgscan. Now create the logical volumes. Since there is about 3900 GB available, leave about 900 GB free for expansion. Note that the logical volume named mysql is larger than any physical disk.
lvcreate --name mysql --size 2500G lfs-lvm
lvcreate --name home  --size 500G  lfs-lvm
Finally the logical volumes can be formatted and mounted. In this example, the jfs file system (jfsutils-1.1.15) is used for demonstration purposes.
mkfs -t ext4 /dev/lfs-lvm/home
mkfs -t jfs  /dev/lfs-lvm/mysql
mount /dev/lfs-lvm/home /home
mkdir -p /srv/mysql
mount /dev/lfs-lvm/mysql /srv/mysql
You may need to activate those logical volumes for them to appear in /dev. They can all be activated at the same time by issuing, as the root user:
vgchange -a y
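To verify the layout at any point, the LVM reporting commands can be used; a brief illustration (the output will vary with your configuration):
pvs
vgs
lvs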
The LFS boot scripts automatically make these logical volumes available to the system in the udev script. Edit the /etc/fstab file as required to automatically mount them.
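The dynamic resizing and snapshot capabilities mentioned at the start of this page might be exercised as follows; a hedged sketch, assuming the volume group from the example above still has enough free extents:

# Grow the home volume by 100 GB, then grow the ext4 filesystem to match
lvextend -L +100G /dev/lfs-lvm/home &&
resize2fs /dev/lfs-lvm/home

# Create a 50 GB snapshot of the mysql volume, e.g. for a consistent backup
lvcreate --snapshot --size 50G --name mysql-snap lfs-lvm/mysql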
An LVM logical volume can host a root filesystem, but requires the use of an initramfs (initial RAM file system). The initramfs proposed in the section called “About initramfs” allows you to pass the LVM volume in the root= switch of the kernel command line.
For more information about LVM, see the LVM HOWTO and the lvm man pages. A good in-depth guide is available from RedHat®, although it sometimes makes reference to proprietary tools.
The storage technology known as RAID (Redundant Array of Independent Disks) combines multiple physical disks into a logical unit. The drives can generally be combined to provide data redundancy or to extend the size of logical units beyond the capability of the physical disks or both. The technology also allows for providing hardware maintenance without powering down the system.
The types of RAID organization are described in the RAID Wiki.
Note that while RAID provides protection against disk failures, it is not a substitute for backups. A file deleted is still deleted on all the disks of a RAID array. Modern backups are generally done via rsync-3.2.7.
There are three major types of RAID implementation: Hardware RAID, BIOS-based RAID, and Software RAID.
Hardware-based RAID provides capability through proprietary hardware and data layouts. The control and configuration is generally done via firmware in conjunction with executable programs made available by the device manufacturer. The capabilities are generally supplied via a PCI card, although there are some instances of RAID components integrated into the motherboard. Hardware RAID may also be available in a stand-alone enclosure.
One advantage of hardware-based RAID is that the drives are offered to the operating system as a logical drive and no operating system dependent configuration is needed.
Disadvantages include difficulties in transferring drives from one system to another, updating firmware, or replacing failed RAID hardware.
Some computers offer a hardware-like RAID implementation in the system BIOS. Sometimes this is referred to as 'fake' RAID, as the capabilities are generally incorporated into firmware without any hardware acceleration.
The advantages and disadvantages of BIOS-based RAID are generally the same as hardware RAID with the additional disadvantage that there is no hardware acceleration.
In some cases, BIOS-based RAID firmware is enabled by default (e.g. some DELL systems). If software RAID is desired, this option must be explicitly disabled in the BIOS.
Software based RAID is the most flexible form of RAID. It is easy to install and update and provides full capability on all or part of any drives available to the system. In BLFS, the RAID software is found in mdadm-4.2.
Configuring a RAID device is straightforward using mdadm. Generally devices are created in the /dev directory as /dev/mdx, where x is an integer.
The first step in creating a RAID array is to use partitioning software such as fdisk or parted-3.6 to define the partitions needed for the array. Usually, there will be one partition on each drive participating in the RAID array, but that is not strictly necessary. For this example, there will be four disk drives: /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd. They will be partitioned as follows:
Partition Size Type Use
sda1: 100 MB fd Linux raid auto /boot (RAID 1) /dev/md0
sda2: 10 GB fd Linux raid auto / (RAID 1) /dev/md1
sda3: 2 GB 83 Linux swap swap
sda4: 300 GB fd Linux raid auto /home (RAID 5) /dev/md2
sdb1: 100 MB fd Linux raid auto /boot (RAID 1) /dev/md0
sdb2: 10 GB fd Linux raid auto / (RAID 1) /dev/md1
sdb3: 2 GB 83 Linux swap swap
sdb4: 300 GB fd Linux raid auto /home (RAID 5) /dev/md2
sdc1: 12 GB fd Linux raid auto /usr/src (RAID 0) /dev/md3
sdc2: 300 GB fd Linux raid auto /home (RAID 5) /dev/md2
sdd1: 12 GB fd Linux raid auto /usr/src (RAID 0) /dev/md3
sdd2: 300 GB fd Linux raid auto /home (RAID 5) /dev/md2
In this arrangement, a separate boot partition is created as the first small RAID array and a root filesystem as the second RAID array, both mirrored. The third is a large (about 1 TB) array for the /home directory. This provides the ability to stripe data across multiple devices, improving speed for both reading and writing large files. Finally, a fourth array is created that concatenates two partitions into a larger device.
All mdadm commands must be run as the root user.
To create these RAID arrays the commands are:
/sbin/mdadm -Cv /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
/sbin/mdadm -Cv /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
/sbin/mdadm -Cv /dev/md3 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1
/sbin/mdadm -Cv /dev/md2 --level=5 --raid-devices=4 \
            /dev/sda4 /dev/sdb4 /dev/sdc2 /dev/sdd2
The devices created can be examined by device. For example, to see the details of /dev/md1, use /sbin/mdadm --detail /dev/md1:
Version : 1.2
Creation Time : Tue Feb 7 17:08:45 2012
Raid Level : raid1
Array Size : 10484664 (10.00 GiB 10.74 GB)
Used Dev Size : 10484664 (10.00 GiB 10.74 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Feb 7 23:11:53 2012
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : core2-blfs:0 (local to host core2-blfs)
UUID : fcb944a4:9054aeb2:d987d8fe:a89121f8
Events : 17
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
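The initramfs init script shown earlier in this chapter assembles arrays at boot only if /etc/mdadm.conf exists; one common way to record the arrays just created (a sketch, run as the root user) is:
/sbin/mdadm --detail --scan > /etc/mdadm.conf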
From this point, the partitions can be formatted with the filesystem of choice (e.g. ext3, ext4, xfsprogs-6.2.0, reiserfsprogs-3.6.27, etc.). The formatted partitions can then be mounted. The /etc/fstab file can use the devices created for mounting at boot time, and the linux command line in /boot/grub/grub.cfg can specify root=/dev/md1.
The swap devices should be specified in the /etc/fstab file as normal. The kernel normally stripes swap data across multiple swap devices itself, so they should not be made part of a RAID array.
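The kernel stripes swap automatically when the swap areas share the same priority; a sketch of the corresponding /etc/fstab lines for the sda3/sdb3 partitions from the layout above (the pri= value is illustrative):
/dev/sda3  swap  swap  pri=1  0 0
/dev/sdb3  swap  swap  pri=1  0 0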
For further options and management details of RAID devices, refer to man mdadm.
Additional details for monitoring RAID arrays and dealing with problems can be found at the Linux RAID Wiki.
The mdadm package contains administration tools for software RAID.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://www.kernel.org/pub/linux/utils/raid/mdadm/mdadm-4.2.tar.xz
Download MD5 sum: a304eb0a978ca81045620d06547050a6
Download size: 444 KB
Estimated disk space required: 5.0 MB
Estimated build time: 0.1 SBU
An MTA
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/mdadm
Kernel versions in series 4.1 through 4.4.1 have a broken RAID implementation. Use a kernel with version at or above 4.4.2.
Enable the following options in the kernel configuration and recompile the kernel, if necessary. Only the RAID types desired are required.
Device Drivers --->
[*] Multiple devices driver support (RAID and LVM) ---> [CONFIG_MD]
<*> RAID support [CONFIG_BLK_DEV_MD]
[*] Autodetect RAID arrays during kernel boot [CONFIG_MD_AUTODETECT]
<*/M> Linear (append) mode [CONFIG_MD_LINEAR]
<*/M> RAID-0 (striping) mode [CONFIG_MD_RAID0]
<*/M> RAID-1 (mirroring) mode [CONFIG_MD_RAID1]
<*/M> RAID-10 (mirrored striping) mode [CONFIG_MD_RAID10]
<*/M> RAID-4/RAID-5/RAID-6 mode [CONFIG_MD_RAID456]
Build mdadm by running the following command:
make
This package does not come with a working test suite.
Now, as the root user:
make BINDIR=/usr/sbin install
make everything: This optional target creates extra programs, particularly a statically-linked version of mdadm. This needs to be manually installed.
--keep-going: Run the tests to the end, even if one or more tests fail.
--logdir=test-logs: Defines the directory where test logs are saved.
--save-logs: Instructs the test suite to save the logs.
--tests=<test1,test2,...>: Optional comma separated list of tests to be executed (all tests, if this option is not passed).
A new read-write driver for NTFS, called NTFS3, was added to the Linux kernel in the 5.15 release. The performance of NTFS3 is much better than that of ntfs-3g. To enable NTFS3, enable the following options in the kernel configuration and recompile the kernel if necessary:
File systems --->
DOS/FAT/EXFAT/NT Filesystems --->
<*/M> NTFS Read-Write file system support [CONFIG_NTFS3_FS]
To ensure the mount command uses NTFS3 for ntfs partitions, create a wrapper script:
cat > /usr/sbin/mount.ntfs << "EOF" &&
#!/bin/sh
exec mount -t ntfs3 "$@"
EOF
chmod -v 755 /usr/sbin/mount.ntfs
With the kernel support available, ntfs-3g is only needed if you need the utilities from it (for example, to create NTFS filesystems).
The Ntfs-3g package contains a stable, read-write open source driver for NTFS partitions. NTFS partitions are used by most Microsoft operating systems. Ntfs-3g allows you to mount NTFS partitions in read-write mode from your Linux system. It uses the FUSE kernel module to be able to implement NTFS support in userspace. The package also contains various utilities useful for manipulating NTFS partitions.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://tuxera.com/opensource/ntfs-3g_ntfsprogs-2022.10.3.tgz
Download MD5 sum: a038af61be7584b79f8922ff11244090
Download size: 1.3 MB
Estimated disk space required: 22 MB
Estimated build time: 0.2 SBU
fuse 2.x (this disables user mounts)
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/ntfs-3g
Enable the following options in the kernel configuration and recompile the kernel if necessary:
File systems --->
<*/M> FUSE (Filesystem in Userspace) support [CONFIG_FUSE_FS]
Install Ntfs-3g by running the following commands:
./configure --prefix=/usr \
            --disable-static \
            --with-fuse=internal \
            --docdir=/usr/share/doc/ntfs-3g-2022.10.3 &&
make
This package does not come with a test suite.
Now, as the root user:
make install
It's recommended to use the in-kernel NTFS3 driver for mounting NTFS filesystems, instead of ntfs-3g (see the note at the start of this page). However, if you want to use ntfs-3g to mount the NTFS filesystems anyway, create a symlink for the mount command:
ln -sv ../bin/ntfs-3g /usr/sbin/mount.ntfs &&
ln -sv ntfs-3g.8 /usr/share/man/man8/mount.ntfs.8
--disable-static: This switch prevents installation of static versions of the libraries.
--with-fuse=internal: This switch forces ntfs-3g to use an internal copy of the fuse-2.x library. This is required if you wish to allow users to mount NTFS partitions.
--disable-ntfsprogs: Disables installation of various utilities used to manipulate NTFS partitions.
chmod -v 4755 /usr/bin/ntfs-3g: Making mount.ntfs setuid root allows non-root users to mount NTFS partitions.
To mount a Windows partition at boot time, put a line like this in /etc/fstab:
/dev/sda1 /mnt/windows auto defaults 0 0
To allow users to mount a usb stick with an NTFS filesystem on it, put a line similar to this (change sdc1 to whatever a usb stick would be on your system) in /etc/fstab:
/dev/sdc1 /mnt/usb auto user,noauto,umask=0,utf8 0 0
In order for a user to be able to mount the usb stick, they will need to be able to write to /mnt/usb, so as the root user:
chmod -v 777 /mnt/usb
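With the fstab entry above in place, an unprivileged user can then mount and unmount the stick; an illustrative session (device and mount point as in the example):
mount /mnt/usb
ls /mnt/usb
umount /mnt/usb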
lowntfs-3g: is similar to ntfs-3g but uses the Fuse low-level interface
mkfs.ntfs: is a symlink to mkntfs
mkntfs: creates an NTFS file system
mount.lowntfs-3g: is a symlink to lowntfs-3g
mount.ntfs: mounts an NTFS filesystem
mount.ntfs-3g: is a symbolic link to ntfs-3g
ntfs-3g: is an NTFS driver, which can create, remove, rename, move files, directories, hard links, and streams. It can also read and write files, including streams, sparse files and transparently compressed files. It can also handle special files like symbolic links, devices, and FIFOs; moreover it provides standard management of file ownership and permissions, including POSIX ACLs
ntfs-3g.probe: tests if an NTFS volume is mountable read only or read-write, and exits with a status value accordingly. The volume can be a block device or image file
ntfscluster: identifies files in a specified region of an NTFS volume
ntfscp: copies a file to an NTFS volume
ntfsfix: fixes common errors and forces Windows to check an NTFS partition
ntfsls: lists directory contents on an NTFS filesystem
ntfscat: prints NTFS files and streams on the standard output
ntfsclone: clones an NTFS filesystem
ntfscmp: compares two NTFS filesystems and shows the differences
ntfsinfo: dumps a file's attributes
ntfslabel: displays or changes the label on an ntfs file system
ntfsresize: resizes an NTFS filesystem without data loss
ntfsundelete: recovers a deleted file from an NTFS volume
libntfs-3g.so: contains the Ntfs-3g API functions
The gptfdisk package is a set of programs for creation and maintenance of GUID Partition Table (GPT) disk drives. A GPT partitioned disk is required for drives greater than 2 TB and is a modern replacement for legacy PC-BIOS partitioned disk drives that use a Master Boot Record (MBR). The main program, gdisk, has an interface similar to the classic fdisk program.
Development versions of BLFS may not build or run some packages properly if dependencies have been updated since the most recent stable versions of the book.
Download (HTTP): https://downloads.sourceforge.net/gptfdisk/gptfdisk-1.0.9.tar.gz
Download MD5 sum: 01c11ecfa454096543562e3068530e01
Download size: 212 KB
Estimated disk space required: 2.3 MB
Estimated build time: less than 0.1 SBU (add 0.2 S