Fedora 21 features the 3.16.3 kernel.
3.1.1. Modular Kernel Packaging
The kernel package is now a meta package that pulls in kernel-core and kernel-modules. The kernel-core package is smaller than the full kernel package and is well suited for virtualized environments. Optionally uninstalling kernel-modules reduces cloud image size.
The kernel-modules package should be included when Fedora is installed on real hardware.
Note that a new initramfs is generated automatically only by the kernel-core package, not by the kernel-modules package. If you initially installed only kernel-core and add kernel-modules at a later point, you need to create a new initramfs manually using dracut if any of the newly installed modules is required for your system to boot.
The dracut utility is used to create the initramfs on Fedora. To regenerate an initramfs for all installed kernels, use the following command:
# dracut --regenerate-all
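As a sketch of the scenario described above (the yum invocation and dracut flags are standard, but treat this as illustrative), installing kernel-modules on a system that was originally installed with only kernel-core and rebuilding the initramfs for the running kernel could look like:
# yum install kernel-modules
# dracut --force --kver "$(uname -r)"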
3.2.1. Built-in Help in the Graphical Installer
Each screen in the installer's graphical interface and in the Initial Setup utility now has a Help button in the top right corner. Clicking this button opens the section of the Fedora Installation Guide relevant to the current screen using the Yelp help browser.
The help is currently available only in English.
3.2.2. zRAM Swap Support
The Anaconda installer now supports swap on zRAM during the installation.
zRAM is a standard block device with compressed contents. Placing swap on such a device during the installation allows the installer to store more data in RAM instead of on the hard drive. This is especially helpful on low-memory systems, where the installation can proceed much faster with this feature enabled.
This feature is automatically enabled if Anaconda detects 2 GB of memory or less, and disabled on systems with more memory. To force zRAM swap on or off, use the inst.zram=on or inst.zram=off boot option within the boot menu.
Specific limits, thresholds, and implementation details may change in the future.
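As a quick check (the virtual console and device name are the usual defaults and may vary), you can switch to a shell during the installation, for example with Ctrl+Alt+F2, and inspect the active swap devices; if the feature is enabled, a /dev/zram0 entry appears in the output:
# cat /proc/swaps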
3.2.3. Changes in Boot Options
A boot option is used to modify the installer's behavior using the boot command line. The following boot options have been added in Fedora 21:
inst.zram=: Use this option to force zRAM swap on (inst.zram=on) or off (inst.zram=off).
inst.dnf: Use the experimental DNF backend for package installation instead of YUM.
inst.memcheck: Perform a check at the beginning of the installation to determine whether there is enough available RAM. If not enough memory is detected, the installation stops with an error message. This option is enabled by default; use inst.memcheck=0 to disable it.
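For example, to skip the memory check, enable the DNF backend, and force zRAM swap on, the options above can be appended to the installer's kernel command line when editing the boot menu entry (the kernel and initrd parameters shown here are illustrative; the inst.dnf syntax follows the listing above):
vmlinuz initrd=initrd.img inst.memcheck=0 inst.dnf inst.zram=on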
3.2.4. Changes in Anaconda Command Line Options
Anaconda command line options are used when running the installer from a terminal within an already installed system, for example, when installing into a disk image.
The built-in help available through the anaconda -h command now provides descriptions for all available options.
--memcheck: Check if the system has sufficient RAM to complete the installation, and abort the installation if it does not. This check is approximate; memory usage during installation depends on the package selection, user interface (graphical/text) and other parameters.
--nomemcheck: Do not check if the system has enough memory to complete the installation.
--leavebootorder: Boot drives in their existing order; this overrides the default of booting into the newly installed drive on IBM Power Systems servers and EFI systems. It is useful for systems that, for example, should try network boot before falling back to a local boot.
--extlinux: Use extlinux as the boot loader. Note that there is no attempt to check whether this will work for your platform; your system may be unable to boot after the installation completes if you use this option.
--dnf: Use the experimental DNF package management backend to replace the default YUM package manager. See http://dnf.baseurl.org for more information about the DNF project.
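As a hedged sketch of such an invocation (assuming the --image option for installing into a disk image; the image path is illustrative), combining options from this list:
# anaconda --image=/var/tmp/fedora21.img --nomemcheck --dnf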
3.2.5. Changes in Kickstart Syntax
This section provides a list of changes to Kickstart commands and options. A list of these changes can also be viewed using the following command on a Fedora system:
$ ksverdiff -f F20 -t F21
This command will only work on Fedora 21 with the pykickstart package installed.
3.2.5.1. New Commands and Options
fcoe --autovlan: Enable automatic discovery of VLANs.
bootloader --disabled: Do not attempt to install a boot loader. This option overrides all other boot loader configuration; all other boot loader options will be ignored and no boot loader packages will be installed.
network --interfacename=: Specify a custom interface name for a VLAN device. This option should be used when the default name generated by the --vlanid= option is not desired, and it must always be used together with --vlanid=.
ostreesetup: New optional command. Used for OSTree installations. Available options are:
--osname= (required): Management root for OS installation.
--remote= (optional): Name of the remote repository.
--url= (required): Repository URL.
--ref= (required): Name of the branch inside the repository.
--nogpgcheck (optional): Disable GPG key verification.
clearpart --disklabel=: Create a custom disk label when relabeling disks.
autopart --fstype=: Specify a file system type (such as ext4 or xfs) to replace the default when doing automatic partitioning.
repo --install: Write the repository configuration into the /etc/yum.repos.d/ directory. This makes the repository configured in Kickstart available on the installed system as well.
Changes in the %packages section:
You can now specify an environment to be installed in the %packages section by adding an environment name prefixed with @^. For example:
%packages
@core
@^Infrastructure Server
%end
The %packages --nocore option can now be used to disable installation of the Core package group.
You can now exclude the kernel from being installed. This is done the same way as excluding any other package, by prefixing the package name with -:
%packages
@core
-kernel
%end
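As a sketch tying several of these new options together in one Kickstart file (the device name, VLAN ID, and repository URL are purely illustrative):
clearpart --all --disklabel=gpt
autopart --fstype=xfs
repo --name=local-extras --baseurl=http://example.com/f21/extras/ --install
network --device=em1 --vlanid=171 --interfacename=vlan171

%packages
@^Infrastructure Server
%end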
3.2.5.2. Changes in Existing Commands and Options
3.2.6. Additional Changes
Software RAID configuration in the graphical user interface has been tweaked.
You can now use the + and - keys as shortcuts in the manual partitioning screen in the graphical user interface.
The ksverdiff utility (part of the pykickstart package) has a new option: --listversions. Use this option to list all available operating system versions which can be used as arguments for the --from= and --to= options.
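For example:
$ ksverdiff --listversions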
3.3.1. SSSD GPO-Based Access Control
SSSD now supports centrally managed, host-based access control in an Active Directory (AD) environment, using Group Policy Objects (GPOs).
GPO policy settings are commonly used to manage host-based access control in an AD environment. SSSD supports local logons, remote logons, service logons and more. Each of these standard GPO security options can be mapped to any PAM service, allowing administrators to comprehensively configure their systems.
This enhancement to SSSD is related only to the retrieval and enforcement of AD policy settings. Administrators can continue to use the existing AD tool set to specify policy settings.
The new functionality affects only SSSD's AD provider and has no effect on other SSSD providers (for example, the IPA provider). By default, SSSD's AD provider is installed in "permissive" mode so that upgrades are not broken; administrators need to enable "enforcing" mode manually (see sssd-ad(5)).
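A minimal sketch of enabling enforcement, assuming a domain named example.com and using the ad_gpo_access_control option documented in sssd-ad(5):
# /etc/sssd/sssd.conf (excerpt; the domain name is illustrative)
[domain/example.com]
id_provider = ad
access_provider = ad
# Switch GPO-based access control from the default "permissive" mode
# to full enforcement.
ad_gpo_access_control = enforcing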
The Apache Accumulo sorted, distributed key/value store is a robust, scalable, high performance data storage and retrieval system. Apache Accumulo is based on Google's BigTable design and is built on top of Apache Hadoop, Zookeeper, and Thrift. Apache Accumulo features a few novel improvements on the BigTable design in the form of cell-based access control and a server-side programming mechanism that can modify key/value pairs at various points in the data management process.
Please note that Accumulo's optional monitor service is not provided in the initial F21 release. It will be made available as soon as all its dependencies are in place.
Apache HBase is used when you need random, real-time read/write access to your Big Data. Apache HBase hosts very large tables -- billions of rows by millions of columns -- atop clusters of commodity hardware. Apache HBase is a distributed, versioned, non-relational database modeled after Google's Bigtable: A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, Apache HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.
The Apache Hive data warehouse software facilitates querying and managing large data sets residing in distributed storage. Hive provides a mechanism to project structure onto this data and query the data using a SQL-like language called HiveQL. At the same time this language also allows traditional map/reduce programmers to plug in their custom mappers and reducers when it is inconvenient or inefficient to express this logic in HiveQL.
In Fedora 21, MariaDB has been updated to upstream version 10.0, which provides various bug fixes and enhancements. Among other changes, support for parallel and multi-source replication has been added, as well as support for global transaction IDs. In addition, several new storage engines have been implemented.
systemd in Fedora 21 has been updated to version 215. This release includes substantial enhancements, including improved resource management, service isolation and other security improvements, and network management from systemd-networkd.
Many of these improvements enhance management of services running inside containers, and management of the containers themselves. systemd-nspawn creates securely isolated containers, and tools such as machinectl are available to manage them. systemd-networkd provides network services for the containers, and systemd itself manages resource allocations.
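As a hedged sketch (the container directory and machine name are illustrative), booting a container with systemd-nspawn and then inspecting it with machinectl from another terminal might look like:
# systemd-nspawn -D /var/lib/machines/f21 -b
# machinectl list
# machinectl status f21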
To learn more about enhancements to systemd, read:
/usr/share/doc/systemd/NEWS for the upstream changelog.
Manpages and other documentation provided with the systemd package, listed with rpm -qd systemd.
3.7.3. Systemd PrivateDevices and PrivateNetwork
Two new security-related options are now used by systemd for long-running services which do not require access to physical devices or the network:
The PrivateDevices setting, when set to "yes", provides a private, minimal /dev that does not include physical devices. This limits what long-running services can access, increasing security.
The PrivateNetwork setting, when set to "yes", provides a private network with only a loopback interface. This cuts long-running services that do not require network access off from the network.
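A minimal sketch of applying both settings to a hypothetical example.service via a drop-in file:
# /etc/systemd/system/example.service.d/sandbox.conf
[Service]
# Provide a private, minimal /dev without physical devices.
PrivateDevices=yes
# Provide a private network with only a loopback interface.
PrivateNetwork=yes
After adding the drop-in, run systemctl daemon-reload and restart the service for the settings to take effect.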
The Apache Ambari project is aimed at making Hadoop management simpler by developing software for provisioning, managing, and monitoring Apache Hadoop clusters. Ambari provides an intuitive, easy-to-use Hadoop management web UI backed by its RESTful APIs.
Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It abstracts CPU, memory, storage, and other compute resources away from machines (physical or virtual), enabling fault-tolerant and elastic distributed systems to easily be built and run effectively. Apache Mesos is built using the same principles as the Linux kernel, only at a different level of abstraction. The Mesos kernel runs on every machine and provides applications (e.g., Hadoop, Spark, Kafka, Elastic Search) with APIs for resource management and scheduling across entire data center and cloud environments.
Apache Oozie is a workflow scheduler to manage Hadoop jobs. It is integrated with the rest of the Hadoop stack and supports several types of Hadoop jobs out of the box (such as Java map-reduce, Streaming map-reduce, Pig, Hive, Sqoop and Distcp) as well as system specific jobs (such as Java programs and shell scripts).
Apache Pig is a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs. The salient property of Pig programs is that their structure is amenable to substantial parallelization, which, in turn, enables them to handle very large data sets. At the present time, Pig's infrastructure layer consists of a compiler that produces sequences of Map-Reduce programs, for which large-scale parallel implementations already exist (e.g., the Hadoop sub-project).
Apache Spark is a fast and general engine for large-scale data processing. It supports developing custom analytic processing applications over large data sets or streaming data. Because it has the capability to cache intermediate results in cluster memory and schedule DAGs of computations, Spark programs can run up to 100x faster than equivalent Hadoop MapReduce jobs. Spark applications are easy to develop, parallel, fast, and resilient to failure, and they can operate on data from in-memory collections, local files, a Hadoop-compatible filesystem, or from a variety of streaming sources. Spark also includes libraries for distributed machine learning and graph algorithms.