https://elinux.org/api.php?action=feedcontributions&user=Greywolf82&feedformat=atomeLinux.org - User contributions [en]2024-03-29T10:20:20ZUser contributionsMediaWiki 1.31.0https://elinux.org/index.php?title=User:Greywolf82&diff=16925User:Greywolf822010-01-30T08:26:33Z<p>Greywolf82: /* My recent eLinux wiki activity */</p>
<hr />
<div>= Marco Stornelli =<br />
<br />
This is the user page of Marco Stornelli.<br />
<br />
== Background ==<br />
<br />
My first encounter with Linux was at university, during the "Advanced Linux" course taught by Daniel P. Bovet (co-author of Understanding the Linux Kernel). I fell in love. I have now been working on Linux for three years. I'm a researcher in the embedded systems field, in particular the design and implementation of embedded Linux platforms for telecommunication systems. In addition, my activity extends to the study of Linux real-time and high-availability aspects.<br />
<br />
== My recent eLinux wiki activity ==<br />
<br />
I've been working on PRAMFS for a while; I added the XIP feature to it and ran some benchmarks to test the results [[Pram_Fs]]. Now I'm the lead maintainer of the Pramfs project.</div>Greywolf82https://elinux.org/index.php?title=Boot_Time&diff=13318Boot Time2009-08-18T16:56:32Z<p>Greywolf82: Added RTC tip to the checklist</p>
<hr />
<div>== Introduction ==<br />
Boot Time includes topics such as measurement, analysis, human factors, initialization techniques, and reduction techniques.<br />
The time that a product takes to boot directly impacts the first perception an end user has of the product.<br />
Regardless of how attractive or well designed a consumer electronic device is, the time required to move the device from off to an interactive, usable state is critical to obtaining a positive end user experience. Turning on a device is Use Case #1.<br />
<br />
Booting up a device involves numerous steps and sequences of events. In order to use consistent terminology, the<br />
[[Bootup Time Working Group]] of the CE Linux Forum came up with a list of terms and their widely accepted definitions<br />
for this functionality area. See the following page for these terms:<br />
* [[Boot-up Time Definition Of Terms]]<br />
<br />
== Technology/Project Pages ==<br />
The following are individual pages with information about various technologies relevant to improving Boot Time for Linux. Some of these describe local patches available on this site. Others point to projects or patches maintained elsewhere.<br />
<br />
=== Measuring Boot-up Time ===<br />
*[[Printk Times]] - simple system for showing timing information for each printk.<br />
*[[Kernel Function Trace]] - system for reporting function timings in the kernel.<br />
*[[Linux Trace Toolkit]] - system for reporting timing data for certain kernel and process events.<br />
*[http://oprofile.sourceforge.net/news/ Oprofile] - system-wide profiler for Linux.<br />
*[[Bootchart]] - a tool for performance analysis and visualization of the Linux boot process. Resource utilization and process information are collected during the user-space portion of the boot process and are later rendered in a PNG, SVG or EPS encoded chart.<br />
*[http://people.redhat.com/berrange/systemtap/bootprobe/ Bootprobe] - a set of [[System Tap]] scripts for analyzing system bootup.<br />
* and, let us not forget: "cat /proc/uptime"<br />
* [[Tims Fastboot Tools#grabserial | grabserial]] - a nice utility from Tim Bird to log and timestamp console output<br />
* [[Tims Fastboot Tools#Tim's quick and dirty process trace|process trace]] - a simple patch from Tim Bird to log exec, fork and exit system calls.<br />
* [[Initcall Debug]] - a kernel command line option to show time taken for initcalls.<br />
* See also: [[Kernel Instrumentation]] which lists some known kernel instrumentation tools. These are of interest for measuring kernel startup time.<br />
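Several of the tools above boil down to timestamping console output as it arrives. A minimal, grabserial-style sketch in shell (the function name and invocation are illustrative only; grabserial itself does this properly, with sub-second resolution):<br />

```shell
# Prefix each line read on stdin with the number of seconds elapsed
# since the function started. Hypothetical usage on a serial console:
#   cat /dev/ttyS0 | timestamp_lines
timestamp_lines() {
    start=$(date +%s)
    while IFS= read -r line; do
        now=$(date +%s)
        printf '[%4d] %s\n' "$((now - start))" "$line"
    done
}
```

Note this only has one-second resolution; for real measurements use grabserial or [[Printk Times]].<br />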
<br />
=== Technologies and Techniques for Reducing Boot Time ===<br />
==== Bootloader speedups ====<br />
*[[Kernel XIP]] - Allow kernel to be executed in-place in ROM or FLASH.<br />
*[[DMA Copy Of Kernel On Startup]] - Copy kernel from Flash to RAM using DMA<br />
*[[Uncompressed kernel]] - An uncompressed kernel might boot faster<br />
*[[Fast Kernel Decompression]]<br />
<br />
==== Kernel speedups ====<br />
*[[Disable Console]] - Avoid overhead of console output during system startup.<br />
*Disable bug and printk - Avoid the overhead of BUG() and printk. The disadvantage is that you lose a lot of diagnostic information.<br />
*[[RTC No Sync]] - Avoid delay to synchronize system time with RTC clock edge on startup.<br />
*[[Short IDE Delays]] - Reduce duration of IDE startup delays (this is effective but possibly dangerous).<br />
*[[Hardcode kernel module info]] - Reduce the overhead of loading a module by hardcoding some of the relocation information used during loading<br />
*[[IDE No Probe]] - Force kernel to observe the ide<x>=noprobe option.<br />
*[[Preset LPJ]] - Allow the use of a preset loops_per_jiffy value.<br />
*[[Asynchronous function calls]] - Allow probing or other functions to proceed in parallel, to overlap time-consuming boot-up activities.<br />
**[[Threaded Device Probing]] - Allow drivers to probe devices in parallel. (not mainlined, now deprecated?)<br />
*[[Reordering of driver initialization]] - Allow driver bus probing to start as soon as possible.<br />
*[[Deferred Initcalls]] - defer non-essential module initialization routines to after primary boot<br />
*NAND ECC improvement - The pre 2.6.28 nand_ecc.c has room for improvement. You can find an improved version in the mtd git at http://git.infradead.org/mtd-2.6.git?a=blob_plain;f=drivers/mtd/nand/nand_ecc.c;hb=HEAD. Documentation for this is in http://git.infradead.org/mtd-2.6.git?a=blob_plain;f=Documentation/mtd/nand_ecc.txt;hb=HEAD. This is only interesting if your system uses software ECC correction.<br />
*Check which kernel memory allocator you use. SLOB or SLUB might be better than SLAB (which is the default in older kernels) <br />
*If your system does not need it, you can remove SYSFS and even PROCFS from the kernel. In one test removing sysfs saved 20 ms.<br />
*Carefully investigate all kernel configuration options as to whether they are applicable or not. Even if you select an option that is not used in the end, it contributes to the kernel size and therefore to the kernel load time (assuming you are not doing kernel XIP). Often this will require some trial and measurement! E.g. selecting CONFIG_CC_OPTIMIZE_FOR_SIZE (found under general setup) gave in one case a boot improvement of 20 ms. Not dramatic, but when reducing boot time every millisecond counts!<br />
*Moving to a different compiler version might lead to shorter and/or faster code. Most often newer compilers produce better code. You might also want to play with compiler options to see what works best.<br />
* If you use an initramfs together with a compressed kernel, it is better to have an uncompressed initramfs image, to avoid uncompressing data twice. A patch for this has been submitted to LKML. See http://lkml.org/lkml/2008/11/22/112 <br />
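For the last item, a quick way to check which case an existing image is in: the uncompressed "newc" cpio format starts with the ASCII magic 070701, while a gzipped image starts with the bytes 0x1f 0x8b (the function name below is illustrative):<br />

```shell
# Return success if the given initramfs image is an uncompressed
# "newc" cpio archive (magic "070701"), failure otherwise.
# An uncompressed image can be built with e.g.:
#   find . | cpio -o -H newc > ../initramfs.cpio    (note: no gzip step)
is_uncompressed_initramfs() {
    head -c 6 "$1" | grep -q '^070701$'
}
```
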
<br />
===== File System issues =====<br />
Different file systems have different initialization (mounting) times, for the same data sets. This<br />
is a function of whether meta-data must be read from storage into RAM or not, and what algorithms are<br />
used during the mount procedure.<br />
<br />
* [[Filesystem Information]] - has information about boot-up times of various file systems<br />
* [[File Systems]] - has information on various file systems that are interesting for embedded systems. Also includes some improvement suggestions.<br />
* [[Avoid Initramfs]] - explains why initramfs should be avoided if you want to minimize boot time<br />
* Split partitions. If mounting a file system takes a long time, consider splitting that filesystem into two parts: one with the data needed during or immediately after boot, and one which can be mounted later on.<br />
* [[Ramdisks demasked]] - explains why using a ram disk generally results in a longer boot time, not a shorter one.<br />
<br />
==== User-space and application speedups ====<br />
* [[Optimize RC Scripts]] - Reduce overhead of running RC scripts<br />
* [[Parallel RC Scripts]] - Run RC scripts in parallel instead of sequentially<br />
* [[Application XIP]] - Allow programs and libraries to be executed in-place in ROM or FLASH<br />
* [[Pre Linking]] - Avoid cost of runtime linking on first program load<br />
* Statically link applications. This avoids the costs of runtime linking. Useful if you have only a few applications. In that case it could also reduce the size of your image as no dynamic libraries are needed<br />
* GNU_HASH: ~ 50% speed improvement in dynamic linking<br />
** See http://sourceware.org/ml/binutils/2006-06/msg00418.html<br />
* [[Application Init Optimizations]] - Improvements in program load and init time via: <br />
** use of mmap vs. read<br />
** control over page mapping characteristics.<br />
* [[Include modules in kernel image]] - Avoid extra overhead of module loading by adding the modules to the kernel image<br />
* Avoid udev; it takes quite some time to populate the /dev directory. In an embedded system it is usually known which devices are present, and in any case you know which drivers are available, so you know which device entries might be needed in /dev. These should be created statically, not dynamically: mknod is your friend, udev is your enemy.<br />
* If you still like udev but also like fast boot-ups, you might go this way: start your system with udev enabled and make a backup of the created device nodes. Then modify your init script: instead of running udev, copy the device nodes you backed up into the device tree. Install the hotplug daemon as you always do. This trick avoids device node creation at startup but still lets your system create device nodes later on. <br />
* If your device has a network connection, preferably use static IP addresses. Getting an address from a DHCP server takes additional time and has extra overhead associated with it.<br />
* Moving to a different compiler version might lead to shorter and/or faster code. Most often newer compilers produce better code. You might also want to play with compiler options to see what works best.<br />
* If possible move from glibc to uClibc. This leads to smaller executables and hence to faster load times.<br />
* Library optimiser tool: http://libraryopt.sourceforge.net/ <br/> This allows you to create an optimised library. As unneeded functions are removed, this should lead to a performance gain. Normally there will be library pages which contain unused code (adjacent to code that is used). After optimising the library this no longer occurs, so fewer pages are needed and hence fewer page loads, saving some time.<br />
* Function reordering: http://www.celinux.org/elc08_presentations/DDLink%20FunctionReorder%2008%2004.pdf <br/> This is a technique to rearrange the functions within an executable so they appear in the order they are needed. This improves the load time of the application as all initialization code is grouped into a set of pages, instead of being scattered over a number of pages.<br />
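For the static /dev approach above, the node list can be generated from a small device table at image-build time. A sketch (the table entries are examples only — use the major/minor numbers your drivers actually register; the function prints the mknod commands rather than executing them, since creating device nodes requires root):<br />

```shell
# Emit the mknod commands for a known, fixed set of devices.
# Run the output (as root) when populating the target's /dev.
make_dev_nodes() {
    printf '%s\n' \
        'console c 5 1' \
        'null c 1 3' \
        'ttyS0 c 4 64' \
        'mtdblock0 b 31 0' |
    while read -r name type major minor; do
        echo "mknod -m 600 /dev/$name $type $major $minor"
    done
}
```
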
<br />
==== Suspend related improvements ====<br />
Another approach to improve boot time is to use a suspend related mechanism. Two approaches are known.<br />
* Using the standard hibernate/resume approach. This is what has been demonstrated by Chan Ju Park from Samsung. See sheet 23 onwards of this [[Media:LinuxBootupTimeReduction4DSC.ppt|PPT]] and section 2.7 of this [http://www.kernel.org/doc/ols/2006/ols2006v2-pages-239-248.pdf paper]. <br /> The issue with this approach is that flash writes are much slower than flash reads, so the actual creation of the hibernate image might take quite a while.<br />
* Implementing snapshot boot. This was done by Hiroki Kaminaga from Sony and is described at [[Suspend To Disk For ARM|snapshot boot for ARM]] and http://elinux.org/upload/3/37/Snapshot-boot-final.pdf<br />This is similar to hibernate and resume, but the hibernate image is retained and used on every boot. The disadvantage is that no writable partitions may be mounted at the time the snapshot is made: if a partition is modified later, applications restored from the snapshot might still hold information about its unmodified state, leading to inconsistencies.<br />
<br />
==== Miscellaneous topics ====<br />
<br />
[[About Compression]] discusses the effects of compression on boot time. This can affect both the kernel boot time as well as user-space startup.<br />
<br />
==== Uninvestigated speedups ====<br />
<br />
This section is a holding pen for ideas for improvement that are not implemented yet but that could result in a boot time gain. Please leave a note here if you are working on one of these items to avoid duplicate work.<br />
<br />
* '''Prepopulated buffer cache''' - As initramfs performs an additional copy of the data, the idea is to have a prepopulated buffer cache. A simplistic scenario would dump the buffer cache once booting is completed and the user applications have initialised. This data could then be used in a subsequent boot to initialize the buffer cache (of course without copying). A possible approach would be to have this data reside in the kernel image and use it directly. Alternatively it could be loaded separately. <br /> Unfortunately my knowledge of the internals in this area is not yet good enough to do a trial implementation.<br /> Caveats:<br />
** is it possible to have the buffer cache split into two different parts, one which is statically allocated, one which is dynamically allocated?<br />
** the pages in the prepopulated buffer cache probably cannot be discarded, so they should be pinned<br />
** apart from the buffer cache data itself also some other variables might need restoring<br />
** a similar approach could also be used for the cached file data.<br />
*'''Dedicated fs''' - currently the filesystem layer provides a lot of abstraction, allowing easy addition of new filesystems and creating a unified view of them. While this is pretty neat, the abstraction layers also introduce some overhead. A solution could be to create a dedicated fs layer which supports only one (or maybe two) filesystems and eliminates the abstraction overhead. This would give some benefit, but the chance of getting it into mainline is zero.<br />
<br />
== Articles and Presentations ==<br />
* "One Second Linux Boot Demonstration (new version)" ([http://www.youtube.com/watch?v=-l_DSZe8_F8 Youtube video by MontaVista])<br />
* "Tools and Techniques for Reducing Bootup Time" ([[Media:Tools-and-technique-for-reducing-bootup-time.ppt|PPT]] | [[Media:Tools-and-technique-for-reducing-bootup-time.odp|ODP]] | [[Media:Tools-and-technique-for-reducing-bootup-time.pdf|PDF]] | [http://free-electrons.com/pub/video/2008/elce/elce2008-bird-reducing-bootup-time.ogv video])<br />
** Tim Bird has presented at ELC Europe, on November 7, 2008, his latest collection of tips and tricks for reducing bootup time<br />
** [[Tims Fastboot Tools]] has online materials in support of this presentation<br />
* [http://www.mvista.com/download/author.php?a=37 Christopher Hallinan] has done a presentation at the MontaVista Vision conference 2008 on the topic of reducing boot time. Slides available [http://www.mvista.com/download/power/Reducing-boot-time-techniques-for-fast-booting.pdf here]<br />
* [http://lwn.net/Articles/192082/ Optimizing Linker Load Times]<br />
** (introducing various kinds of bootuptime reduction, prelinking, etc.)<br />
* [http://tuxology.net/2008/07/08/benchmarking-boot-latency-on-x86/ Benchmarking boot latency on x86]<br />
** By Gilad Ben-Yossef, July 2008<br />
** A tutorial on using TSC register and the kernel PRINTK_TIMES feature to measure x86 system boot time, including BIOS, bootloader, kernel and time to first user program.<br />
* [http://tree.celinuxforum.org/CelfPubWiki/KoreaTechJamboree3?action=AttachFile&do=get&target=The_Fast_Booting_of_Embedded_Linux.pdf Fast Booting of Embedded Linux]<br />
** By HoJoon Park, Electronics and Telecommunications Research Institute (ETRI), Korea. Presented at the CELF [http://tree.celinuxforum.org/CelfPubWiki/KoreaTechJamboree3 3rd Korean Technical Jamboree], July 2008<br />
** Explains several different reduction techniques used for different phases of bootup time<br />
*Tim Bird's (Sony) survey of boot-up time reduction techniques:<br />
**[http://kernel.org/doc/ols/2004/ols2004v1-pages-79-88.pdf Methods to Improve Boot-up Time in Linux] - Paper prepared for 2004 Ottawa Linux Symposium<br />
**{{pdf|ReducingStartupTime v0.8.pdf|Reducing Startup Time in Embedded Linux Systems}} - December 2003 Presentation describing some existing boot-up time reduction techniques and strategies.<br />
* [http://free-electrons.com/articles/optimizations Embedded Linux optimizations]<br />
** By Free Electrons<br />
** Tutorial on optimizing the size, RAM usage, speed, power consumption and cost of a Linux-based embedded system<br />
*Parallelizing Linux Boot on CE Devices<br />
** [http://tree.celinuxforum.org/CelfPubWiki/ELCEurope2007Presentations?action=AttachFile&do=view&target=par.pdf PDF of Presentation]<br />
**[http://free-electrons.com/pub/video/2007/elce/elce-2007-vitaly-wool-parallel-boot.ogg Video of Presentation]<br />
*[http://www.ibm.com/developerworks/linux/library/l-boot-faster/ Parallelize Applications for Faster Linux Boot]<br />
**Authored by M. Tim Jones for IBM Developer Works<br />
**This article shows you options to increase the speed with which Linux boots, including two options for parallelizing the initialization process. It also shows you how to visualize graphically the performance of the boot process.<br />
<br />
=== Case Studies ===<br />
* Samsung proof-of-acceptability study for digital still camera: see [[Media:LinuxBootupTimeReduction4DSC.ppt|Boot Up Time Reduction PPT]] and the [http://www.kernel.org/doc/ols/2006/ols2006v2-pages-239-248.pdf paper] describing this.<br />
* [https://docs.blackfin.uclinux.org/doku.php?id=fast_boot_example Boot Linux from Processor Reset into user space in less than 1 Second]<br />
** In this white paper, Robin Getz describes the techniques used to fast-boot a blackfin development board.<br />
<br />
=== News ===<br />
* Lineo Solutions announced (Nov. 2008) technology to boot Linux in 2.97 seconds on a low-end system. The system is called "Warp2" and appears to be a form of modified resume (similar to "snapshot boot" mentioned above).<br />
** See http://www.linuxdevices.com/news/NS5185504436.html<br />
<br />
== Additional Projects/Mailing Lists/Resources ==<br />
=== Kexec ===<br />
*Kexec is a mechanism which allows a system to be '''rebooted''' without going through the BIOS. That is, a Linux kernel can directly boot into another Linux kernel, without going through firmware. See the white paper at: [http://developer.osdl.org/andyp/kexec/whitepaper/kexec.pdf kexec.pdf]<br />
**2004 Kernel Summit presentation: [http://www.xenotime.net/linux/fastboot/fastboot-ks-2004.pdf fastboot.pdf]<br />
**here's another Kexec white paper:[http://www-106.ibm.com/developerworks/linux/library/l-kexec.html?ca=dgr-lnxw04 Reboot Fast]<br />
<br />
=== Splash Screen projects ===<br />
* [http://splashy.alioth.debian.org/wiki/ Splashy] - Technology to put up a splash screen early in the boot sequence. This is user-space code.<br />
** This seems to be the most current splash screen technology for major distributions. A framebuffer driver in the kernel is required.<br />
* [http://dev.gentoo.org/~spock/projects/gensplash/ Gentoo Splashscreen] - newer technology to put a splash screen early in the boot sequence<br />
** See the HOWTO at: [http://gentoo-wiki.com/HOWTO_fbsplash HOWTO FBSplash]<br />
* [http://butterfeet.org/?p=8 PSplash] - PSplash is a userspace graphical boot splash screen, mainly for embedded Linux devices supporting a 16bpp or 32bpp framebuffer.<br />
* [http://www.bootsplash.org/ bootsplash.org] - put up a splash screen early in boot sequence<br />
** This project requires kernel patches<br />
** This project is now abandoned, and work is being done on Splashy.<br />
<br />
=== Others ===<br />
<br />
*[http://www.linuxdevices.com/news/NS5907201615.html FSMLabs Fastboot] - press release by FSMLabs about fast booting of their product. Is any of this published?<br />
<br />
*[http://tree.celinuxforum.org/CelfPubWiki/ snapshot boot] - a technology that uses software resume to boot up the system quickly.<br />
<br />
==== Apparently obsolete or abandoned material ====<br />
* [[Image:alert.gif]] ''in progress'' - [[Boot-up Time Reduction Howto]] - this is a project to catalog existing boot-up time reduction techniques.<br />
** Was originally intended to be the authoritative source for bootup time reduction information.<br />
** No one maintains it any more (as of Aug, 2008)<br />
*[[Image:alert.gif]]''no content yet'' - [[Boot-up Time Delay Taxonomy]] - list of delays categorized by boot phase, type and magnitude<br />
** Was to be a survey of common bootup delays found in embedded devices.<br />
** Was never really written.<br />
<br />
???<br />
* [[Bootup Time Spec]]<br />
* [[Bootup Time Things To Investigate]]<br />
* [[Bootup Time Working Group]]<br />
* [[Bootup Time Task List]]<br />
* [[Bootup Time Howto Task List]]<br />
* [[Fast Booting Translation]]<br />
<br />
== Companies, individuals or projects working on fast booting ==<br />
* Intel - Arjan van de Ven - see http://lwn.net/Articles/299483/<br />
* Tripeaks - see http://www.linuxdevices.com/news/NS8282586707.html<br />
* Lineo Solutions - see http://www.linuxdevices.com/news/NS5185504436.html<br />
* Monta Vista - see http://www.linuxdevices.com/news/NS2560585344.html<br />
* fastboot git tree - see http://lwn.net/Articles/299591/<br />
<br />
== Boot time check list ==<br />
<br />
From an [http://www.mail-archive.com/linux-embedded@vger.kernel.org/msg02139.html August 2009 discussion about boot time on ARM devices], several hints regarding boot time optimization emerged. While it repeats much of the above, below is a checklist extracted from this discussion:<br />
<br />
* Is the CPU clock switched to maximum? If the kernel, bootloader or hardware is in charge of CPU power and speed scaling, check that the system boots with the CPU set at maximum speed instead of the slowest. <br />
<br />
* Is the (register-level) timing configuration of your SoC's memory interfaces (e.g. RAM and NOR/NAND timings) optimized? A lot of vendors ship their hardware with "well, it works, optimize later" settings. What you want is an "as fast as possible, but still stable and reliable" configuration. This might require some hardware knowledge and has to be tuned to the individual memory devices used.<br />
<br />
* Does your boot loader use the I- and D-caches? E.g. U-Boot doesn't enable the D-cache by default on ARM devices, as it needs customized MMU tables to do so.<br />
<br />
* Does the kernel copy from permanent storage (e.g. NOR or NAND) to RAM use optimized functions? E.g. DMA, or on ARM at least load/store-multiple instructions (ldm/stm)?<br />
<br />
* If you use U-Boot's uImage, set "verify=no" in U-Boot to avoid checksum verification.<br />
<br />
* Optimize size of your kernel.<br />
** You might even try some of the embedded system scripts that rip out all the printk strings.<br />
<br />
* How often is the kernel image copied? First by the boot loader from storage to RAM, then by the kernel's decompressor to its final destination? Once more? If you use a compressed kernel and NOR flash, consider running the decompressor XIP in NOR flash.<br />
<br />
* If you use a compressed kernel, check the compression algorithm. zlib is slow on decompression, while lzo is much faster. So if you implement lzo compression, you'll probably speed things up a little as well (check LKML for this). Having no compression at all may also be worth trying (see the next item).<br />
<br />
* Consider using an uncompressed kernel (this depends on your system configuration). Using an uncompressed kernel on a flash-based system may improve boot time. The reason is that compressed kernels are faster only when the throughput from persistent storage is lower than the decompression throughput, and on typical embedded systems with DMA the throughput to memory outperforms CPU-based decompression. Of course it depends on many factors, such as the performance of the flash controller, the kernel storage filesystem, the DMA controller, the cache architecture, etc., so it is individual per system. Example: for a ~2.8MB kernel, decompressing it (running the decompressor XIP in NOR flash) took ~0.5s longer than simply copying the 2.8MB from flash to RAM.<br />
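To see when an uncompressed kernel pays off, a back-of-envelope model helps (all numbers below are illustrative placeholders, not measurements — substitute your own kernel size and throughput figures):<br />

```shell
kernel_kb=2800      # uncompressed kernel size, kB
ratio=2             # assumed compression ratio
flash_kbps=10000    # flash read throughput, kB/s
unzip_kbps=4000     # CPU decompression throughput, kB/s

# Uncompressed: copy the whole kernel from flash.
copy_ms=$(( kernel_kb * 1000 / flash_kbps ))
# Compressed: copy the smaller image, then decompress it.
comp_ms=$(( kernel_kb * 1000 / (ratio * flash_kbps) + kernel_kb * 1000 / unzip_kbps ))
echo "uncompressed: ${copy_ms} ms, compressed: ${comp_ms} ms"
```

With these example figures the uncompressed kernel wins (280 ms vs. 840 ms); with slow storage and a fast CPU the comparison flips.<br />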
<br />
* Enable precalculated loops-per-jiffy<br />
<br />
* Enable kernel quiet option<br />
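The last two items both end up on the kernel command line. A sketch that pulls the calibrated value out of a previously saved boot log and emits a suitable command line (the console device and root= value are placeholders for illustration):<br />

```shell
# Extract "lpj=<n>" from the "Calibrating delay loop" line of a saved
# boot log and build a command line with quiet and a preset lpj.
make_bootargs() {
    lpj=$(sed -n 's/.*Calibrating delay loop.*lpj=\([0-9]*\).*/\1/p' "$1")
    echo "console=ttyS0,115200 quiet lpj=$lpj root=/dev/mtdblock2"
}
```
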
<br />
* If you use UBI: UBI is rather slow in attaching MTD devices. Everything is explained in MTD's [http://www.linux-mtd.infradead.org/doc/ubi.html#L_scalability UBI scalability] and [http://www.linux-mtd.infradead.org/doc/ubifs.html#L_scalability UBIFS scalability] sections. There is not much you can do to speed it up, short of implementing UBI2 (UBIFS itself would stay intact). There have been discussions about this, and it does not seem impossibly difficult to do UBI2 ([http://www.linux-mtd.infradead.org/faq/ubi.html#L_attach_faster few ideas]).<br />
<br />
* Use static device nodes during boot, and later set up busybox mdev for hotplug.<br />
<br />
* If you have network enabled, there might be some very long timeouts in the network code paths, which appear to be used whether you specify a static address or not. See the definitions of CONF_PRE_OPEN and CON_POST_OPEN in ''net/ipv4/ipconfig.c''. Check [http://patchwork.kernel.org/patch/31678/ ipdelay configuration patch].<br />
<br />
* Parallelize boot process.<br />
<br />
* Disable the kernel option "Set system time from RTC on startup and resume"; you can run the command "hwclock -s" at the end of init instead of slowing down the kernel.<br />
<br />
[[Category:Boot Time]]<br />
[[Category:Bootloader]]</div>Greywolf82https://elinux.org/index.php?title=Memory_Management&diff=11867Memory Management2009-05-24T08:29:29Z<p>Greywolf82: Add memory usage limit notification</p>
<hr />
<div>This page has information about various memory management projects and activities which are of interest to embedded Linux developers.<br />
<br />
== Areas of Interest ==<br />
<br />
Most of these areas have wider-reaching implications, but are relatively simpler in the embedded case, largely thanks to not having to contend with swap and things of that nature. Simpler memory management, together with vendors unafraid of deviating from mainline for product programs, makes for an excellent playground for experimenting with new things in the memory management and virtual memory space.<br />
<br />
=== Memory Measurement ===<br />
Analyzing the amount of system memory in use and available is trickier than it sounds.<br />
<br />
* See [[Runtime Memory Measurement]] for different methods of measuring and analyzing system memory.<br />
<br />
* See [[Accurate Memory Measurement]] for some different techniques for dealing with inadequacies in current memory measurement systems.<br />
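As a starting point for the pages above, the usual raw numbers come from /proc/meminfo; note that MemFree alone undercounts available memory, since Buffers and Cached are largely reclaimable. A sketch (the function name is illustrative; the optional file argument lets it run against a saved snapshot):<br />

```shell
# Print MemFree plus reclaimable page/buffer cache, in kB.
mem_summary() {   # usage: mem_summary [/path/to/meminfo]
    awk '/^MemFree:/ {free=$2}
         /^Buffers:/ {buf=$2}
         /^Cached:/  {cache=$2}
         END {printf "free+reclaimable: %d kB\n", free+buf+cache}' \
        "${1:-/proc/meminfo}"
}
```
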
<br />
=== Huge/large/superpages ===<br />
<br />
*This applies to both transparent large page usage as well as the more static usage models, primarily relating to work outside of the hugetlb interface/libhugetlbfs.<br />
*Embedded systems suffer from very small TLBs, generally using PAGE_SIZE'd pages (4kB) for coverage. In most cases this places the system under very heavy pressure for any kind of userspace work and visibly degrades performance, with most applications spending anywhere from 5-40% of their CPU time servicing page faults.<br />
*Preliminary discussion on this subject as well as links to additional information is happening through the wiki here: [http://linux-mm.org/ Huge Pages]<br />
<br />
=== Page cache compression ===<br />
<br />
*This relates to using various compression algorithms for performing run-time compression and decompression of page cache pages, specifically aimed at both reducing memory pressure as well as helping performance in certain workloads.<br />
*More information can be found on the wiki here [http://linux-mm.org/CompressedCaching CompressedCaching] as well as at the [http://linuxcompressed.sourceforge.net SF Compressed Caching] home page.<br />
<br />
=== Reserving (and accessing) the top of memory on startup ===<br />
A quote from Todd's email on how to use physical memory reserved via "mem=":<br />
<br />
----<br />
<br />
Given that your memory is at a fixed address and is already reserved, the easiest way to use it is by calling mmap() on the /dev/mem device: use 0 as the start address, and the physical address of the reserved memory as the offset. The protection flags could be PROT_READ|PROT_WRITE. That will return a user-space pointer to your memory, mapped by the kernel. For example:<br />
<br />
If your SDRAM base address is 0x80000000 and your memory is 64MB, but you use the cmdline mem=60M to reserve 4MB at the end, then your reserved memory will be at 0x83c00000, so all you need to do is<br />
<br />
<pre><br />
#include <fcntl.h><br />
#include <sys/mman.h><br />
<br />
int fd;<br />
char *reserved_memory;<br />
<br />
fd = open("/dev/mem", O_RDWR);<br />
reserved_memory = (char *) mmap(0, 4*1024*1024, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0x83c00000);<br />
if (reserved_memory == MAP_FAILED)<br />
    ;  /* handle the error */<br />
</pre><br />
----<br />
<br />
=== Enhanced Out-Of-Memory (OOM) handling ===<br />
Several technologies have been developed and suggested for improving the handling out-of-memory conditions with Linux systems.<br />
<br />
See http://linux-mm.org/OOM_Killer for information about the OOM killer in the Linux kernel.<br />
<br />
Part of OOM avoidance is for the kernel to have an accurate measure of memory utilization.<br />
See [[Accurate Memory Measurement]] for information on technology in this area.<br />
<br />
Here are some technologies that I know about (these need to be researched and documented better):<br />
* Memory usage limit notification<br />
** This patch updates the Memory Controller cgroup to add a configurable memory usage limit notification. The feature was presented at the April 2009 Embedded Linux Conference.<br />
** See http://lwn.net/Articles/328403/<br />
* mem_notify patches<br />
** This set of patches provided a mechanism to notify user-space when memory is getting low, allowing for application-based handling of the condition. These patches were submitted in January 2008.<br />
** This patch cannot be applied to versions beyond 2.6.28 because the memory management reclaiming sequence has changed.<br />
** See http://lwn.net/Articles/267013/<br />
* Google per-cgroup OOM handler<br />
** Google posted a Request For Comments (RFC) for OOM handling implemented in a per-cgroup fashion. See http://article.gmane.org/gmane.linux.kernel.mm/28376<br />
* Nokia OOM enhancements<br />
** Maemo application enhancements referenced at: http://lwn.net/Articles/267013/ (search for "killable" in the comments)<br />
<pre><br />
User "oak" writes (commenting on the mem_notify patches):<br />
<br />
Posted Feb 3, 2008 14:02 UTC (Sun) by oak (guest, #2786) [Link]<br />
<br />
...<br />
<br />
I thought the point of the patch is for user-space to be able to do the <br />
memory management in *manageable places* in code. As mentioned earlier, <br />
a lot of user-space code[1] doesn't handle memory allocation failures. And <br />
even if it's supposed to be, it can be hard to verify (test) that the <br />
failures are handled in *all* cases properly. If user-space can get a <br />
pre-notification of a low-memory situation, it can in suitable place in <br />
code free memory so that further allocations will succeed (with higher <br />
propability). <br />
<br />
That also allows doing somehing like what maemo does. If system gets <br />
notified about kernel low memory shortage, it kills processes which have <br />
notified it that they are in "background-killable" state (saved their UI <br />
state, able to restore it and not currently visible to user). I think it <br />
also notifies applications (currently) through D-BUS about low memory <br />
condition. Applications visible to user or otherwise non-background <br />
killable are then supposed to free their caches and/or disable features <br />
that could take a lot of additional memory. If the caches are from heap <br />
instead of memory mapped, it's less likely to help because of heap <br />
fragmentation and it requiring more work/time though.<br />
</pre><br />
* Paul Mundt submitted a patch to CELF for the 2.6.12 kernel which provided low-memory notifications to user space. See [[Accurate_Memory_Measurement#Nokia_out-of-memory_notifier_module]] for more information.<br />
** This module was based on the Linux Security Module system, which has been removed from recent kernels.<br />
<br />
=== Type-based memory allocation (old) ===<br />
This is a mechanism (prototyped in the 2.4 kernel by Sony and Panasonic) to allow the kernel to allocate different<br />
types of memory for different sections of a program, based on user policy.<br />
<br />
See [[Memory Type Based Allocation]]<br />
<br />
== Additional Resources/Mailing Lists ==<br />
*[http://linux-mm.org LinuxMM] - links to various sub-projects, and acts as a centralized point for discussion relating to memory management topics ([mailto:majordomo@kvack.org linux-mm] mailing list and [http://marc.theaimsgroup.com/?l=linux-mm archives]).<br />
<br />
*[http://lwn.net/Articles/250967/ Everything about memory that a programmer should know]<br />
<br />
[[Category:Linux]]</div>Greywolf82https://elinux.org/index.php?title=Asynchronous_function_calls&diff=10727Asynchronous function calls2009-04-11T07:54:25Z<p>Greywolf82: </p>
<hr />
<div>In order to make the kernel boot faster, a set of patches was introduced by<br />
Arjan van de Ven in January 2009 <br />
to create infrastructure to allow doing some of the initialization steps<br />
asynchronously. In practice, the patches allow significant portions of the hardware<br />
initialization delays to overlap. Asynchronous function calls have been merged into mainline starting with 2.6.29. Starting with 2.6.30, the asynchronous function call infrastructure is enabled by default. <br />
<br />
In order to not change device order and other similar observables, the<br />
patch does NOT do full parallel initialization.<br />
<br />
Rather, it operates more in the way an out of order CPU does; the work may<br />
be done out of order and asynchronously, but the observable effects<br />
(instruction retiring for the CPU) are still done in the original sequence.<br />
<br />
== References ==<br />
See http://lkml.org/lkml/2009/1/4/155 for the first patch in the series.<br />
<br />
Work similar in spirit to this was done previously, but with smaller<br />
scope and apparently not mainlined.<br />
<br />
See [[Threaded Device Probing]]</div>Greywolf82https://elinux.org/index.php?title=Memory_Management&diff=9204Memory Management2009-02-05T17:37:23Z<p>Greywolf82: Mem notify update</p>
<hr />
<div>This page has information about various memory management projects and activities which are of interest to embedded Linux developers.<br />
<br />
== Areas of Interest ==<br />
<br />
Most of these areas have wider reaching implications, but are relatively simpler in the embedded case, largely thanks to not having to contend with swap and things of that nature. Simpler memory management as well as vendors not afraid of deviation from mainline for product programs makes for an excellent playground for experimenting with new things in the memory management and virtual memory space.<br />
<br />
=== Memory Measurement ===<br />
Analyzing the amount of system memory in use and available is trickier than it sounds.<br />
<br />
* See [[Runtime Memory Measurement]] for different methods of measuring and analyzing system memory.<br />
<br />
* See [[Accurate Memory Measurement]] for some different techniques for dealing with inadequacies in current memory measurement systems.<br />
<br />
=== Huge/large/superpages ===<br />
<br />
*This applies to both transparent large page usage as well as the more static usage models, primarily relating to work outside of the hugetlb interface/libhugetlbfs.<br />
*Embedded systems suffer from very small TLBs, generally using PAGE_SIZE'd pages (4kB) for coverage. In most cases this places the system under very heavy pressure for any kind of userspace work, visibly degrading performance, with most applications taking anywhere from 5-40% of their CPU time servicing page faults.<br />
*Preliminary discussion on this subject as well as links to additional information is happening through the wiki here: [http://linux-mm.org/ Huge Pages]<br />
<br />
=== Page cache compression ===<br />
<br />
*This relates to using various compression algorithms for performing run-time compression and decompression of page cache pages, specifically aimed at both reducing memory pressure as well as helping performance in certain workloads.<br />
*More information can be found on the wiki here [http://linux-mm.org/CompressedCaching CompressedCaching] as well as at the [http://linuxcompressed.sourceforge.net SF Compressed Caching] home page.<br />
<br />
=== Reserving (and accessing) the top of memory on startup ===<br />
A quote from Todd's email on how to use physical memory reserved via the "mem=" kernel parameter.<br />
<br />
----<br />
<br />
Given that your memory is at a fixed address and is already <br />
reserved, the easiest way to use it is by calling mmap() on the /dev/mem <br />
device, using 0 as the start address and the physical address of <br />
the reserved memory as the offset. The protection flags should be PROT_READ| <br />
PROT_WRITE. That will return a user-space pointer to your <br />
memory, mapped by the kernel. For example:<br />
<br />
If your SDRAM base address is 0x80000000 and you have 64MB of memory, <br />
but you use the command line mem=60M to reserve 4MB at the end, then your <br />
reserved memory will start at 0x83c00000, so all you need to do is:<br />
<br />
<pre><br />
#include <fcntl.h><br />
#include <sys/mman.h><br />
<br />
int fd;<br />
char *reserved_memory;<br />
<br />
fd = open("/dev/mem", O_RDWR);  /* requires root */<br />
reserved_memory = (char *) mmap(0, 4*1024*1024, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0x83c00000);<br />
</pre><br />
----<br />
<br />
=== Enhanced Out-Of-Memory (OOM) handling ===<br />
Several technologies have been developed and suggested for improving the handling of out-of-memory conditions on Linux systems.<br />
<br />
See http://linux-mm.org/OOM_Killer for information about the OOM killer in the Linux kernel.<br />
<br />
Part of OOM avoidance is for the kernel to have an accurate measure of memory utilization.<br />
See [[Accurate Memory Measurement]] for information on technology in this area.<br />
<br />
Here are some technologies that I know about (these need to be researched and documented better):<br />
* mem_notify patches<br />
** This set of patches provided a mechanism to notify user-space when memory is getting low, allowing for application-based handling of the condition. These patches were submitted in January 2008.<br />
** This patch cannot be applied to kernels beyond 2.6.28 because the memory management reclaim sequence has changed.<br />
** See http://lwn.net/Articles/267013/<br />
* Google per-cgroup OOM handler<br />
** Google posted a Request For Comments (RFC) for OOM handling implemented in a per-cgroup fashion. See http://article.gmane.org/gmane.linux.kernel.mm/28376<br />
* Nokia OOM enhancements<br />
** Maemo application enhancements referenced at: http://lwn.net/Articles/267013/ (search for "killable" in the comments)<br />
<pre><br />
User "oak" writes (commenting on the mem_notify patches):<br />
<br />
Posted Feb 3, 2008 14:02 UTC (Sun) by oak (guest, #2786) [Link]<br />
<br />
...<br />
<br />
I thought the point of the patch is for user-space to be able to do the <br />
memory management in *manageable places* in code. As mentioned earlier, <br />
a lot of user-space code[1] doesn't handle memory allocation failures. And <br />
even if it's supposed to be, it can be hard to verify (test) that the <br />
failures are handled in *all* cases properly. If user-space can get a <br />
pre-notification of a low-memory situation, it can in suitable place in <br />
code free memory so that further allocations will succeed (with higher <br />
propability). <br />
<br />
That also allows doing somehing like what maemo does. If system gets <br />
notified about kernel low memory shortage, it kills processes which have <br />
notified it that they are in "background-killable" state (saved their UI <br />
state, able to restore it and not currently visible to user). I think it <br />
also notifies applications (currently) through D-BUS about low memory <br />
condition. Applications visible to user or otherwise non-background <br />
killable are then supposed to free their caches and/or disable features <br />
that could take a lot of additional memory. If the caches are from heap <br />
instead of memory mapped, it's less likely to help because of heap <br />
fragmentation and it requiring more work/time though.<br />
</pre><br />
* Paul Mundt submitted a patch to CELF for the 2.6.12 kernel which provided low-memory notifications to user space. See [[Accurate_Memory_Measurement#Nokia_out-of-memory_notifier_module]] for more information.<br />
** This module was based on the Linux Security Module system, which has been removed from recent kernels.<br />
<br />
== Additional Resources/Mailing Lists ==<br />
*[http://linux-mm.org LinuxMM] - links to various sub-projects, and acts as a centralized point for discussion relating to memory management topics ([mailto:majordomo@kvack.org linux-mm] mailing list and [http://marc.theaimsgroup.com/?l=linux-mm archives]).<br />
<br />
[[Category:Linux]]</div>Greywolf82https://elinux.org/index.php?title=Asynchronous_function_calls&diff=9119Asynchronous function calls2009-01-20T09:23:56Z<p>Greywolf82: Asynchronous function calls now in mainline</p>
<hr />
<div>In order to make the kernel boot faster, a set of patches was introduced by<br />
Arjan van de Ven in January 2009 <br />
to create infrastructure to allow doing some of the initialization steps<br />
asynchronously. In practice, the patches allow significant portions of the hardware<br />
initialization delays to overlap. Asynchronous function calls have been merged into mainline starting with 2.6.29. This code is still a work in progress, though, and for 2.6.29 it is only activated when the ''fastboot'' command-line parameter is given. <br />
<br />
In order to not change device order and other similar observables, the<br />
patch does NOT do full parallel initialization.<br />
<br />
Rather, it operates more in the way an out of order CPU does; the work may<br />
be done out of order and asynchronously, but the observable effects<br />
(instruction retiring for the CPU) are still done in the original sequence.<br />
<br />
== References ==<br />
See http://lkml.org/lkml/2009/1/4/155 for the first patch in the series.<br />
<br />
Work similar in spirit to this was done previously, but with smaller<br />
scope and apparently not mainlined.<br />
<br />
See [[Threaded Device Probing]]</div>Greywolf82https://elinux.org/index.php?title=Kernel_Instrumentation&diff=8921Kernel Instrumentation2009-01-08T09:00:16Z<p>Greywolf82: /* Boot Tracer */</p>
<hr />
<div>Here is a listing of some instrumentation systems for the kernel:<br />
<br />
== Existing Instrumentation Systems ==<br />
=== TimePegs ===<br />
Andrew Morton's system for measuring intervals between kernel events:<br />
<br />
See http://www.zipworld.com.au/~akpm/linux/timepeg.txt<br />
<br />
Patches at:<br />
<br />
http://www.zip.com.au/~akpm/linux/index.html#timepegs<br />
<br />
=== Printk Times ===<br />
<br />
Produces printks with extra time data on them. As of kernel 2.6.11 this is part of the mainline kernel, enabled by CONFIG_PRINTK_TIME. Previous versions can add it via a very simple patch. It works for bootup time measurements, or other places where you can just jam in a printk or two.<br />
<br />
See [[Printk Times]]<br />
<br />
=== Boot Tracer ===<br />
<br />
Starting with 2.6.28, the kernel has a new feature to help optimize boot time. It records the timings of the initcalls. Its output is meant to be parsed by the scripts/bootgraph.pl tool, which produces a graph of boot inefficiencies, giving a visual representation of the delays during initcalls. Users need to enable CONFIG_BOOT_TRACER, boot with the "initcall_debug" and "printk.time=1" parameters, and run "dmesg | perl scripts/bootgraph.pl > output.svg" to generate the final data.<br />
<br />
=== Kernel Function Instrumentation (KFI) ===<br />
A system which uses a compiler flag to instrument most of the functions in the kernel. Timing data is recorded at each function entry and exit. The data can be extracted and displayed later with a command-line program.<br />
<br />
The kernel portion of this is available in the CELF tree now.<br />
<br />
Grep for CONFIG_KFI.<br />
<br />
See the page [[Kernel Function Instrumentation]] page for some preliminary notes.<br />
<br />
FIXTHIS - need to isolate this as a patch.<br />
<br />
=== Linux Trace Toolkit ===<br />
See [http://www.opersys.com/LTT/ Linux Trace Toolkit]<br />
<br />
=== Kernel Tracer (in IKD patch) ===<br />
This is part of a general kernel tools package, maintained by Andrea Arcangeli.<br />
<br />
See http://www.kernel.org/pub/linux/kernel/people/andrea/ikd/README<br />
<br />
The ktrace implementation is in the file kernel/debug/profiler.c. It was originally written by Ingo Molnar, Richard Henderson and/or Andrea Arcangeli.<br />
<br />
It uses the compiler flag -pg to add profiling instrumentation to the kernel.<br />
<br />
=== Function trace in KDB ===<br />
In January 2002, Jim Houston sent a patch to the kernel mailing list which provides support for compiler-instrumented function calls.<br />
<br />
See http://www.ussg.iu.edu/hypermail/linux/kernel/0201.3/0888.html<br />
<br />
=== ftrace ===<br />
<br />
Ftrace is a simple function tracer which initially came from the -rt patches but was mainlined in 2.6.27. Compiler profiling features are used to insert an instrumentation call that can be overwritten with a NOP sequence to ensure overhead is minimal with tracing disabled. There are a number of tracers in the kernel that use ftrace to trace high level events such as irq enabling/disabling, preemption enabling/disabling, scheduler events and branch profiling. <br />
<br />
The interface to access ftrace can be found in /debugfs/tracing, and is documented in Documentation/ftrace.txt.<br />
<br />
=== SystemTap / Kprobes ===<br />
<br />
[http://sourceware.org/systemtap/ SystemTap] is a sophisticated kernel instrumentation tool that can be scripted with its own language to gather information about a running kernel. It uses the Kprobes infrastructure to implement its tracing.<br />
<br />
== Notes ==<br />
Some random thoughts on instrumentation:<br />
<br />
*Most instrumentation systems need lots of memory to buffer the data produced<br />
*Some instrumentation systems support filters or triggers to allow for better control over the information saved<br />
*instrumentation systems tend to introduce overhead or otherwise interfere with the thing they are measuring<br />
**instrumentation systems tend to pollute the cache lines for the processor<br />
*There doesn't seem to be a single API to support in-kernel timing instrumentation which is supported on lots of different architectures. This is the main reason for CELF's current project to define an [[Instrumentation API]]</div>Greywolf82https://elinux.org/index.php?title=Kernel_Instrumentation&diff=8920Kernel Instrumentation2009-01-08T08:58:08Z<p>Greywolf82: Add boot tracer kernel feature</p>
<hr />
<div>Here is a listing of some instrumentation systems for the kernel:<br />
<br />
== Existing Instrumentation Systems ==<br />
=== TimePegs ===<br />
Andrew Morton's system for measuring intervals between kernel events:<br />
<br />
See http://www.zipworld.com.au/~akpm/linux/timepeg.txt<br />
<br />
Patches at:<br />
<br />
http://www.zip.com.au/~akpm/linux/index.html#timepegs<br />
<br />
=== Printk Times ===<br />
<br />
Produces printks with extra time data on them. As of kernel 2.6.11 this is part of the mainline kernel, enabled by CONFIG_PRINTK_TIME. Previous versions can add it via a very simple patch. It works for bootup time measurements, or other places where you can just jam in a printk or two.<br />
<br />
See [[Printk Times]]<br />
<br />
=== Boot Tracer ===<br />
<br />
Starting with 2.6.28, the kernel has a new feature to help optimize boot time. It records the timings of the initcalls. Its output is meant to be parsed by the scripts/bootgraph.pl tool, which produces a graph of boot inefficiencies, giving a visual representation of the delays during initcalls. Users need to enable CONFIG_BOOT_TRACER, boot with the "initcall_debug" and "printk.time=1" parameters, and run "dmesg | perl scripts/bootgraph.pl > output.svg" to generate the final data.<br />
<br />
=== Kernel Function Instrumentation (KFI) ===<br />
A system which uses a compiler flag to instrument most of the functions in the kernel. Timing data is recorded at each function entry and exit. The data can be extracted and displayed later with a command-line program.<br />
<br />
The kernel portion of this is available in the CELF tree now.<br />
<br />
Grep for CONFIG_KFI.<br />
<br />
See the page [[Kernel Function Instrumentation]] page for some preliminary notes.<br />
<br />
FIXTHIS - need to isolate this as a patch.<br />
<br />
=== Linux Trace Toolkit ===<br />
See [http://www.opersys.com/LTT/ Linux Trace Toolkit]<br />
<br />
=== Kernel Tracer (in IKD patch) ===<br />
This is part of a general kernel tools package, maintained by Andrea Arcangeli.<br />
<br />
See http://www.kernel.org/pub/linux/kernel/people/andrea/ikd/README<br />
<br />
The ktrace implementation is in the file kernel/debug/profiler.c. It was originally written by Ingo Molnar, Richard Henderson and/or Andrea Arcangeli.<br />
<br />
It uses the compiler flag -pg to add profiling instrumentation to the kernel.<br />
<br />
=== Function trace in KDB ===<br />
In January 2002, Jim Houston sent a patch to the kernel mailing list which provides support for compiler-instrumented function calls.<br />
<br />
See http://www.ussg.iu.edu/hypermail/linux/kernel/0201.3/0888.html<br />
<br />
=== ftrace ===<br />
<br />
Ftrace is a simple function tracer which initially came from the -rt patches but was mainlined in 2.6.27. Compiler profiling features are used to insert an instrumentation call that can be overwritten with a NOP sequence to ensure overhead is minimal with tracing disabled. There are a number of tracers in the kernel that use ftrace to trace high level events such as irq enabling/disabling, preemption enabling/disabling, scheduler events and branch profiling. <br />
<br />
The interface to access ftrace can be found in /debugfs/tracing, and is documented in Documentation/ftrace.txt.<br />
<br />
=== SystemTap / Kprobes ===<br />
<br />
[http://sourceware.org/systemtap/ SystemTap] is a sophisticated kernel instrumentation tool that can be scripted with its own language to gather information about a running kernel. It uses the Kprobes infrastructure to implement its tracing.<br />
<br />
== Notes ==<br />
Some random thoughts on instrumentation:<br />
<br />
*Most instrumentation systems need lots of memory to buffer the data produced<br />
*Some instrumentation systems support filters or triggers to allow for better control over the information saved<br />
*instrumentation systems tend to introduce overhead or otherwise interfere with the thing they are measuring<br />
**instrumentation systems tend to pollute the cache lines for the processor<br />
*There doesn't seem to be a single API to support in-kernel timing instrumentation which is supported on lots of different architectures. This is the main reason for CELF's current project to define an [[Instrumentation API]]</div>Greywolf82https://elinux.org/index.php?title=User:Wowgoldgate&diff=8575User:Wowgoldgate2008-12-17T09:30:15Z<p>Greywolf82: removed SPAM</p>
<hr />
<div>Delete this page</div>Greywolf82https://elinux.org/index.php?title=Processors&diff=8452Processors2008-12-09T13:22:28Z<p>Greywolf82: </p>
<hr />
<div>Here is a list of different processor families, with miscellaneous notes for development information:<br />
<br />
See also [[Hardware Hacking]] for a list of systems that include these processors.<br />
<br />
== ARM ==<br />
See [http://www.arm.com ARM website] and the [http://en.wikipedia.org/wiki/ARM_architecture Wikipedia ARM article] for information about the ARM architecture and processor family.<br />
<br />
From the Linux perspective, there are 2 very different kinds of ARM chips:<br />
* ARM processors that include a memory management unit (MMU), and can run standard Linux<br />
* ARM processors without MMU. These can run a modified version of Linux called uClinux ( http://uclinux.org/ ), enabling Linux to run on MMU-less platforms or embedded processors with a memory protection unit (MPU). These include processors such as the ARM7TDMI, ARM1156T2(F)-S or ARM Cortex-R4(F). <br />
<br />
Please note that because of security considerations for MMU-less processors, it is unwise to <br />
use them when 3rd-party or untrusted code will be running on the device. For locked-down, single<br />
function devices, MMU-less processors may be appropriate. They are usually less expensive than processors<br />
with MMU.<br />
<br />
Some major ARM platforms/SOCs are:<br />
* [[DaVinci]] from [http://www.ti.com/corp/docs/landing/davinci/firstproducts.html Texas Instruments]<br />
* OMAP - by TI<br />
* i.MX - by FreeScale<br />
** Freescale's GIT repository for i.MX Linux support is at: http://opensource.freescale.com<br />
*** Info about this repository, as of April 2007 is at: http://www.spinics.net/lists/arm-kernel/msg39771.html<br />
* [http://www.arm.com/products/DevTools/Hardware_Platforms.html ARM RealView] platforms - by ARM Ltd. <br />
** Linux BSP and resources available at http://www.arm.com/linux with associated [http://www.linux-arm.org/git GIT tree]<br />
* XScale/PXA - by Marvell (formerly Intel) -- has MMU<br />
** PXA255/PXA26x - Cotulla/Dalhart<br />
** PXA27x - Bulverde<br />
** PXA3xx - Monahans family<br />
*** Linux PXA255/PXA26x/PXA27x BSPs are available in mainline kernel. You can find PXA3xx BSP from [http://www.marvell.com/ Marvell]. Marvell team is working hard to get PXA3xx patches accepted by the mainline.<br />
* Orion - by Marvell<br />
** Linux BSP for Orion-2 SoC available on [http://marc.info/?l=linux-arm-kernel&m=117869744222933&w=2 ARM Linux Mailing List].<br />
* Philips LPC21xx series of ARM processors are currently the lowest-cost ARM processors available. But they have no MMU.<br />
* [[JuiceBox]] uses a ARM S3C44B0X. It runs uClinux.<br />
* AT91 - by Atmel<br />
** [http://www.atmel.com/dyn/products/devices.asp?family_id=605#1393 AT91RM9200] - ARM920T based -- has MMU<br />
** [http://www.atmel.com/dyn/products/devices.asp?family_id=605#1739 AT91SAM9 Series] - ARM926EJ-S based -- has MMU<br />
** Linux gateway : [http://www.linux4sam.org www.linux4sam.org]<br />
* Cirrus Logic ([http://arm.cirrus.com/ Linux forum and download site])<br />
** EP73xx - ARM720T based<br />
** EP93xx - ARM920T based<br />
* Samsung System-on-Chip (System LSI group)<br />
** S3C2410 [http://www.samsung.com/global/business/semiconductor/productInfo.do?fmly_id=229&partnum=S3C2410], S3C2440 [http://www.samsung.com/global/business/semiconductor/productInfo.do?fmly_id=229&partnum=S3C2440], S3C2443 [http://www.samsung.com/global/business/semiconductor/productInfo.do?fmly_id=229&partnum=S3C2443] - ARM920T<br />
** S3C2416 [http://www.samsung.com/global/business/semiconductor/productInfo.do?fmly_id=229&partnum=S3C2416], S3C2450 [http://www.samsung.com/global/business/semiconductor/productInfo.do?fmly_id=229&partnum=S3C2450], S3C2412 [http://www.samsung.com/global/business/semiconductor/productInfo.do?fmly_id=229&partnum=S3C2412], S3C2413 [http://www.samsung.com/global/business/semiconductor/productInfo.do?fmly_id=229&partnum=S3C2413] - ARM926EJS<br />
** S3C6400 [http://www.samsung.com/global/business/semiconductor/productInfo.do?fmly_id=229&partnum=S3C6400], S3C6410 [http://www.samsung.com/global/business/semiconductor/productInfo.do?fmly_id=229&partnum=S3C6410] - ARM1176EJS<br />
<br />
== MIPS ==<br />
Information about the MIPS processor architecture can be found [http://www.mips.com here]. Information about the Linux port can be found [http://www.linux-mips.org here].<br />
<br />
Processors based on MIPS architecture include<br />
# [http://www.toshiba.com/taec/Catalog/Family.do?familyid=5 TX System RISC] from Toshiba.<br />
# [http://www.pmc-sierra.com/mips-processors MSP series] of processor from PMC Sierra.<br />
<br />
== SuperH ==<br />
<br />
[[Image:Superh_logo.gif]]<br />
<br />
Built by [http://www.renesas.com/homepage.jsp Renesas Technology] the webpage of record for the SuperH family of microprocessors can be found here: [http://www.renesas.com/fmwk.jsp?cnt=superh_family_landing.jsp&fp=/products/mpumcu/superh_family/ SuperH RISC Engine Family].<br />
<br />
Wikipedia Page: [http://en.wikipedia.org/wiki/SuperH SuperH]<br />
<br />
Linux on SuperH: [http://linux-sh.org/shwiki/FrontPage linux-sh]<br />
<br />
=== Renesas SuperH Overview ===<br />
<br />
SuperH is an embedded RISC developed for high cost-performance, miniaturization, and performance per unit of power consumption (MIPS/W). We are developing CPU cores for a wide range of applications and functions and have many products available. Our product lines include a series with the SH-2 as the CPU core and on-chip large-capacity flash memory and peripheral functions such as timer, serial I/O, and AD converter, and a series with the SH-3 or SH-4 as the CPU core, which achieves high-speed data processing and is equipped with cache and MMU. Additionally, there is a lineup of series with the SH2-DSP or SH3-DSP as the CPU core, which have full DSP functions and an emphasis on multimedia and communications processing. Currently available products also have lots of features, such as low power modes, low power consumption, and small size. Various versatile operating systems and development tools have been improved, allowing for more efficient development.<br />
<br />
=== Devices ===<br />
* Sega<br />
** [http://linux-sh.org/shwiki/Dreamcast Dreamcast] - Limited to the machine models that can start by MIL-CD and usage of a Broad Band Adapter is recommended.<br />
* Hitachi ULSI Systems<br />
** [http://linux-sh.org/shwiki/MS7206SE01 MS7206SE01] - SH72060 Solution Engine<br />
** MS7750SE01 - SH7750(sh4) Solution Engine<br />
** MS7709SE01 - SH7709(sh3) Solution Engine<br />
* SuperH, Inc.<br />
** [[MicroDev]]<br />
* HP Jornada<br />
** 525 (SH7709 (sh3))<br />
** 548 (SH7709A (sh3))<br />
** 620LX (SH7709 (sh3))<br />
** 660LX (SH7709 (sh3))<br />
** 680 (SH7709A (sh3))<br />
** 690 (SH7709A (sh3))<br />
* Renesas Technology Corp.<br />
** RTS7751R2D - CE Linux Forum(CELF)Compliant Evaluation Board<br />
* [http://www.shlinux.com Renesas Europe/MPC Data Limited]<br />
** EDOSK7705 - SH7705 sh3<br />
** EDOSK7760 - SH7760 sh4<br />
** EDOSK7751R - SH7751R sh4<br />
** SH7751R SystemH - SH7751R sh<br />
* [http://www.cqpub.co.jp/eda/CqREEK/SH4PCI.HTM CQ Publishing Co.,Ltd.]<br />
** CQ RISC Evaluation Kit(CqREEK)/SH4-PCI with Linux<br />
* [http://www.kmckk.co.jp/eng/ Kyoto Microcomputer Co., Ltd. (KMC or KμC)]<br />
** Solution Platform KZP-01 KZP-01[Mainboard] + KZ-SH4RPCI-01[SH4 CPU Board]<br />
* [http://www.si-linux.com/index.html Silicon Linux Co,. Ltd.]<br />
** CAT760 - SH7760<br />
** CAT709 - SH7709S<br />
** CAT68701 - SH7708R For A-one CATBUS[Designed for 68000 board] compliant<br />
* [http://dsn-net.net/product/list_shlinux.html Daisen Electronic Industrial Co., Ltd.]<br />
** SH2000 - SH7709A 118MHz<br />
** SH2002 - SH7709S 200MHz<br />
** SH-500 - SH7709S 118MHz<br />
** SH-1000 - SH7709S 133MHz<br />
** SH-2004 - SH7750R 240MHz<br />
* [http://www.iodata.jp/prod/storage/hdd/index_lanhdd.htm IO-DATA DEVICE, Inc.(Network Attached Storage [NAS] Series)]<br />
** LAN-iCN - NAS Adapter for IODATA HDD with "i-connect" Interface<br />
** LAN-iCN2 - NAS Adapter for IODATA HDD with "i-connect" Interface<br />
** LANDISK - SH4-266MHz[FSB133MHz] RAM64MB UDMA133 USB x2 10/100Base-T<br />
*** HDL-xxxU - LANDISK Series NAS Standard Model<br />
*** HDL-xxxUR - LANDISK with RICOH IPSiO G series print monitor for Windows support <br />
*** HDL-WxxxU - LANDISK with wide body & twin drive support for Heavy storage or RAID1<br />
*** HDL-AV250 - LANDISK with Home Network DLNA guideline support<br />
*** LANTank - LANDISK kit SuperTank(CHALLENGER) Series<br />
**** HDL-WxxxU based twin drive bulk NAS kit. LANTank have a special feature that supported network media server(cf. iTunes etc..).<br />
* [http://www.e-linux.jp/tmm_index.html TOWA MECCS CORPORATION]<br />
** TMM1000 - SH7709<br />
** TMM1100 - SH7727<br />
** TMM1200 - SH7727<br />
* [http://www.sophia-systems.co.jp/ice/eval_board/index.html Sophia Systems]<br />
** Sophia SH7709A Evaluation Board<br />
** Sophia SH7750 Evaluation Board<br />
** Sophia SH7751 Evaluation Board<br />
* [http://www.movingeye.co.jp/mi6/sh4board.html MovingEye Inc.]<br />
** A3pci7003 - Using SH7750/ART-Linux [Linux with Realtime Extension]<br />
* [http://www.apnet.co.jp/product/ms104/ms104-sh4.html AlphaProject Co., Ltd.]<br />
** MS104-SH4 - SH7750R/PC104(Embedded ISA Bus) with apLinux<br />
* [http://www.interface.co.jp/cpu/ Interface Corporation.]<br />
** MPC-SH02 - SH7750S: ATX Motherboard Style<br />
** PCI-SH02xx - SH7750S: PCI-CARD Style<br />
* [http://www.tacinc.jp/ TAC Inc.]<br />
** [http://web.kyoto-inet.or.jp/people/takagaki/T-SH7706/T-SH7706.htm T-SH7706LAN] another name "Mitsuiwa SH3 board" SH-MIN - SH7706A/128MHz Flash512KB SDRAM 8MB 10BASE-T<br />
* [http://www.securecomputing.com/ SecureComputing]/[http://www.snapgear.org/ SnapGear] (older products, check ebay etc, all can netboot and have a debug header)<br />
** [http://www.snapgear.org/ SG530] - SH7751@166MHz RAM16MB FLASH4MB 2x10/100 1xSerial<br />
** [http://www.snapgear.org/ SG550] - SH7751@166MHz RAM16MB FLASH8MB 2x10/100 1xSerial<br />
** [http://www.snapgear.org/ SG570] - SH7751R@240MHz RAM16MB FLASH8MB 3x10/100 1xSerial<br />
** [http://www.snapgear.org/ SG575] - SH7751R@240MHz RAM64MB FLASH16MB 3x10/100 1xSerial<br />
** [http://www.snapgear.org/ SG630] - SH7751@166MHz PCI NIC card RAM16MB FLASH4MB 1x10/100 1xSerial-header<br />
** [http://www.snapgear.org/ SG635] - SH7751R@240MHz PCI NIC card RAM16MB FLASH16MB 1x10/100 1xSerial-header<br />
<br />
== PowerPC ==<br />
For Linux embedded applications requiring Floating Point in a SOC the MPC5200 is hard to beat.<br />
<br />
Freescale's highly integrated, cost-effective [http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=MPC5200&fpsp=1&tab=Documentation_Tab MPC5200] is well suited for networking, media, industrial control, and automotive applications. It delivers 760 MIPS with a Floating Point Unit (FPU), hardware Memory Management Unit (MMU) for fast task switching, is packed with I/O, and operates at only one watt. The MPC5200 serves the processing-intensive network media gateway, network access storage, set-top box, audio jukebox automotive, Internet access, industrial automation, image detection/analysis, and electronic/medical instrumentation markets. With its successful foundation in the automotive/telematics market via the mobileGT™ alliance and platforms, all markets can now enjoy extended temperature, automotive qualification, and life cycles typically demanded in that industry. A solid choice of Real Time Operating Systems (RTOS) and development boards with Board Support Packages (BSPs) provides users with a complete and flexible set of solutions.<br />
<br />
Product Highlights<br />
<br />
The MPC5200 is based on a 400 MHz MPC603e PowerPC core with an integrated double precision Floating Point Unit (FPU) that is qualified at -40°C to +85°C. It incorporates a hardware-based memory management unit (MMU) for advanced memory protection schemes, fast task switching and broad RTOS support. The MPC5200 was designed for fast data throughput and processing. The integrated BestComm DMA controller offloads the main MPC603e core from I/O intensive data transfers. An integrated Double Data Rate (DDR) memory controller accelerates data access with an effective memory bus speed of 266 MHz. A high-speed PCI interface backed by the BestComm DMA controller and DDR memory support enables high-speed data transfers in and out of the MPC5200.<br />
<br />
* MPC603e series PowerPC™ processor core<br />
* 0-400 MHz operation at -40°C to +85°C temperature range<br />
* Double Precision Floating Point Unit (FPU)<br />
* Instruction and Data Memory Management Unit (MMU)<br />
* 16K Instruction and 16K Data Caches<br />
* BestComm Intelligent DMA I/O Controller<br />
* SDR and 133 MHz Double Data Rate (DDR) memory interface (266 MHz effective)<br />
* Local Plus interface for flash memory, etc.<br />
* 10/100 Ethernet MAC<br />
* Peripheral Control Interface (PCI) Version 2.2<br />
* ATA/IDE Interface<br />
* USB 1.1 Host (two, USB 2.0 compatible)<br />
* Programmable Serial Controllers (six)<br />
* Serial Peripheral Interface (SPI)<br />
* I2C (two)<br />
* I2S (up to three)<br />
* CAN 2.0 A/B (two)<br />
* J1850 BDLC-D<br />
* GPIO (up to 56)<br />
* 8 Timers<br />
* 1.5V core, 3.3V external (and 2.5V for DDR memory)<br />
* 272 Pin Plastic Pin Ball Grid Array (PBGA) Package<br />
* AEC-Q100, QS9000/TS-16949 automotive grade available<br />
* Lead (Pb) and lead-free packages<br />
<br />
The DENX Embedded Linux Development Kit (ELDK) provides a complete and powerful software development environment for embedded and real-time systems. It is available for ARM, PowerPC and MIPS processors and consists of:<br />
<br />
* Cross Development Tools (Compiler, Assembler, Linker etc.) to develop software for the target system.<br />
* Native Tools (Shell, commands and libraries) which provide a standard Linux development environment that runs on the target system.<br />
* Firmware that can be easily ported to new boards and processors.<br />
* Linux kernel including the complete source-code with all device drivers, board-support functions etc.<br />
* RTAI (Real Time Application Interface) Extension for systems requiring hard real-time responses.<br />
* SELF (Simple Embedded Linux Framework) as a foundation to build your embedded systems on.<br />
<br />
All components of the ELDK are available for free with complete source code under GPL and other Free Software Licenses. Also, detailed instructions to rebuild all the tools and packages from scratch are included.<br />
<br />
The ELDK can be downloaded for free from several mirror sites or ordered on CD-ROM for a nominal charge (99 Euro). To order the CD please contact office@denx.de<br />
<br />
Detailed information about the ELDK is available [http://www.denx.de/wiki/DULG/ELDK here]. <br />
<br />
== XScale ==<br />
CE2110 Media Processor<br />
* [http://www.intel.com/design/celect/2110/ CE2110 Media Processor]<br />
The highly integrated Intel CE 2110 Media Processor helps to simplify the design of consumer electronics products with reduced BOM cost. The integrated Intel XScale® processor core at 1GHz provides processing performance and headroom to deploy new revenue-generating applications. Hardware-based decode of widely used video codecs (MPEG-2, H.264) maximizes system-level performance by enabling the processor core to be used exclusively for applications.<br />
<br />
The Intel CE 2110 Media Processor also includes an Intel® Micro Signal Architecture (Intel® MSA) DSP core for audio codecs, a PowerVR* 2D/3D graphics accelerator, hardware accelerators for encryption and decryption, comprehensive peripheral interfaces, analog and digital input/outputs, and a transport interface for ATSC/DVB input.<br />
<br />
* The Intel CE 2110 Media Processor Development Platform is designed to reduce time-to-market for new applications.<br />
* The Intel CE 2110 Media Processor reference platform provides the foundation for rapid development of new customer designs and product demonstrations.<br />
<br />
== x86 ==<br />
<br />
* Geode from [http://www.amd.com/us-en/ConnectivitySolutions/ProductInformation/0,,50_2330,00.html AMD]<br />
:* AMD Geode GX / CS5535<br />
:* AMD Geode LX / CS5536<br />
<br />
== AVR32 ==<br />
<br />
* AP7000 from [http://www.atmel.com/products/AVR32/ap7.asp Atmel]<br />
<br />
[[Category:NeedsEditing]]<br />
[[Category:Processors| ]]</div>Greywolf82https://elinux.org/index.php?title=User:Greywolf82&diff=8447User:Greywolf822008-12-09T08:18:05Z<p>Greywolf82: /* Background */</p>
<hr />
<div>= Marco Stornelli =<br />
<br />
This is the user page of Marco Stornelli.<br />
<br />
== Background ==<br />
<br />
My first meeting with Linux was at university, during the "advanced Linux" course taught by Daniel P. Bovet (the author of Understanding the Linux Kernel). I fell in love. I have now been working on Linux for three years. I'm a researcher in the embedded systems field, working in particular on the design and implementation of embedded Linux platforms for telecommunication systems. In addition, my activity extends to the study of Linux real-time and high-availability aspects.<br />
<br />
== My recent eLinux wiki activity ==<br />
<br />
I've been working on PRAMFS for a while; I added the XIP feature to it and ran some benchmarks to test the results [[Pram_Fs]].</div>Greywolf82https://elinux.org/index.php?title=Embedded_Linux_Distributions&diff=8316Embedded Linux Distributions2008-11-28T13:08:26Z<p>Greywolf82: /* Vendor distros */</p>
<hr />
<div>Here is some information about embedded Linux distributions, and kernel configuration and build systems:<br />
<br />
== Vendor distros ==<br />
* Embedded Alley - see http://www.embeddedalley.com/<br />
* [http://www.kaeilos.com KaeilOS embedded linux]<br />
* Lineo Solutions [http://www.lineo.co.jp/eng/products-services/products/ulinux.html uLinux]<br />
* [[MontaVista]] Linux - see http://www.mvista.com/products_services.php<br />
* [[RidgeRun]] Linux - see http://www.ridgerun.com/sdk.shtml<br />
* [[TimeSys]] LinuxLink - see http://www.timesys.com/products/index.htm<br />
* [http://wiki.ubuntu.com/MobileAndEmbedded Ubuntu Mobile]<br />
* Wind River - see http://www.windriver.com/products/linux/<br />
<br />
== Other distros ==<br />
* Snapgear Embedded Linux Distribution - http://www.snapgear.org/<br />
* [[Open Wrt]] - http://openwrt.org/<br />
* Embedded Debian - http://www.emdebian.org/<br />
* Embedded Gentoo - http://www.gentoo.org/proj/en/base/embedded/index.xml<br />
<br />
=== Special purpose embedded Linux distributions ===<br />
* [http://flashlinux.org.uk/ Flash Linux] - a distribution specifically for USB keys and Live CDs<br />
* Eagle Linux - http://www.safedesksolutions.com/eaglelinux/<br />
** An embedded Linux distribution aimed at helping users learn Linux by creating bootable Linux images "virtually from scratch". Eagle Linux 2.3 is currently distributed as a concise, 26-page PDF documenting the creation of a minimalist, network-ready Linux image for bootable CDs, floppies, or flash drives. See description at: http://ct.enews.deviceforge.com/rd/cts?d=207-106-2-28-5560-8662-0-0-0-1<br />
* [http://www.uclinux.org/ uClinux] A distribution targeting at systems without Memory Management Unit<br />
<br />
== Configuration and Build systems ==<br />
* [[Open Embedded]] - System for building full embedded images from scratch<br />
* [http://buildroot.uclibc.org/ buildroot]<br />
** Buildroot is a set of Makefiles and patches that makes it easy to generate a cross-compilation toolchain and root filesystem for your target Linux system using the uClibc C library.<br />
* [http://www.pengutronix.de/software/ptxdist/index_en.html PTXdist]<br />
** Kconfig based build system developed by [http://www.pengutronix.de/index_en.html Pengutronix] <br />
** GPL licensed<br />
* [http://www.linuxfromscratch.org/ Linux From Scratch]<br />
* [[Qplus Target Builder]] - Target image builder from ETRI<br />
* LTIB - Linux Target Image Builder (by Stuart Hughes of FreeScale) - see http://savannah.nongnu.org/projects/ltib<br />
<br />
* [http://www.mvista.com/download/fetchdoc.php?docid=342 Building Embedded Userlands] - Presentation by Ned Miljevic & Klaas van Gend at the ELC 2008 which compares different configuration and build systems<br />
<br />
[[Category:Linux]]</div>Greywolf82https://elinux.org/index.php?title=High_Resolution_Timers&diff=8315High Resolution Timers2008-11-28T13:01:14Z<p>Greywolf82: /* Specifications */</p>
<hr />
<div>== Description ==<br />
The objective of the high resolution timers project is to implement the POSIX 1003.1b Section 14 (Clocks and Timers) API in Linux. This includes support for high resolution timers - that is, timers with accuracy better than 1 jiffy.<br />
<br />
When the project started, the POSIX clocks and timers APIs were not supported by Linux. Over time, the clocks and timers APIs have been adopted, and core infrastructure support for high resolution timers has been accepted into the mainline kernel (in 2.6.21). However, as of this writing, not all embedded platforms have support for high resolution timers, <br />
and even when support is present in the kernel code, it can be tricky to configure the kernel correctly.<br />
<br />
== Rationale ==<br />
Without this feature, timers in Linux are only supported at a resolution of 1 jiffy. The length of a jiffy depends on the value of HZ in the Linux kernel: it is 1 millisecond on i386 and some other platforms, and 10 milliseconds on most embedded platforms.<br />
<br />
Higher resolution timers are needed to allow the system to wake up and process data at more accurate intervals.<br />
<br />
== Resources ==<br />
=== Projects ===<br />
==== hrtimers - Thomas Gleixner's patch ====<br />
One project to support high resolution timers is Thomas Gleixner's hrtimers.<br />
<br />
Thomas gave a presentation at the Ottawa Linux Symposium, July 2006, presenting the current status of hrtimers. The presentation is here:<br />
[http://www.tglx.de/projects/hrtimers/ols2006-hrtimers.pdf OLS hrtimers]<br />
<br />
As of July 2006, "generic clock sources" was accepted into Linus' mainline kernel tree (2.6.18-rc??). This means it should appear in the mainline 2.6.18 kernel version, when that is available. hrtimers should soon follow, likely appearing in 2.6.19.<br />
<br />
In February of 2006, James Perkins of WindRiver wrote:<br />
----<br />
ktimers has been obsoleted by hrtimers, and the core of hrtimers was<br />
merged and is present in Linus' 2.6.16-rc2. hrtimers is used as the base<br />
for itimers, nanosleep, and posix-timers. hrtimers are well-described by<br />
Jonathan Corbet at http://lwn.net/Articles/167897/<br />
<br />
Since only the core of hrtimers is in 2.6.16-rc2, the hrtimers generally<br />
use the system timer as their tick source and run at HZ. John Stultz'<br />
generalized time source code has not yet been merged. Thomas Gleixner is<br />
maintaining his git tree and has graciously published patches at<br />
http://www.tglx.de/projects/hrtimers/ that include generalized<br />
clocksource, new timeofday patches, and get you the real "high<br />
resolution" timers for a subset of architectures.<br />
<br />
High-res timers work is experimental and shifting and has been focusing<br />
on getting x86 working first, if this is adequate for you and you can<br />
use 2.6.16 kernels it's recommended, and let us all know of any problems<br />
or improvements. In contrast, the previous implementation that George<br />
Anzinger lead provides a fairly comprehensive set of functionality, back<br />
in the 2.6.8-2.6.10 era, but it isn't an active project at this time.<br />
----<br />
''Note that the current HRT maintainers objected to this characterization.''<br />
<br />
==== HRT - George Anzinger's patch ====<br />
Prior to hrtimers, the main patch which provided high resolution timers was<br />
George Anzinger's patch. The official HRT site for this patch is at:<br />
* [http://sourceforge.net/projects/high-res-timers/ high-res-timers]<br />
<br />
<br />
== Downloads ==<br />
=== Patch ===<br />
* See [[Patch Archive]]<br />
* Tom Rini has posted some patches for earlier 2.6 kernels at:<br />
** [http://source.mvista.com/~trini/hrt/hrt_07Dec2004_001_2.6.10-rc3.patch trini patches]<br />
<br />
== Utility programs ==<br />
<br />
== How To Use ==<br />
In order to use high resolution timers, you need to verify that the kernel has support for this feature for your<br />
target processor (and board). Also, you need to configure support for it in the Linux kernel.<br />
<br />
Set CONFIG_HIGH_RES_TIMERS=y in your kernel config.<br />
<br />
Compile your kernel and install it on your target board.<br />
<br />
To use the Posix Timers API, see this online resource [http://www.opengroup.org/onlinepubs/009695399/basedefs/time.h.html]<br />
<br />
== How to detect if your timer system supports high resolution ==<br />
Here are several ways you can identify if your system supports high resolution timers.<br />
<br />
* Examine kernel startup messages<br />
Watch the kernel boot messages, or use <tt>dmesg</tt>. If the kernel successfully turns<br />
on the high resolution timer feature, it will print the message<br />
"Switched to high resolution mode on CPU0" (or something similar) during <br />
startup.<br />
<br />
* Examine /proc/timer_list<br />
You can also examine the timer_list, and see whether specific clocks<br />
are listed as supporting high resolution. Here is a dump of /proc/timer_list<br />
on an [[OSK]] (ARM-based) development board, showing the clocks configured<br />
for high resolution.<br />
<br />
** cat /proc/timer_list<br />
<pre>Timer List Version: v0.3<br />
HRTIMER_MAX_CLOCK_BASES: 2<br />
now at 294115539550 nsecs<br />
<br />
cpu: 0<br />
clock 0:<br />
.index: 0<br />
.resolution: 1 nsecs<br />
.get_time: ktime_get_real<br />
.offset: 0 nsecs<br />
active timers:<br />
clock 1:<br />
.index: 1<br />
.resolution: 1 nsecs<br />
.get_time: ktime_get<br />
.offset: 0 nsecs<br />
active timers:<br />
#0: <c1e39e38>, tick_sched_timer, S:01, tick_nohz_restart_sched_tick, swapper/0<br />
# expires at 294117187500 nsecs [in 1647950 nsecs]<br />
#1: <c1e39e38>, it_real_fn, S:01, do_setitimer, syslogd/796<br />
# expires at 1207087219238 nsecs [in 912971679688 nsecs]<br />
.expires_next : 294117187500 nsecs<br />
.hres_active : 1<br />
.nr_events : 1635<br />
.nohz_mode : 2<br />
.idle_tick : 294078125000 nsecs<br />
.tick_stopped : 0<br />
.idle_jiffies : 4294966537<br />
.idle_calls : 2798<br />
.idle_sleeps : 1031<br />
.idle_entrytime : 294105407714 nsecs<br />
.idle_sleeptime : 286135498094 nsecs<br />
.last_jiffies : 4294966541<br />
.next_jiffies : 4294966555<br />
.idle_expires : 294179687500 nsecs<br />
jiffies: 4294966542<br />
<br />
<br />
Tick Device: mode: 1<br />
Clock Event Device: 32k-timer<br />
max_delta_ns: 2147483647<br />
min_delta_ns: 30517<br />
mult: 140737<br />
shift: 32<br />
mode: 3<br />
next_event: 294117187500 nsecs<br />
set_next_event: omap_32k_timer_set_next_event<br />
set_mode: omap_32k_timer_set_mode<br />
event_handler: hrtimer_interrupt<br />
</pre><br />
<br />
Here are some things to check:<br />
<br />
1. Check the resolution reported for your clocks. If your clock supports high resolution, it will have a .resolution value of 1 nsecs. If it does not, then it will have a .resolution value that equals the number of nanoseconds in a jiffy (usually 10000000 nsecs, i.e. 10 msecs, on embedded platforms).<br />
<br />
2. Check the event_handler for the Tick Device. If the event handler is 'hrtimer_interrupt', then the clock is set up for high resolution handling. If the event handler is 'tick_handle_periodic', then the device is set up for regular tick-based handling.<br />
<br />
3. Check the list of timers, and see if the attribute .hres_active has a value of 1. If so, then the high resolution timer feature is active.<br />
<br />
* Run a test program<br />
You can run a small test program and measure whether the timers return in<br />
less than the period of a jiffy. If they do, this is the most definitive proof that your kernel<br />
supports high resolution timers.<br />
One example program you can try is [http://rt.wiki.kernel.org/index.php/Cyclictest cyclictest].<br />
Here is a sample command line which will test timers using nanosleep:<br />
** cyclictest -n -p 80 -i 500 -l 5000<br />
This does a test of clock_nanosleep, with priority 80, at 500 microsecond intervals, running<br />
5000 iterations of the test.<br />
<br />
== How to validate ==<br />
See above with regard to cyclictest<br />
<br />
== Sample Results ==<br />
[Examples of use with measurement of the effects.]<br />
<br />
== Case Study 1 ==<br />
== Case Study 2 ==<br />
<br />
== Status ==<br />
<br />
*Status: implemented<br />
*Architecture Support:<br />
:(for each arch, one of: unknown, patches apply, compiles, runs, works, accepted)<br />
** i386: works<br />
** ARM: unknown<br />
** PPC: works<br />
** MIPS: unknown<br />
** SH: unknown<br />
<br />
== Future Work/Action Items ==<br />
<br />
Here is a list of things that could be worked on for this feature:<br />
*Documentation<br />
*Testing<br />
<br />
== Old information (for 2.4 kernel) ==<br />
The High Resolution Timers system allows a user space program to be woken up from a timer event with better accuracy, when using the POSIX timer APIs. Without this system, the best accuracy that can be obtained for timer events is 1 jiffy. This depends on the setting of HZ in the kernel. In the 2.4 kernel, HZ was set to 100, which means that the best accuracy you could <br />
get on a timer wakeup in user space was 10 milliseconds.<br />
<br />
Put differently, if you asked for a timer event in 500 microseconds, you would wake up in 10 milliseconds (at least).<br />
<br />
To support this feature on a particular board, you have to add a kernel driver that uses a timer on the system and supports the interface documented in:<br />
<code><br />
include/linux/hrtime.h (in the CELF tree)<br />
</code><br />
Additional documentation about this feature is available in<br />
<code><br />
Documentation/high-res-timers/<br />
</code><br />
<br />
Patches for high-res timers were first presented at the time of kernel version 2.5.47,<br />
in November, 2002. See [http://lwn.net/Articles/14538/ early patches]</div>Greywolf82https://elinux.org/index.php?title=High_Resolution_Timers&diff=8314High Resolution Timers2008-11-28T12:55:53Z<p>Greywolf82: /* How To Use */</p>
<hr />
<div>== Description ==<br />
The objective of the high resolution timers project is to implement the POSIX 1003.1b Section 14 (Clocks and Timers) API in Linux. This includes support for high resolution timers - that is, timers with accuracy better than 1 jiffy.<br />
<br />
When the project started, the POSIX clocks and timers APIs were not supported by Linux. Over time, the clocks and timers APIs have been adopted, and core infrastructure support for high resolution timers has been accepted into the mainline kernel (in 2.6.21). However, as of this writing, not all embedded platforms have support for high resolution timers, <br />
and even when support is present in the kernel code, it can be tricky to configure the kernel correctly.<br />
<br />
== Rationale ==<br />
Without this feature, timers in Linux are only supported at a resolution of 1 jiffy. The length of a jiffy depends on the value of HZ in the Linux kernel: it is 1 millisecond on i386 and some other platforms, and 10 milliseconds on most embedded platforms.<br />
<br />
Higher resolution timers are needed to allow the system to wake up and process data at more accurate intervals.<br />
<br />
== Resources ==<br />
=== Projects ===<br />
==== hrtimers - Thomas Gleixner's patch ====<br />
One project to support high resolution timers is Thomas Gleixner's hrtimers.<br />
<br />
Thomas gave a presentation at the Ottawa Linux Symposium, July 2006, presenting the current status of hrtimers. The presentation is here:<br />
[http://www.tglx.de/projects/hrtimers/ols2006-hrtimers.pdf OLS hrtimers]<br />
<br />
As of July 2006, "generic clock sources" was accepted into Linus' mainline kernel tree (2.6.18-rc??). This means it should appear in the mainline 2.6.18 kernel version, when that is available. hrtimers should soon follow, likely appearing in 2.6.19.<br />
<br />
In February of 2006, James Perkins of WindRiver wrote:<br />
----<br />
ktimers has been obsoleted by hrtimers, and the core of hrtimers was<br />
merged and is present in Linus' 2.6.16-rc2. hrtimers is used as the base<br />
for itimers, nanosleep, and posix-timers. hrtimers are well-described by<br />
Jonathan Corbet at http://lwn.net/Articles/167897/<br />
<br />
Since only the core of hrtimers is in 2.6.16-rc2, the hrtimers generally<br />
use the system timer as their tick source and run at HZ. John Stultz'<br />
generalized time source code has not yet been merged. Thomas Gleixner is<br />
maintaining his git tree and has graciously published patches at<br />
http://www.tglx.de/projects/hrtimers/ that include generalized<br />
clocksource, new timeofday patches, and get you the real "high<br />
resolution" timers for a subset of architectures.<br />
<br />
High-res timers work is experimental and shifting and has been focusing<br />
on getting x86 working first, if this is adequate for you and you can<br />
use 2.6.16 kernels it's recommended, and let us all know of any problems<br />
or improvements. In contrast, the previous implementation that George<br />
Anzinger lead provides a fairly comprehensive set of functionality, back<br />
in the 2.6.8-2.6.10 era, but it isn't an active project at this time.<br />
----<br />
''Note that the current HRT maintainers objected to this characterization.''<br />
<br />
==== HRT - George Anzinger's patch ====<br />
Prior to hrtimers, the main patch which provided high resolution timers was<br />
George Anzinger's patch. The official HRT site for this patch is at:<br />
* [http://sourceforge.net/projects/high-res-timers/ high-res-timers]<br />
<br />
== Specifications ==<br />
*[http://tree.celinuxforum.org/pubwiki/moin.cgi/RtwgPTSpec_5fR2 Rtwg PT Spec_R2] - CELF 1.0 Specification, Section on Posix Timers<br />
* [http://www.uccs.edu/~compsvcs/doc-cdrom/DOCS/HTML/APS33DTE/DOCU_007.HTM POSIX 1003.1b Section 14 (Clocks and Timers) API] - this link is obsolete, but I couldn't find a replacement source<br />
<br />
== Downloads ==<br />
=== Patch ===<br />
* See [[Patch Archive]]<br />
* Tom Rini has posted some patches for earlier 2.6 kernels at:<br />
** [http://source.mvista.com/~trini/hrt/hrt_07Dec2004_001_2.6.10-rc3.patch trini patches]<br />
<br />
== Utility programs ==<br />
<br />
== How To Use ==<br />
In order to use high resolution timers, you need to verify that the kernel has support for this feature for your<br />
target processor (and board). Also, you need to configure support for it in the Linux kernel.<br />
<br />
Set CONFIG_HIGH_RES_TIMERS=y in your kernel config.<br />
<br />
Compile your kernel and install it on your target board.<br />
<br />
To use the Posix Timers API, see this online resource [http://www.opengroup.org/onlinepubs/009695399/basedefs/time.h.html]<br />
<br />
== How to detect if your timer system supports high resolution ==<br />
Here are several ways you can identify if your system supports high resolution timers.<br />
<br />
* Examine kernel startup messages<br />
Watch the kernel boot messages, or use <tt>dmesg</tt>. If the kernel successfully turns<br />
on the high resolution timer feature, it will print the message<br />
"Switched to high resolution mode on CPU0" (or something similar) during <br />
startup.<br />
<br />
* Examine /proc/timer_list<br />
You can also examine the timer_list, and see whether specific clocks<br />
are listed as supporting high resolution. Here is a dump of /proc/timer_list<br />
on an [[OSK]] (ARM-based) development board, showing the clocks configured<br />
for high resolution.<br />
<br />
** cat /proc/timer_list<br />
<pre>Timer List Version: v0.3<br />
HRTIMER_MAX_CLOCK_BASES: 2<br />
now at 294115539550 nsecs<br />
<br />
cpu: 0<br />
clock 0:<br />
.index: 0<br />
.resolution: 1 nsecs<br />
.get_time: ktime_get_real<br />
.offset: 0 nsecs<br />
active timers:<br />
clock 1:<br />
.index: 1<br />
.resolution: 1 nsecs<br />
.get_time: ktime_get<br />
.offset: 0 nsecs<br />
active timers:<br />
#0: <c1e39e38>, tick_sched_timer, S:01, tick_nohz_restart_sched_tick, swapper/0<br />
# expires at 294117187500 nsecs [in 1647950 nsecs]<br />
#1: <c1e39e38>, it_real_fn, S:01, do_setitimer, syslogd/796<br />
# expires at 1207087219238 nsecs [in 912971679688 nsecs]<br />
.expires_next : 294117187500 nsecs<br />
.hres_active : 1<br />
.nr_events : 1635<br />
.nohz_mode : 2<br />
.idle_tick : 294078125000 nsecs<br />
.tick_stopped : 0<br />
.idle_jiffies : 4294966537<br />
.idle_calls : 2798<br />
.idle_sleeps : 1031<br />
.idle_entrytime : 294105407714 nsecs<br />
.idle_sleeptime : 286135498094 nsecs<br />
.last_jiffies : 4294966541<br />
.next_jiffies : 4294966555<br />
.idle_expires : 294179687500 nsecs<br />
jiffies: 4294966542<br />
<br />
<br />
Tick Device: mode: 1<br />
Clock Event Device: 32k-timer<br />
max_delta_ns: 2147483647<br />
min_delta_ns: 30517<br />
mult: 140737<br />
shift: 32<br />
mode: 3<br />
next_event: 294117187500 nsecs<br />
set_next_event: omap_32k_timer_set_next_event<br />
set_mode: omap_32k_timer_set_mode<br />
event_handler: hrtimer_interrupt<br />
</pre><br />
<br />
Here are some things to check:<br />
<br />
1. Check the resolution reported for your clocks. If your clock supports high resolution, it will have a .resolution value of 1 nsecs. If it does not, then it will have a .resolution value that equals the number of nanoseconds in a jiffy (usually 10000000 nsecs, i.e. 10 msecs, on embedded platforms).<br />
<br />
2. Check the event_handler for the Tick Device. If the event handler is 'hrtimer_interrupt', then the clock is set up for high resolution handling. If the event handler is 'tick_handle_periodic', then the device is set up for regular tick-based handling.<br />
<br />
3. Check the list of timers, and see if the attribute .hres_active has a value of 1. If so, then the high resolution timer feature is active.<br />
<br />
* Run a test program<br />
You can run a small test program and measure whether the timers return in<br />
less than the period of a jiffy. If they do, this is the most definitive proof that your kernel<br />
supports high resolution timers.<br />
One example program you can try is [http://rt.wiki.kernel.org/index.php/Cyclictest cyclictest].<br />
Here is a sample command line which will test timers using nanosleep:<br />
** cyclictest -n -p 80 -i 500 -l 5000<br />
This does a test of clock_nanosleep, with priority 80, at 500 microsecond intervals, running<br />
5000 iterations of the test.<br />
<br />
== How to validate ==<br />
See above with regard to cyclictest<br />
<br />
== Sample Results ==<br />
[Examples of use with measurement of the effects.]<br />
<br />
== Case Study 1 ==<br />
== Case Study 2 ==<br />
<br />
== Status ==<br />
<br />
*Status: implemented<br />
*Architecture Support:<br />
:(for each arch, one of: unknown, patches apply, compiles, runs, works, accepted)<br />
** i386: works<br />
** ARM: unknown<br />
** PPC: works<br />
** MIPS: unknown<br />
** SH: unknown<br />
<br />
== Future Work/Action Items ==<br />
<br />
Here is a list of things that could be worked on for this feature:<br />
*Documentation<br />
*Testing<br />
<br />
== Old information (for 2.4 kernel) ==<br />
The High Resolution Timers system allows a user space program to be woken up from a timer event with better accuracy, when using the POSIX timer APIs. Without this system, the best accuracy that can be obtained for timer events is 1 jiffy. This depends on the setting of HZ in the kernel. In the 2.4 kernel, HZ was set to 100, which means that the best accuracy you could <br />
get on a timer wakeup in user space was 10 milliseconds.<br />
<br />
Put differently, if you asked for a timer event in 500 microseconds, you would wake up in 10 milliseconds (at least).<br />
<br />
To support this feature on a particular board, you have to add a kernel driver that uses a timer on the system and supports the interface documented in:<br />
<code><br />
include/linux/hrtime.h (in the CELF tree)<br />
</code><br />
Additional documentation about this feature is available in<br />
<code><br />
Documentation/high-res-timers/<br />
</code><br />
<br />
Patches for high-res timers were first presented at the time of kernel version 2.5.47,<br />
in November, 2002. See [http://lwn.net/Articles/14538/ early patches]</div>Greywolf82https://elinux.org/index.php?title=User:Greywolf82&diff=8284User:Greywolf822008-11-26T09:42:36Z<p>Greywolf82: /* Background */</p>
<hr />
<div>= Marco Stornelli =<br />
<br />
This is the user page of Marco Stornelli.<br />
<br />
== Background ==<br />
<br />
My first meeting with Linux was at university, during the "advanced Linux" course taught by Daniel P. Bovet (the author of Understanding the Linux Kernel). I fell in love. I have now been working on Linux for three years. I'm a researcher in the embedded systems field, working in particular on the design and implementation of embedded Linux platforms for telecommunication systems. In addition, my activity extends to the study of Linux real-time and high-availability aspects.<br />
<br />
== My recent eLinux wiki activity ==<br />
<br />
I've been working on PRAMFS for a while; I added the XIP feature to it and ran some benchmarks to test the results [[Pram_Fs]].</div>Greywolf82https://elinux.org/index.php?title=User:Greywolf82&diff=8283User:Greywolf822008-11-26T09:41:47Z<p>Greywolf82: /* Background */</p>
<hr />
<div>= Marco Stornelli =<br />
<br />
This is the user page of Marco Stornelli.<br />
<br />
== Background ==<br />
<br />
My first meeting with Linux was at university, during the "advanced Linux" course taught by Daniel P. Bovet (the author of Understanding the Linux Kernel). I fell in love. I have now been working on Linux for three years. I'm a researcher in the embedded systems field, working in particular on the design and implementation of embedded Linux platforms for telecommunication systems. In addition, my activity extends to the use and study of Linux real-time and high-availability aspects.<br />
<br />
== My recent eLinux wiki activity ==<br />
<br />
I've been working on PRAMFS for a while; I added the XIP feature to it and ran some benchmarks to test the results [[Pram_Fs]].</div>Greywolf82https://elinux.org/index.php?title=User:Greywolf82&diff=8261User:Greywolf822008-11-25T15:50:48Z<p>Greywolf82: </p>
<hr />
<div>= Marco Stornelli =<br />
<br />
This is the user page of Marco Stornelli.<br />
<br />
== Background ==<br />
<br />
I've been working on Linux for three years. I'm a researcher in the embedded systems field, working in particular on the design and implementation of embedded Linux platforms for telecommunication systems. In addition, my activity extends to the use and study of Linux real-time and high-availability aspects.<br />
<br />
== My recent eLinux wiki activity ==<br />
<br />
I've been working on PRAMFS for a while; I added the XIP feature to it and ran some benchmarks to test the results [[Pram_Fs]].</div>Greywolf82https://elinux.org/index.php?title=Pram_Fs&diff=8210Pram Fs2008-11-24T08:59:34Z<p>Greywolf82: /* Sample Results */</p>
<hr />
<div>== Introduction ==<br />
This page describes the Protected RAM File System (PRAM FS) feature.<br />
<br />
PRAM FS is a file system that enhances the security of system data in the<br />
presence of kernel bugs or rogue programs.<br />
<br />
The protected RAM file system will ordinarily remain consistent even if kernel data pointers<br />
are corrupted, or if the kernel starts executing unexpectedly in the wrong location.<br />
This is accomplished by making the RAM pages used by PRAM FS non-writable except during<br />
the actual file operations themselves.<br />
<br />
=== Rationale ===<br />
A single bug in the Linux kernel may cause catastrophic damage to a system.<br />
If a product holds irreproducible security keys, financial data, or account<br />
information, then loss of such data could render the product unusable, or worse.<br />
The customer could suffer financial or legal harm (from account theft or<br />
identity theft).<br />
<br />
It is not possible to guarantee with certainty that there are no bugs in the<br />
Linux kernel. However, it is possible to decrease the probability that a bug<br />
in the kernel will cause damage to a particular area of memory or storage. This<br />
protected area can then be used, with greater confidence, to hold sensitive user<br />
or product data.<br />
<br />
== References ==<br />
The home page for the PRAMFS project is at:<br />
http://pramfs.sourceforge.net/<br />
<br />
That site contains a LOT of detailed technical information and more explanation of<br />
the rationale for this feature.<br />
<br />
== Downloads ==<br />
<br />
=== Patch ===<br />
- [Patch for CELF version XXXXXX is *here*]<br />
- [Patch for 2.4.xx is *here*]<br />
- Patch for 2.6.7 is pending... (see [http://tree.celinuxforum.org/pipermail/celinux-dev/2004-September/000197.html celinux-dev archive message 197] for a recent submission to forum)<br />
<br />
=== Utility programs ===<br />
Pram fs can be created and populated using normal Linux filesystem utilities.<br />
<br />
== How To Use ==<br />
See the file <code>Documentation/filesystems/pramfs.txt</code> for instructions on its use (once the patch is applied).<br />
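As a quick sketch, a first mount might look like the following. The physical address and size here are hypothetical and board-specific, and the option names are those described in <code>Documentation/filesystems/pramfs.txt</code>, so check the copy shipped with your patch.<br />

```shell
# Hypothetical values: adjust physaddr/init to a RAM region reserved
# on your board (e.g. one excluded with mem= on the kernel command line).
mkdir -p /mnt/pramfs

# First mount: init= gives the size of the filesystem area to format.
mount -t pramfs -o physaddr=0x20000000,init=0x200000 none /mnt/pramfs

# Later mounts: omit init= so the existing contents are preserved.
# mount -t pramfs -o physaddr=0x20000000 none /mnt/pramfs
```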
<br />
== Status ==<br />
Pramfs was submitted for consideration for inclusion in the 2.6.4 kernel, in March 2004.<br />
There was a thread of discussion [http://groups.google.com/groups?hl=en&lr=&threadm=1vJLx-4GI-57%40gated-at.bofh.it&rnum=1&prev=/groups%3Fhl%3Den%26lr%3D%26selm%3D1vJLx-4GI-57%2540gated-at.bofh.it here]<br />
<br />
A few easily answered concerns were raised, but the patch was not accepted into the mainline kernel.<br />
<br />
I talked to Andrew Morton about this in April, 2004, and he said the threshold is high for getting a new filesystem into the<br />
mainline kernel, because each filesystem adds incremental, ongoing, source maintenance overhead.<br />
<br />
== Sample Results ==<br />
Here are some benchmark results obtained with bonnie++. The board used was an Atmel NGW100 (AVR32 architecture) with an AP7000 processor and 32 MB of SDRAM.<br />
<br />
*(2.1 KB) [[Media:benchmark_bonnie--_pramfs_noxip.txt|Without XIP]]<br />
*(2.1 KB) [[Media:benchmark_bonnie--_pramfs_xip.txt|With XIP]]<br />
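Results like these could be reproduced with an invocation along the following lines. The path and sizes are illustrative assumptions, not the exact command used for the files above.<br />

```shell
# -d: directory on the filesystem under test
# -s: size in MB of the large-file tests (should exceed RAM, or use -r
#     to tell bonnie++ the real RAM size so it scales the test)
# -r: RAM size in MB (32 MB on the NGW100)
# -u: user to run as when started as root
bonnie++ -d /mnt/pramfs -s 64 -r 32 -u root > bonnie_pramfs.txt
```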
<br />
== Future Work ==<br />
Here is a list of things that could be worked on for this feature:<br />
-<br />
<br />
[[Category:File Systems| ]]</div>Greywolf82https://elinux.org/index.php?title=Pram_Fs&diff=8209Pram Fs2008-11-24T08:58:41Z<p>Greywolf82: /* Sample Results */</p>
<hr />
<div>== Introduction ==<br />
This page describes the Protected RAM File System (PRAM FS) feature.<br />
<br />
PRAM FS is a file system that enhances the security of system data in the<br />
presence of kernel bugs or rogue programs.<br />
<br />
The protected RAM file system will ordinarily remain consistent even if kernel data pointers<br />
are corrupted, or if the kernel starts executing unexpectedly in the wrong location.<br />
This is accomplished by making the RAM pages used by PRAM FS non-writable except during<br />
the actual file operations themselves.<br />
<br />
=== Rationale ===<br />
A single bug in the Linux kernel may cause catastrophic damage to a system.<br />
If a product holds irreproducible security keys, financial data, or account<br />
information, then loss of such data could render the product unusable, or worse.<br />
The customer could suffer financial or legal harm (from account theft or<br />
identity theft).<br />
<br />
It is not possible to guarantee with certainty that there are no bugs in the<br />
Linux kernel. However, it is possible to decrease the probability that a bug<br />
in the kernel will cause damage to a particular area of memory or storage. This<br />
protected area can then be used, with greater confidence, to hold sensitive user<br />
or product data.<br />
<br />
== References ==<br />
The home page for the PRAMFS project is at:<br />
http://pramfs.sourceforge.net/<br />
<br />
That site contains a LOT of detailed technical information and more explanation of<br />
the rationale for this feature.<br />
<br />
== Downloads ==<br />
<br />
=== Patch ===<br />
- [Patch for CELF version XXXXXX is *here*]<br />
- [Patch for 2.4.xx is *here*]<br />
- Patch for 2.6.7 is pending... (see [http://tree.celinuxforum.org/pipermail/celinux-dev/2004-September/000197.html celinux-dev archive message 197] for a recent submission to forum)<br />
<br />
=== Utility programs ===<br />
Pram fs can be created and populated using normal Linux filesystem utilities.<br />
<br />
== How To Use ==<br />
See the file <code>Documentation/filesystems/pramfs.txt</code> for instructions on its use (once the patch is applied).<br />
<br />
== Status ==<br />
Pramfs was submitted for consideration for inclusion in the 2.6.4 kernel, in March 2004.<br />
There was a thread of discussion [http://groups.google.com/groups?hl=en&lr=&threadm=1vJLx-4GI-57%40gated-at.bofh.it&rnum=1&prev=/groups%3Fhl%3Den%26lr%3D%26selm%3D1vJLx-4GI-57%2540gated-at.bofh.it here]<br />
<br />
A few easily answered concerns were raised, but the patch was not accepted into the mainline kernel.<br />
<br />
I talked to Andrew Morton about this in April, 2004, and he said the threshold is high for getting a new filesystem into the<br />
mainline kernel, because each filesystem adds incremental, ongoing, source maintenance overhead.<br />
<br />
== Sample Results ==<br />
Here are some benchmark results obtained with bonnie++. The board used was an Atmel NGW100 (AVR32 architecture) with an AP7000 processor.<br />
<br />
*(2.1 KB) [[Media:benchmark_bonnie--_pramfs_noxip.txt|Without XIP]]<br />
*(2.1 KB) [[Media:benchmark_bonnie--_pramfs_xip.txt|With XIP]]<br />
<br />
== Future Work ==<br />
Here is a list of things that could be worked on for this feature:<br />
-<br />
<br />
[[Category:File Systems| ]]</div>Greywolf82https://elinux.org/index.php?title=Pram_Fs&diff=8208Pram Fs2008-11-24T08:50:48Z<p>Greywolf82: /* Sample Results */</p>
<hr />
<div>== Introduction ==<br />
This page describes the Protected RAM File System (PRAM FS) feature.<br />
<br />
PRAM FS is a file system that enhances the security of system data in the<br />
presence of kernel bugs or rogue programs.<br />
<br />
The protected RAM file system will ordinarily remain consistent even if kernel data pointers<br />
are corrupted, or if the kernel starts executing unexpectedly in the wrong location.<br />
This is accomplished by making the RAM pages used by PRAM FS non-writable except during<br />
the actual file operations themselves.<br />
<br />
=== Rationale ===<br />
A single bug in the Linux kernel may cause catastrophic damage to a system.<br />
If a product holds irreproducible security keys, financial data, or account<br />
information, then loss of such data could render the product unusable, or worse.<br />
The customer could suffer financial or legal harm (from account theft or<br />
identity theft).<br />
<br />
It is not possible to guarantee with certainty that there are no bugs in the<br />
Linux kernel. However, it is possible to decrease the probability that a bug<br />
in the kernel will cause damage to a particular area of memory or storage. This<br />
protected area can then be used, with greater confidence, to hold sensitive user<br />
or product data.<br />
<br />
== References ==<br />
The home page for the PRAMFS project is at:<br />
http://pramfs.sourceforge.net/<br />
<br />
That site contains a LOT of detailed technical information and more explanation of<br />
the rationale for this feature.<br />
<br />
== Downloads ==<br />
<br />
=== Patch ===<br />
- [Patch for CELF version XXXXXX is *here*]<br />
- [Patch for 2.4.xx is *here*]<br />
- Patch for 2.6.7 is pending... (see [http://tree.celinuxforum.org/pipermail/celinux-dev/2004-September/000197.html celinux-dev archive message 197] for a recent submission to forum)<br />
<br />
=== Utility programs ===<br />
Pram fs can be created and populated using normal Linux filesystem utilities.<br />
<br />
== How To Use ==<br />
See the file <code>Documentation/filesystems/pramfs.txt</code> for instructions on its use (once the patch is applied).<br />
<br />
== Status ==<br />
Pramfs was submitted for consideration for inclusion in the 2.6.4 kernel, in March 2004.<br />
There was a thread of discussion [http://groups.google.com/groups?hl=en&lr=&threadm=1vJLx-4GI-57%40gated-at.bofh.it&rnum=1&prev=/groups%3Fhl%3Den%26lr%3D%26selm%3D1vJLx-4GI-57%2540gated-at.bofh.it here]<br />
<br />
A few easily answered concerns were raised, but the patch was not accepted into the mainline kernel.<br />
<br />
I talked to Andrew Morton about this in April, 2004, and he said the threshold is high for getting a new filesystem into the<br />
mainline kernel, because each filesystem adds incremental, ongoing, source maintenance overhead.<br />
<br />
== Sample Results ==<br />
Here are some benchmark results obtained with bonnie++. The board used was an Atmel NGW100 (AVR32 architecture) with an AP7000 processor.<br />
<br />
*(2.1 KB) [[Media:benchmark_bonnie++_pramfs_noxip.txt|Without XIP]]<br />
*(2.1 KB) [[Media:benchmark_bonnie++_pramfs_xip.txt|With XIP]]<br />
<br />
== Future Work ==<br />
Here is a list of things that could be worked on for this feature:<br />
-<br />
<br />
[[Category:File Systems| ]]</div>Greywolf82https://elinux.org/index.php?title=File:Benchmark_bonnie--_pramfs_xip.txt&diff=8207File:Benchmark bonnie-- pramfs xip.txt2008-11-24T08:45:09Z<p>Greywolf82: </p>
<hr />
<div></div>Greywolf82https://elinux.org/index.php?title=File:Benchmark_bonnie--_pramfs_noxip.txt&diff=8206File:Benchmark bonnie-- pramfs noxip.txt2008-11-24T08:44:34Z<p>Greywolf82: </p>
<hr />
<div></div>Greywolf82https://elinux.org/index.php?title=File_Systems&diff=7331File Systems2008-10-15T12:07:50Z<p>Greywolf82: /* PRAMFS */</p>
<hr />
<div>This page has information about file systems which are of interest for embedded projects.<br />
<br />
== Introduction ==<br />
Most embedded devices use [http://en.wikipedia.org/wiki/Flash_memory flash memory] as storage media.<br />
Also, size and bootup time are very important in many consumer electronics products. Therefore, <br />
special file systems are often used with different features, such as enhanced compression, or<br />
the ability to execute files directly from flash.<br />
<br />
=== MTD ===<br />
Note that flash memory may be managed by the Memory Technology Devices (MTD) system of Linux. See the [http://www.linux-mtd.infradead.org/faq/general.html MTD/Flash FAQ] for more information. Most of the <br />
filesystems mentioned here are built on top of the MTD system.<br />
<br />
=== UBI ===<br />
The [http://www.linux-mtd.infradead.org/doc/ubi.html Unsorted Block Images] (UBI) system in the Linux kernel<br />
manages multiple logical volumes on a single flash device.<br />
It provides a mapping from logical blocks to physical erase blocks, via the MTD layer.<br />
UBI provides a flexible partitioning concept which allows for wear-leveling across the whole flash device.<br />
<br />
See the [http://www.linux-mtd.infradead.org/doc/ubi.html UBI] page or<br />
[http://www.linux-mtd.infradead.org/faq/ubi.html UBI FAQ and Howto] for more information.<br />
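For example, with the mtd-utils userspace tools a UBI volume can be set up roughly like this (the MTD device number, volume name, and size are assumptions for illustration):<br />

```shell
# Attach MTD device 0 to UBI; this creates /dev/ubi0.
ubiattach /dev/ubi_ctrl -m 0

# Create a 32 MiB logical volume named "rootfs" on the attached device.
ubimkvol /dev/ubi0 -N rootfs -s 32MiB

# Mount the volume with UBIFS.
mount -t ubifs ubi0:rootfs /mnt/rootfs
```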
<br />
=== Partitioning ===<br />
The kernel requires at least one "root" file system, onto which<br />
other file systems can be mounted. In non-embedded systems, often only a single <br />
file system is used. However, in order to optimize limited resources (flash, RAM,<br />
processor speed, boot up time), many embedded systems<br />
break the file system into separate parts, and put each part on its own partition (often in<br />
different kinds of storage).<br />
<br />
For example, a developer may wish to take all the read-only files of the system, and put<br />
them into a compressed, read-only file system in flash. This will consume the least amount<br />
of space on flash, at the cost of some read-time performance (for decompression).<br />
<br />
Another configuration might have executable files stored uncompressed on flash, so that<br />
they can be executed-in-place, which saves RAM and boot-up time (with a potential small<br />
loss of performance).<br />
<br />
For writable data, if the data does not need to be persistent, sometimes a ramdisk<br />
is used. Depending on the performance needs and the RAM limits, the file data may be<br />
compressed or not.<br />
<br />
There is no single standard for interleaving the read-only and read-write portions of the<br />
file system. This depends heavily on the set of embedded applications used for the<br />
project.<br />
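One common way to describe such a split is the kernel's command-line MTD partition parser. A hypothetical layout for a 32 MB NOR flash might look like this (the flash id and sizes are made up for illustration):<br />

```shell
# mtdparts=<flash-id>:<size>(<name>)[ro],...  -- "-" means "the rest".
mtdparts=phys_mapped_flash:4M(boot)ro,12M(rootfs)ro,-(data)

# The kernel then exposes /dev/mtdblock0..2, which could hold, e.g., a
# bootloader image, a compressed read-only rootfs, and writable JFFS2.
```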
<br />
== Embedded Filesystems ==<br />
Here are some filesystems designed for and/or commonly used in embedded devices:<br />
=== JFFS2 ===<br />
* [http://sourceware.org/jffs2/ JFFS2] - The Journalling Flash File System, version 2. This is the most commonly used flash filesystem.<br />
** The maximum size of JFFS2 is 128MB.<br />
** http://sourceforge.net/projects/mtd-mods has some patches by Alexey Korolev for improvements to JFFS2 <br />
*** See the presentation on Alexey's patches at:<br />
<br />
=== CramFS === <br />
*[http://en.wikipedia.org/wiki/Cramfs CRAMFS] - A compressed read-only file system for Linux. The maximum size of CRAMFS is 256MB.<br />
** "Linear Cramfs" is the name of a special feature to use uncompressed file, in a linear block layout with the Cramfs file system. This is useful for storing files which can be executed in-place. For more information on Linear Cramfs, see [[Application XIP]]<br />
<br />
=== SquashFS ===<br />
*[[Squash Fs]] - A (more) compressed read-only file system for Linux. This file system has better compression than JFFS2 or CRAMFS.<br />
<br />
=== YAFFS2 ===<br />
*[http://www.aleph1.co.uk/yaffsoverview YAFFS] - Yet Another Flash File System - a file system designed specifically for NAND flash <br />
** Presentation on YAFFS2 by Wookey at ELC Europe 2007: [http://tree.celinuxforum.org/CelfPubWiki/ELCEurope2007Presentations?action=AttachFile&do=get&target=yaffs.pdf yaffs.pdf]<br />
** Presentation from CELF Jamboree 17 comparing YAFFS and JFFS2 on 2.6.10: [http://tree.celinuxforum.org/CelfPubWiki/JapanTechnicalJamboree17?action=AttachFile&do=view&target=celf_flashfs.pdf celf_flash.pdf]<br />
<br />
==== YAFFS vs. JFFS2 mount time comparisons for 2.6.10 ====<br />
Here are some core results for mount times. (See the Toshiba Jamboree17 presentation for details.)<br />
<br />
* hardware: MIPS, 333 MHZ CPU, with 64 MB NAND Flash.<br />
* kernel: 2.6.10 +EBS patch +YAFFS (20061128 version).<br />
** JFFS2 compression option is disabled.<br />
* Key:<br />
** “Initial”: time to mount immediately after running “flash_eraseall”.<br />
** “1000 files”: time to mount after creating 1000 files (each file is 33554 bytes).<br />
** “JFFS2+EBS” must check the EBS first, and only then starts scanning the blocks normally. Therefore, its “Initial” mount time is a little slower.<br />
<br />
{|border="1" cellpadding="5" cellspacing="0"<br />
|-bgcolor="#0090ff"<br />
! !! JFFS2 !! JFFS2+EBS !! YAFFS<br />
|-<br />
| Initial || 0.93 sec || 1.12 sec || 0.27 sec<br />
|-<br />
| 1000 files|| 7.34 sec || 1.06 sec || 2.52 sec<br />
|-<br />
|}<br />
<br />
=== LogFS ===<br />
*[http://logfs.org/logfs/ logfs] - LogFS is a scalable flash filesystem. It is aimed at replacing<br />
JFFS2 for most uses, but focuses more on large devices.<br />
<br />
Matt Mackall writes (in July of 2007):<br />
<br />
LogFS is a filesystem designed to support large volumes on FLASH. It<br />
uses a simple copy-on-write update process to ensure consistency (the<br />
"log" in the name is a historical artifact). It's easily the most<br />
modern and scalable open-source FLASH filesystem available for Linux<br />
and it's well on its way to being accepted in the mainline tree.<br />
<br />
Scott Preece writes:<br />
<br />
The big win for LogFS (in my limited knowledge of it) is that it stores<br />
its tree structure in the media, rather than building it in memory at<br />
mount time. This significantly reduces both startup time and memory<br />
consumption. This becomes more important as the size of the flash device<br />
increases. Read more in LWN (http://lwn.net/Articles/234441) and<br />
linux.com (http://www.linux.com/articles/114295).<br />
<br />
Some newer flash memory types, such as MLC (multi-level cell), are not well supported.<br />
<br />
LogFS now has its own mailing list: see http://logfs.org/cgi-bin/mailman/listinfo/logfs<br />
<br />
=== AXFS ===<br />
*[[AXFS]] - Advanced XIP File System<br />
** This file system is designed specifically to support Execute-in-place operations<br />
<br />
=== PRAMFS ===<br />
*[http://pramfs.sourceforge.net/ PRAMFS] - Persistent and protected RAM File System<br />
The Persistent/Protected RAM Special Filesystem (PRAMFS) is a full-featured read/write filesystem that has been designed to work with fast I/O memory, and if the memory is non-volatile, the filesystem will be persistent. In addition, it has Execute-in-place support.<br />
<br />
=== NFS ===<br />
Due to space constraints on embedded devices, it is common during development to use<br />
a network file system for the root filesystem for the target. This allows the target to<br />
have a very large area where full-size binaries and lots of development tools can be placed<br />
during development. One drawback to this approach is that the system will need to<br />
be re-configured with local file systems (and most likely re-tested) for final<br />
product shipment, at some time during the development cycle.<br />
<br />
An NFS client can be built into the Linux kernel, and the kernel<br />
can be configured to use NFS as the root filesystem. This requires support for networking,<br />
and mechanisms for specifying the IP address for the target, and the path to the filesystem<br />
on the NFS host. Also, the host must be configured to run an NFS server. Often, the host<br />
also provides the required address and path information to the target board by running<br />
a DHCP server.<br />
<br />
See the file Documentation/nfsroot.txt in the Linux kernel source for more information<br />
about mounting an NFS root filesystem with the kernel.<br />
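The host-side export and the matching target boot arguments might look like this (addresses and paths are assumptions; Documentation/nfsroot.txt describes the full ip= syntax):<br />

```shell
# Host: export the target's root directory over NFS.
# no_root_squash lets the target's root user own its files.
echo '/srv/nfs/target 192.168.0.0/24(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra   # reload the NFS server's export table

# Target kernel command line:
#   root=/dev/nfs nfsroot=192.168.0.1:/srv/nfs/target ip=dhcp
```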
<br />
== Mounting the root filesystem ==<br />
The root filesystem is mounted by the kernel, using a kernel command line option.<br />
Other file systems are mounted from user space, usually by init scripts or an <br />
init program, using the 'mount' command.<br />
<br />
The following are examples of command lines used for mounting a root filesystem<br />
with Linux:<br />
<br />
* Use the first partition on the first IDE hard drive:<br />
** root=/dev/hda1<br />
* or in later kernels:<br />
** root=/dev/sda1<br />
<br />
* Use NFS root filesystem (kernel config must support this)<br />
**root=/dev/nfs<br />
<br />
(Usually you need to add some other arguments to make sure<br />
the kernel IP address gets configured, or to specify the<br />
host NFS path.)<br />
<br />
* Use flash device partition 2:<br />
** root=/dev/mtdblock2<br />
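Putting the pieces together, an NFS-root command line can be composed like this (the server address and export path are placeholders, not values from this page):<br />

```shell
# Placeholder values -- substitute your NFS server and export path.
SERVER=192.168.0.1
NFSPATH=/srv/nfs/target

# root=/dev/nfs selects NFS root; ip=dhcp asks the kernel to configure
# its address via DHCP (see Documentation/nfsroot.txt for static forms).
CMDLINE="root=/dev/nfs nfsroot=${SERVER}:${NFSPATH} ip=dhcp"
echo "$CMDLINE"
# prints: root=/dev/nfs nfsroot=192.168.0.1:/srv/nfs/target ip=dhcp
```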
<br />
[FIXTHIS - should probably mention initrd's here somewhere]<br />
<br />
== Special-purpose Filesystems ==<br />
=== ABISS ===<br />
The Active Block I/O Scheduling System is a file system designed to be able to provide real-time <br />
features for file system I/O activities.<br />
<br />
See [http://abiss.sourceforge.net/ ABISS]<br />
<br />
<br />
=== UnionFS ===<br />
Sometimes it is handy to be able to overlay file systems on top of each other.<br />
For example, it can be useful in embedded products to use a compressed read-only<br />
file system, mounted "underneath" a read/write file system. This give the<br />
appearance of a full read-write file system, while still retaining the<br />
space savings of the compressed file system, for those files that won't<br />
change during the life of the product.<br />
<br />
UnionFS is a project to provide such a system (providing a "union" of multiple<br />
file systems).<br />
<br />
See http://www.filesystems.org/project-unionfs.html<br />
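A hypothetical stacking, using unionfs's dirs= branch syntax (the leftmost branch takes priority on lookup), might be (mount points assumed for illustration):<br />

```shell
# /ro already holds a read-only filesystem (e.g. cramfs or squashfs);
# /rw is an empty directory to receive the writable overlay.
mount -t tmpfs tmpfs /rw

# Stack them: writes go to /rw, unchanged files are read from /ro.
mount -t unionfs -o dirs=/rw=rw:/ro=ro unionfs /union
```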
<br />
See also union mounts, which are described at http://lkml.org/lkml/2007/6/20/18<br />
(and also in Documentation/union-mounts.txt in the kernel source tree - or will be, when this feature<br />
is merged.)<br />
<br />
== Other projects ==<br />
=== Multi-media file systems ===<br />
* XPRESS file system - [See OLS 2006 proceedings, presentation by Joo-Young Hwang]<br />
** I found out at ELC 2007 that this FS project was recently suspended internally at Samsung<br />
<br />
<br />
[[Category:File Systems| ]]</div>Greywolf82https://elinux.org/index.php?title=File_Systems&diff=7330File Systems2008-10-15T12:06:40Z<p>Greywolf82: /* Embedded Filesystems */</p>
<hr />
<div>This page has information about file systems which are of interest for embedded projects.<br />
<br />
== Introduction ==<br />
Most embedded devices use [http://en.wikipedia.org/wiki/Flash_memory flash memory] as storage media.<br />
Also, size and bootup time are very important in many consumer electronics products. Therefore, <br />
special file systems are often used with different features, such as enhanced compression, or<br />
the ability to execute files directly from flash.<br />
<br />
=== MTD ===<br />
Note that flash memory may be managed by the Memory Technology Devices (MTD) system of Linux. See the [http://www.linux-mtd.infradead.org/faq/general.html MTD/Flash FAQ] for more information. Most of the <br />
filesystems mentioned here are built on top of the MTD system.<br />
<br />
=== UBI ===<br />
The [http://www.linux-mtd.infradead.org/doc/ubi.html Unsorted Block Images] (UBI) system in the Linux kernel<br />
manages multiple logical volumes on a single flash device.<br />
It provides a mapping from logical blocks to physical erase blocks, via the MTD layer.<br />
UBI provides a flexible partitioning concept which allows for wear-leveling across the whole flash device.<br />
<br />
See the [http://www.linux-mtd.infradead.org/doc/ubi.html UBI] page or<br />
[http://www.linux-mtd.infradead.org/faq/ubi.html UBI FAQ and Howto] for more information.<br />
<br />
=== Partitioning ===<br />
The kernel requires at least one "root" file system, onto which<br />
other file systems can be mounted. In non-embedded systems, often only a single <br />
file system is used. However, in order to optimize limited resources (flash, RAM,<br />
processor speed, boot up time), many embedded systems<br />
break the file system into separate parts, and put each part on its own partition (often in<br />
different kinds of storage).<br />
<br />
For example, a developer may wish to take all the read-only files of the system, and put<br />
them into a compressed, read-only file system in flash. This will consume the least amount<br />
of space on flash, at the cost of some read-time performance (for decompression).<br />
<br />
Another configuration might have executable files stored uncompressed on flash, so that<br />
they can be executed-in-place, which saves RAM and boot-up time (with a potential small<br />
loss of performance).<br />
<br />
For writable data, if the data does not need to be persistent, sometimes a ramdisk<br />
is used. Depending on the performance needs and the RAM limits, the file data may be<br />
compressed or not.<br />
<br />
There is no single standard for interleaving the read-only and read-write portions of the<br />
file system. This depends heavily on the set of embedded applications used for the<br />
project.<br />
<br />
== Embedded Filesystems ==<br />
Here are some filesystems designed for and/or commonly used in embedded devices:<br />
=== JFFS2 ===<br />
* [http://sourceware.org/jffs2/ JFFS2] - The Journalling Flash File System, version 2. This is the most commonly used flash filesystem.<br />
** The maximum size of JFFS2 is 128MB.<br />
** http://sourceforge.net/projects/mtd-mods has some patches by Alexey Korolev for improvements to JFFS2 <br />
*** See the presentation on Alexey's patches at:<br />
<br />
=== CramFS === <br />
*[http://en.wikipedia.org/wiki/Cramfs CRAMFS] - A compressed read-only file system for Linux. The maximum size of CRAMFS is 256MB.<br />
** "Linear Cramfs" is the name of a special feature to use uncompressed file, in a linear block layout with the Cramfs file system. This is useful for storing files which can be executed in-place. For more information on Linear Cramfs, see [[Application XIP]]<br />
<br />
=== SquashFS ===<br />
*[[Squash Fs]] - A (more) compressed read-only file system for Linux. This file system has better compression than JFFS2 or CRAMFS.<br />
<br />
=== YAFFS2 ===<br />
*[http://www.aleph1.co.uk/yaffsoverview YAFFS] - Yet Another Flash File System - a file system designed specifically for NAND flash <br />
** Presentation on YAFFS2 by Wookey at ELC Europe 2007: [http://tree.celinuxforum.org/CelfPubWiki/ELCEurope2007Presentations?action=AttachFile&do=get&target=yaffs.pdf yaffs.pdf]<br />
** Presentation from CELF Jamboree 17 comparing YAFFS and JFFS2 on 2.6.10: [http://tree.celinuxforum.org/CelfPubWiki/JapanTechnicalJamboree17?action=AttachFile&do=view&target=celf_flashfs.pdf celf_flash.pdf]<br />
<br />
==== YAFFS vs. JFFS2 mount time comparisons for 2.6.10 ====<br />
Here are some core results for mount times. (See the Toshiba Jamboree17 presentation for details.)<br />
<br />
* hardware: MIPS, 333 MHZ CPU, with 64 MB NAND Flash.<br />
* kernel: 2.6.10 +EBS patch +YAFFS (20061128 version).<br />
** JFFS2 compression option is disabled.<br />
* Key:<br />
** “Initial”: time to mount immediately after running “flash_eraseall”.<br />
** “1000 files”: time to mount after creating 1000 files (each file is 33554 bytes).<br />
** “JFFS2+EBS” must check the EBS first, and only then starts scanning the blocks normally. Therefore, its “Initial” mount time is a little slower.<br />
<br />
{|border="1" cellpadding="5" cellspacing="0"<br />
|-bgcolor="#0090ff"<br />
! !! JFFS2 !! JFFS2+EBS !! YAFFS<br />
|-<br />
| Initial || 0.93 sec || 1.12 sec || 0.27 sec<br />
|-<br />
| 1000 files|| 7.34 sec || 1.06 sec || 2.52 sec<br />
|-<br />
|}<br />
<br />
=== LogFS ===<br />
*[http://logfs.org/logfs/ logfs] - LogFS is a scalable flash filesystem. It is aimed at replacing<br />
JFFS2 for most uses, but focuses more on large devices.<br />
<br />
Matt Mackall writes (in July of 2007):<br />
<br />
LogFS is a filesystem designed to support large volumes on FLASH. It<br />
uses a simple copy-on-write update process to ensure consistency (the<br />
"log" in the name is a historical artifact). It's easily the most<br />
modern and scalable open-source FLASH filesystem available for Linux<br />
and it's well on its way to being accepted in the mainline tree.<br />
<br />
Scott Preece writes:<br />
<br />
The big win for LogFS (in my limited knowledge of it) is that it stores<br />
its tree structure in the media, rather than building it in memory at<br />
mount time. This significantly reduces both startup time and memory<br />
consumption. This becomes more important as the size of the flash device<br />
increases. Read more in LWN (http://lwn.net/Articles/234441) and<br />
linux.com (http://www.linux.com/articles/114295).<br />
<br />
Some newer flash memory types, such as MLC (multi-level cell), are not well supported.<br />
<br />
LogFS now has its own mailing list: see http://logfs.org/cgi-bin/mailman/listinfo/logfs<br />
<br />
=== AXFS ===<br />
*[[AXFS]] - Advanced XIP File System<br />
** This file system is designed specifically to support Execute-in-place operations<br />
<br />
=== PRAMFS ===<br />
*[http://pramfs.sourceforge.net/ PRAMFS] - Persistent and protected RAM File System<br />
The Persistent/Protected RAM Special Filesystem (PRAMFS) is a full-featured read/write filesystem that has been designed to work with fast I/O memory, and if the memory is non-volatile, the filesystem will be persistent.<br />
<br />
=== NFS ===<br />
Due to space constraints on embedded devices, it is common during development to use<br />
a network file system for the root filesystem for the target. This allows the target to<br />
have a very large area where full-size binaries and lots of development tools can be placed<br />
during development. One drawback to this approach is that the system will need to<br />
be re-configured with local file systems (and most likely re-tested) for final<br />
product shipment, at some time during the development cycle.<br />
<br />
An NFS client can be built into the Linux kernel, and the kernel<br />
can be configured to use NFS as the root filesystem. This requires support for networking,<br />
and mechanisms for specifying the IP address for the target, and the path to the filesystem<br />
on the NFS host. Also, the host must be configured to run an NFS server. Often, the host<br />
also provides the required address and path information to the target board by running<br />
a DHCP server.<br />
<br />
See the file Documentation/nfsroot.txt in the Linux kernel source for more information<br />
about mounting an NFS root filesystem with the kernel.<br />
<br />
== Mounting the root filesystem ==<br />
The root filesystem is mounted by the kernel, using a kernel command line option.<br />
Other file systems are mounted from user space, usually by init scripts or an <br />
init program, using the 'mount' command.<br />
<br />
The following are examples of command lines used for mounting a root filesystem<br />
with Linux:<br />
<br />
* Use the first partition on the first IDE hard drive:<br />
** root=/dev/hda1<br />
* or in later kernels:<br />
** root=/dev/sda1<br />
<br />
* Use NFS root filesystem (kernel config must support this)<br />
**root=/dev/nfs<br />
<br />
(Usually you need to add some other arguments to make sure<br />
the kernel IP address gets configured, or to specify the<br />
host NFS path.)<br />
<br />
* Use flash device partition 2:<br />
** root=/dev/mtdblock2<br />
<br />
[FIXTHIS - should probably mention initrd's here somewhere]<br />
<br />
== Special-purpose Filesystems ==<br />
=== ABISS ===<br />
The Active Block I/O Scheduling System is a file system designed to be able to provide real-time <br />
features for file system I/O activities.<br />
<br />
See [http://abiss.sourceforge.net/ ABISS]<br />
<br />
<br />
=== UnionFS ===<br />
Sometimes it is handy to be able to overlay file systems on top of each other.<br />
For example, it can be useful in embedded products to use a compressed read-only<br />
file system, mounted "underneath" a read/write file system. This give the<br />
appearance of a full read-write file system, while still retaining the<br />
space savings of the compressed file system, for those files that won't<br />
change during the life of the product.<br />
<br />
UnionFS is a project to provide such a system (providing a "union" of multiple<br />
file systems).<br />
<br />
See http://www.filesystems.org/project-unionfs.html<br />
<br />
See also union mounts, which are described at http://lkml.org/lkml/2007/6/20/18<br />
(and also in Documentation/union-mounts.txt in the kernel source tree - or will be, when this feature<br />
is merged.)<br />
<br />
== Other projects ==<br />
=== Multi-media file systems ===<br />
* XPRESS file system - [See OLS 2006 proceedings, presentation by Joo-Young Hwang]<br />
** I found out at ELC 2007 that this FS project was recently suspended internally at Samsung<br />
<br />
<br />
[[Category:File Systems| ]]</div>Greywolf82