Order One Study

Abstract

 * To examine the behavior of the O(1) scheduler migrated from kernel 2.6, we ran a real-time thread response time experiment. The test uses the NPTL (Native POSIX Thread Library), which was also migrated from kernel 2.6.
 * The English translation is not yet complete. Please refer to THIS material for reference.
 * We welcome your comments and suggestions; please send them to celinux-dev@tree.celinuxforum.org.
 * Comment about Order One Study (in Japanese)

Used Platform

 * Target platform


 * NFS server side (rootfs for R2D)



Virtual Device Driver

 * Creates a new virtual device entry under /proc.
 * When a write to the /proc entry occurs, the driver wakes up the thread that is blocked in a read on the same device file.

Write Task

 * Writes to the virtual device file to trigger wakeup of the "read" task.

Read Task

 * Reads from the virtual device file and blocks until woken.

Load Tasks

 * Rival tasks that have the same scheduling priority.

Measurement

 * Each "read" and "write" task executes read or write 10,000 times on the virtual device file.
 * The "write" task executes a write every 16 ms.
 * The "read" task issues the next read immediately after the previous read returns.
 * Measure the elapsed time between the write issued by the "write" task and the return of the read issued by the "read" task.
 * Also measure the elapsed time between the moment the write is accepted by the virtual device file and the moment the "read" task starts running.
 * During measurement, run load tasks concurrently that repeatedly execute a 1 µs nanosleep. The number of load tasks varies as 0, 1, 2, 4, 8, 16, 32.
 * Run the "read" and "write" tasks both as normal (non-real-time) threads and as RT threads (priority = 1, round-robin policy SCHED_RR).
 * Load tasks always run as non-real-time tasks.

Thread switch time

 * The elapsed time between the write issued by the "write" task and the return of the read issued by the "read" task.

Thread wakeup time

 * The elapsed time between the moment the write is accepted by the virtual device file and the moment the "read" task starts running.


 * Both threads take longer on the first execution than on subsequent executions.
 * The RT thread achieves an almost constant response time regardless of the number of load tasks.
 * The non-RT task's response time increases with the number of load tasks.
 * The first several response times of the non-RT task are almost the same as the RT task's, but once a long response occurs, long and short responses alternate every two executions.

Consideration (more like a guess)
 * The first execution takes a longer response time.
 * Paging in not-yet-referenced memory such as library code (a period during which interrupts are disabled) blocks scheduler execution.
 * The non-RT task's response time is similar to that of the RT task.
 * The "read" task, blocked in read, gains a higher dynamic priority than the load tasks. On returning from read it is placed in a higher-priority queue and dispatched immediately, ahead of load tasks that have accumulated less sleep time.
 * The non-RT task's response time increases with the number of load tasks.
 * The "read" task returning from read enters at the tail of the run queue behind the other load tasks, so it is dispatched only after all of them have run.

TODO

 * Trace the dynamic priority transition of the non-RT task.
 * Measure the effect of the kernel preemption period when using RT tasks.
 * Adopt a high-resolution timer count for measuring response time.
 * Compare with test results on other architectures.
 * Adopt a more realistic load task model.
 * Compare the test results of the 2.4 and 2.6 kernels.
 * Measure thread wakeup time from a hardware interrupt event.

Test Program Source Code

 * (Virtual Device Driver) ---> [[Media:irqhook.c]]
 * (Task) ---> [[Media:ihooktest.c]]

Result of the Experiment

 * CSV file columns: (Column 1: Frequency) (Column 2: Thread Switching Time) (Column 3: Thread Wakeup Time)