18-09-2014, 04:15 PM
Swapping Strategy to Improve I/O Performance of Mobile Embedded Systems Using Compressed File Systems Project Report
Swapping Strategy.pdf (Size: 234.71 KB / Downloads: 11)
Abstract
Compressed file systems are well suited to mobile
embedded systems with small storage capacity, because
file contents are stored in compressed form to save
storage space. Data must therefore be decompressed
before an application program can access it, which is a
computational overhead of compressed file systems.
Furthermore, mobile embedded systems exploit a
demand-paging mechanism, as well as compressed file
systems, to cut down their cost and size. To extend the
main memory space, mobile embedded systems also use
a "swapping" mechanism, which stores data evicted
from main memory in a swap area and serves that data
when an application requests it again. In this paper, we
propose a new swapping strategy for mobile embedded
systems using compressed file systems, which keeps the
decompressed data of the compressed file system in the
swap space and serves it directly from there when
necessary. This strategy eliminates several copying and
decompression operations of compressed file systems
and can thereby improve the I/O performance of mobile
embedded systems. Trace-driven simulations show that
the proposed strategy outperforms the existing swapping
mechanism in terms of total I/O performance, page fault
ratio, and page fault latency.
Introduction
Typical mobile embedded systems such as cellular
phones, portable multimedia players, and digital music
players contain DRAM, NOR flash, and NAND flash
memory. In these devices, DRAM is used as main
memory, NOR flash memory holds the program code,
and NAND flash memory stores user data [12, 13].
Because mobile embedded systems contain three kinds
of memory, it is difficult to cut down the hardware cost
and reduce the size of the devices. To cut down the cost
and size of mobile systems, attempts have been made to
eliminate NOR flash memory from mobile embedded
systems. If a mobile embedded system has no NOR
flash memory, the application program code must be
copied from NAND flash memory to main memory
before the application runs. This mechanism is called
"shadowing." Shadowing shows the best performance at
runtime because the whole program code resides in
main memory. However, it needs a longer loading time,
since the whole program code must be copied to main
memory first. Moreover, main memory must grow
larger, because application code such as mobile games
has become large in recent years. To address these
weaknesses of shadowing, "demand paging" is exploited
in mobile embedded systems. Demand paging is a
virtual memory technique in which code or data is
loaded from secondary storage only when it is needed
by a process [5, 6]. Thus, it requires
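As a rough illustration (not from the report), the demand-paging behavior described above can be sketched as a toy simulation: pages are brought into main memory only when first accessed, so launch cost is near zero but each first touch pays a page fault. The class and field names below are invented for this sketch.

```python
# Toy model of demand paging (illustrative only; names are invented).
# Pages are loaded from secondary storage only when first accessed.

class DemandPagedProcess:
    def __init__(self, num_pages):
        self.num_pages = num_pages
        self.resident = set()        # pages currently in main memory
        self.page_faults = 0

    def access(self, page):
        if page not in self.resident:
            self.page_faults += 1    # fault: load the page from NAND on demand
            self.resident.add(page)
        return page

proc = DemandPagedProcess(num_pages=100)
for p in [0, 1, 0, 2, 1, 0]:
    proc.access(p)

print(proc.page_faults)     # 3 distinct pages touched -> 3 faults
print(len(proc.resident))   # only 3 of the 100 pages are resident
```

Contrast this with shadowing, where all 100 pages would be copied into main memory up front before the first instruction runs.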
Related Works
This section analyzes the demand paging and swapping
mechanisms, and also describes the characteristics of
compressed file systems.
Demand Paging and Swapping
Mobile embedded devices such as digital music players
and cellular phones contain DRAM, NOR flash, and
NAND flash memory to store program code and user
data. In these devices, DRAM is used as main memory,
NOR flash memory stores the program code, and
NAND flash memory stores user data. In this memory
architecture, application programs are executed by the
XIP (eXecute In Place) mechanism, which executes a
program directly in NOR flash memory without copying
its code into main memory [13]. The loading time when
a program is launched is therefore short. However,
because there are three kinds of memory in mobile
embedded devices, it is difficult to cut down the cost
and reduce the size of the devices. To cut down the cost
and decrease the size of devices, attempts have been
made to eliminate NOR flash memory from mobile
embedded systems. Without NOR flash memory, an
alternative to the XIP mechanism is needed, and the
"shadowing" mechanism can be exploited to execute
applications on the device. In shadowing, the
application program code is copied from NAND flash
memory to main memory before the application runs.
Shadowing shows the best performance at runtime
because the whole program code resides in main
memory. However, it incurs a longer loading time, since
the whole program code must be copied to main
memory first. Besides, the main memory
Characteristics of Compressed File Systems
Mobile embedded systems use compressed file systems
such as CramFS and SquashFS for their obvious
space-saving benefit [3, 10-11]. However, compressed
file systems suffer from two significant overheads: the
decompression overhead and the extra-buffer overhead.
In general file systems such as Ext2 and Ext3 [2], when
an application program
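The two overheads named above can be sketched in a simplified read-path model (illustrative only; this is not the actual CramFS/SquashFS code, and zlib stands in for whatever compressor the file system uses): a general file system copies the requested page straight from storage, while a compressed file system must first decompress a block into an extra buffer and then copy the page out of it.

```python
import zlib

PAGE_SIZE = 4096

def read_page_general_fs(storage, page_no):
    # General file system (e.g. Ext2): copy the page straight from storage.
    off = page_no * PAGE_SIZE
    return storage[off:off + PAGE_SIZE]

def read_page_compressed_fs(compressed_blocks, page_no):
    # Compressed file system: the block holding the page must first be
    # decompressed into an extra buffer (decompression overhead), and the
    # page is then copied out of that buffer (extra copy overhead).
    extra_buffer = zlib.decompress(compressed_blocks[page_no])
    return extra_buffer[:PAGE_SIZE]

data = bytes(range(256)) * 16                  # one 4 KiB page of sample data
storage = data
blocks = {0: zlib.compress(data)}              # same page, stored compressed

# Both paths yield the same page, but the compressed path did extra work.
assert read_page_general_fs(storage, 0) == read_page_compressed_fs(blocks, 0)
print(len(blocks[0]) < PAGE_SIZE)              # the compressed block is smaller
```

The space saving (the smaller compressed block) is exactly what is paid for with the decompression and extra copy on every read.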
Selective Discardable Page Swapping Scheme
Mobile embedded systems using demand paging exploit
a "swapping" mechanism to extend the limited main
memory space. The swap area consists of a sequence of
page slots, each of which stores a page evicted from
main memory.
When new pages are loaded by a program, old pages
must be evicted from main memory because of its
limited capacity. During eviction, some pages are
simply discarded from main memory because their
original copies are stored in secondary storage. Such a
page is called a "discardable page"; program images
and data files fall into this class.
In contrast to discardable pages, some pages must be
written back to secondary storage. Such a page is called
a "swappable page." There are three kinds of swappable
pages that must be stored in the swap area [5]:
- Anonymous pages that belong to an anonymous
memory region of a process, such as the user-mode
stack or heap.
- Dirty pages that belong to a private memory mapping
of a process.
- Shared pages that belong to an IPC shared memory
region.
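The classification above can be sketched as a small helper that decides the fate of an evicted page. The categories follow the list; the flag names are invented for this sketch.

```python
# Classify an evicted page as swappable or discardable, following the
# three swappable categories above (flag names are invented for this sketch).

def classify(page):
    if page.get("anonymous"):                             # user-mode stack or heap
        return "swappable"
    if page.get("dirty") and page.get("private_mapping"): # modified private mapping
        return "swappable"
    if page.get("ipc_shared"):                            # IPC shared memory region
        return "swappable"
    # A clean page of a program image or data file has an original copy in
    # secondary storage, so it can simply be discarded and re-read later.
    return "discardable"

print(classify({"anonymous": True}))                        # swappable
print(classify({"dirty": True, "private_mapping": True}))   # swappable
print(classify({"file_backed": True}))                      # discardable
```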
Swap Area Management
In the proposed strategy, a mobile embedded system
can store both discardable and swappable pages in the
swap area, but the capacity of the swap area is also
limited. For this reason, we need a new way to manage
the swap area efficiently.
When a new page is to be stored in the swap area, we
must select a victim page and evict it from the swap
area to make room if the swap area is full
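The swap-area bookkeeping described above can be sketched as a fixed number of page slots with victim eviction on overflow. The LRU victim policy below is an assumption for illustration; this excerpt does not show which selection policy the report actually uses.

```python
from collections import OrderedDict

class SwapArea:
    """Fixed-capacity swap area of page slots; evicts a victim when full.
    LRU victim selection is assumed here for illustration only."""

    def __init__(self, num_slots):
        self.num_slots = num_slots
        self.slots = OrderedDict()                        # page id -> page data

    def store(self, page_id, data):
        if page_id in self.slots:
            self.slots.move_to_end(page_id)               # overwrite existing slot
        elif len(self.slots) >= self.num_slots:
            self.slots.popitem(last=False)                # evict the LRU victim
        self.slots[page_id] = data

    def lookup(self, page_id):
        if page_id in self.slots:
            self.slots.move_to_end(page_id)               # refresh recency on a hit
            return self.slots[page_id]
        return None                                       # swap miss

swap = SwapArea(num_slots=2)
swap.store("A", b"pageA")
swap.store("B", b"pageB")
swap.store("C", b"pageC")           # area full: "A" is selected as the victim
print(swap.lookup("A") is None)     # evicted
print(swap.lookup("C") is not None) # still resident
```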
Conclusion
Mobile embedded systems using compressed file
systems suffer a heavy overhead from extra copying and
decompression operations, which significantly affects
their I/O performance. To overcome this, we proposed a
new swapping strategy, SCF (Swapping strategy for
Compressed File systems). Unlike typical swap systems,
we swap out the decompressed data pages as they are
rather than discard them. Later, when one of those pages
must be accessed again, we simply copy it back from
the swap area; no extra copying or decompression
operations are needed.
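As an illustrative sketch of the mechanism summarized above (simplified; the function names are invented and zlib stands in for the file system's compressor): on a page fault, SCF first looks in the swap area, and only falls back to decompressing from the compressed file system when the page is not there.

```python
import zlib

def handle_fault_scf(page_id, swap_area, compressed_fs, stats):
    # SCF: serve the decompressed page straight from the swap area if present.
    page = swap_area.get(page_id)
    if page is not None:
        stats["swap_hits"] += 1
        return page
    # Swap miss: decompress from the compressed file system (the costly
    # path), then keep the decompressed page in the swap area for reuse.
    stats["decompressions"] += 1
    page = zlib.decompress(compressed_fs[page_id])
    swap_area[page_id] = page
    return page

fs = {"p0": zlib.compress(b"x" * 4096)}
swap, stats = {}, {"swap_hits": 0, "decompressions": 0}

handle_fault_scf("p0", swap, fs, stats)   # first access: decompress once
handle_fault_scf("p0", swap, fs, stats)   # second access: served from swap
print(stats)                              # {'swap_hits': 1, 'decompressions': 1}
```

The second fault pays neither the decompression nor the extra-buffer copy, which is where the I/O improvement comes from.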
Furthermore, we proposed a swap area management
scheme to increase the hit ratio within the limited swap
area. Finally, we evaluated the effectiveness of the new
swapping strategy through several simulations. The
results show that the proposed strategy performs better
than the typical system in terms of total I/O time, cache
hit ratio, and page fault latency.