31-08-2016, 11:59 AM
Swapping Strategy to Improve I/O Performance of Mobile Embedded Systems
Using Compressed File Systems
Abstract
Compressed file systems are well suited to mobile embedded systems with small-capacity storage because file contents are stored in compressed form to save storage space. However, data must be decompressed before an application program can access it, which is the computational overhead of compressed file systems.
Furthermore, mobile embedded systems exploit a demand paging mechanism, together with compressed file systems, to cut down their cost and size. In addition, to extend the main memory space, mobile embedded systems use a "swapping" mechanism, which stores data evicted from main memory in a swap area and serves that data when an application requests it again.
In this paper, we propose a new swapping strategy for mobile embedded systems that use compressed file systems. The strategy keeps the decompressed data of the compressed file system in the swap space and serves it directly from there when necessary, eliminating several copy operations and the decompression operation of the compressed file system.
As a result, it can improve the I/O performance of mobile embedded systems. Trace-driven simulations show that the proposed strategy outperforms the existing swapping mechanism in terms of total I/O performance, page fault ratio, and page fault latency.
Introduction
Typical mobile embedded systems such as cellular phones, portable multimedia players, and digital music players contain DRAM, NOR flash, and NAND flash memory. In these devices, DRAM is used as main memory, NOR flash memory stores program code, and NAND flash memory stores user data [12, 13]. Because mobile embedded systems contain three kinds of memory, it is difficult to cut down the hardware cost and reduce the size of the devices. To do so, attempts have been made to eliminate NOR flash memory from mobile embedded systems. If a mobile embedded system has no NOR flash memory, the application program code needs to be copied from NAND flash memory to the main memory before the application runs. This mechanism is called "shadowing." Shadowing shows the best performance at runtime because the whole program code resides in main memory. However, it needs a longer loading time, since the whole program code must be copied to main memory.
Moreover, the main memory must become large because application code, such as that of mobile games, has grown large in recent years. To address this weakness of shadowing, "demand paging" is exploited for mobile embedded systems. Demand paging is a virtual memory technique in which code or data is loaded from secondary storage only when a process needs it [5, 6]. Thus it requires less main memory capacity and a shorter loading time than shadowing. Mobile embedded systems that use demand paging can also exploit a "swapping" mechanism to extend the limited main memory space: when new pages are loaded by a process, pages are evicted from main memory because of its limited capacity, and some of the evicted pages are stored in secondary storage if necessary. Furthermore, mobile embedded systems use compressed file systems to minimize the footprint of programs and data in storage [1, 3, 8, 10-11]. However, compressed file systems have a critical drawback: a large I/O overhead consisting of several copy operations and a decompression operation [4]. When reading data from secondary storage, the system must copy compressed pages from storage into main memory and then copy them again into another buffer. Only after copying the compressed pages into that buffer can a page be extracted and the decompressed page read.
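As an illustration, the read path just described can be sketched as follows. This is a simplified model, not actual file system code: zlib stands in for the real compressor, and the names build_image and read_page are ours.

```python
import zlib

PAGE_SIZE = 4096

def build_image(pages):
    """Compress each data page into one block and record block offsets
    (a simplified, CramFS-like layout; illustrative only)."""
    storage = bytearray()
    offsets = []
    for page in pages:
        offsets.append(len(storage))
        storage += zlib.compress(page)
    offsets.append(len(storage))
    return bytes(storage), offsets

def read_page(storage, offsets, index):
    """Read one data page: two copy operations plus one decompression."""
    start, end = offsets[index], offsets[index + 1]
    compressed = storage[start:end]   # copy 1: secondary storage -> main memory
    buffer = bytes(compressed)        # copy 2: into the intermediate buffer
    return zlib.decompress(buffer)    # decompression: buffer -> data page
```

Even in this toy model, serving a single data page requires two copies and one decompression, which is the overhead discussed next.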
Therefore, two copy operations and one decompression operation are needed to obtain a single data page. This is a very large I/O overhead, which significantly affects the I/O performance of mobile embedded systems. In this paper, we aim to improve the I/O performance of mobile systems that use a compressed file system. To this end, we propose a new swapping strategy that, unlike a conventional swapping scheme, also stores data belonging to the compressed file system. Because reading compressed data is very expensive, we keep the decompressed data in the swap area and try to read it directly from there, instead of from the compressed data in secondary storage, when it is needed. This can increase the I/O performance of mobile embedded systems with compressed file systems. The remainder of this paper is organized as follows. Section 2 analyzes the demand paging and swapping mechanisms and describes the characteristics of compressed file systems. Section 3 presents a novel swapping strategy for mobile embedded systems with compressed file systems. Performance evaluation results of the proposed swapping mechanism are given in Section 4. Finally, Section 5 concludes the paper.
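A minimal sketch of the proposed strategy as described above (the names are hypothetical and zlib stands in for the file system's compressor): the swap area doubles as a store of decompressed pages, so a repeated access skips the expensive compressed read path entirely.

```python
import zlib

def read_page_via_swap(compressed_blocks, index, swap_area):
    """Serve a page directly from the swap area when a decompressed
    copy is kept there; otherwise take the expensive path and keep
    the decompressed page in the swap area for later accesses."""
    if index in swap_area:
        return swap_area[index]                        # swap hit: no copy, no decompression
    page = zlib.decompress(compressed_blocks[index])   # expensive copy + decompress path
    swap_area[index] = page                            # keep the decompressed data in swap
    return page
```

On the second and later accesses to the same page, the decompression step disappears, which is the source of the I/O improvement claimed above.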
Figure 1. Mobile embedded systems. Compressed file systems such as CramFS and SquashFS are used to save storage space in mobile embedded systems.
2. Related Works
This section analyzes demand paging and swapping mechanisms, and also describes the characteristics of compressed file systems.
2.1 Demand Paging and Swapping
Mobile embedded devices such as digital music players and cellular phones contain DRAM, NOR flash, and NAND flash memory to store program code and user data. In these devices, DRAM is used as main memory, NOR flash memory stores the program code, and NAND flash memory stores user data. In this memory architecture, application programs are executed by the XIP (eXecute In Place) mechanism, which can execute a program in NOR flash memory without copying the program code into main memory [13]; the loading time is therefore short when a program is executed. However, because there are three kinds of memory in mobile embedded devices, it is difficult to cut down the cost and reduce the size of the devices. To do so, attempts have been made to eliminate NOR flash memory from mobile embedded systems. Without NOR flash memory, an alternative to XIP is needed, and the "shadowing" mechanism can be exploited to execute applications on the mobile device. In the shadowing mechanism, the application program code needs to be copied from NAND flash memory to main memory before running the application. Shadowing shows the best performance at runtime because the whole program code resides in main memory, but it incurs a longer loading time, since the whole program code must be copied to main memory. Moreover, the main memory must become large because application code, such as that of mobile games, has grown large in recent years. To address the copy overhead of shadowing, "demand paging" is exploited for mobile embedded systems.
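The two loading schemes can be contrasted in a minimal sketch (hypothetical names; real systems work at the level of MMU page faults): shadowing copies the whole image up front, while demand paging copies one page only on its first access.

```python
PAGE_SIZE = 4096

def load_shadowing(nand_image, ram):
    """Shadowing: copy the whole program image to RAM before execution.
    Best runtime performance, but the longest loading time."""
    ram[:len(nand_image)] = nand_image

def touch_page_on_demand(nand_image, ram, loaded, page_no):
    """Demand paging: copy a single page from NAND only when it is
    first touched, so loading time and resident memory stay small."""
    if page_no not in loaded:
        off = page_no * PAGE_SIZE
        ram[off:off + PAGE_SIZE] = nand_image[off:off + PAGE_SIZE]
        loaded.add(page_no)
```

With shadowing the load cost is proportional to the whole image; with demand paging it is proportional to the pages a process actually touches.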
Demand paging is a virtual memory technique in which code or data is loaded from secondary storage only when a process needs it [5]. Thus it requires less main memory capacity and a shorter loading time than shadowing. Furthermore, embedded systems that use demand paging can exploit a "swapping" mechanism to extend the limited main memory space. When data is evicted from main memory because of its limited capacity, the swapping mechanism stores the evicted data in secondary storage and then serves it when the program needs it again. However, only anonymous pages, which contain heap or stack data, are stored in the swap area. Pages that contain a program image or file data are not stored in the swap area because their original pages already exist in secondary storage. Figure 3 shows the demand paging and swapping mechanisms for mobile embedded systems in detail.
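The eviction rule above can be sketched as follows (a hypothetical structure; real kernels record this flag in the page descriptor): only anonymous pages are written to the swap area, while file-backed pages are simply dropped because their originals remain in storage.

```python
def evict_page(page, swap_area):
    """Evict one page from main memory (illustrative).
    `page` is a dict: {'id': ..., 'anonymous': bool, 'data': bytes}."""
    if page["anonymous"]:
        # heap/stack data has no backing file, so it must go to swap
        swap_area[page["id"]] = page["data"]
    # file-backed pages are just discarded: the original page can be
    # re-read from the file system on the next page fault
```

Under a compressed file system, that re-read is exactly the expensive copy-and-decompress path, which motivates the strategy proposed in this paper.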
2.2 Characteristics of Compressed File Systems
Mobile embedded systems use compressed file systems such as CramFS and SquashFS for their obvious benefit of saving space [3, 10-11]. However, compressed file systems carry substantial overheads: a decompression overhead and an extra buffer overhead. In general file systems such as Ext2 and Ext3 [2], when an application program requests a data page, the page is simply copied from secondary storage to main memory and then accessed by the application, as shown in Fig. 4(a). In contrast, in compressed file systems the contents of files are stored in compressed form to save storage space, so a data page must be decompressed before an application program can access it. This is the decompression overhead of compressed file systems, which is computationally expensive. Fig. 4(b) illustrates the decompression overhead in detail.
The second overhead of compressed file systems is the extra buffer space overhead. A compressed file system needs extra space to extract a decompressed page from compressed pages. When an application program requests a data page, the system could copy just one compressed page to main memory and try to decompress it. Unfortunately, the compressed data cannot be decompressed when it lies on the boundary of two pages; in that case, multiple pages must be copied from storage into main memory. For example, CramFS copies four pages into main memory. However, these pages are not located in a contiguous memory area, so before a data page can be extracted, the compressed pages must first be copied into a contiguous memory area, which is an intermediate buffer in the compressed file system. After copying the compressed pages into the intermediate buffer, the data page can be extracted from them. This mechanism needs more extra space and more copy operations than general file systems. Fig. 5 shows the extra space overhead of compressed file systems.
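The need for the intermediate buffer can be illustrated as follows (zlib as a stand-in compressor; the names are ours): the compressed bytes of one data page span several non-contiguous memory pages, so they are first gathered into one contiguous buffer and only then decompressed.

```python
import zlib

def extract_page(memory_pages, start, length):
    """Extract one data page whose compressed bytes span several
    non-contiguous memory pages: an illustrative sketch of the
    intermediate-buffer mechanism described above."""
    buffer = b"".join(memory_pages)   # gather into one contiguous intermediate buffer
    return zlib.decompress(buffer[start:start + length])
```

Splitting one compressed block across two memory pages and extracting it reproduces the original data, but only at the cost of the extra gathering copy that general file systems avoid.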