AFS PROTOTYPE: VICE SERVERS
In the prototype, each Vice server dedicates a separate server process to every client workstation. Servers store file data in their local file systems and also hold the file status information and the directory hierarchy.
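To make the process-per-client structure concrete, here is a minimal sketch of such a server loop in C. It is an illustration under assumed details (the port number, the echo-style handle_client placeholder), not the actual Vice server code.

```c
/* Sketch: a process-per-client server loop in the style of the AFS
 * prototype's Vice servers. The port and the request handling are
 * illustrative placeholders, not the real Vice implementation. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

static void handle_client(int fd) {
    /* Placeholder: the dedicated process would serve all requests
     * (fetch, store, status) for this one workstation. */
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(fd, buf, (size_t)n);          /* echo stands in for request handling */
}

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(7000);            /* illustrative port */
    if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0 || listen(srv, 16) < 0) {
        perror("bind/listen");
        return 1;
    }
    for (;;) {
        int client = accept(srv, NULL, NULL);
        if (client < 0) continue;
        if (fork() == 0) {                  /* one dedicated process per client */
            close(srv);
            handle_client(client);
            _exit(0);
        }
        close(client);
        while (waitpid(-1, NULL, WNOHANG) > 0)  /* reap finished children */
            ;
    }
}
```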
VENUS CLIENTS
The Venus client accesses Vice and caches whole files: it fetches a file on open and stores it back on close. Files are named by their full path names, and name resolution is performed by the servers.
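A minimal sketch of the fetch-on-open / store-on-close idea, assuming hypothetical helpers fetch_whole_file and store_whole_file that stand in for the Vice transfer protocol (here they simply copy between two local paths):

```c
/* A minimal sketch (not the real Venus code) of whole-file caching:
 * fetch the whole file on open, operate on the local cache copy, and
 * store it back on close. fetch_whole_file/store_whole_file are
 * hypothetical stand-ins for the client-server transfer. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int copy_file(const char *src, const char *dst) {
    FILE *in = fopen(src, "rb"), *out = fopen(dst, "wb");
    char buf[8192];
    size_t n;
    if (!in || !out) { if (in) fclose(in); if (out) fclose(out); return -1; }
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out);
    fclose(in);
    fclose(out);
    return 0;
}

/* Stand-ins for the Vice transfer protocol. */
static int fetch_whole_file(const char *remote, const char *local) { return copy_file(remote, local); }
static int store_whole_file(const char *local, const char *remote) { return copy_file(local, remote); }

int afs_open(const char *remote, const char *cache_copy, int flags) {
    if (fetch_whole_file(remote, cache_copy) != 0)   /* whole file comes down on open */
        return -1;
    return open(cache_copy, flags);                  /* reads and writes hit the local copy */
}

int afs_close(int fd, const char *cache_copy, const char *remote, int dirty) {
    close(fd);
    return dirty ? store_whole_file(cache_copy, remote) : 0;  /* write back on close */
}
```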
Qualitative Observations
Close/open consistency works well enough and is compatible with 4.2BSD file system semantics. Remote file access is observed to be slower than local access, but still much better than on a timesharing system; the difference depends on the application workload (CPU- vs. I/O-bound) and is particularly bad for programs that issue the stat system call, since each stat must be validated by the server even when the file is in the local cache. Server operation and administration are hard: the process-per-client structure leads to excessive context switching and paging, the location database embedded in the file tree makes moving files difficult, and the lack of quotas calls for a load-balancing solution. The evaluation considers everyday usage of the deployed system but also draws on the Andrew benchmark, which simulates program development in five phases:
MakeDir: create target tree.
Copy: populate target tree with files.
ScanDir: examine the status of every file in the target tree (but do not read them).
ReadAll: read files in target tree.
Make: compile and link the files in the target tree.
One run of these phases corresponds to a so-called load unit, which approximates the load generated by five real-world users; a minimal driver for the phases is sketched below.
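The paths, the use of find/cat/make as stand-ins for each phase, and the coarse timing in this sketch are assumptions for illustration; the real Andrew benchmark scripts differ in detail.

```c
/* Sketch of the five benchmark phases as a simple driver.
 * SRC/DST and the per-phase commands are illustrative, not the
 * actual Andrew benchmark. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void phase(const char *name, const char *cmd) {
    time_t t0 = time(NULL);
    int rc = system(cmd);
    printf("%-8s rc=%d  %lds\n", name, rc, (long)(time(NULL) - t0));
}

int main(void) {
    const char *src = "./andrew-src", *dst = "./andrew-dst";   /* assumed paths */
    char cmd[256];

    snprintf(cmd, sizeof cmd, "mkdir -p %s", dst);
    phase("MakeDir", cmd);                                      /* create target tree */
    snprintf(cmd, sizeof cmd, "cp -r %s/. %s", src, dst);
    phase("Copy", cmd);                                         /* populate it with files */
    snprintf(cmd, sizeof cmd, "find %s -exec stat {} + > /dev/null", dst);
    phase("ScanDir", cmd);                                      /* examine status, no reads */
    snprintf(cmd, sizeof cmd, "find %s -type f -exec cat {} + > /dev/null", dst);
    phase("ReadAll", cmd);                                      /* read every file */
    snprintf(cmd, sizeof cmd, "make -C %s", dst);
    phase("Make", cmd);                                         /* compile and link */
    return 0;
}
```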
Quantitative Conclusions
Clients have high cache hit ratios: 81% for the file cache and 82% for the status cache. Servers see mostly cache validations and status requests, which account for 90% of all operations; only 6% of operations involve file transfers, and the fetch-to-store ratio is 2:1. Servers show high CPU utilization, up to 75% over a 5-minute period, caused by context switches and pathname traversal. This calls for better load balancing, i.e., moving users between servers.
Improving Performance
Cache management: cache directories and symbolic links in addition to files, write directory modifications through to the server immediately, and invalidate cache entries through callbacks from the server. Name resolution: identify files by FID instead of pathname, where a FID consists of a 32-bit volume number, a 32-bit vnode number, and a 32-bit "uniquifier"; the volume is located through a replicated volume location database. Communication and server process structure: use user-level threads with an integrated RPC package.
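A FID of this shape maps directly onto a small C struct; the field names below are illustrative, not the exact AFS declarations.

```c
/* Sketch of a FID as described above: files are identified by a fixed-size
 * triple instead of a pathname, so servers never have to parse paths.
 * Field names are illustrative, not the exact AFS declarations. */
#include <stdint.h>

struct fid {
    uint32_t volume;      /* which volume the file lives in; the replicated
                             volume location database maps this to a server */
    uint32_t vnode;       /* index of the file within that volume */
    uint32_t uniquifier;  /* disambiguates reuse of the same vnode slot */
};
```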
More on Performance
Low-level storage representation: access files directly through their inodes; this requires the addition of new system calls.
Improving Manageability
Problems with the prototype: native disk partitions are not appropriate as organizational units, the embedded location information makes moving subtrees difficult, quotas cannot be enforced, server replication has ill-defined consistency semantics, and backups may be inconsistent and are hard to restore. One abstraction to rule them all: the volume. A volume is a logical collection of files organized as a partial subtree; it may grow or shrink but has an upper size limit (its quota). It resides within a partition and is typically allocated per user or per project.
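A rough sketch of what a volume descriptor could carry, based only on the description above; the field names and types are assumptions rather than the actual AFS structures.

```c
/* Sketch of a volume descriptor as characterized above. Field names and
 * types are illustrative assumptions, not the actual AFS structures. */
#include <stdint.h>

struct volume {
    uint32_t id;           /* volume number, as used in a FID */
    char     name[64];     /* e.g. a per-user or per-project volume */
    uint32_t server;       /* current server, from the location database */
    uint32_t partition;    /* disk partition the volume resides in */
    uint64_t quota_kb;     /* upper size limit (quota) */
    uint64_t used_kb;      /* current size; may grow or shrink up to quota */
    uint32_t root_vnode;   /* root of the partial subtree this volume holds */
};
```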
More on Manageability
Cloning volumes is the central mechanism. A clone is a consistent, copy-on-write snapshot. Volumes are moved by repeatedly cloning the source volume; later clones are incremental. Read-only replication is implemented by cloning volumes, and is also used for distributing software. Backups are implemented by cloning volumes as well: a read-only subtree in the user's home directory provides yesterday's snapshot.
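The copy-on-write idea behind cloning can be illustrated with a toy model: the clone duplicates only the table of vnode pointers and shares the underlying data until the first write. This is a sketch of the mechanism, not the AFS volume code.

```c
/* Toy model of volume cloning: a clone copies only the table of vnode
 * pointers and shares the underlying file data, which is copied lazily
 * on the first write (copy-on-write). Not the actual AFS code. */
#include <stdlib.h>
#include <string.h>

struct vnode_data { int refcount; char bytes[4096]; };

struct volume {
    size_t nvnodes;
    struct vnode_data **vnodes;   /* shared with clones until written */
};

/* Cloning is cheap: duplicate the pointer table and bump refcounts. */
struct volume *clone_volume(const struct volume *src) {
    struct volume *c = malloc(sizeof *c);
    c->nvnodes = src->nvnodes;
    c->vnodes = malloc(src->nvnodes * sizeof *c->vnodes);
    for (size_t i = 0; i < src->nvnodes; i++) {
        c->vnodes[i] = src->vnodes[i];
        if (c->vnodes[i]) c->vnodes[i]->refcount++;
    }
    return c;
}

/* Before modifying a vnode, break the sharing if a clone still sees it. */
void write_vnode(struct volume *v, size_t i, const char *data, size_t len) {
    struct vnode_data *d = v->vnodes[i];
    if (d->refcount > 1) {                     /* shared with a clone: copy first */
        struct vnode_data *copy = malloc(sizeof *copy);
        memcpy(copy, d, sizeof *copy);
        copy->refcount = 1;
        d->refcount--;
        v->vnodes[i] = d = copy;
    }
    memcpy(d->bytes, data, len < sizeof d->bytes ? len : sizeof d->bytes);
}
```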
Editorial: this is the single most useful feature of AFS…
AFS in Action
Open the file with pathname P on a workstation (client). The kernel recognizes P as an AFS file and hands the request back to Venus (in userland). One of Venus' user-level threads walks each component D of P:
If D is in the cache and has a callback, use it.
If D is in the cache but has no callback, validate it with the server; this establishes the callback as a side effect.
If D is not in the cache, fetch it from the server.
The same thread also caches the file F itself, handled similarly to the directory components. If the file is modified, it is written back to Vice on close.
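The per-component decision Venus makes can be summarized in a short sketch; the cache table and the server calls below are trivial stand-ins invented for illustration, not the real Venus internals.

```c
/* Sketch of Venus' per-component lookup logic described above. The cache
 * table and the server calls are trivial stand-ins, not real Venus APIs. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct cache_entry {
    char name[64];
    bool cached;
    bool has_callback;   /* server has promised to notify us if this changes */
};

static struct cache_entry table[] = {
    { "usr",   true,  true  },
    { "home",  true,  false },
    { "alice", false, false },
};

static struct cache_entry *cache_lookup(const char *d) {
    for (int i = 0; i < 3; i++)
        if (table[i].cached && strcmp(table[i].name, d) == 0)
            return &table[i];
    return NULL;
}

/* Stand-ins for talking to Vice. */
static bool validate_with_server(struct cache_entry *e) { (void)e; return true; }
static struct cache_entry *fetch_from_server(const char *d) {
    printf("  fetch %s from server\n", d);
    return NULL;   /* a real client would insert the new entry into the cache */
}

/* Resolve one component D of pathname P, as described above. */
static struct cache_entry *resolve_component(const char *d) {
    struct cache_entry *e = cache_lookup(d);
    if (e && e->has_callback) {
        printf("  %s: in cache with callback, use it\n", d);
        return e;
    }
    if (e && validate_with_server(e)) {   /* cached but no callback */
        e->has_callback = true;           /* callback established as side effect */
        printf("  %s: validated with server\n", d);
        return e;
    }
    return fetch_from_server(d);          /* not cached (or stale): fetch it */
}

int main(void) {
    const char *components[] = { "usr", "home", "alice" };
    for (int i = 0; i < 3; i++)
        resolve_component(components[i]);
    return 0;
}
```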
Close/Open Consistency
Writes are immediately visible to all processes on the same client. Once the file is closed, its writes become visible anywhere, but not to currently open instances. All other operations become visible immediately and across the network. No explicit locking is performed. This is consistent with 4.2BSD semantics.
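As a small illustration of these semantics, the sketch below annotates what clients would observe around a close; the file path is made up, and the comments describe AFS behavior rather than anything plain POSIX calls on a local file would enforce by themselves.

```c
/* Illustration of close/open consistency. The path is invented; the
 * comments describe what AFS guarantees at each point. */
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    int fd = open("/afs/example/u/alice/notes.txt", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) return 1;

    write(fd, "draft\n", 6);
    /* At this point: every process on THIS client already sees "draft",
     * but a process on another workstation that opens the file still
     * gets the previous version. */

    close(fd);
    /* On close, Venus stores the whole file back to Vice: any open()
     * issued anywhere in the network from now on sees "draft".
     * Instances that were already open elsewhere keep their old copy. */
    return 0;
}
```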
Comparison with NFS
NFS runs on a network of trusted workstations. It caches inodes and individual file pages on clients and provides ad-hoc "consistency": files are flushed every 30 seconds, though modern implementations also offer close-to-open consistency. It represents a mature, well-tuned production system, but it also has some stability problems under high load: an unreliable datagram transport and recovery at user level.
NFS v3 (1996) adds support for TCP
Overall Running Time
Summary
Our experience with volumes as a data structuring mechanism has been entirely positive. Volumes provide a level of Operational Transparency that is not supported by any other file system we are aware of. From an operational standpoint, the system is a flat space of named volumes. The file system hierarchy is constructed out of volumes but is orthogonal to it. The ability to associate disk usage quotas with volumes and the ease with which volumes may be moved between servers have proved to be of considerable value in actual operation of the system. The backup mechanism is simple and efficient and seldom disrupts normal user activities. These observations lead us to conclude that the volume abstraction, or something similar to it, is indispensable in a large distributed file system.