File Systems
Summary
A file is a named collection of related information that normally resides on a secondary storage device such as a disk or a tape. A file is named for the convenience of its human users and is referred to by its name. A directory is a node containing information about files. Files are either unstructured, record structured, or tree structured. File types may be regular files, directories, character special files, block special files, ASCII files, or binary files. Files can be accessed either sequentially or directly.
Things to Remember
- A file is a named collection of related information that normally resides on a secondary storage device such as a disk or a tape.
- A file is named for the convenience of its human users and is referred to by its name.
- A directory is a node containing information about files.
- Files are either unstructured, record structured or tree structured.
- File types may be regular files, directories, character special files, block special files, ASCII files, or binary files.
- Files can be accessed either sequentially or directly.
- A file system may be allocated by contiguous allocation or linked allocation.
- The simplest allocation scheme is to store each file as a contiguous run of disk blocks.
- To create a new file in linked allocation method, we simply create a new entry in the directory; with linked allocation, each directory entry has a pointer to the first disk block of the file.
- Directories may be implemented either in a linear list or hash table.
- The hash table takes a value computed from the file name and returns a pointer to the file name in the linear list, thus decreasing search time.
- If a file system is irrecoverably lost due to hardware or software failure, restoring all the information will be difficult, time-consuming and, in many cases, impossible. The best way to ensure reliability is to make duplicate copies of the files.
- Backups to tape are generally made to handle one of two potential problems: Recover from disaster and recover from stupidity.
- Two strategies can be used for dumping a disk to a tape: Physical dump and logical dump.
- The most common technique used to reduce disk accesses is the block cache or buffer cache.
- If the block is not in the cache, it is first read into the cache, and then copied to wherever it is needed.
File Systems
Files
A file is a named collection of related information that normally resides on a secondary storage device such as a disk or a tape. Files are logical units of information created by processes. The part of the OS dealing with files (how they are structured, named, accessed, used, protected, implemented and managed) is known as a file system. Commonly, files represent programs and data. The data files may be numeric, alphanumeric or binary.
File Naming
When a process creates a file, it gives the file a name, which continues to exist and can be used by other processes to access the file even after the creating process terminates. A file is named for the convenience of its human users and is referred to by its name. A file name is a string of characters, which may include digits or special characters. Some systems, such as UNIX, differentiate between lowercase and uppercase characters. Recent systems support file names of up to 255 characters. Many operating systems support two-part file names separated by a period; the part following the period is known as an extension and indicates the file type.
File Structure
Unstructured
- It consists of an unstructured sequence of bytes or words.
- OS does not know what is in the file and any meaning must be imposed by the user level program.
- It provides maximum flexibility as the user can put anything they want and name them any way that is convenient.
- It is used by both UNIX and Windows.
Record Structured
- In this structure, a file is a sequence of fixed-length records, each with some internal structure.
- Each read operation returns one record, and each write operation overwrites one record.
- This structure is used by many old mainframe systems.
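As a rough sketch (not from any particular system), record-structured access can be imitated on top of a byte-stream file by reading fixed-length records; the record layout and the file name records.dat below are made up for illustration.

```c
#include <stdio.h>

/* Hypothetical fixed-length record layout; 64-byte records are an arbitrary choice. */
struct record {
    char key[16];
    char payload[48];
};

int main(void) {
    FILE *fp = fopen("records.dat", "rb");   /* assumed example file */
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }
    struct record r;
    /* Each fread returns exactly one record, mirroring record-structured access. */
    while (fread(&r, sizeof r, 1, fp) == 1) {
        printf("key=%.16s\n", r.key);
    }
    fclose(fp);
    return 0;
}
```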
Tree Structured
- In a tree structure, the file consists of records not necessarily of the same length.
- Each record contains a key field in a fixed position; the tree is sorted on the key field, allowing rapid searching.
- It is used in large mainframe systems for commercial data processing.
File Types
- Regular files: They contain user information and are generally ASCII or binary.
- Directories: They are the system files for maintaining the structure of the file system.
- Character Special Files: They are related to I/O and are used to model serial I/O devices such as terminals, printers, and networks.
- Block Special Files: They are used to model disks.
- ASCII Files: They consist of lines of text, each terminated by a carriage return, a line feed character, or both.
- Binary Files: They consist of a sequence of bytes only and have some internal structure known to the programs that use them.
Access Methods
Sequential Access
- It reads all bytes or records from the beginning.
- It cannot skip around but can rewind.
- It was convenient when the medium was a magnetic tape.
Direct Access
- It reads bytes or records in any order.
- It is essential for database systems.
- It can be used for immediate access to a large amount of data.
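The difference between the two access methods can be seen with the standard POSIX calls read() and lseek(); the file name and offset below are arbitrary.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[512];
    int fd = open("data.bin", O_RDONLY);      /* "data.bin" is a placeholder name */
    if (fd < 0) { perror("open"); return 1; }

    /* Sequential access: each read continues from the current file position. */
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        /* process n bytes ... */
    }

    /* Direct (random) access: jump straight to byte offset 4096 and read from there. */
    if (lseek(fd, 4096, SEEK_SET) == (off_t)-1) { perror("lseek"); close(fd); return 1; }
    n = read(fd, buf, sizeof buf);
    printf("read %zd bytes at offset 4096\n", n);

    close(fd);
    return 0;
}
```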
File Attributes
In addition to the name and data, all other information associated with a file is called a file attribute; some also call this metadata. Attributes vary from system to system, but they commonly include the file's size, owner, protection bits, and the times of creation, last access, and last modification.
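On POSIX systems, many of these attributes can be inspected with the stat() call; the sketch below prints a few of them for a hypothetical file notes.txt.

```c
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

int main(void) {
    struct stat st;
    if (stat("notes.txt", &st) != 0) {   /* "notes.txt" is a placeholder path */
        perror("stat");
        return 1;
    }
    printf("size:        %lld bytes\n", (long long)st.st_size);
    printf("owner uid:   %ld\n", (long)st.st_uid);
    printf("protection:  %o\n", (unsigned)(st.st_mode & 0777));
    printf("regular?     %s\n", S_ISREG(st.st_mode) ? "yes" : "no");
    printf("modified:    %s", ctime(&st.st_mtime));   /* ctime appends a newline */
    return 0;
}
```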
File Operations
Some of the most common system calls related to files are listed below.
- Create: If disk space is available, it creates an empty file.
- Delete: It deletes files to free up the disk space.
- Open: It opens a file.
- Close: When all access is finished, the file should be closed to free up the internal table space.
- Read: It reads data from a file.
- Write: It writes data to a file.
- Append: It adds data to the end of a file.
- Seek: It repositions the file pointer to a specific place in the file.
- Get: It returns file attributes for processing.
- Set: It is used to set the user-settable attributes when files are changed.
- Rename: It is used to rename a file.
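A minimal POSIX sketch of this life cycle is shown below; the file names and permission bits are arbitrary, and error handling is kept to a minimum.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* Create: make an empty file (name and mode are arbitrary). */
    int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("create"); return 1; }

    /* Write: put data into the file. */
    const char *msg = "hello, file system\n";
    if (write(fd, msg, strlen(msg)) < 0) perror("write");
    close(fd);                                   /* Close: release the table entry */

    /* Append: O_APPEND positions every write at the end of the file. */
    fd = open("demo.txt", O_WRONLY | O_APPEND);
    if (fd >= 0) {
        const char *line = "appended line\n";
        if (write(fd, line, strlen(line)) < 0) perror("append");
        close(fd);
    }

    /* Rename and Delete map onto rename() and unlink() in POSIX. */
    rename("demo.txt", "demo-renamed.txt");
    unlink("demo-renamed.txt");
    return 0;
}
```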
Directories
A directory is a node containing information about files. Directories have different structures.
Directory Structures
Single Level Directory
- All files are stored in the same directory.
- It is easy to support and understand but difficult to manage a lot of files and to manage different users.
- Different users may accidentally use the same name for their files, and the later file will overwrite the earlier one.
- Its advantage is simplicity and ability to locate files quickly since there is only one place to look.
Two-Level Directory
- Different users have their own separate directory.
- It is used on a multi-user computer and a simple network computer.
- It has a problem when users want to cooperate on some task and access one another's file.
- It also causes a problem when a single user has a large number of files.
Hierarchical Level Directory
- This structure is the generalization of the two-level structure to a tree of arbitrary height.
- It allows users to create their own subdirectories and to organize their files.
- It also allows directories to be shared among different users.
- This structure is mainly used nowadays to organize the file system.
Path Names
Absolute Path Name
- The path name starts from the root directory and leads down to the file.
- Path components are separated by ‘/’ in UNIX, ‘\’ in Windows and ‘>’ in MULTICS.
Relative Path Name
- It is used in conjunction with the concept of working directory.
- A user designates one directory as the current working directory; all path names that do not begin at the root directory are then taken relative to it.
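The sketch below shows the idea in C: an absolute path is used as given, while a relative path is interpreted against the working directory returned by getcwd(). The helper name resolve() is invented for illustration and is not a real system interface.

```c
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

/* Illustrative helper: absolute paths are used as-is, relative paths are
 * interpreted against the current working directory. */
static void resolve(const char *path, char *out, size_t outlen) {
    if (path[0] == '/') {                      /* absolute: starts at the root */
        snprintf(out, outlen, "%s", path);
    } else {                                   /* relative: prepend the working directory */
        char cwd[PATH_MAX];                    /* PATH_MAX is assumed to be defined */
        if (getcwd(cwd, sizeof cwd) == NULL) { perror("getcwd"); out[0] = '\0'; return; }
        snprintf(out, outlen, "%s/%s", cwd, path);
    }
}

int main(void) {
    char full[PATH_MAX * 2];
    resolve("/etc/passwd", full, sizeof full);  /* already absolute */
    printf("%s\n", full);
    resolve("notes/os.txt", full, sizeof full); /* relative to the working directory */
    printf("%s\n", full);
    return 0;
}
```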
Directory Operations
- Create: A directory is created.
- Delete: A directory is deleted.
- OpenDir: Directories can be read.
- CloseDir: It closes the directory to free up some internal table space.
- Rename: Rename a directory.
- Link: It allows a file to appear in more than one directory.
- Unlink: A directory entry is removed.
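Most of these operations map directly onto POSIX calls, as the short sketch below suggests; the directory names used are arbitrary.

```c
#include <dirent.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    mkdir("demo_dir", 0755);                 /* Create a directory */

    DIR *d = opendir(".");                   /* OpenDir: open the current directory */
    if (d != NULL) {
        struct dirent *e;
        while ((e = readdir(d)) != NULL)     /* read the entries one by one */
            printf("%s\n", e->d_name);
        closedir(d);                         /* CloseDir */
    }

    rename("demo_dir", "demo_dir2");         /* Rename */
    rmdir("demo_dir2");                      /* Delete (the directory must be empty) */

    /* Link/Unlink work on files: link() adds a second name, unlink() removes one.
     * link("a.txt", "b.txt");  unlink("b.txt");   (hypothetical file names) */
    return 0;
}
```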
File system implementation
Contiguous Allocation Method
- The simplest allocation scheme is to store each file as a contiguous run of disk blocks.
- Each file occupies a set of contiguous blocks on the disk.
- Disk addresses define a linear ordering on the disk.
- A file is defined by the disk address of its first block and its length in blocks.
- On a disk with 1 KB blocks, a 50 KB file is allocated in 50 consecutive blocks; with 2 KB blocks, it would occupy 25 consecutive blocks.
- Both sequential and direct access are supported by contiguous allocation.
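Because the blocks are consecutive, translating a byte offset within the file into a disk block is plain arithmetic, as this small sketch (with a made-up starting block) illustrates.

```c
#include <stdio.h>

#define BLOCK_SIZE 1024   /* 1 KB blocks, as in the example above */

/* With contiguous allocation a file is described by its starting block and
 * length; the block holding any byte offset is found by pure arithmetic. */
static long block_for_offset(long start_block, long byte_offset) {
    return start_block + byte_offset / BLOCK_SIZE;
}

int main(void) {
    /* Hypothetical file: 50 KB starting at disk block 120, so it occupies blocks 120..169. */
    long start = 120;
    printf("offset 0      -> block %ld\n", block_for_offset(start, 0));
    printf("offset 4500   -> block %ld\n", block_for_offset(start, 4500));
    printf("offset 51199  -> block %ld\n", block_for_offset(start, 51199));
    return 0;
}
```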
Linked Allocation Method
- Each file is a linked list of disk blocks; the disk block may be scattered anywhere on the disk (Chapter 11, n.d.).
- Each block contains a pointer to the next block of the same file.
- To create a new file, we simply create a new entry in the directory; with linked allocation, each directory entry has a pointer to the first disk block of the file.
- Any free disk block can be used in this allocation, unlike in contiguous allocation.
- No space is lost to external fragmentation; only the last block of each file suffers internal fragmentation.
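A toy in-memory model of the idea is sketched below; on a real disk the "pointer" would be a block number stored inside each block, but the effect is the same: reaching block n means following n pointers, which is why direct access is slow under linked allocation.

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy model of linked allocation: each block points to the next block of the file. */
struct block {
    struct block *next;
    char data[508];          /* the rest of the block holds file data */
};

/* Reaching block n requires following n pointers from the first block. */
static struct block *nth_block(struct block *first, int n) {
    while (n-- > 0 && first != NULL)
        first = first->next;
    return first;
}

int main(void) {
    /* Build a three-block "file" scattered on the heap. */
    struct block *b2 = calloc(1, sizeof *b2);
    struct block *b1 = calloc(1, sizeof *b1); b1->next = b2;
    struct block *b0 = calloc(1, sizeof *b0); b0->next = b1;

    printf("block 2 found: %s\n", nth_block(b0, 2) == b2 ? "yes" : "no");
    free(b0); free(b1); free(b2);
    return 0;
}
```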
Directory Implementation
Linear List
- It uses the linear list of the file names with a pointer to the data blocks.
- It requires a linear search to find a particular entry.
- It is simple to implement but is time-consuming.
- Searching a file is slow in a linear list.
Hash table
- It consists of a linear list with a hash table.
- The hash table takes a value computed from the file name and returns a pointer to the file name in the linear list, thus decreasing search time.
- If that hash value is already in use (a collision), the colliding entries are chained in a linked list.
- Its drawbacks are its fixed size and the dependence of the hash function on that size.
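A minimal sketch of a hashed directory with chaining is given below; the table size, hash function, and entry layout are all illustrative choices, not taken from any real file system.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TABLE_SIZE 64        /* fixed size: the drawback noted above */

/* One directory entry: file name plus the file's first block (illustrative). */
struct dir_entry {
    char name[32];
    long first_block;
    struct dir_entry *next;  /* chain for names that hash to the same slot */
};

static struct dir_entry *table[TABLE_SIZE];

/* A simple string hash over the file name; any reasonable hash would do. */
static unsigned hash(const char *name) {
    unsigned h = 5381;
    while (*name)
        h = h * 33 + (unsigned char)*name++;
    return h % TABLE_SIZE;
}

static void dir_add(const char *name, long first_block) {
    struct dir_entry *e = malloc(sizeof *e);
    snprintf(e->name, sizeof e->name, "%s", name);
    e->first_block = first_block;
    unsigned slot = hash(name);
    e->next = table[slot];   /* collision: chain at the head of the slot's list */
    table[slot] = e;
}

static struct dir_entry *dir_lookup(const char *name) {
    for (struct dir_entry *e = table[hash(name)]; e != NULL; e = e->next)
        if (strcmp(e->name, name) == 0)
            return e;        /* found without scanning the whole directory */
    return NULL;
}

int main(void) {
    dir_add("report.txt", 42);
    dir_add("notes.md", 97);
    struct dir_entry *e = dir_lookup("notes.md");
    if (e != NULL)
        printf("%s starts at block %ld\n", e->name, e->first_block);
    return 0;
}
```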
File System Reliability
There are some problems in maintaining a file system. There is a chance of the file system being corrupted by system failures or by errors in the system's software. If a file system is irrecoverably lost due to hardware or software failure, restoring all the information will be difficult, time-consuming and, in many cases, impossible. The best way to ensure reliability is to make duplicate copies of the files.
Backups
Most people do not think making backups of their files is worth the time and effort—until one fine day their disk abruptly dies, at which time most of them undergo a deathbed conversion. Companies, however, (usually) well understand the value of their data and generally do a backup at least once a day, usually to tape (Tanenbaum, 2013).
Backups to tape are generally made to handle one of two potential problems:
- Recover from disaster: Due to a disk crash, fire, flood, or other natural calamities.
- Recover from stupidity: accidental removal of files that are required later (they may be held temporarily in a recycle bin).
Periodic Dump
All files in the system are copied to another device, usually a magnetic tape. This is done periodically. However, it is wasteful to back up files that have not been changed since the last backup.
Incremental Dump
It is wasteful to back up files that have not changed since the last backup, which leads to the idea of incremental dumps. The simplest form of incremental dumping is to make a complete dump (backup) periodically, say weekly or monthly, and to make a daily dump of only those files that have been modified since the last full dump. While this scheme minimizes dumping time, it makes recovery more complicated because first the most recent full dump has to be restored, followed by all the incremental dumps in reverse order (Tanenbaum, 2013).
Compression
With immense amounts of data, compressing the backups seems attractive, but with many compression algorithms a single bad spot on the backup tape can make the entire dump unreadable.
Two strategies can be used for dumping a disk to a tape. They are:
Physical Dump
A physical dump starts at block 0 of the disk, writes all the disk blocks onto the output tape in order, and stops when it has copied the last one. Such a program is so simple that it can probably be made 100% bug-free, something that can probably not be said about any other useful program (Tanenbaum, 2013). It is simple and fast, but it has some drawbacks as well: there is no value in backing up unused disk blocks, and dumping bad blocks can cause endless disk read errors during the dumping process.
Logical Dump
A logical dump starts at one or more specified directories and recursively dumps all files and directories found there that have changed since some given base date. Thus in a logical dump, the dump tape gets a series of carefully identified directories and files, which makes it easy to restore a specific file or directory upon request.
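The selection step of such a dump is essentially a recursive tree walk that compares each file's modification time against a base date. The sketch below uses the POSIX opendir()/readdir()/stat() calls and only prints the files it would dump; writing them to tape is omitted.

```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <time.h>

/* Walk a directory tree and report every regular file whose modification time
 * is newer than base_time: the selection step of a logical dump. */
static void dump_newer(const char *dir, time_t base_time) {
    DIR *d = opendir(dir);
    if (d == NULL)
        return;
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
            continue;
        char path[4096];
        snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
        struct stat st;
        if (stat(path, &st) != 0)
            continue;
        if (S_ISDIR(st.st_mode))
            dump_newer(path, base_time);          /* recurse into subdirectories */
        else if (S_ISREG(st.st_mode) && st.st_mtime > base_time)
            printf("would dump: %s\n", path);     /* changed since the base date */
    }
    closedir(d);
}

int main(void) {
    time_t one_day_ago = time(NULL) - 24 * 60 * 60;
    dump_newer(".", one_day_ago);   /* starting directory and base date are arbitrary */
    return 0;
}
```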
File System Performance
Access to disk is much slower than access to memory, so many systems have been designed with various optimizations to improve performance.
Caching
The most common technique used to reduce disk accesses is the block cache or buffer cache. Various algorithms can be used to manage the cache, but a common one is to check all read requests to see if the needed block is in the cache. If it is, the read request can be satisfied without a disk access. If the block is not in the cache, it is first read into the cache, and then copied to wherever it is needed. Subsequent requests for the same block can be satisfied from the cache (Tanenbaum, 2013).
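A very small read-through block cache might look like the sketch below; the cache size, placement policy, and the fake disk_read() routine are all simplifications for illustration.

```c
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 1024
#define CACHE_SLOTS 8        /* tiny cache; real systems hold many more blocks */

struct cache_entry {
    long block_no;           /* which disk block is cached here (-1 = empty) */
    char data[BLOCK_SIZE];
};

static struct cache_entry cache[CACHE_SLOTS];

/* Stand-in for the real device driver; here it just fabricates data. */
static void disk_read(long block_no, char *buf) {
    memset(buf, (int)(block_no & 0xff), BLOCK_SIZE);
    printf("disk access for block %ld\n", block_no);
}

/* Read through the cache: hits avoid the disk entirely, misses fill a slot. */
static const char *cached_read(long block_no) {
    int slot = (int)(block_no % CACHE_SLOTS);      /* trivial placement policy */
    if (cache[slot].block_no != block_no) {        /* miss: go to the disk */
        disk_read(block_no, cache[slot].data);
        cache[slot].block_no = block_no;
    }
    return cache[slot].data;                       /* hit, or freshly filled */
}

int main(void) {
    for (int i = 0; i < CACHE_SLOTS; i++)
        cache[i].block_no = -1;
    cached_read(17);   /* miss: one disk access */
    cached_read(17);   /* hit: served from the cache */
    return 0;
}
```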
Block Read Ahead
A second technique for improving perceived file system performance is to try to get blocks into the cache before they are needed to increase the hit rate. When the file system is asked to produce block k in a file, it does that, but when it is finished, it makes a sneaky check in the cache to see if block k +1 is already there. If it is not, it schedules a read for block k +1 in the hope that when it is needed, it will have already arrived in the cache (Tanenbaum, 2013).
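Building on the cache sketch above (and reusing its cached_read() helper, so this fragment is not standalone), read-ahead can be expressed as: after serving block k, prefetch block k + 1 if it is not already cached.

```c
/* Read-ahead on top of the cache sketch above (reuses cache, CACHE_SLOTS and
 * cached_read). A real OS would schedule the prefetch asynchronously. */
const char *read_with_readahead(long k) {
    const char *data = cached_read(k);        /* serve the requested block */
    int next_slot = (int)((k + 1) % CACHE_SLOTS);
    if (cache[next_slot].block_no != k + 1)   /* block k+1 not cached yet */
        cached_read(k + 1);                   /* prefetch it for the next request */
    return data;
}
```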
Reducing Disk Arm Motion
The disk arm motion can be reduced by putting blocks that are likely to be accessed in sequence close to each other. When blocks lie in the same cylinder, the disk arm does not require moving and only the R/W head moves. Hence, data access is faster when the disk arm motion is reduced.
References
Chapter 11. (n.d.). Retrieved from http://groups.engin.umd.umich.edu/CIS/course.des/cis450/mcfadyen/chp11.htm
Tanenbaum, A. S. (2013). Modern Operating Systems. Delhi: PHI Learning Private Limited.