17-06-2014, 02:16 PM
File (Text/Video) Compression And Decompression
INTRODUCTION
Since their introduction, social network sites (SNSs) such as MySpace, Facebook, Cyworld, and Bebo have attracted millions of users, many of whom have integrated these sites into their daily practices. As of this writing, there are hundreds of SNSs, with various technological affordances, supporting a wide range of interests and practices. While their key technological features are fairly consistent, the cultures that emerge around SNSs are varied. Most sites support the maintenance of pre-existing social networks, but others help strangers connect based on shared interests, political views, or activities. Some sites cater to diverse audiences, while others attract people based on common language or shared racial, sexual, religious, or nationality-based identities. Sites also vary in the extent to which they incorporate new information and communication tools, such as mobile connectivity, blogging, and photo/video-sharing.
Scholars from disparate fields have examined SNSs in order to understand the practices, implications, culture, and meaning of the sites, as well as users' engagement with them. This special theme section of the Journal of Computer-Mediated Communication brings together a unique collection of articles that analyze a wide spectrum of social network sites using various methodological techniques, theoretical traditions, and analytic approaches. By collecting these articles in this issue, our goal is to showcase some of the interdisciplinary scholarship around these sites.
The purpose of this introduction is to provide a conceptual, historical, and scholarly context for the articles in this collection. We begin by defining what constitutes a social network site and then present one perspective on the historical development of SNSs, drawing from personal interviews and public accounts of sites and their changes over time. Following this, we review recent scholarship on SNSs and attempt to contextualize and highlight key works. We conclude with a description of the articles included in this special section and suggestions for future research.
PURPOSE
A social networking service is an online service, platform, or site that focuses on facilitating the building of social networks or social relations among people who, for example, share interests, activities, backgrounds, or real-life connections. A social network service consists of a representation of each user (often a profile), his/her social links, and a variety of additional services. Most social network services are web-based and provide means for users to interact over the Internet, such as e-mail and instant messaging. Online community services are sometimes considered a social network service, though in a broader sense, social network service usually means an individual-centered service whereas online community services are group-centered. Social networking sites allow users to share ideas, activities, events, and interests within their individual networks.
Social networking sites are not only for communicating and interacting with other people globally; they are also an effective channel for business promotion. Many business-minded people now do business online and use these social networking sites to respond to customer queries. A social networking site is not just a place to socialize with your friends; it also represents a huge pool of information about day-to-day living.
TECHNOLOGIES USED:
• JAVA: Programming Language
JAVA
Java is a small, simple, safe, object-oriented, interpreted (or dynamically compiled), byte-coded, architecture-neutral, garbage-collected, multithreaded programming language with strongly typed exception handling, designed for writing distributed and dynamically extensible programs.
Java is an object-oriented programming language. Java is a high-level, third-generation language like C, FORTRAN, Smalltalk, Perl, and many others. You can use Java to write computer applications that crunch numbers, process words, play games, store data, or do any of the thousands of other things computer software can do.
Special programs called applets can be downloaded from the Internet and run safely within a web browser. Java supports such applications, and the following features make it one of the best programming languages:
• It is simple and object oriented.
• It helps to create user friendly interfaces.
• It is very dynamic.
TOOLS USED:
NETBEANS 7.0
Current versions
NetBeans IDE 5.0 introduced support for developing IDE modules and rich
client applications based on the NetBeans platform, a Java Swing GUI builder
(formerly known as "Project Matisse"), improved CVS support, WebLogic 9
and JBoss 4 support, and many editor enhancements. NetBeans 6 is available in
the official repositories of major Linux distributions.
NetBeans IDE 6.5, released in November 2008, extended the existing Java
EE features (including Java Persistence support, EJB 3, and JAX-WS).
Additionally, the NetBeans Enterprise Pack supports development of Java EE 5
enterprise applications, including SOA visual design tools, XML schema tools, web
services orchestration (for BPEL), and UML modeling. The NetBeans IDE Bundle
for C/C++ supports C/C++ and FORTRAN development.
NetBeans IDE 6.8 is the first IDE to provide complete support for Java EE 6 and
the GlassFish Enterprise Server v3. Developers hosting their open-source
projects on kenai.com additionally benefit from instant messaging and issue
tracking integration and navigation right in the IDE, support for web application
development with PHP 5.3 and the Symfony framework, and improved code
completion, layout, hints, and navigation in JavaFX projects.
NetBeans IDE 6.9, released in June 2010, added support for OSGi, Spring
Framework 3.0, Java EE dependency injection (JSR-299), Zend
Framework for PHP, and easier code navigation (such as "Is
Overridden/Implemented" annotations), formatting, hints, and refactoring across
several languages.
NetBeans IDE 7.0 was released in April 2011. On August 1, 2011, the NetBeans
Team released NetBeans IDE 7.0.1, which has full support for the official release
of the Java SE 7 platform.
MODULE DESCRIPTION
There are four modules in this project:
Compression
This module helps us to compress a file or folder. The compressed file will have an extension that was specified at development time. We can send the compressed file over the Internet, so that users who have this software can decompress it.
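The compression step described above can be sketched with Java's built-in DEFLATE support. This is a minimal illustration, not the project's actual code: the class name, method name, and the `.fz` extension are assumptions chosen for the example.

```java
import java.io.*;
import java.util.zip.*;

// Sketch of the Compression module using java.util.zip.
// The ".fz" extension stands in for whatever extension was chosen at development time.
public class FileCompressor {
    // Compress the input file into "<input>.fz" using the DEFLATE algorithm.
    public static File compress(File input) throws IOException {
        File output = new File(input.getPath() + ".fz");
        try (InputStream in = new BufferedInputStream(new FileInputStream(input));
             OutputStream out = new DeflaterOutputStream(
                     new BufferedOutputStream(new FileOutputStream(output)))) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
        return output;
    }
}
```

On repetitive input (ordinary text, for example) the output file is noticeably smaller than the original; on already-compressed data it may not be.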
Decompression
This is the reverse process of file compression. Here we can decompress the compressed file and get the original file.
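The reverse step can be sketched the same way with `InflaterInputStream`, which undoes DEFLATE output such as the compressor sketch above would produce. Again, the names and the `.fz` convention are illustrative assumptions.

```java
import java.io.*;
import java.util.zip.*;

// Sketch of the Decompression module: restores a DEFLATE-compressed file.
// The ".fz" extension is an assumption mirroring the compression sketch.
public class FileDecompressor {
    // Restore "<name>.fz" to "<name>" (or "<name>.out" if the extension differs).
    public static File decompress(File compressed) throws IOException {
        String path = compressed.getPath();
        File output = new File(path.endsWith(".fz")
                ? path.substring(0, path.length() - 3) : path + ".out");
        try (InputStream in = new InflaterInputStream(
                     new BufferedInputStream(new FileInputStream(compressed)));
             OutputStream out = new BufferedOutputStream(new FileOutputStream(output))) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
        return output;
    }
}
```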
View files in the compressed file
Here we can view the list of files inside our compressed file. We can view the files before decompressing and decide whether or not to decompress.
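Listing entries without extracting them requires a container format that records file names. As a hedged illustration only: if the container were the standard ZIP format (the project's custom format may differ), Java's `ZipFile` can enumerate entries directly.

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.*;

// Sketch of the "view files" module, assuming a standard ZIP container.
public class ArchiveLister {
    // Return the names of all entries in the archive without extracting them.
    public static List<String> list(File archive) throws IOException {
        List<String> names = new ArrayList<>();
        try (ZipFile zip = new ZipFile(archive)) {
            zip.stream().forEach(entry -> names.add(entry.getName()));
        }
        return names;
    }
}
```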
Set icon and extension
This is an additional feature of our project. We can set our own extension for the compressed file. Beyond that, we can specify the style of icon for the compressed file. Users will also be given an option to change the icon as per their preference.
Algorithm Description
To avoid a college assignment
The domain name of this website is from my uncle’s algorithm. In nerd circles, his algorithm is pretty well known. Often college computer science textbooks will refer to the algorithm as an example when teaching programming techniques. I wanted to keep the domain name in the family so I had to pay some domain squatter for the rights to it.
Back in the early 1950’s, one of my uncle’s professors challenged him to come up with an algorithm that would calculate the most efficient way to represent data, minimizing the amount of memory required to store that information. It is a simple question, but one without an obvious solution. In fact, my uncle took the challenge from his professor to get out of taking the final. He wasn’t told that no one had solved the problem yet.
I’ve written a simple program to demonstrate Huffman Coding in Java. Because I have this web site, several times a year I receive a frantic e-mail from a college student stating, basically, “I have a homework assignment to code the Huffman Algorithm and it is due next week. I am too lazy or clueless to do the work myself, so can you just send me the source code so I can pass it off as my own.” I don’t normally accommodate them, but perhaps this will help them do their own homework.
A little bit of background
Computers store information in zeros and ones: binary “off”s and “on”s. The standard way of storing characters on a computer is to give each character a sequence of 8 bits (or “binary digits”) which can be 0’s or 1’s. This allows for 256 possible characters (because 2 to the 8th power is 256). For example, the letter “A” is given the unique code of 01000001. Unicode allocates 16 bits per character and it handles even non-Roman alphabets. It is simply easier for computers to handle characters when they all are the same size. The more bits you allow per character the more characters you can support in your alphabet.
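The fixed-width representation described above is easy to see in Java itself. This small snippet (illustrative only) pads the binary form of a character's code point to 8 bits, reproducing the pattern for “A”:

```java
// 'A' has code point 65; padding Integer.toBinaryString to 8 digits
// reproduces the 8-bit pattern 01000001 mentioned in the text.
public class CharBits {
    static String toByteBits(char c) {
        return String.format("%8s", Integer.toBinaryString(c)).replace(' ', '0');
    }

    public static void main(String[] args) {
        System.out.println(toByteBits('A')); // 01000001
    }
}
```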
But when you make every character the same size, it can waste space. In written text, all characters are not created equal. The letter “e” is pretty common in English text, but rarely does one see a “Z.” But since it is possible to encounter both in text, each has to be assigned a unique sequence of bits. But if “e” was a 7-bit sequence and “Z” was 9 bits then, on average, a message would be slightly smaller than otherwise because there would be more short sequences than long sequences. You could compound the savings by adjusting the size of every character and by more than 1 bit.
Even before computers, Samuel Morse took this into account when assigning letters to his code. The very common letter “E” is the short sequence of “•” and the uncommon letter “Q” is the longer sequence of “— — • —.” He came up with Morse code by looking at the natural distribution of letters in the English alphabet and guessing from there. Morse code isn’t perfect because some common letters have longer codes than less common ones. For example, the letter “O,” which is a long “— — —,” is more common than the letter “I,” which is the shorter code “• •.” If these two assignments were swapped, then it would be slightly quicker, on average, to transmit Morse code. Huffman Coding is a methodical way of determining how to best assign zeros and ones. It was one of the first algorithms for the computer age. By the way, Morse code is not really a binary code because it puts pauses between letters and words. If we were to put some bits between each letter to represent pauses, it wouldn’t result in the shortest messages possible.
This adjusting of the codes is called compression and sometimes the computational effort in compressing data (for storage) and later uncompressing it (for use) is worth the trouble. The more space a text file takes up makes it slower to transmit from one computer to another. Other types of files, which have even more variability than the English language, compress even better than text. Uncompressed sound (.WAV) and image (.BMP) files are usually at least ten times as big as their compressed equivalents (.MP3 and .JPG respectively). Web pages would take ten times as long to download if we didn’t take advantage of data compression. Fax pages would take longer to transmit. You get the idea. All of these compressed formats take advantage of Huffman Coding.
Again, the trick is to choose a short sequence of bits for representing common items (letters, sounds, colors, whatever) and a longer sequence for the items that are encountered less often. When you average everything out, a message will require less space if you come up with a good encoding dictionary.
Mixing art and computer science
You cannot just start assigning letters to unique sequences of 0’s and 1’s because there is a possibility of ambiguity if you do not do it right. For example, the four most common letters of the English alphabet are “E,” “T,” “O,” and “A.” You cannot just assign 0 to “E,” 1 to “T,” 00 to “O,” 01 to “A,” because if you encounter “…01…” in a message, you could not tell if the original message contained “A” or the sequence “ET.” The code for a letter cannot be the same as the front part of a different letter. To avoid this ambiguity, we need a way of organizing the letters and their codes that prevents this. A good way of representing this information is something computer programmers call a binary tree.
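The “front part” condition above is the prefix-free property, and it is mechanical to check. This sketch (names are my own, not from the original program) tests a set of code words and shows that the naive assignment from the text fails while a prefix-free alternative passes:

```java
import java.util.Collection;
import java.util.List;

public class PrefixCheck {
    // A code is unambiguous only if no code word is the front part (prefix) of another.
    static boolean isPrefixFree(Collection<String> codes) {
        for (String a : codes)
            for (String b : codes)
                if (!a.equals(b) && b.startsWith(a)) return false;
        return true;
    }

    public static void main(String[] args) {
        // The naive assignment from the text: "01" could mean "A" or "ET".
        List<String> bad = List.of("0", "1", "00", "01");
        // A prefix-free alternative over the same four letters (E, T, O, A).
        List<String> good = List.of("0", "10", "110", "111");
        System.out.println(isPrefixFree(bad));   // false
        System.out.println(isPrefixFree(good));  // true
    }
}
```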
Alexander Calder was an American artist who built mobiles and really liked the colors red and black. One of his larger works hangs from the East Building atrium at the National Gallery, but he made several similar ones. The mobile hangs from a single point in the middle of a pole. It slowly sways as the air circulates in the room. On each end of the pole you’ll see either a weighted paddle or a connection to the middle of another pole. Similarly, those lower poles have things hanging off of them too. At the lowest levels, all the poles have weights on their ends.
Programmers would look at this mobile and think of a binary tree, a common structure for storing program data. This is because every mobile pole has exactly two ends. For the sake of this algorithm, one end of the pole is considered “0” while the other end is “1.” The weights at the ends of the poles have letters associated with them. If an inchworm were to travel from the top of the mobile to a letter, it would walk down multiple poles, sometimes encountering a “0” and sometimes a “1.” The sequence of binary digits down to the letter ends up being the encoding of that letter.
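The mobile picture translates directly into code: Huffman's algorithm repeatedly hangs the two lightest nodes from a new pole, and the inchworm's walk collects the bits. This is a sketch of the technique in Java, not the original program, and the class and field names are my own:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

public class Huffman {
    static class Node implements Comparable<Node> {
        final int freq;
        final Character symbol;  // null for internal nodes (the poles)
        final Node left, right;  // the "0" end and the "1" end of the pole
        Node(int freq, Character symbol, Node left, Node right) {
            this.freq = freq; this.symbol = symbol; this.left = left; this.right = right;
        }
        public int compareTo(Node o) { return Integer.compare(freq, o.freq); }
    }

    // Build the tree bottom-up: repeatedly merge the two lightest nodes.
    static Node buildTree(Map<Character, Integer> freqs) {
        PriorityQueue<Node> pq = new PriorityQueue<>();
        for (Map.Entry<Character, Integer> e : freqs.entrySet())
            pq.add(new Node(e.getValue(), e.getKey(), null, null));
        while (pq.size() > 1) {
            Node a = pq.poll(), b = pq.poll();
            pq.add(new Node(a.freq + b.freq, null, a, b));
        }
        return pq.poll();
    }

    // Walk from the root to each leaf, collecting 0s and 1s along the way.
    static void collectCodes(Node n, String path, Map<Character, String> codes) {
        if (n.symbol != null) { codes.put(n.symbol, path.isEmpty() ? "0" : path); return; }
        collectCodes(n.left, path + "0", codes);
        collectCodes(n.right, path + "1", codes);
    }

    public static void main(String[] args) {
        // Made-up frequencies: 'e' is common, 'z' is rare.
        Map<Character, Integer> freqs = Map.of('e', 12, 't', 9, 'o', 8, 'a', 8, 'z', 1);
        Map<Character, String> codes = new HashMap<>();
        collectCodes(buildTree(freqs), "", codes);
        System.out.println(codes);
    }
}
```

Running it shows exactly the property the text describes: the common letter 'e' lands near the top of the mobile with a short code, while the rare 'z' hangs deep down with a long one.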
FEASIBILITY STUDY
A feasibility study is made to see whether the project, on completion, will serve the purpose of the organization for the amount of work, effort, and time spent on it. A feasibility study lets the developer foresee the future of the project and its usefulness. A feasibility study of a system proposal assesses its workability: its impact on the organization, its ability to meet user needs, and its effective use of resources. Thus, when a new application is proposed, it normally goes through a feasibility study before it is approved for development.
This document presents the feasibility of the project being designed and lists the areas that were considered carefully during the feasibility study, namely technical, economic, and operational feasibility. The following are its features.
TECHNICAL FEASIBILITY
The system must first be evaluated from the technical point of view. The assessment of this feasibility must be based on an outline design of the system requirements in terms of inputs, outputs, programs, and procedures. Having identified an outline system, the investigation must go on to suggest the type of equipment required, the method of developing the system, and the method of running the system once it has been designed.
Technical issues raised during the investigation are:
Is the existing technology sufficient for the suggested system?
Can the system expand if developed?
The project should be developed such that the necessary functions and performance are achieved within the constraints. The project is developed with the latest technology. Though the technology may become obsolete after some period of time, the system may still be used, because newer versions of the same software support older versions. So there are minimal constraints involved with this project. Since the system has been developed using Java, the project is technically feasible for development.
As analysts, we identified the existing computer systems (hardware and software) of the concerned department and determined whether these technical resources are sufficient for the proposed system. We thus found that the project is technically very much feasible. The hardware and software requirements are:
ECONOMIC FEASIBILITY
The system being developed must be justified by cost and benefit. Criteria must ensure that effort is concentrated on the projects that will give the best return at the earliest. One of the factors that affects the development of a new system is the cost it would require.
The following are some of the important financial questions asked during preliminary investigation:
The cost of conducting a full system investigation.
The cost of the hardware and software.
The benefits in the form of reduced costs or fewer costly errors.
Since the system was developed as part of project work, there is no manual cost to spend on the proposed system. Also, since all the resources are already available, this indicates that the system is economically feasible for development.
WHAT IS TESTING?
Software testing is a specialized discipline in the process of software development.
• Testing is the process of demonstrating that errors are not present.
• The purpose of testing is to show that a program performs its intended functions correctly.
• Testing is the process of establishing confidence that a program does what it is supposed to do.
Levels of Testing
The following levels of testing were applied:
• Unit Testing
Unit testing is the process of taking a module and running it in isolation from the rest of the software product, using prepared test cases and comparing actual results with the results predicted by the specifications and design of the module. As we used the waterfall model for designing our software, we performed unit testing side by side, after coding each individual module.
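A unit test for a compression module typically checks the round-trip property: decompressing what was compressed must restore the exact input. This is a hedged sketch of such a test, built on Java's standard Deflater/Inflater streams rather than the project's own code:

```java
import java.io.*;
import java.util.Arrays;
import java.util.zip.*;

// Illustrative unit test: compress then decompress in memory, compare with the input.
public class RoundTripTest {
    static byte[] compress(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DeflaterOutputStream out = new DeflaterOutputStream(bos)) {
            out.write(data);
        }
        return bos.toByteArray();
    }

    static byte[] decompress(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (InflaterInputStream in = new InflaterInputStream(new ByteArrayInputStream(data))) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) bos.write(buf, 0, n);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] original = "compress me, then give me back".getBytes();
        byte[] restored = decompress(compress(original));
        if (!Arrays.equals(original, restored))
            throw new AssertionError("round-trip mismatch");
        System.out.println("round-trip unit test passed");
    }
}
```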
• Integration Testing
We performed integration testing using bottom-up integration and obtained positive results.
• System Testing
This type of testing is done when the system is ready to execute with full functionality.
• Acceptance Testing
This type of testing covers all the test cases applied by the customer and comprises two main parts:
1. Alpha Testing
2. Beta Testing
• Functional Testing
Functional testing, also known as black box testing, was performed on our project. Here we test the functionality of our program: we observe the output for certain input values, and the program produced positive results.
CONCLUSION
The project FileZip is completed, satisfying the required design specifications. The system provides a user-friendly interface. The software was developed with a modular approach. All modules in the system have been tested with valid and invalid data, and everything works successfully. Thus the system has fulfilled all the objectives identified and is able to replace the existing system. The constraints were met and overcome successfully. The system was built as decided in the design phase. It is very user friendly and will reduce time consumption. This software has a user-friendly screen that enables the user to operate it without any inconvenience. The user need not depend on third-party software like WinZip, WinRAR, or StuffIt. The software can be used to compress files, and they can be decompressed when the need arises. The application has been tested with live data and has provided successful results. Hence the software has proved to work efficiently.