
1. INTRODUCTION
With the increasingly urgent need for reliable security, biometrics is being spotlighted as the authentication method for the next generation. Among the numerous biometric technologies, fingerprint authentication has been in use the longest and offers more advantages than the others. It is possibly the most mature of all biometric methods and has been thoroughly verified through various applications; in criminal investigation in particular, it has proved its efficiency and been refined for more than a century. Features such as a person's gait, face, or signature may change with the passage of time and may be fabricated or imitated. A fingerprint, however, is completely unique to an individual and stays unchanged for a lifetime. This exclusivity makes fingerprint authentication far more accurate and efficient than other methods of authentication.

Also, a fingerprint may be captured and digitized by relatively compact and cheap devices, and a large database of fingerprint information requires only a small amount of storage. With these strengths, fingerprint authentication has long been a major part of the security market and continues to be more competitive than other methods in today's world. Biometric technology uses computerized methods to identify a person by their unique physical or behavioral characteristics. Developments and uses have increased with demand, matching concerns over international, business, and personal security. Biometrics is more personal than a passport photo or PIN, using traits such as fingerprints, face, or eye "maps" as key identifying features. However, there are concerns about the storage of biometric data and its possible misuse. Using fingerprints is the oldest method of identification. In the digital world, the fingerprint is electronically read

by a sensor plate. The corrugated ridges of the skin are non-continuous and form a pattern that has distinguishing features, or minutiae. The minutiae can be plotted and joined up to form a template that can be stored and compared against fingerprints in the future. Some readings may be affected by fingerprints that have been damaged through injury and some sensors may not be able to read fingers that are too wet or too dry.

1.1. DESIGN
The design of this system consists of the following important parameters:
1. Scanning - using a DSP processor
2. Searching - based on the principle of Google search
3. Networking - all the election booths are connected in a network
4. Data transfer - using telephone lines

1.2. SUMMARY OF DESIGN


The only prerequisite for the use of this fingerprint scanner is a personal identification card. We hope that this system proves efficient and enables people to be smarter in choosing their leaders. One of the major issues with the current voter ID card is that it can be misused: if a voter's details are known, duplicate cards can be created with a fraudulent photo in place of the original. Alternatively, the supervisor in the voting booth, in concurrence with one of the parties, can allow the same person to vote multiple times, as the electronic voting machines currently available cannot detect this. The goal of this project is to enroll each citizen's fingerprint into a biometrics-based voting machine for casting votes, eliminating this fraud by developing a fingerprint-access voting system in which the same person cannot vote twice in a particular election. A supervisor must initialize the voting machine with a validation code in order to load the election information into the machine. Information about the election is stored in a file on the server. Once the supervisor confirms the ballot, voting may begin. Each voter has to enter their ID and place his/her finger on the scanner at the start of the voting process to prevent double voting. Then the voter can view each screen of the ballot and cast their vote. When the voter indicates that they have finished voting, the database is updated to prevent re-voting. At the end of the election, the supervisor enters the validation code and the machine displays a screen for each contest showing the title of the contest and the number of votes for each alternative, including no vote.

1.3. DESCRIPTION OF THE VOTING MACHINE

Fig 1.1. Block Diagram for the Whole Process

The detailed description of each internal unit in the voting system is given below. It can be divided into the following main categories:
1. Fingerprint scanner
2. Fingerprint sensor

1.4. EXTRACTION OF THUMB IMPRESSION


This is one of the oldest biometric techniques. It involves mapping the pattern of an individual's fingerprint and then comparing the ridges and furrows against the stored template. The fingerprint given to the device is first searched at a coarse level in the database, and then finer comparisons are made to get the result.
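The coarse-then-fine search described above can be sketched as follows. This is a hypothetical illustration, not the actual matcher: the pattern classes, the `minutiaeHashes` representation, and the threshold are all invented for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical two-stage matcher: first narrow the database by coarse
// ridge-pattern class, then score the remaining candidates on minutiae
// overlap. Real matchers use far richer features than a set of hashes.
public class TwoStageMatcher {
    enum Pattern { LOOP, ARCH, WHORL }

    record Template(String voterId, Pattern pattern, Set<Integer> minutiaeHashes) {}

    // Fraction of the probe's minutiae found in the candidate template.
    static double fineScore(Template probe, Template candidate) {
        long hits = probe.minutiaeHashes().stream()
                         .filter(candidate.minutiaeHashes()::contains)
                         .count();
        return (double) hits / probe.minutiaeHashes().size();
    }

    static List<Template> search(Template probe, List<Template> db, double threshold) {
        List<Template> matches = new ArrayList<>();
        for (Template t : db) {
            if (t.pattern() != probe.pattern()) continue;          // coarse filter
            if (fineScore(probe, t) >= threshold) matches.add(t);  // fine comparison
        }
        return matches;
    }

    public static void main(String[] args) {
        Template probe  = new Template("probe",  Pattern.LOOP, Set.of(1, 2, 3, 4));
        Template stored = new Template("V-1001", Pattern.LOOP, Set.of(1, 2, 3, 9));
        // 3 of 4 minutiae overlap, so the 0.7 threshold is met.
        System.out.println(search(probe, List.of(stored), 0.7).size());
    }
}
```

The coarse filter is what makes searching a large database tractable: only templates in the same pattern class ever reach the expensive fine comparison.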

Fig 1.2. Biometric verification pipeline: capture, image processing, and template extraction, followed by matching the live template against the stored template from the storage device to produce a matching score (e.g., 98%)

1.5. FEATURE EXTRACTION AND COMPARISON

1.5.1. Scanning and Processing


The fingerprint is scanned, and the scanned image is initially in analog form. Since all data in the system must be in digital format, this analog signal is converted into digital form using an A/D converter, which is interfaced with the parallel DSP chip for better performance. The digital image is then processed further for matching.

Finger Print Scanner

1.5.2. Transfer of Processed Data to the Hard-Disk


The processed fingerprint image must be stored for further calculations and retrieval, so it is saved to the hard disk and fetched whenever needed. The BIOS code for accessing the hard disk is stored in SDRAM, which is also interfaced in parallel with the chip; this helps the chip transfer the image to the hard disk for further processing. The image transferred to the hard disk is compared with those in the database.
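A minimal sketch of the store-then-compare step described above, assuming templates are persisted as raw byte files. The file layout is invented for the example, and the byte-for-byte comparison is only a placeholder for real fingerprint matching.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

// Illustrative persistence of a processed template; the real system keeps
// a whole database of templates, not a single file per comparison.
public class TemplateStore {
    // Write the processed template to disk for later retrieval.
    public static void save(Path file, byte[] template) throws IOException {
        Files.write(file, template);
    }

    // Compare a fresh scan against the stored copy. Exact byte equality is
    // a stand-in for the actual minutiae-matching algorithm.
    public static boolean matchesStored(Path file, byte[] candidate) throws IOException {
        byte[] stored = Files.readAllBytes(file);
        return Arrays.equals(stored, candidate);
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("template", ".bin");
        save(f, new byte[]{10, 20, 30});
        System.out.println(matchesStored(f, new byte[]{10, 20, 30})); // true
        Files.delete(f);
    }
}
```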

Processing Finger Print

1.6. ADVANTAGES
1. The system is highly reliable and secure.
2. The cost of maintenance is low, and maintenance is easier than with present systems.
3. Fraud, rigging, and other illegal practices can be avoided.
4. Results can be obtained immediately, without errors.

2. REQUIREMENTS ANALYSIS
Requirements analysis, also called requirements engineering, is the process of determining user expectations for a new or modified product. These features, called requirements, must be quantifiable, relevant and detailed. In software engineering, such requirements are often called functional specifications. Requirements analysis is an important aspect of project management. Requirements analysis involves frequent communication with system users to determine specific feature expectations, resolution of conflict or ambiguity in requirements as demanded by the various users or groups of users, avoidance of feature creep and documentation of all aspects of the project development process from start to finish. Energy should be directed towards ensuring that the final system or product conforms to client needs rather than attempting to mold user expectations to fit the requirements. Requirements analysis is a team effort that demands a combination of hardware, software and human factors engineering expertise as well as skills in dealing with people.

2.1. REQUIREMENTS DETERMINATION


Requirements determination is a critical phase in the software development life cycle. It is not primarily concerned with technical details; its main intent is to determine the stakeholders' requirements, which are categorized into functional and non-functional requirements.

2.2. PROBLEM SPECIFICATION


Problem specification is an approach, a set of concepts, to be used when gathering requirements and creating specifications for computer software. Its basic philosophy is strikingly different from that of other software requirements methods in insisting that the best way to approach requirements analysis is through a process of parallel, not hierarchical, decomposition of user requirements, and that user requirements are about relationships in the operational context, not about functions that the software system must perform.

2.3. FEASIBILITY STUDY


It is an investigation into a proposed plan or project to determine whether and how it can be successfully and profitably carried out. Frequently used in project management, a feasibility study may examine alternative methods of reaching objectives or be used to define or redefine the proposed project. The information gathered must be sufficient to make a decision on whether to go ahead with the project, or to enable an investor to decide whether to commit finances to it. This will normally require analysis of technical, financial, and market issues, including an estimate of resources required in terms of materials, time, personnel, and finance, and the expected return on investment. A feasibility study is a measure of how beneficial or practical the development of an information system will be to an organization. The aspects included are:
1. Technical feasibility
2. Operational feasibility

2.3.1. Technical Feasibility:


Technical feasibility refers to the ability of the process to take advantage of the current state of the technology in pursuing further improvement. The technical capability of the personnel as well as the capability of the available technology should be considered. The assessment is based on an outline design of system requirements in terms of Input, Processes, Output, Fields, Programs, and Procedures. This can be qualified in terms of volumes of data, trends, frequency of updating, and other areas in order to give an introduction to the technical system.

2.3.2. Operational Feasibility:


Operational feasibility determines whether the proposed system satisfies user objectives and can be fitted into the current system operation. The present system can be justified as operationally feasible on the following grounds:
1. The methods of processing and presentation are completely acceptable to the clients, since they meet all the requirements.
2. The clients have been involved in the planning and development of the system.
3. The proposed system will not cause any problem under any circumstances.

2.4. SOFTWARE REQUIREMENT SPECIFICATIONS


A software requirements specification (SRS) is a comprehensive description of the intended purpose and environment for software under development. The SRS fully describes what the software will do and how it will be expected to perform. An SRS minimizes the time and effort required by developers to achieve desired goals and also minimizes the development cost. A good SRS defines how an application will interact with system hardware, other programs, and human users in a wide variety of real-world situations. Parameters such as operating speed, response time, availability, portability, maintainability, footprint, security, and speed of recovery from adverse events are evaluated.

2.4.1. Software Requirements Specifications for Biometric Voting Machine:


The main purpose of the biometric voting machine is to conduct elections without fraud, but the system's prerequisites must be taken care of. Voters are provided with ID cards, which can be misused if their details become known: duplicate cards with the same details can be created and used at the polls. Alternatively, the supervisor in the voting booth, in concurrence with one of the parties, can allow the same person to vote multiple times, as the electronic voting machines currently available cannot detect this. The main advantage of this machine is that it does not allow the same person to vote twice. It uses the fingerprint and matches it with the stored image; hence, the fingerprints of all eligible voters must be stored in memory for pattern matching. If the pattern matches, the vote is counted as legitimate. If a voter has used his chance once, he cannot vote a second time; the system identifies this and flags the voter as fraudulent. Thus, we can create a foolproof voting system.

Each voter has to enter their ID and place his/her finger on the scanner at the start of the voting process. Once the validation is complete, the voter can view each screen of the ballot and cast their vote.
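The double-voting check described above can be illustrated with a small in-memory sketch. The real machine validates a live fingerprint template against the database rather than a plain ID string, so all names here are hypothetical.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of the double-vote rule: a vote is accepted only
// for a registered voter who has not voted before in this election.
public class VotingSession {
    private final Set<String> registeredVoters = new HashSet<>();
    private final Set<String> alreadyVoted = new HashSet<>();
    private final Map<String, Integer> tally = new HashMap<>();

    public void register(String voterId) {
        registeredVoters.add(voterId);
    }

    // Returns true and records the vote only for a registered, first-time voter.
    public boolean castVote(String voterId, String party) {
        if (!registeredVoters.contains(voterId) || alreadyVoted.contains(voterId)) {
            return false; // unknown voter or attempted re-vote
        }
        alreadyVoted.add(voterId);           // marks the voter to prevent re-voting
        tally.merge(party, 1, Integer::sum); // count the vote for the chosen party
        return true;
    }

    public int votesFor(String party) {
        return tally.getOrDefault(party, 0);
    }

    public static void main(String[] args) {
        VotingSession s = new VotingSession();
        s.register("V-1001");
        System.out.println(s.castVote("V-1001", "Party A")); // true
        System.out.println(s.castVote("V-1001", "Party B")); // false: re-vote blocked
    }
}
```

In the actual system the `alreadyVoted` marking corresponds to the database update performed when the voter finishes voting.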

Application Scope/High Level Requirements


The Biometric Voting Machine has the following functions:

Administrator
The Election Commission appoints an authorized person known as the Admin. The Admin looks after the voter registration process and has all the privileges to add, modify, or delete a voter.

Voter Registration
The voters visit the Admin office to get registered. Voter registration includes personal information of the voter along with their thumb impression. Any modifications will be done by the Admin at the same time.

Serial Communication
The biometric device is connected to the PC through the serial Port using UF (Uni-Finger) protocol. The thumb impression of the voter is taken as input from Biometric device through serial communication.

Voter Details
Whenever the thumb impression is recognized, the details of the corresponding voter are displayed on the screen.

Voting Process
In this module the voter casts his vote by choosing any one of the parties, entering the number allotted to that particular party through the keypad. Whenever the voter casts his vote, the corresponding details are marked in the database.

2.5. HARDWARE REQUIREMENTS

1. Pentium IV
2. 256 MB RAM

2.6. SOFTWARE REQUIREMENTS

1. JDK 1.5 or above
2. Oracle 9i
3. Tomcat 6.0.14
4. IE 6.0 or above, or Mozilla
5. Windows or Linux

3. TECHNOLOGY OVERVIEW
3.1. JAVA PLATFORM:
Fundamentally, Java is a programming language that allows people to write applets and executable applications. In a grander sense, Java is a platform, a full suite of tools and classes that allow a programmer to create dynamic applications for the web, for small devices like cell phones and PDAs, and for personal computers. Java was introduced by Sun Microsystems in 1995 and instantly created a new sense of the interactive possibilities of the Web. Both of the major Web browsers include a Java virtual machine. Almost all major operating system developers (IBM, Microsoft, and others) have added Java compilers as part of their product offerings. In its eleven-year lifespan Java has evolved tremendously. It has spawned Servlet technology, component technology like JavaBeans, JavaServerFaces, and a whole host of tools. Despite all of these mysterious and complex offshoots, the core fundamentals of Java have remained relatively the same. Java is a programming language expressly designed for use in the distributed environment of the Internet. It was designed to have the "look and feel" of the C++ language, but it is simpler to use than C++ and enforces an object-oriented programming model. Java can be used to create complete applications that may run on a single computer or be distributed among servers and clients in a network. It can also be used to build a small application module or applet for use as part of a Web page. Applets make it possible for a Web page user to interact with the page.

Advantages of Java:
Java is an object-oriented programming language and is easy to learn. Java is interpreted, so application development is fast: the compile-link-load-test-debug cycle is superseded. Applications are portable across multiple platforms; without any modifications they run on multiple operating systems and hardware architectures. Because the Java runtime system manages memory, applications are robust. Interactive graphical applications have high performance because multiple concurrent threads of activity are supported by the multithreading built into the Java environment.

Applications are adaptable to changing environments because code modules can be downloaded dynamically from anywhere on the network. Security is high: end users can trust that applications are secure even though they are downloaded from the Internet, because the Java runtime system has built-in protection against viruses and intruders.

3.2. GUI PROGRAMMING: AWT:


The Abstract Window Toolkit (AWT) is Java's original platform-independent windowing, graphics, and user-interface widget toolkit. The AWT is now part of the Java Foundation Classes (JFC), the standard API for providing a graphical user interface (GUI) for a Java program. The AWT provides, among other things:

1. A basic set of GUI widgets such as buttons, text boxes, and menus
2. The core of the GUI event subsystem
3. The interface between the native windowing system and the Java application
4. Several layout managers
5. A java.awt.datatransfer package for use with the Clipboard and Drag and Drop
6. The interface to input devices such as mice and keyboards
7. The AWT Native Interface, which enables rendering libraries compiled to native code to draw directly to an AWT Canvas object's drawing surface
8. Access to the system tray on supporting systems
9. The ability to launch some desktop applications such as web browsers and email clients from a Java application

SWING:
Swing is a widget toolkit for Java. It is part of Sun Microsystems' Java Foundation Classes (JFC), an API for providing a graphical user interface (GUI) for Java programs. Swing was developed to provide a more sophisticated set of GUI components than the earlier Abstract Window Toolkit. Swing provides a native look and feel that emulates the look and feel of several platforms, and also supports a pluggable look and feel that allows applications to have a look and feel unrelated to the underlying platform.

Swing is a platform-independent, Model-View-Controller GUI framework for Java. It follows a single-threaded programming model, and possesses the following traits:

Platform independence: Swing is platform independent both in terms of its expression (Java) and its implementation (non-native universal rendering of widgets).

Component-oriented: Swing is a component-based framework. The distinction between objects and components is a fairly subtle point: concisely, a component is a well-behaved object with a known, specified pattern of behavior. Swing objects asynchronously fire events, have "bound" properties, and respond to a well-known set of commands (specific to the component). Specifically, Swing components are JavaBeans components, compliant with the JavaBeans Component Architecture specifications.
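As a small illustration of assembling a ballot screen from Swing components, the sketch below builds a panel of party buttons; the contest and party names are made up for the example, and the action listener only prints instead of recording a vote.

```java
import java.awt.GridLayout;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;

// Illustrative Swing ballot screen: one label for the contest title and
// one button per party, stacked vertically.
public class BallotScreen {
    static JPanel buildBallot(String contest, String... parties) {
        JPanel panel = new JPanel(new GridLayout(0, 1)); // one column, any rows
        panel.add(new JLabel(contest));
        for (String party : parties) {
            JButton b = new JButton(party);
            // In the real machine this would record the vote in the database.
            b.addActionListener(e -> System.out.println("Vote cast for " + party));
            panel.add(b);
        }
        return panel;
    }

    public static void main(String[] args) {
        // Swing components must be created on the event dispatch thread.
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Biometric Voting Machine");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setContentPane(buildBallot("Contest 1", "Party A", "Party B"));
            frame.pack();
            frame.setVisible(true);
        });
    }
}
```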

3.3. UNIFINGER PROTOCOL:


The UniFinger modules are stand-alone fingerprint systems ideal for embedded applications where biometric security is needed. They provide complete fingerprint solutions by incorporating a fingerprint sensor interface and an embedded fingerprint recognition algorithm into a module half the size of a business card. The modules are designed for manufacturers searching for an inexpensive, reliable, and easy-to-integrate biometric system. They support a wide range of fingerprint sensors, giving you the freedom to select the sensor that best fits your application. Furthermore, the fingerprint data for enrollment and verification are compatible among different sensors, even if they are based on different technologies. This unification gives application manufacturers and system integrators much more flexibility than before. In addition, the miniature-sized UniFinger module has a state-of-the-art low-power design, making it a good match for a wide range of applications, from battery-operated mobile equipment to network-based security systems.

Advantages of fingerprint biometrics:


Each and every one of our ten fingerprints is unique, different from one another and from those of every other person. Even identical twins have unique fingerprints. Unlike passwords, PIN codes, and smartcards that we depend upon today for identification, our fingerprints are impossible to lose or forget, and they can never be stolen. We have ten fingerprints as opposed to one voice, one face or two eyes. Fingerprints have been used for centuries for identification, and we have a substantial body of real world data upon which to base our claim of the uniqueness of each fingerprint. Iris scanning, for instance, is an entirely new science for which there is little or no real world data.

3.3.1. The Basics of Fingerprint Identification

Ridges - The skin on the inside surfaces of our hands, fingers, feet, and toes is ridged, or covered with concentric raised patterns. These ridges are called friction ridges, and they serve the useful function of making it easier to grasp and hold onto objects and surfaces without slippage. It is the many differences in the way friction ridges are patterned, broken, and forked which make ridged skin areas, including fingerprints, unique.

3.3.2. Fingerprint Identification

Fingerprints are extremely complex. In order to read and classify them, certain defining characteristics are used, many of which have been established by law enforcement agencies as they have created and maintained larger and larger databases of prints. Even though biometrics companies like DigitalPersona do not save images of fingerprints and do not use the same manual process to analyze them, many of the methodologies established over the years in law enforcement are useful for digital algorithms as well.

3.3.3. Global Versus Local Features

We make use of two types of fingerprint characteristics for identification of individuals: Global Features and Local Features. Global Features are those characteristics that you can see with the naked eye. They include:

1. Basic Ridge Patterns
2. Pattern Area
3. Core Area
4. Delta
5. Type Lines
6. Ridge Count

The Local Features are also known as Minutia Points. They are the tiny, unique characteristics of fingerprint ridges that are used for positive identification. It is possible for two or more individuals to have identical global features but still have different and unique fingerprints, because they have local features - minutia points - that are different from those of others.

3.3.4. Global Features

Pattern Area - The Pattern Area is the part of the fingerprint that contains all the global features. Fingerprints can be read and classified based on the information in the Pattern Area. Certain minutia points that are used for final identification might be outside the Pattern Area. One significant difference between DigitalPersona's fingerprint recognition algorithm and those of competing companies is that DigitalPersona uses the entire fingerprint for analysis and identification, not just the Pattern Area. While other companies' devices require users to line up their fingerprints on the fingerprint reader, DigitalPersona acquires a greater amount of information over the entire fingerprint, and can obtain enough information to "read" a print even if only part of the print is placed on the fingerprint reader.

Core Point - The Core Point, located at the approximate center of the finger impression, is used as a reference point for reading and classifying the print.

Type Lines - Type Lines are the two innermost ridges that start parallel, diverge, and surround or tend to surround the pattern area. When there is a definite break in a type line, the ridge immediately outside that line is considered to be its continuation.

Delta - The Delta is the point on the first bifurcation, abrupt ending ridge, meeting of two ridges, dot, fragmentary ridge, or any point upon a ridge at or nearest the center of divergence of two type lines, located at or directly in front of their point of divergence. It is a definite fixed point used to facilitate ridge counting and tracing.

Ridge Count - The Ridge Count is most commonly the number of ridges between the Delta and the Core. To establish the ridge count, an imaginary line is drawn from the Delta to the Core, and each ridge that touches this line is counted.

3.3.5. Basic Ridge Patterns

Over the years those who work with fingerprints have defined groupings of prints based on patterns in the fingerprint ridges. This categorization makes it easier to search large databases of fingerprints and identify individuals. The basic ridge patterns are not sufficient for identification, but they help narrow down the search. Certain products base identification on "optical correlation" of global ridge patterns, or matching one fingerprint pattern image to another. DigitalPersona believes that positive identification must be based on verification of minutia points in addition to global features. The new digital paradigm for fingerprint identification uses many elements of the categorization process that has been in place for years, as well as some newer concepts for understanding and categorizing global features. In addition to defining ridge patterns, DigitalPersona has determined that there are certain ways that ridges can flow around on a fingerprint, and that the constraints on flow behavior can be exploited for identification. The DigitalPersona Recognition Engine makes use of the characteristics of global ridge patterns and flow characteristics to identify individuals. There are a number of basic ridge pattern groupings which have been defined. Three of the most common are loop, arch, and whorl.

1. LOOP - The loop is the most common type of fingerprint pattern and accounts for about 65% of all prints.
2. ARCH - The Arch pattern is a more open curve than the Loop. There are two types of arch patterns: the Plain Arch and the Tented Arch.
3. WHORL - Whorl patterns occur in about 30% of all fingerprints and are defined by at least one ridge that makes a complete circle.

3.3.6. Packet Protocol

In the packet protocol of UniFinger, each packet is 13 bytes long, and its structure is as follows.

Fig 3.1. Structure of a packet

1. Start code: 1 byte. Indicates the beginning of a packet. It should always be 0x40.
2. Command: 1 byte. Refer to the Command Table in a later chapter of this document.
3. Param: 4 bytes. Indicates a user ID or system parameters.
4. Size: 4 bytes. Indicates the size of the binary data following the command packet, such as fingerprint templates or images.
5. Flag/Error: 1 byte. Indicates flag data in a request command sent to the module, and an error code in a response command received from the module, respectively.
6. Checksum: 1 byte. Checks the validity of a packet. The checksum is the remainder of the sum of each field, from the Start code to Flag/Error, divided by 256 (0x100).

7. End code: 1 byte. Indicates the end of a packet. It should always be 0x0A. It is also used as a code indicating the end of binary data such as fingerprint templates.
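Given the field layout above, a request packet can be assembled as sketched below. The byte order of the 4-byte Param and Size fields is an assumption, since this excerpt does not state the module's endianness.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch of building a 13-byte UniFinger request packet:
// Start(1) Command(1) Param(4) Size(4) Flag(1) Checksum(1) End(1).
public class UniFingerPacket {
    static final byte START = 0x40;
    static final byte END   = 0x0A;

    public static byte[] build(byte command, int param, int size, byte flag) {
        // Little-endian multi-byte fields are an assumption, not from the spec excerpt.
        ByteBuffer buf = ByteBuffer.allocate(13).order(ByteOrder.LITTLE_ENDIAN);
        buf.put(START).put(command).putInt(param).putInt(size).put(flag);
        // Checksum: sum of all bytes from Start code to Flag, modulo 256 (0x100).
        int sum = 0;
        for (int i = 0; i < 11; i++) {
            sum += buf.get(i) & 0xFF;
        }
        buf.put((byte) (sum % 0x100)).put(END);
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] p = build((byte) 0x01, 0, 0, (byte) 0x00);
        System.out.println(p.length); // 13
        System.out.printf("checksum = 0x%02X%n", p[11]);
    }
}
```

A receiver would validate an incoming packet by recomputing the same modulo-256 sum and comparing it with the Checksum byte before trusting the Command and Param fields.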

3.4. JDBC:
JDBC is Java application programming interface that allows the Java programmers to access database management system from Java code. It was developed by JavaSoft, a subsidiary of Sun Microsystems.

Definition - Java Database Connectivity is called JDBC for short. It is a Java API that enables Java programs to execute SQL statements: an application programming interface that defines how a Java programmer can access a database in tabular format from Java code, using a set of standard interfaces and classes written in the Java programming language. JDBC has been developed under the Java Community Process, which allows multiple implementations to exist and be used by the same application. JDBC provides methods for querying and updating data in relational database management systems such as Oracle. The API provides a mechanism for dynamically loading the correct Java packages and drivers and registering them with the JDBC DriverManager, which is used as a connection factory for creating JDBC connections that support creating and executing statements such as SQL INSERT, UPDATE, and DELETE. The DriverManager is the backbone of the JDBC architecture. Generally, all relational database management systems support SQL, and since Java is platform-independent, JDBC makes it possible to write a single database application that can run on different platforms and interact with different database management systems. Java Database Connectivity is similar to Open Database Connectivity (ODBC), which is also used for accessing and managing databases; the difference is that JDBC is designed specifically for Java programs, whereas ODBC is language-independent.

In short, JDBC helps programmers write Java applications that manage these three programming activities:

1. Connecting to a data source, like a database.
2. Sending queries and update statements to the database.
3. Retrieving and processing the results received from the database in answer to the query.
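These three activities can be sketched for the voting database as follows, using `PreparedStatement` to query and update a hypothetical `voters` table. The URL, credentials, table, and column names are all assumptions invented for the example, not the project's actual schema.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Illustrative JDBC data-access sketch for the voting machine.
public class VoterDao {
    // Hypothetical connection details; substitute the real Oracle/MySQL URL.
    static final String URL  = "jdbc:mysql://localhost:3306/voting";
    static final String USER = "admin";
    static final String PASS = "secret";

    // Activity 2 + 3: send a query and process the result.
    // Returns true if the voter exists and has not voted yet.
    public static boolean canVote(Connection con, String voterId) throws SQLException {
        String sql = "SELECT has_voted FROM voters WHERE voter_id = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, voterId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() && !rs.getBoolean("has_voted");
            }
        }
    }

    // Activity 2: send an update statement marking the voter, preventing re-voting.
    public static void markVoted(Connection con, String voterId) throws SQLException {
        String sql = "UPDATE voters SET has_voted = 1 WHERE voter_id = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, voterId);
            ps.executeUpdate();
        }
    }

    public static void main(String[] args) {
        // Activity 1: connect to the data source via the DriverManager.
        try (Connection con = DriverManager.getConnection(URL, USER, PASS)) {
            if (canVote(con, "V-1001")) {
                markVoted(con, "V-1001");
                System.out.println("vote recorded");
            }
        } catch (SQLException e) {
            // Without a database and driver on the classpath this branch runs.
            System.out.println("DB unavailable: " + e.getMessage());
        }
    }
}
```

Using `PreparedStatement` with `?` placeholders, rather than concatenating the voter ID into the SQL string, also guards against SQL injection.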

DBMS:
A database management system (DBMS) is a complex set of software programs that controls the organization, storage, management, and retrieval of data in a database. DBMSs are categorized according to their data structures or types; a DBMS is sometimes also known as a database manager. It is a set of prewritten programs that are used to store, update, and retrieve a database. A DBMS includes a modeling language to define the schema of each database hosted in the DBMS, according to the DBMS data model. The four most common types of organization are the hierarchical, network, relational, and object models; inverted lists and other methods are also used. A given database management system may provide one or more of the four models. The optimal structure depends on the natural organization of the application's data and on the application's requirements, which include transaction rate (speed), reliability, maintainability, scalability, and cost. The dominant model in use today is the ad hoc one embedded in SQL, despite the objections of purists who believe this model is a corruption of the relational model, since it violates several of its fundamental principles for the sake of practicality and performance. Many DBMSs also support the Open Database Connectivity API, which provides a standard way for programmers to access the DBMS.

SQL:
SQL is a standardized query language for requesting information from a database. The original version, called SEQUEL (Structured English Query Language), was designed at an IBM research center in 1974 and 1975. SQL was first introduced as a commercial database system in 1979 by Oracle Corporation. Historically, SQL has been the favorite query language for database management systems running on minicomputers and mainframes. Increasingly, however, SQL is being supported by PC database systems because it supports distributed databases (databases that are spread out over several computer systems), which enables several users on a local-area network to access the same database simultaneously. Although there are different dialects of SQL, it is nevertheless the closest thing to a standard query language that currently exists. In 1986, ANSI approved a rudimentary version of SQL as the official standard; the standard has since been revised several times, most notably in the 1992 revision known as SQL-92. Originally designed as a declarative query and data manipulation language, variations of SQL have been created by SQL database management system (DBMS) vendors that add procedural constructs, control-of-flow statements, user-defined data types, and various other language extensions. Common criticisms of SQL include a perceived lack of cross-platform portability between vendors, inappropriate handling of missing data, and unnecessarily complex and occasionally ambiguous language grammar and semantics.

The SQL language is sub-divided into several language elements, including:

1. Statements, which may have a persistent effect on schemas and data, or which may control transactions, program flow, connections, sessions, or diagnostics.
2. Queries, which retrieve data based on specific criteria.
3. Expressions, which can produce either scalar values or tables consisting of columns and rows of data.
4. Predicates, which specify conditions that can be evaluated to SQL three-valued logic (3VL) Boolean truth values and which are used to limit the effects of statements and queries, or to change program flow.
5. Clauses, which are (in some cases optional) constituent components of statements and queries.

Whitespace is generally ignored in SQL statements and queries, making it easier to format SQL code for readability. SQL statements also include the semicolon (";") statement terminator. Though not required on every platform, it is defined as a standard part of the SQL grammar.
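The language elements above can be seen together in one short example. The following sketch uses Python's built-in sqlite3 module as a convenient stand-in SQL engine; the voters table and its data are hypothetical, chosen to match the voting domain of this project:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A statement with a persistent effect on the schema (DDL).
cur.execute("CREATE TABLE voters (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")

# Statements that change data (DML); the semicolon terminator is optional here.
cur.execute("INSERT INTO voters (id, name, age) VALUES (1, 'Alice', 34);")
cur.execute("INSERT INTO voters (id, name, age) VALUES (2, 'Bob', 17);")

# A query: the WHERE clause carries a predicate (age >= 18),
# and "age + 1" in the SELECT list is a scalar expression.
cur.execute("SELECT name, age + 1 FROM voters WHERE age >= 18")
rows = cur.fetchall()
print(rows)  # [('Alice', 35)]
```

Only Alice satisfies the predicate, so a single row is returned, with the expression evaluated per row.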

3.5. MYSQL:
MySQL is an open source RDBMS that relies on SQL for processing the data in the database. MySQL provides APIs for the languages C, C++, Eiffel, Java, Perl, PHP and Python. In addition, OLE DB and ODBC providers exist for MySQL data connection in the Microsoft environment. A MySQL .NET Native Provider is also available, which allows native MySQL to .NET access without the need for OLE DB. MySQL is most commonly used for Web applications and for embedded applications and has become a popular alternative to proprietary database systems because of its speed and reliability. MySQL can run on UNIX, Windows and Mac OS. MySQL is developed, supported and marketed by MySQL AB. The database is available for free under the terms of the GNU General Public License (GPL) or for a fee to those who do not wish to be bound by the terms of the GPL.

4. SYSTEM ANALYSIS
4.1 SOFTWARE PARADIGM:
Web-based systems need to be created and deployed in a very short period of time, which calls for a faster software development paradigm. We therefore chose the RAD model.

Rapid Application Development (RAD):

RAD (Rapid Application Development) is a concept that products can be developed faster and of higher quality through:

- Gathering requirements using workshops or focus groups
- Prototyping and early, reiterative user testing of designs
- The re-use of software components
- A rigidly paced schedule that defers design improvements to the next product version
- Less formality in reviews and other team communication

Some companies offer products that provide some or all of the tools for RAD software development. (The concept can be applied to hardware development as well.) These products include requirements gathering tools, prototyping tools, computer-aided software engineering tools, language development environments such as those for the Java platform, groupware for communication among development members, and testing tools. RAD usually embraces object-oriented programming methodology, which inherently fosters software re-use. The most popular object-oriented programming languages, C++ and Java, are offered in visual programming packages often described as providing rapid application development.

Rapid Application Development Model

4.2 NORMALIZATION:
Normalization is the process of efficiently organizing data in a database. There are two goals of the normalization process: eliminating redundant data (for example, storing the same data in more than one table) and ensuring data dependencies make sense (only storing related data in a table). Both are worthy goals, as they reduce the amount of space a database consumes and ensure that data is logically stored. Higher degrees of normalization typically involve more tables and create the need for a larger number of joins, which can reduce performance. Accordingly, more highly normalized tables are typically used in database applications involving many isolated transactions (e.g. an automated teller machine), while less normalized tables tend to be used in database applications that need to map complex relationships between data entities and data attributes (e.g. a reporting application, or a full-text search application).

Database theory describes a table's degree of normalization in terms of normal forms of successively higher degrees of strictness. A table in third normal form (3NF), for example, is consequently in second normal form (2NF) as well; but the reverse is not necessarily the case. Although the normal forms are often defined informally in terms of the characteristics of tables, rigorous definitions of the normal forms are concerned with the characteristics of mathematical constructs known as relations. Whenever information is represented relationally, it is meaningful to consider the extent to which the representation is normalized.
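The redundancy-elimination goal, and the join cost it brings, can be sketched concretely. The table and column names below are hypothetical illustrations (not this project's actual schema), shown using Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized design: each party's details are stored once, and candidates
# reference them by key instead of repeating the party name in every row.
cur.execute("CREATE TABLE parties (party_id INTEGER PRIMARY KEY, party_name TEXT)")
cur.execute("""CREATE TABLE candidates (
    cand_id INTEGER PRIMARY KEY,
    cand_name TEXT,
    party_id INTEGER REFERENCES parties(party_id))""")

cur.execute("INSERT INTO parties VALUES (1, 'Party A')")
cur.execute("INSERT INTO candidates VALUES (10, 'Smith', 1)")
cur.execute("INSERT INTO candidates VALUES (11, 'Jones', 1)")

# The higher degree of normalization costs a join at query time:
cur.execute("""SELECT c.cand_name, p.party_name
               FROM candidates c JOIN parties p ON c.party_id = p.party_id
               ORDER BY c.cand_id""")
result = cur.fetchall()
print(result)  # [('Smith', 'Party A'), ('Jones', 'Party A')]
```

If the party name later changes, it is updated in one row of parties rather than in every candidate row, which is exactly the update anomaly normalization is meant to prevent.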

4.3. DATA DICTIONARY


A data dictionary is a collection of all the data tables created after clearly understanding the data requirements of the project. Below are the database tables, which are normalized to avoid any anomalies during data entry, retrieval and updating.

DATA DICTIONARY FOR BIOMETRIC VOTING MACHINE:

Admin
Field Name    Null?      Type
ID            PRIMARY    INT(5)
PASSWORD      NOT NULL   VARCHAR2(30)

Users
Field Name    Null?      Type
ID            PRIMARY    INT(5)
NAME          NOT NULL   VARCHAR2(30)
AGE           NULL       VARCHAR2(3)
ADDRESS       NULL       VARCHAR2(50)
STATUS        NULL       VARCHAR2(5)

Candidates
Field Name    Null?      Type
ID            PRIMARY    INT(5)
NAME          -          VARCHAR2(30)
PARTY         -          VARCHAR2(3)
ADDRESS       NULL       VARCHAR2(50)
VOTECOUNT     NULL       VARCHAR2(5)
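The three tables in the data dictionary above could be created with SQL DDL along the following lines. This is a sketch, not the project's actual script: the dictionary uses Oracle-style type names (VARCHAR2, INT(5)), which are mapped here onto SQLite equivalents (TEXT, INTEGER) so the schema can be exercised through Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Schema following the data dictionary; VARCHAR2(n) is approximated as
# TEXT and INT(5) as INTEGER for SQLite.
cur.execute("CREATE TABLE admin (id INTEGER PRIMARY KEY, password TEXT NOT NULL)")
cur.execute("""CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    age TEXT,
    address TEXT,
    status TEXT)""")
cur.execute("""CREATE TABLE candidates (
    id INTEGER PRIMARY KEY,
    name TEXT,
    party TEXT,
    address TEXT,
    votecount TEXT)""")

# Confirm that all three tables were created.
cur.execute("SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")
tables = [row[0] for row in cur.fetchall()]
print(tables)  # ['admin', 'candidates', 'users']
```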

5. SYSTEM DESIGN
5.1 OVERVIEW OF UML
The Unified Modeling Language is commonly used to visualize and construct systems which are software intensive. Because software has become much more complex in recent years, developers are finding it more challenging to build complex applications within short time periods. Even when they do, these software applications are often filled with bugs, and it can take programmers weeks to find and fix them. This is time that has been wasted, since an approach could have been used which would have reduced the number of bugs before the application was completed.

However, it should be emphasized that UML is not limited simply to modeling software. It can also be used to build models for systems engineering, business processes, and organization structures. A related language, the Systems Modeling Language (SysML), was defined as an extension of UML 2.0 to handle such systems. The Unified Modeling Language is important for a number of reasons. First, it has been used as a catalyst for the advancement of model-driven technologies, including Model Driven Development and Model Driven Architecture. UML is also very proficient in projects that require modeling.

Characteristics of UML: it must be emphasized that UML is an extensible language, and it borrows many concepts from the object-oriented approach.

When UML was created, one of the goals of the developers was to create a language that could support every object-oriented approach. Some of the features which UML supports include time analysis, data analysis, object-oriented structure design, and state charts. With all these features, UML became the language of choice for professionals who needed to solve various engineering challenges.

UML diagrams represent three different views of a system model:

FUNCTIONAL REQUIREMENTS VIEW


It emphasizes the functional requirements of the system from the user's point of view. It includes use case diagrams.

STATIC STRUCTURAL VIEW


It emphasizes the static structure of the system using objects, attributes, operations, and relationships. It includes class diagrams and composite structure diagrams.

DYNAMIC BEHAVIORAL VIEW


It emphasizes the dynamic behavior of the system by showing collaborations among objects and changes to the internal states of objects. It includes sequence diagrams, activity diagrams and state machine diagrams. UML models can be exchanged among UML tools by using the XMI interchange format.

CLASS DIAGRAM:
In the Unified Modeling Language (UML), a class diagram is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, and the relationships between the classes.

Fig 5.1. Class Diagram

OBJECT DIAGRAM:
In the Unified Modeling Language (UML), an object diagram is a diagram that shows a complete or partial view of the structure of a modeled system at a specific time. This snapshot focuses on some particular set of object instances and attributes, and the links between the instances. A correlated set of object diagrams provides insight into how an arbitrary view of a system is expected to evolve over time. Object diagrams are more concrete than class diagrams, and are often used to provide examples, or act as test cases for the class diagrams. Only those aspects of a model that are of current interest need be shown on an object diagram.

COMPONENT DIAGRAM:
In the Unified Modeling Language, a component diagram depicts how a software system is split up into physical components and shows the dependencies among these components. Physical components could be, for example, files, headers, link libraries, modules, executables, or packages. Component diagrams can be used to model and document any system's architecture.

DEPLOYMENT DIAGRAM:
In the Unified Modeling Language, a deployment diagram serves to model the hardware used in system implementations, the components deployed on the hardware, and the associations between those components. The elements used in deployment diagrams are nodes (shown as a cube), components (shown as a rectangular box, with two rectangles protruding from the left side) and associations. In UML 2.0 components are not placed in nodes; instead, artifacts and nodes are placed in nodes. An artifact is something like a file, program, library, or database constructed or modified in a project. These artifacts implement collections of components. The inner nodes indicate execution environments rather than hardware. Examples of execution environments include language interpreters, operating systems, and servlet/EJB containers.

ACTIVITY DIAGRAM:
In the Unified Modeling Language, an activity diagram represents the business and operational step-by-step workflows of components in a system. An Activity Diagram shows the overall flow of control.

USE CASE DIAGRAM:


In the Unified Modeling Language, a use case diagram is a subclass of behavioral diagrams. The UML defines a graphical notation for representing use cases, called the use case model. UML does not define standards for the written format used to describe use cases, and thus many people have the misapprehension that this graphical notation defines the nature of a use case; however, a graphical notation can only give the simplest overview of a use case or set of use cases. Use case diagrams are often confused with use cases: while the two concepts are related, use cases are far more detailed than use case diagrams.

5.2. UML DIAGRAMS OF BIOMETRIC VOTING MACHINE:

5.2.1. USE CASE DIAGRAM:

Use cases: login; enter the candidate details and voter details; store data in the smart card.

Fig 5.1. Use Case Diagram For Admins

Fig 5.2. Use Case Diagram For Users

5.2.2. CLASS DIAGRAM:

Fig 5.3. Class Diagram

Fig 5.4. Class Diagram

6. TESTING
INTRODUCTION

Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Although crucial to software quality and widely deployed by programmers and testers, software testing still remains an art, due to our limited understanding of the principles of software. The difficulty in software testing stems from the complexity of software: we cannot completely test a program of even moderate complexity. Testing is more than just debugging. The purpose of testing can be quality assurance, verification and validation, or reliability estimation; testing can be used as a generic metric as well. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off between budget, time and quality. An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance (SQA), which encompasses all business process areas, not just testing. Testing is generally of four types:

1. Unit Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing

6.1 UNIT TESTING

During the coding phase this testing is essential for the verification of each and every module of the code. Each module of the code is tested.
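A unit test exercises one module in isolation. The project itself is Java-based, where a framework such as JUnit would be typical; the sketch below uses Python's built-in unittest module, and the eligibility rule it tests is a hypothetical stand-in for one module of the voting machine:

```python
import unittest

def is_eligible(age, has_voted):
    """Hypothetical module under test: a voter may vote only if they
    are at least 18 years old and have not already voted."""
    return age >= 18 and not has_voted

class EligibilityTest(unittest.TestCase):
    def test_adult_first_vote(self):
        self.assertTrue(is_eligible(25, False))

    def test_underage_rejected(self):
        self.assertFalse(is_eligible(17, False))

    def test_double_vote_rejected(self):
        self.assertFalse(is_eligible(25, True))

# Run just this module's tests, as unit testing does for each module.
suite = unittest.TestLoader().loadTestsFromTestCase(EligibilityTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```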

6.2. INTEGRATION TESTING

Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.

6.3. SYSTEM TESTING

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing seeks to detect defects both within the "inter-assemblages" of components and within the system as a whole.

6.4 ACCEPTANCE TESTING

Acceptance testing generally involves running a suite of tests on the completed system. Each individual test, known as a case, exercises a particular operating condition of the user's environment or feature of the system, and will result in a pass or fail Boolean outcome.

6.5 BLACK BOX TESTING

Black box testing takes an external perspective of the test object to derive test cases. These tests can be functional or non-functional, though usually functional. The test designer selects valid and invalid inputs and determines the correct outputs. There is no knowledge of the test object's internal structure.
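Selecting valid and invalid inputs from the specification alone can be sketched as follows. The age-validation function is a hypothetical example; the test designer relies only on its stated behavior (accept ages 18 through 120 inclusive), not on its implementation:

```python
# Hypothetical function under black-box test: only its specification
# is known to the tester, not its internal structure.
def valid_voter_age(age):
    return 18 <= age <= 120

# Valid and invalid inputs chosen around the specification's boundaries,
# each paired with the output the specification requires.
cases = {17: False, 18: True, 120: True, 121: False}
outcomes = {age: valid_voter_age(age) for age in cases}
print(outcomes == cases)  # True
```

Boundary values like 17/18 and 120/121 are a common black-box choice because off-by-one defects cluster at the edges of the valid range.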

6.6 WHITE BOX TESTING

White box testing (also known as clear box testing, glass box testing or structural testing) uses an internal perspective of the system to design test cases based on internal structure. It requires programming skills to identify all paths through the software. The tester chooses test case inputs to exercise paths through the code and determines the appropriate outputs.

6.7 TEST PLAN

The test plan defines the objectives and scope of the testing effort, and identifies the methodology that your team will use to conduct tests. It also identifies the hardware, software, and tools required for testing and the features and functions that will be tested. A well-rounded test plan notes any risk factors that jeopardize testing and includes a testing schedule.

Test Case Specification for System Testing

We specify all test cases that are used for system testing. First, the different conditions that need to be tested are given, along with the test cases used for testing those conditions and the expected outputs. Then the data files used for testing are given; the test cases are specified with respect to these data files. The test cases have been selected using the functional approach: the goal is to test the different functional requirements, as specified in the requirements document. Test cases have been selected for both valid and invalid inputs.

Test Cases and Test Criterion

Testing is a crucial step in software development, and having proper test cases is central to successful testing. There are two desirable properties for a testing criterion: reliability and validity. A criterion is valid if, for any error in the program, there is some set of test cases satisfying the criterion that will reveal the error. The fundamental theorem of testing states that if a testing criterion is both valid and reliable, then a program that runs correctly on a set of test cases satisfying the criterion is correct. The goal of test case selection is therefore to choose test cases such that the maximum number of faults is detected by the minimum number of test cases.

Top-down and Bottom-up Approaches

When testing a large program it is necessary to test parts of the program before testing the entire program. We assume that a system is a hierarchy of modules. In the top-down approach, we start by testing the root of the hierarchy, then incrementally add the modules it calls and test the new combined system. This requires 'stubs', which simulate the lower-level modules that have not yet been integrated. The bottom-up approach starts from the bottom of the hierarchy: first the modules which have no subordinates are tested, and then these modules are combined with higher-level modules for testing. This requires 'drivers' to set up the appropriate environment and invoke the module under test.
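The roles of stubs and drivers can be sketched in a few lines. All names here are hypothetical illustrations, not this project's actual modules:

```python
# Hypothetical top-level module: it delegates fingerprint matching to a
# subordinate module that may not yet be written or tested.
def cast_vote(voter_id, fingerprint, matcher):
    if matcher(voter_id, fingerprint):
        return "vote accepted"
    return "vote rejected"

# Stub (top-down testing): simulates the subordinate fingerprint-matching
# module so the top-level control flow can be tested first.
def matcher_stub(voter_id, fingerprint):
    return fingerprint == "valid-print"

# Driver (bottom-up testing would use the same idea for low-level
# modules): sets up the environment and invokes the module under test.
accepted = cast_vote(1, "valid-print", matcher_stub)
rejected = cast_vote(2, "smudged-print", matcher_stub)
print(accepted, rejected)  # vote accepted vote rejected
```

Once the real matching module passes its own tests, it replaces the stub and the combined system is retested.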

7. CONCLUSION
The project on the Biometric Voting Machine has been successfully completed. With this system we can conduct elections without fraud by illegal voters; only valid persons can choose their leaders, overcoming rigging and other malpractices. Each person can exercise the right to vote exactly once, and voters are assured that no other person can misuse that right. This example illustrates the use of biometrics: there is great demand for the fast, accurate authentication that biometric systems can provide.

All the system analysis requirements are satisfied by the machine, which was thoroughly tested from the first phase of product development.

8. BIBLIOGRAPHY AND REFERENCES

BIBLIOGRAPHY
S.NO   AUTHOR             PAPER/BOOK                PUBLISHER           YEAR
1.     Dietel & Dietel    Java How to Program       Pearson Education   2000
2.     Roger S Pressman   Software Engineering      Tata McGraw-Hill    2005
3.     Herbert Schildt    Java Complete Reference   Tata McGraw-Hill    2002

REFERENCES

1. www.webdeveloper.com
2. www.wdlv.com
3. www.w3schools.com
4. www.forum.sun.com
5. www.javaguru.com
6. www.expertsexchange.com
