INTRODUCTION
With the increasingly urgent need for reliable security, biometrics is being spotlighted as the authentication method for the next generation. Among the numerous biometric technologies, fingerprint authentication has been in use the longest and offers more advantages than other biometric technologies. Fingerprint authentication is possibly the most sophisticated of all biometric methods and has been thoroughly verified through various applications. It has particularly proved its high efficiency in criminal investigation for more than a century, and that use has further matured the technology. Features such as a person's gait, face, or signature may change with the passage of time and may be fabricated or imitated. A fingerprint, however, is completely unique to an individual and stays unchanged for a lifetime. This exclusivity makes fingerprint authentication far more accurate and efficient than other methods of authentication.
Also, a fingerprint may be captured and digitized by relatively compact and cheap devices, and only a small storage capacity is needed to hold a large database of templates. With these strengths, fingerprint authentication has long been a major part of the security market and continues to be more competitive than other methods in today's world. Biometric technology uses computerized methods to identify a person by their unique physical or behavioral characteristics. Developments and uses have increased with demand, to match concerns over international, business and personal security. Biometrics is more personal than a passport photo or PIN, using traits such as fingerprints, face or eye "maps" as key identifying features. However, there are concerns about the storing of biometric data and its possible misuse. Using fingerprints is the oldest method of identification. In the digital world, the fingerprint is electronically read
by a sensor plate. The corrugated ridges of the skin are non-continuous and form a pattern that has distinguishing features, or minutiae. The minutiae can be plotted and joined up to form a template that can be stored and compared against fingerprints in the future. Some readings may be affected by fingerprints that have been damaged through injury and some sensors may not be able to read fingers that are too wet or too dry.
1.1. DESIGN
The design of this system consists of the following important parameters:
1. Scanning - using a DSP processor
2. Searching - based on the principle of Google search
3. Networking - all the election booths are connected in a network
4. Data transfer - using telephone lines
Once validation is complete, the voter can view each screen of the ballot and cast their vote. When the voter indicates that they have finished voting, the database is updated to prevent re-voting. At the end of the election, the supervisor enters the validation code and the machine displays a screen for each contest showing the title of the contest and the number of votes for each alternative, including no vote.
The detailed description of each and every internal unit in the voting system is given below. It can be divided into the following main categories:
1. Fingerprint Scanner
2. Fingerprint Sensor
[Figure: Biometric matching flow. Biometric capture produces an image, which is processed and a template is extracted; the live template is matched against the stored template held on a storage device (with live updates), yielding a matching score (e.g. 98%).]
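The flow above reduces to comparing a freshly extracted template against the stored one and producing a matching score. As a toy illustration only (real engines such as DigitalPersona's align minutia positions and angles with tolerance, rather than comparing exact codes), a template can be modeled as a set of encoded minutia points:

```java
import java.util.Set;

public class TemplateMatcher {
    // Toy model: a template is a set of encoded minutia points
    // (e.g. quantized x, y and ridge angle packed into one long).
    // Real matchers align templates and tolerate small displacements.
    static double matchingScore(Set<Long> liveTemplate, Set<Long> storedTemplate) {
        if (storedTemplate.isEmpty()) return 0.0;
        long hits = liveTemplate.stream().filter(storedTemplate::contains).count();
        return 100.0 * hits / storedTemplate.size(); // percent of stored minutiae found
    }

    public static void main(String[] args) {
        Set<Long> stored = Set.of(101L, 202L, 303L, 404L);
        Set<Long> live = Set.of(101L, 202L, 303L, 999L); // 3 of 4 stored minutiae present
        System.out.println(matchingScore(live, stored)); // 75.0
    }
}
```

A threshold on this score (e.g. accept above 90%) would then decide whether the live print matches the enrolled one.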
1.6. ADVANTAGES
1. The system is highly reliable and secure.
2. The cost of maintenance is less, and maintenance is easier than with the present systems.
3. Fraud, rigging and other illegal practices can be avoided.
4. Results can be obtained immediately, without errors.
2. REQUIREMENTS ANALYSIS
Requirements analysis, also called requirements engineering, is the process of determining user expectations for a new or modified product. These features, called requirements, must be quantifiable, relevant and detailed. In software engineering, such requirements are often called functional specifications. Requirements analysis is an important aspect of project management. Requirements analysis involves frequent communication with system users to determine specific feature expectations, resolution of conflict or ambiguity in requirements as demanded by the various users or groups of users, avoidance of feature creep and documentation of all aspects of the project development process from start to finish. Energy should be directed towards ensuring that the final system or product conforms to client needs rather than attempting to mold user expectations to fit the requirements. Requirements analysis is a team effort that demands a combination of hardware, software and human factors engineering expertise as well as skills in dealing with people.
Each voter has to enter their ID and place their finger on the scanner at the start of the voting process. Once the validation is complete, the voter can view each screen of the ballot and cast their vote.
Administrator
The Election Commission appoints an authorized person known as the Admin. The Admin looks after the voter registration process and has all the privileges to add, modify or delete a voter.
Voter Registration
Voters visit the Admin office to get registered. Voter registration records the voter's personal information along with their thumb impression. Any modifications are made by the Admin at the same time.
Serial Communication
The biometric device is connected to the PC through the serial port using the UF (UniFinger) protocol. The thumb impression of the voter is taken as input from the biometric device through serial communication.
Voter Details
Whenever the thumb impression is recognized, the details of the corresponding voter are displayed on the screen.
Voting Process
In this module the voter casts a vote by choosing one of the parties, entering the number allotted to that party through the keypad. Whenever the voter casts a vote, the corresponding details are marked in the database.
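The re-voting check and vote marking described above can be sketched with an in-memory model (the actual system records this in the database; the class and method names here are illustrative):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class VoteRegister {
    private final Set<Integer> voted = new HashSet<>();          // IDs of voters who already voted
    private final Map<Integer, Integer> tally = new HashMap<>(); // party number -> vote count

    /** Records one vote; returns false if this voter has already voted. */
    public boolean castVote(int voterId, int partyNumber) {
        if (!voted.add(voterId)) return false; // re-voting prevented
        tally.merge(partyNumber, 1, Integer::sum);
        return true;
    }

    public int votesFor(int partyNumber) {
        return tally.getOrDefault(partyNumber, 0);
    }

    public static void main(String[] args) {
        VoteRegister r = new VoteRegister();
        System.out.println(r.castVote(1001, 3)); // true  - first vote accepted
        System.out.println(r.castVote(1001, 4)); // false - same voter rejected
        System.out.println(r.votesFor(3));       // 1
    }
}
```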
Hardware Requirements:
1. Pentium IV
2. 256 MB RAM

Software Requirements:
1. JDK 1.5 or above
2. Oracle 9i
3. Tomcat 6.0.14
4. IE 6.0 or above, or Mozilla
5. Windows or Linux
3. TECHNOLOGY OVERVIEW
3.1. JAVA PLATFORM:
Fundamentally, Java is a programming language that allows people to write applets and executable applications. In a grander sense, Java is a platform: a full suite of tools and classes that allow a programmer to create dynamic applications for the web, for small devices like cell phones and PDAs, and for personal computers. Java was introduced by Sun Microsystems in 1995 and instantly created a new sense of the interactive possibilities of the Web. Both of the major Web browsers include a Java virtual machine. Almost all major operating system developers (IBM, Microsoft, and others) have added Java compilers as part of their product offerings. In its eleven-year lifespan Java has evolved tremendously. It has spawned Servlet technology, component technologies like JavaBeans and JavaServer Faces, and a whole host of tools. Despite all of these complex offshoots, the core fundamentals of Java have remained relatively the same. Java is a programming language expressly designed for use in the distributed environment of the Internet. It was designed to have the "look and feel" of the C++ language, but it is simpler to use than C++ and enforces an object-oriented programming model. Java can be used to create complete applications that may run on a single computer or be distributed among servers and clients in a network. It can also be used to build a small application module or applet for use as part of a Web page. Applets make it possible for a Web page user to interact with the page.
Advantages of Java:
Java is an object-oriented programming language and easy to learn. Java is interpreted, so development of applications is fast: the compile-link-load-test-debug cycle is shortened. Applications developed are portable across multiple platforms; without any modifications they run on multiple operating systems and hardware architectures. Because the Java runtime system manages memory, applications are robust. Interactive graphical applications achieve high performance because multiple concurrent threads of activity are supported by the multithreading built into the Java environment.

Applications are adaptable to changing environments because code modules can be dynamically downloaded from anywhere on the network. Security is high: end users can trust that applications are secure even though they are downloaded from the internet, because the Java runtime system has built-in protection against viruses and intruders.
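The multithreading claim above can be seen in a minimal example: two concurrent threads updating one shared counter, with thread safety provided by the standard library (the class name and workload are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadDemo {
    static final AtomicInteger counter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        // Two threads increment a shared counter concurrently; AtomicInteger
        // keeps the updates safe without explicit locking.
        Runnable task = () -> { for (int i = 0; i < 1000; i++) counter.incrementAndGet(); };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get()); // 2000
    }
}
```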
AWT:

The Abstract Window Toolkit (AWT) provides:
- A basic set of GUI widgets such as buttons, text boxes, and menus
- The core of the GUI event subsystem
- The interface between the native windowing system and the Java application
- Several layout managers
- A java.awt.datatransfer package for use with the Clipboard and Drag and Drop
- The interface to input devices such as mice and keyboards
- The AWT Native Interface, which enables rendering libraries compiled to native code to draw directly to an AWT Canvas object drawing surface
- Access to the system tray on supporting systems
- The ability to launch some desktop applications such as web browsers and email clients from a Java application
SWING:
Swing is a widget toolkit for Java. It is part of Sun Microsystems' Java Foundation Classes (JFC), an API for providing a graphical user interface (GUI) for Java programs. Swing was developed to provide a more sophisticated set of GUI components than the earlier Abstract Window Toolkit. Swing provides a native look and feel that emulates the look and feel of several platforms, and also supports a pluggable look and feel that allows applications to have a look and feel unrelated to the underlying platform.
Swing is a platform-independent, Model-View-Controller GUI framework for Java. It follows a single-threaded programming model, and possesses the following traits:
Platform independence: Swing is platform independent both in terms of its expression (Java) and its implementation (non-native universal rendering of widgets).
Component-Oriented: Swing is a component-based framework. The distinction between objects and components is a fairly subtle point: concisely, a component is a well-behaved object with a known, specified characteristic pattern of behavior. Swing objects asynchronously fire events, have "bound" properties, and respond to a well-known set of commands (specific to the component). Specifically, Swing components are JavaBeans components, compliant with the JavaBeans Component Architecture specifications.
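The "bound" properties mentioned above come from the JavaBeans conventions: a setter fires a PropertyChangeEvent so that interested components can react. A minimal sketch, with an illustrative class and property name:

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;
import java.util.ArrayList;
import java.util.List;

public class StatusBean {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private String status = "idle";

    public void addPropertyChangeListener(PropertyChangeListener l) {
        pcs.addPropertyChangeListener(l);
    }

    // A "bound" property: every change fires a PropertyChangeEvent to listeners.
    public void setStatus(String newStatus) {
        String old = this.status;
        this.status = newStatus;
        pcs.firePropertyChange("status", old, newStatus);
    }

    public static void main(String[] args) {
        StatusBean bean = new StatusBean();
        List<String> events = new ArrayList<>();
        bean.addPropertyChangeListener(e -> events.add(e.getOldValue() + " -> " + e.getNewValue()));
        bean.setStatus("scanning");
        System.out.println(events); // [idle -> scanning]
    }
}
```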
3.3.1. The Basics of Fingerprint Identification

Ridges -- The skin on the inside surfaces of our hands, fingers, feet, and toes is ridged, or covered with concentric raised patterns. These ridges are called friction ridges and they serve the useful function of making it easier to grasp and hold onto objects and surfaces without slippage. It is the many differences in the way friction ridges are patterned, broken, and forked which make ridged skin areas, including fingerprints, unique.
3.3.2. Fingerprint Identification
Fingerprints are extremely complex. In order to read and classify them, certain defining characteristics are used, many of which have been established by law enforcement agencies as they have created and maintained larger and larger databases of prints. Even though biometrics companies like DigitalPersona do not save images of fingerprints and do not use the same manual process to analyze them, many of the methodologies that have been established over the years in law enforcement are useful for digital algorithms as well.

3.3.3. Global Versus Local Features
We make use of two types of fingerprint characteristics for identification of individuals: Global Features and Local Features. Global Features are those characteristics that you can see with the naked eye. Global Features include:
1. Basic Ridge Patterns
2. Pattern Area
3. Core Area
4. Delta
5. Type Lines
6. Ridge Count
The Local Features are also known as Minutia Points. They are the tiny, unique characteristics of fingerprint ridges that are used for positive identification. It is possible for two or more individuals to have identical global features but still have different and unique fingerprints, because they have local features (minutia points) that are different from those of others.
3.3.4. Global Features

Pattern Area -- The Pattern Area is the part of the fingerprint that contains all the global features. Fingerprints can be read and classified based on the information in the Pattern Area. Certain minutia points that are used for final identification might lie outside the Pattern Area. One significant difference between DigitalPersona's fingerprint recognition algorithm and those of competing companies is that DigitalPersona uses the entire fingerprint for analysis and identification, not just the Pattern Area. While other companies' devices require users to line up their fingerprints on the fingerprint reader, DigitalPersona acquires a greater amount of information over the entire fingerprint, and can obtain enough information to "read" a print even if only part of the print is placed on the reader.

Core Point -- The Core Point, located at the approximate center of the finger impression, is used as a reference point for reading and classifying the print.

Type Lines -- Type Lines are the two innermost ridges that start parallel, diverge, and surround or tend to surround the pattern area. When there is a definite break in a type line, the ridge immediately outside that line is considered to be its continuation.
Delta -- The Delta is the point on the first bifurcation, abrupt ending ridge, meeting of two ridges, dot, fragmentary ridge, or any point upon a ridge at or nearest the center of divergence of two type lines, located at or directly in front of their point of divergence. It is a definite fixed point used to facilitate ridge counting and tracing.

Ridge Count -- The Ridge Count is most commonly the number of ridges between the Delta and the Core. To establish the ridge count, an imaginary line is drawn from the Delta to the Core and each ridge that touches this line is counted.

3.3.5. Basic Ridge Patterns
Over the years, those who work with fingerprints have defined groupings of prints based on patterns in the fingerprint ridges. This categorization makes it easier to search large databases of fingerprints and identify individuals. The basic ridge patterns are not sufficient for identification, but they help narrow down the search. Certain products base identification on "optical correlation" of global ridge patterns, or matching one fingerprint pattern image to another. DigitalPersona believes that positive identification must be based on verification of minutia points in addition to global features. The new digital paradigm for fingerprint identification uses many elements of the categorization process that has been in place for years, as well as some newer concepts for understanding and categorizing global features. In addition to defining ridge patterns, DigitalPersona has determined that there are certain ways that ridges can flow around a fingerprint, and that the constraints on flow behavior can be exploited for identification. The DigitalPersona Recognition Engine makes use of the characteristics of global ridge patterns and flow characteristics to identify individuals. A number of basic ridge pattern groupings have been defined. Three of the most common are the loop, the arch, and the whorl.
1. LOOP -- The loop is the most common type of fingerprint pattern and accounts for about 65% of all prints.
2. ARCH -- The arch pattern is a more open curve than the loop. There are two types of arch patterns: the plain arch and the tented arch.
3. WHORL -- Whorl patterns occur in about 30% of all fingerprints and are defined by at least one ridge that makes a complete circle.

3.3.6. Packet Protocol
In the packet protocol of UniFinger, one packet is 13 bytes long and its structure is as follows:
Fig 3.1. Structure of a packet

1. Start code: 1 byte. Indicates the beginning of a packet. It should always be 0x40.
2. Command: 1 byte. Refer to the Command Table in a later chapter of this document.
3. Param: 4 bytes. Indicates a user ID or system parameters.
4. Size: 4 bytes. Indicates the size of binary data following the command packet, such as fingerprint templates or images.
5. Flag/Error: 1 byte. Indicates flag data in a request command sent to the module, and an error code in a response command received from the module, respectively.
6. Checksum: 1 byte. Checks the validity of a packet. The checksum is the remainder of the sum of each field, from the Start code to Flag/Error, divided by 256 (0x100).
7. End code: 1 byte. Indicates the end of a packet. It should always be 0x0A. It is also used as a code indicating the end of binary data such as fingerprint templates.
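The layout and checksum rule above can be exercised in code. This sketch assumes a little-endian byte order for the Param and Size fields and uses an arbitrary command value, since the Command Table is not reproduced here:

```java
public class UniFingerPacket {
    static final int START_CODE = 0x40, END_CODE = 0x0A;

    /** Builds a 13-byte UniFinger packet; Param/Size byte order is assumed little-endian. */
    static byte[] build(int command, int param, int size, int flag) {
        byte[] p = new byte[13];
        p[0] = (byte) START_CODE;                      // 1. Start code
        p[1] = (byte) command;                         // 2. Command
        for (int i = 0; i < 4; i++) {
            p[2 + i] = (byte) (param >>> (8 * i));     // 3. Param (4 bytes)
            p[6 + i] = (byte) (size >>> (8 * i));      // 4. Size (4 bytes)
        }
        p[10] = (byte) flag;                           // 5. Flag/Error
        int sum = 0;                                   // 6. Checksum: sum of fields 1-5 mod 256
        for (int i = 0; i <= 10; i++) sum += p[i] & 0xFF;
        p[11] = (byte) (sum % 0x100);
        p[12] = (byte) END_CODE;                       // 7. End code
        return p;
    }

    public static void main(String[] args) {
        byte[] pkt = build(0x01, 1234, 0, 0x00);
        System.out.println(pkt.length); // 13
        System.out.printf("checksum=0x%02X%n", pkt[11]);
    }
}
```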
3.4. JDBC:
JDBC is a Java application programming interface that allows Java programmers to access database management systems from Java code. It was developed by JavaSoft, a subsidiary of Sun Microsystems.
Definition -- Java Database Connectivity, JDBC for short, is a Java API which enables Java programs to execute SQL statements. It is an application programming interface that defines how a Java programmer can access a database in tabular format from Java code, using a set of standard interfaces and classes written in the Java programming language. JDBC has been developed under the Java Community Process, which allows multiple implementations to exist and be used by the same application. JDBC provides methods for querying and updating the data in a relational database management system such as Oracle. The API provides a mechanism for dynamically loading the correct Java packages and drivers and registering them with the JDBC DriverManager, which is used as a connection factory for creating JDBC connections that support creating and executing statements such as SQL INSERT, UPDATE and DELETE. The DriverManager is the backbone of the JDBC architecture. Generally, all relational database management systems support SQL, and since Java is platform independent, JDBC makes it possible to write a single database application that can run on different platforms and interact with different database management systems. Java Database Connectivity is similar to Open Database Connectivity (ODBC), which is also used for accessing and managing databases; the difference is that JDBC is designed specifically for Java programs, whereas ODBC is not tied to any particular language.
In short, JDBC helps programmers write Java applications that manage these three programming activities:

1. Connecting to a data source, such as a database.
2. Sending queries and update statements to the database.
3. Retrieving and processing the results received from the database in answer to a query.
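The three activities map directly onto the JDBC API (Connection, PreparedStatement, ResultSet). The URL, credentials and table below are placeholders standing in for this project's actual configuration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class JdbcSketch {
    public static void main(String[] args) {
        // 1. Connect to a data source (URL/user/password are placeholders).
        String url = "jdbc:mysql://localhost:3306/voting";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             // 2. Send a query; the '?' parameter is bound safely by the driver.
             PreparedStatement ps = con.prepareStatement(
                     "SELECT NAME, VOTECOUNT FROM CANDIDATES WHERE PARTY = ?")) {
            ps.setString(1, "XYZ");
            // 3. Retrieve and process the results.
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("NAME") + ": " + rs.getInt("VOTECOUNT"));
                }
            }
        } catch (SQLException e) {
            System.out.println("Database unavailable: " + e.getMessage());
        }
    }
}
```

Without a reachable database (or a driver on the classpath) the sketch simply reports that the connection failed.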
DBMS:
A database management system (DBMS) is a complex set of software programs that controls the organization, storage, management, and retrieval of data in a database. DBMSs are categorized according to their data structures or types; a DBMS is sometimes also known as a database manager. It is a set of prewritten programs that are used to store, update and retrieve a database. A DBMS includes:
1. A modeling language to define the schema of each database hosted in the DBMS, according to the DBMS data model.
2. The four most common types of organization are the hierarchical, network, relational and object models. Inverted lists and other methods are also used. A given database management system may provide one or more of the four models. The optimal structure depends on the natural organization of the application's data, and on the application's requirements, which include transaction rate (speed), reliability, maintainability, scalability, and cost.
3. The dominant model in use today is the ad hoc one embedded in SQL, despite the objections of purists who believe this model is a corruption of the relational model, since it violates several of its fundamental principles for the sake of practicality and performance. Many DBMSs also support the Open Database Connectivity API, which provides a standard way for programmers to access the DBMS.
SQL:
SQL is a standardized query language for requesting information from a database. The original version, called SEQUEL (Structured English Query Language), was designed by an IBM research center in 1974 and 1975. SQL was first introduced as a commercial database system in 1979 by Oracle Corporation. Historically, SQL has been the favorite query language for database management systems running on minicomputers and mainframes. Increasingly, however, SQL is being supported by PC database systems because it supports distributed databases (databases that are spread out over several computer systems). This enables several users on a local-area network to access the same database simultaneously. Although there are different dialects of SQL, it is nevertheless the closest thing to a standard query language that currently exists. In 1986, ANSI approved a rudimentary version of SQL as the official standard, but most versions of SQL since then have included many extensions to the ANSI standard. In 1991, ANSI updated the standard; the new standard is known as SAG SQL. Originally designed as a declarative query and data manipulation language, variations of SQL have been created by SQL database management system (DBMS) vendors that add procedural constructs, control-of-flow statements, user-defined data types, and various other language extensions. Common criticisms of SQL include a perceived lack of cross-platform portability between vendors, inappropriate handling of missing data, and unnecessarily complex and occasionally ambiguous language grammar and semantics.

The SQL language is sub-divided into several language elements, including:
- Statements, which may have a persistent effect on schemas and data, or which may control transactions, program flow, connections, sessions, or diagnostics.
- Predicates, which specify conditions that can be evaluated to SQL three-valued logic (3VL) Boolean truth values and which are used to limit the effects of statements and queries, or to change program flow.
- Clauses, which are (in some cases optional) constituent components of statements and queries.

SQL statements also include the semicolon (";") statement terminator. Though not required on every platform, it is defined as a standard part of the SQL grammar.
3.5. MYSQL:
MySQL is an open source RDBMS that relies on SQL for processing the data in the database. MySQL provides APIs for the languages C, C++, Eiffel, Java, Perl, PHP and Python. In addition, OLE DB and ODBC providers exist for MySQL data connection in the Microsoft environment. A MySQL .NET Native Provider is also available, which allows native MySQL to .NET access without the need for OLE DB. MySQL is most commonly used for Web applications and for embedded applications and has become a popular alternative to proprietary database systems because of its speed and reliability. MySQL can run on UNIX, Windows and Mac OS. MySQL is developed, supported and marketed by MySQL AB. The database is available for free under the terms of the GNU General Public License (GPL) or for a fee to those who do not wish to be bound by the terms of the GPL.
4. SYSTEM ANALYSIS
4.1 SOFTWARE PARADIGM:
Web-based systems need to be created and deployed in a very short period of time, which calls for a faster software development paradigm. So we chose the RAD model.
RAD (Rapid Application Development) is a concept that products can be developed faster and of higher quality through:
- Gathering requirements using workshops or focus groups
- Prototyping and early, reiterative user testing of designs
- The re-use of software components
- A rigidly paced schedule that defers design improvements to the next product version
Some companies offer products that provide some or all of the tools for RAD software development. (The concept can be applied to hardware development as well.) These products include requirements gathering tools, prototyping tools, computer-aided software engineering tools, language development environments such as those for the Java platform, groupware for communication among development members, and testing tools. RAD usually embraces object-oriented programming methodology, which inherently fosters software re-use. The most popular object-oriented programming languages, C++ and Java, are offered in visual programming packages often described as providing rapid application development.
4.2 NORMALIZATION:
Normalization is the process of efficiently organizing data in a database. There are two goals of the normalization process: eliminating redundant data (for example, storing the same data in more than one table) and ensuring data dependencies make sense (only storing related data in a table). Both of these are worthy goals as they reduce the amount of space a database consumes and ensure that data is logically stored. Higher degrees of normalization typically involve more tables and create the need for a larger number of joins, which can reduce performance. Accordingly, more highly normalized tables are typically used in database applications involving many isolated transactions (e.g. an Automated teller machine), while less normalized tables tend to be used in database applications that need to map complex relationships between data entities and data attributes (e.g. a reporting application, or a full-text search application). Database theory describes a table's degree of normalization in terms of normal forms of successively higher degrees of strictness. A table in third normal form (3NF),
for example, is consequently in second normal form (2NF) as well; but the reverse is not necessarily the case. Although the normal forms are often defined informally in terms of the characteristics of tables, rigorous definitions of the normal forms are concerned with the characteristics of mathematical constructs known as relations. Whenever information is represented relationally, it is meaningful to consider the extent to which the representation is normalized.
Admin
Field Name   Null?      Type
ID           PRIMARY    INT(5)
PASSWORD     NOT NULL   VARCHAR2(30)
Users
Field Name   Null?      Type
ID           PRIMARY    INT(5)
NAME         NOT NULL   VARCHAR2(30)
AGE          NULL       VARCHAR2(3)
ADDRESS      NULL       VARCHAR2(50)
STATUS       NULL       VARCHAR2(5)
Candidates
Field Name   Null?      Type
ID           PRIMARY    INT(5)
NAME                    VARCHAR2(30)
PARTY                   VARCHAR2(3)
ADDRESS      NULL       VARCHAR2(50)
VOTECOUNT    NULL       VARCHAR2(5)
5. SYSTEM DESIGN
5.1 OVERVIEW OF UML
The Unified Modeling Language is commonly used to visualize and construct systems which are software intensive. Because software has become much more complex in recent years, developers are finding it more challenging to build complex applications within short time periods. Even when they do, these software applications are often filled with bugs, and it can take programmers weeks to find and fix them. This is time that has been wasted, since an approach could have been used which would have reduced the number of bugs before the application was completed.
However, it should be emphasized that UML is not limited simply to modeling software. It can also be used to build models for systems engineering, business processes, and organization structures. A special language called the Systems Modeling Language (SysML), defined as an extension of UML 2.0, was designed to handle systems engineering. The Unified Modeling Language is important for a number of reasons. First, it has been a catalyst for the advancement of model-driven technologies, including Model Driven Development and Model Driven Architecture. UML is very proficient in projects that require modeling.

Characteristics of UML
It must be emphasized that UML is an extensible language. It borrows many concepts from the object-oriented approach.
When UML was created, one of the goals of its developers was to create a language that could support every object-oriented approach. Some of the features which UML supports include timing analysis, data analysis, object-oriented structure design, and state charts. With all these features, UML became the language of choice for professionals who needed to solve various engineering challenges.
CLASS DIAGRAM:
In the Unified Modeling Language (UML), a class diagram is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, and the relationships between the classes.
OBJECT DIAGRAM:
In the Unified Modeling Language (UML), an object diagram is a diagram that shows a complete or partial view of the structure of a modeled system at a specific time. This snapshot focuses on some particular set of object instances and attributes, and the links between the instances. A correlated set of object diagrams provides insight into how an arbitrary view of a system is expected to evolve over time. Object diagrams are more concrete than class diagrams, and are often used to provide examples, or act as test cases for the class diagrams. Only those aspects of a model that are of current interest need be shown on an object diagram.
COMPONENT DIAGRAM:
In the Unified Modeling Language, a component diagram depicts how a software system is split up into physical components and shows the dependencies among these components. Physical components could be, for example, files, headers, link libraries, modules, executables, or packages. Component diagrams can be used to model and document any system's architecture.
DEPLOYMENT DIAGRAM:
In the Unified Modeling Language, a deployment diagram serves to model the hardware used in system implementations, the components deployed on the hardware, and the associations between those components. The elements used in deployment diagrams are nodes (shown as a cube), components (shown as a rectangular box, with two rectangles protruding from the left side) and associations. In UML 2.0, components are not placed in nodes; instead, artifacts and nodes are placed in nodes. An artifact is something like a file, program, library, or database constructed or modified in a project. These artifacts implement collections of components. The inner nodes indicate execution environments rather than hardware. Examples of execution environments include language interpreters, operating systems, and servlet/EJB containers.
ACTIVITY DIAGRAM:
In the Unified Modeling Language, an activity diagram represents the business and operational step-by-step workflows of components in a system. An Activity Diagram shows the overall flow of control.
5.2. UML DIAGRAMS OF BIOMETRIC VOTING MACHINE: 5.2.1. USE CASE DIAGRAM:
[Use case diagram showing the login use case]
6. TESTING
INTRODUCTION
Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Although crucial to software quality and widely deployed by programmers and testers, software testing still remains an art, due to limited understanding of the principles of software. The difficulty in software testing stems from the complexity of software: we cannot completely test a program of even moderate complexity. Testing is more than just debugging. The purpose of testing can be quality assurance, verification and validation, or reliability estimation. Testing can be used as a generic metric as well. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off between budget, time and quality. An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance (SQA), which encompasses all business process areas, not just testing. Testing is generally of 4 types:
1. Unit Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing
6.1 UNIT TESTING
During the coding phase, this testing is essential for the verification of each and every module of the code. Each module of the code is tested individually.
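As a concrete sketch, a unit test exercises one module in isolation. The validation routine below is hypothetical (standing in for the module that checks the keypad party number against the candidate list):

```java
public class PartyNumberValidatorTest {
    // Hypothetical unit under test: is the keypad input a valid party number?
    static boolean isValidPartyNumber(int input, int partyCount) {
        return input >= 1 && input <= partyCount;
    }

    public static void main(String[] args) {
        // One assertion per input class: boundary values and out-of-range values.
        assert isValidPartyNumber(1, 5) : "lowest valid number accepted";
        assert isValidPartyNumber(5, 5) : "highest valid number accepted";
        assert !isValidPartyNumber(0, 5) : "zero rejected";
        assert !isValidPartyNumber(6, 5) : "out-of-range rejected";
        System.out.println("all unit tests passed"); // run with: java -ea PartyNumberValidatorTest
    }
}
```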
6.2. INTEGRATION TESTING
Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.
6.3. SYSTEM TESTING System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" and within the system as a whole.
6.4. ACCEPTANCE TESTING Acceptance testing generally involves running a suite of tests on the completed system. Each individual test, known as a test case, exercises a particular operating condition of the user's environment or feature of the system, and results in a pass or fail (Boolean) outcome.
6.5 BLACK BOX TESTING Black box testing takes an external perspective of the test object to derive test cases. These tests can be functional or non-functional, though usually functional. The test designer selects valid and invalid input and determines the correct output. There is no knowledge of the test object's internal structure.
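For illustration, a black-box test might exercise a hypothetical voter-eligibility rule purely through its inputs and outputs, with no knowledge of the implementation (the method name and the age-18 rule are assumptions made for this sketch):

```java
// Black-box sketch: only the input/output contract is tested.
// The eligibility rule (age >= 18) and method name are illustrative.
public class EligibilityBlackBoxTest {
    static boolean isEligibleVoter(int age) {
        return age >= 18;
    }

    public static void main(String[] args) {
        // Valid input: a typical adult age.
        if (!isEligibleVoter(35)) throw new AssertionError("35 should be eligible");
        // Boundary input: exactly the assumed legal minimum.
        if (!isEligibleVoter(18)) throw new AssertionError("18 should be eligible");
        // Invalid input: below the minimum.
        if (isEligibleVoter(17)) throw new AssertionError("17 should not be eligible");
        System.out.println("black-box tests passed");
    }
}
```

Note that the test cases were chosen from the specification (valid, boundary, and invalid inputs), not from the code.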
6.6 WHITE BOX TESTING White box testing (a.k.a. clear box testing, glass box testing or structural testing) uses an internal perspective of the system to design test cases based on internal structure. It requires programming skills to identify all paths through the software. The tester chooses test case inputs to exercise paths through the code and determines the appropriate outputs.
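By contrast, a white-box test is designed by looking at the code's internal structure. In this sketch, the tester sees that a hypothetical score classifier has two branches and picks one input to force execution down each path (the method and threshold are illustrative assumptions):

```java
// White-box sketch: inputs are chosen so that every branch of the
// code is executed. The classifier and its threshold are illustrative.
public class PathCoverageTest {
    static String classifyScore(int score) {
        if (score >= 80) {
            return "MATCH";     // path 1: score at or above threshold
        } else {
            return "NO_MATCH";  // path 2: score below threshold
        }
    }

    public static void main(String[] args) {
        // One input per path gives full branch coverage here.
        if (!classifyScore(90).equals("MATCH")) throw new AssertionError();
        if (!classifyScore(10).equals("NO_MATCH")) throw new AssertionError();
        System.out.println("both paths exercised");
    }
}
```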
6.7 TEST PLAN The test plan defines the objectives and scope of the testing effort, and identifies the methodology that your team will use to conduct tests. It also identifies the hardware, software, and tools required for testing and the features and functions that will be tested. A well-rounded test plan notes any risk factors that jeopardize testing and includes a testing schedule.
Test Case Specification for System Testing We specify all test cases that are used for system testing. First, the different conditions that need to be tested are given, along with the test cases used for testing those conditions and the expected outputs. Then the data files used for testing are given. The test cases are specified with respect to these data files. The test cases have been selected using the functional approach. The goal is to test the different functional requirements, as specified in the requirements document. Test cases have been selected for both valid and invalid inputs.
Test Cases and Test Criterion Testing is a crucial step in software development, and having proper test cases is central to successful testing. The fundamental theorem of testing states that if a testing criterion is both valid and reliable, then a program that passes a test set satisfying that criterion is correct. There are two desirable properties for a testing criterion: reliability and validity. A criterion is valid if, for any error in the program, there is some test set satisfying the criterion that will reveal the error. The goal of test case selection is thus to choose test cases such that the maximum number of faults is detected by the minimum number of test cases.
Top-down and Bottom-up Approaches When testing a large program it is necessary to test parts of the program before testing the entire program. We assume that a system is a hierarchy of modules. In the top-down approach, we start by testing the root of the hierarchy, incrementally add the modules it calls, and then test the new combined system. This requires 'stubs', which simulate the modules being called. The bottom-up approach starts from the bottom of the hierarchy: first the modules that have no subordinates are tested, then these modules are combined with higher-level modules for testing. This requires 'drivers' to set up the appropriate environment and invoke the module.
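The stub-and-driver idea can be sketched as follows: a high-level voting module is tested top-down before its real fingerprint-reader subordinate exists, using a stub that returns a canned answer, while the `main` method plays the role of the driver. All names here are illustrative assumptions, not the actual project code:

```java
// Top-down testing sketch: the subordinate module is replaced by a
// stub, and a driver sets up the environment and invokes the module.
public class TopDownStubDemo {
    // Interface of the subordinate module that is not yet built.
    interface FingerprintReader {
        boolean verify(String voterId);
    }

    // Stub: simulates the subordinate with a fixed canned response.
    static class ReaderStub implements FingerprintReader {
        public boolean verify(String voterId) { return true; }
    }

    // High-level module under test.
    static String castVote(String voterId, FingerprintReader reader) {
        return reader.verify(voterId) ? "ACCEPTED" : "REJECTED";
    }

    public static void main(String[] args) {
        // Driver: invokes the high-level module with the stub in place.
        if (!castVote("V001", new ReaderStub()).equals("ACCEPTED"))
            throw new AssertionError("verified voter should be accepted");
        System.out.println("top-down stub test passed");
    }
}
```

When the real reader module is finished, it replaces the stub and the same driver re-runs the test against the combined system.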
7. CONCLUSION
The project on the Biometric Voting Machine has been successfully completed. With this machine, elections can be conducted without fraud by illegal voters. Only valid persons can choose their leader, overcoming rigging and other malpractices. A person can exercise his or her vote exactly once, and voters are assured that no other person can misuse their right. This example illustrates the use of biometrics: there is great demand for the fast, accurate authentication that biometric systems can provide.
All the system analysis requirements are satisfied by the machine, which has been thoroughly tested from the first phase of product development.
BIBLIOGRAPHY
S.NO  AUTHOR             PAPER/BOOK                     PUBLISHER          YEAR
1.    Deitel & Deitel    Java How to Program            Pearson Education  2000
2.    Roger S. Pressman  Software Engineering           Tata McGraw-Hill   2005
3.    Herbert Schildt    Java: The Complete Reference   Tata McGraw-Hill   2002