Marks – 40
Answer:
Definition of Pointer
“A pointer is a variable that can hold the address of the variables, structures and functions that
are used in the program. It contains only the memory location of the variable rather than its
contents”.
A pointer can hold the address of:
1. Variables
2. Structures
3. Function names
Advantages of Pointers:
1. Pointers enable call by reference, allowing a function to modify its caller's variables.
2. They support dynamic memory allocation and deallocation.
3. They make handling of arrays and of data structures such as linked lists efficient.
Pointer Operators:
To declare and refer to a pointer variable, C/C++ provides two special operators: & and *.
& (ampersand): This operator gives the memory address of a variable. For example, if the
variable i holds the value 100 and is stored at memory address 1004, then &i yields the
memory address, i.e. 1004.
* (asterisk): This operator is used to declare a pointer variable and, applied to a pointer,
gives the value stored at the address it holds (dereferencing).
Example: Program to assign the pointer values. (using operator & and *)
#include <iostream>
using namespace std;

int main()
{
    int y = 10;
    int *p = &y;    // p holds the address of y

    cout << "Address of y = " << &y << endl;   // using &
    cout << "Value of y = " << y << endl;
    cout << "Address of y = " << p << endl;    // the same address, via the pointer
    cout << "Value of y = " << *p << endl;     // using * to dereference p
    return 0;
}

Output (sample; the actual address varies from run to run):
Address of y = 65555
Value of y = 10
Address of y = 65555
Value of y = 10
Answers:
The process of assigning a value to an attribute is called binding. When a value is assigned to
an attribute, that attribute is said to be bound to the value. Depending on the semantics of the
programming language and on the attribute in question, the binding may be done statically by
the compiler or dynamically at run time. For example, in Java the type of a variable is
determined at compile time (static binding). On the other hand, the value of a variable is
usually not determined until run time (dynamic binding).
1. It exports a type.
2. It exports a set of operations; this set is called the interface.
3. Operations of the interface are the one and only access mechanism to the type's data
structure.
As ADTs provide an abstract view to describe properties of sets of entities, their use is
independent from a particular programming language. We therefore introduce a notation here.
Each ADT description consists of two parts:
Data: This part describes the structure of the data used in the ADT in an informal way.
Operations: This part describes valid operations for this ADT, hence, it describes its interface.
We use the special operation constructor to describe the actions that are to be performed
once an entity of this ADT is created, and destructor to describe the actions that are to be
performed once an entity is destroyed. For each operation, the provided arguments as well as
preconditions and postconditions are given.
3. Discuss the STACK operation with Suitable example and Show how to implement
stack operations of integer in C by using array.
Answer:
Stack: A stack is a linear list of elements for which all insertions and deletions (and usually
all accesses) are made at only one end of the list.
Stacks are also called LIFO lists (Last In, First Out).
4) Pop(S): Pops an element from the top of the stack onto the output. (Printing to the output
console isn't strictly necessary; in that case we can define another function Top(S), which
gives the top element of the stack.)
/* Menu-driven driver (the body of main was lost; reconstructed here): */
#include <stdio.h>

void push(int val);
int pop(void);
void display(void);

int main()
{
    int choice, val;
    do {
        printf("\n1.Push 2.Pop 3.Display 4.Exit\nEnter choice: ");
        scanf("%d", &choice);
        switch (choice) {
            case 1:
                printf("Enter value: ");
                scanf("%d", &val);
                push(val);
                break;
            case 2:
                printf("Popped: %d\n", pop());
                break;
            case 3:
                display();
                break;
        }
    } while (choice != 4);
    return 0;
}
#define maxsize 100
int stack[maxsize];
int stacktop=0;
void push(int val)
{
if(stacktop<maxsize)
stack[stacktop++]=val;
else
printf("Stack Overflow");
}
int pop()
{
int a;
if(stacktop>0)
{
stacktop--;
a=stack[stacktop];
return a;
}
else
{
printf("Stack is Empty");
return -1;
}
}
void display()
{
int i=0;
if(stacktop>0)
{
printf("\tElements are:");
while(i<stacktop)
{
printf("\t%d",stack[i++]);
}
printf("\n");
}
else
printf("\tStack is Empty\n");
}
4. Explain how stacks are useful in evaluation of arithmetic expressions with example.
Answer:
i Dijkstra Algorithm
ii Bellman-Ford Algorithm
Answer:
Dijkstra's algorithm
Dijkstra's algorithm, conceived by Dutch computer scientist Edsger Dijkstra in 1959,[1] is a
graph search algorithm that solves the single-source shortest path problem for a graph with
non-negative edge path costs, producing a shortest path tree. This algorithm is often used in
routing.
For a given source vertex (node) in the graph, the algorithm finds the path with lowest cost (i.e.
the shortest path) between that vertex and every other vertex. It can also be used for finding
costs of shortest paths from a single vertex to a single destination vertex by stopping the
algorithm once the shortest path to the destination vertex has been determined. For example,
if the vertices of the graph represent cities and edge path costs represent driving distances
between pairs of cities connected by a direct road, Dijkstra's algorithm can be used to find the
shortest route between one city and all other cities. As a result, the shortest path first is widely
used in network routing protocols, most notably IS-IS and OSPF (Open Shortest Path First).
Description of the algorithm
Suppose you create a knotted web of strings, with each knot corresponding to a node, and the
strings corresponding to the edges of the web: the length of each string is proportional to the
weight of each edge. Now you compress the web into a small pile without making any knots or
tangles in it. You then grab your starting knot and pull straight up. As new knots start to come
up with the original, you can measure the straight up-down distance to these knots: this must
be the shortest distance from the starting node to the destination node. The acts of "pulling up"
and "measuring" must be abstracted for the computer, but the general idea of the algorithm is
the same: you have two sets, one of knots that are on the table, and another of knots that are
in the air. Every step of the algorithm, you take the closest knot from the table and pull it into
the air, and mark it with its length. If any knots are left on the table when you're done, you mark
them with the distance infinity.
Or, using a street map, suppose you're marking over the streets (tracing the street with a
marker) in a certain order, until you have a route marked in from the starting point to the
destination. The order is conceptually simple: from all the street intersections of the already
marked routes, find the closest unmarked intersection - closest to the starting point (the
"greedy" part). Its distance is the length of the whole marked route to the intersection, plus
the street to the new, unmarked intersection. Mark that street to that intersection, draw an
arrow with the direction, then repeat. Never mark to any intersection twice. When you get to
the destination, follow the arrows backwards. There will be only one path back against the
arrows, the shortest one.
Bellman–Ford algorithm
According to Robert Sedgewick, "Negative weights are not merely a mathematical curiosity;
[they] arise in a natural way when we reduce other problems to shortest-paths problems", and
he gives the specific example of a reduction from the NP-complete Hamiltonian path problem
to the shortest-paths problem with general weights. If a graph contains a cycle of total
negative weight, then arbitrarily low weights are achievable and so there is no solution;
Bellman–Ford detects this case.
If the graph does contain a cycle of negative weights, Bellman–Ford can only detect this; it
cannot find a shortest path that does not repeat any vertex in such a graph. This problem is at
least as hard as the NP-complete longest path problem.
Bellman-Ford is in its basic structure very similar to Dijkstra's algorithm, but instead of greedily
selecting the minimum-weight node not yet processed to relax, it simply relaxes all the edges,
and does this |V| − 1 times, where |V| is the number of vertices in the graph. The repetitions
allow minimum distances to accurately propagate throughout the graph, since, in the absence
of negative cycles, the shortest path can only visit each node at most once. Unlike the greedy
approach, which depends on certain structural assumptions derived from positive weights, this
straightforward approach extends to the general case. Bellman–Ford runs in O(|V|·|E|) time,
where |V| and |E| are the number of vertices and edges respectively.
MT0033– 02
Marks – 40
1. Explain the process of insertion and deletion of an element in a binary search tree
with an appropriate example.
Answers:
We've seen how to search a BST to determine if a particular node exists, but we've yet to look
at how to add a new node. When adding a new node we can't arbitrarily add the new node;
rather, we have to add the new node such that the binary search tree property is maintained.
When inserting a new node we will always insert the new node as a leaf node. The only
challenge, then, is finding the node in the BST, which will become this new node's parent. Like
with the searching algorithm, we'll be making comparisons between a node c and the node to
be inserted, n. We'll also need to keep track of c's parent node. Initially, c is the BST root and
parent is a null reference. Locating the new parent node is accomplished by using the
following algorithm:
Step 1: If c is a null reference, then parent will be the parent of n. If n's value is less than
parent's value, then n will be parent's new left child; otherwise n will be parent's new right
child.
Step 2: If c's value equals n's value, then the user is attempting to insert a duplicate node.
Either simply discard the new node, or raise an exception. (Note that the nodes' values in a
BST must be unique.)
Step 3: If n's value is less than c's value, then n must end up in c's left subtree. Let parent
equal c and c equal c's left child, and return to step 1.
Step 4: If n's value is greater than c's value, then n must end up in c's right subtree. Let parent
equal c and c equal c's right child, and return to step 1.
This algorithm terminates when the appropriate leaf is found; it attaches the new node to the
BST by making the new node an appropriate child of parent. There is one special case to
worry about with the insert algorithm: if the BST does not contain a root, then parent will be
null, so the step of adding the new node as a child of parent is bypassed; furthermore, in this
case the BST's root must be assigned to the new node.

Deleting Nodes from a BST
Deleting nodes from a BST is slightly more difficult than inserting a node because deleting a
node that has children requires that some other node be chosen to replace the hole created by
the deleted node. If the node to replace this hole is not chosen with care, the binary search
tree property may be violated. For example, consider the BST in Figure 1.6. If the node 150 is
deleted, some node must be moved into the hole created by node 150's deletion. If we
arbitrarily choose to move, say, node 92 there, the BST property is violated, since 92's new
left subtree will have nodes 95 and 111, both of which are greater than 92, thereby violating
the binary search tree property. The first step in the algorithm to delete a node is to locate the
node to delete. This can be done using the searching algorithm discussed earlier and
therefore has a log2 n running time. Next, a node from the BST must be selected to take the
deleted node's position.
Case 1: If the node being deleted has no right child, then the node's left child can be used as
the replacement. The binary search tree property is maintained because we know that the
deleted node's left subtree itself maintains the binary search tree property, and that the values
in the left subtree are all less than or all greater than the deleted node's parent, depending on
whether the deleted node is a left or right child. Therefore, replacing the deleted node with its
left subtree will maintain the binary search tree property.
Case 2: If the deleted node's right child has no left child, then the deleted node's right child can
replace the deleted node. The binary search tree property is maintained because the deleted
node's right child is greater than all nodes in the deleted node's left subtree and is either
greater than or less than the deleted node's parent, depending on whether the deleted node
was a right or left child. Therefore, replacing the deleted node with its right child will maintain
the binary search tree property.
Case 3: Finally, if the deleted node's right child does have a left child, then the deleted node
needs to be replaced by the deleted node's right child's left-most descendant. That is, we
replace the deleted node with the deleted node's right subtree's smallest value.
Note: For any BST, the smallest value is in the left-most node, while the largest value is in the
right-most node. This replacement choice maintains the binary search tree property because
it chooses the smallest node from the deleted node's right subtree, which is guaranteed to be
larger than all nodes in the deleted node's left subtree. Also, since it is the smallest node from
the deleted node's right subtree, placing it at the deleted node's position keeps all of the
nodes in that right subtree greater than it.
Answer:
Red-Black Trees
Definition: A red-black tree is a binary search tree whose leaves are external nodes. A red-
black tree must satisfy the following properties:
1) Every node is colored either red or black.
2) The root is black.
3) Every external (leaf) node is black.
4) No red node has a red child; that is, both children of a red node are black.
5) For each node, all paths from that node to a descendant leaf contain the same number of
black nodes.
Insertions are done at a leaf and will replace an external node with an internal node with two
external children. The newly inserted node is always red. If its parent is black, no additional
readjustment needs to be done to maintain properties 4 and 5 above. If the parent is red there
are three cases to consider:
Case 1: parent (y) and uncle (w) are both red. Make both nodes black and their parent red. Let
node x now denote the grandparent of the original x (the node that was changed to red), and
continue applying this algorithm. If new x is the root, make it black and you are finished.
Case 2: parent (y) is red, uncle (w) is black, and x is a right child of a parent that is a left child
(or the symmetrically equivalent case where x is a left child of a parent that is a right child).
Rotate left (or, in the symmetrical case, right) about the parent and denote the rotated parent
node as x – a left child of a left child (or symmetrically equivalent). This now becomes Case 3.
Case 3: parent (y) is red, uncle (w) is black, and x is a left (right) child of a parent that is a left
(right) child. Rotate right (left) about the grandparent, make the rotated grandparent red and
the new parent black. The red-black properties are now restored.
Answer:
An AVL tree is a binary search tree whose left subtree and right subtree differ in height by no
more than 1, and whose left and right subtrees are themselves AVL trees. To maintain balance in a
height balanced binary tree, each node will have to keep an additional piece of information that
is needed to efficiently maintain balance in the tree after every insert and delete operation has
been performed. For an AVL tree, this additional piece of information is called the balance
factor, and it indicates if the difference in height between the left and right subtrees is the same
or, if not, which of the two subtrees has height one unit larger. If a node has a balance factor rh
(right high) it indicates that the height of the right subtree is 1 greater than the height of the left
subtree. Similarly the balance factor for a node could be lh (left high) or eh (equal height).
Example: Consider the AVL tree depicted below. The right subtree has height 1 and the left
subtree, height 0. The balance factor of the root is tilted toward the right (right high – rh) since
the right subtree has height one larger than the left subtree. Inserting the new node 21 into the
tree will cause the right subtree to have height 2 and cause a violation of the definition of an
AVL tree. This violation of the AVL property is indicated at the root by showing that the balance
factor is now doubly unbalanced to the right. The other balance factors along the path of
insertion will also be changed as indicated. The node holding 12 is also doubly unbalanced to
the right.
The AVL property of this tree is restored through a succession of rotations. The root of this tree
is doubly unbalanced to the right. The child of this unbalanced node (node 12) is also doubly
unbalanced rh, but its child (node 24) is lh. Before a rotation around the root of the right
subtree can be performed, a rotation around node 24 is required so that the balance factors of
both the child and grandchild of the unbalanced node – the subtree root (node 12) – agree in
direction (both rh in this case).
Now both nodes 12 and 21 have a balance factor of rh, and a left rotation can be performed
about the node 12. This rotation reduces the height of the right subtree by 1 and will restore
AVL balance to the tree.
Next we add a new key 6 to this tree. Addition of a new node holding this key causes the root
to become doubly unbalanced to the right.
Since the root is doubly unbalanced right and its right child is left high, we must first perform a
right rotation around node 21. Now a rotation around the root readjusts the balance of the tree.
The left rotation about the root promotes the right child of the original root (node 12), and
makes the old root (node 4) the left child of the new root – replacing the left subtree originally
attached to node 12. The former left subtree of node 12 is now the right subtree of node 4. In a
left rotation, all of the keys in the left subtree of the right child must be greater than the key of
the root and less than the key of the parent. When the root becomes the left child of the
parent, the keys in this subtree remain in the left subtree of the new root and in the right
subtree of the new left child.
4. Insert the keys in the order shown below into an initially empty AVL tree, and show
the sequence of trees produced by these insertions.
A, Z, B, Y, T, M, E, W, D, G, X, F, P, O, C, H, Q, S [6 Marks]
Answer:
5. Discuss the techniques for allowing a hash file to expand and shrink dynamically.
What are the advantages and disadvantages of each.
Answers:
Dynamic Hashing
1) Choose hash function based on current file size. Get performance degradation as file
grows.
2) Choose hash function based on anticipated file size. Space is wasted initially.
3) Periodically reorganize the hash structure as the file grows. This requires selecting a new
hash function, recomputing all addresses and generating new bucket assignments. It is costly,
and shuts down the database. Some hashing techniques allow the hash function to be
modified dynamically to accommodate the growth or shrinking of the database. These are
called dynamic hash functions. Extendable hashing is one form of dynamic hashing.
Extendable hashing splits and coalesces buckets as the database size changes. This imposes
some performance overhead, but space efficiency is maintained. As reorganization is done on
one bucket at a time, the overhead is acceptably low.
Advantages:
1) Extendable hashing provides performance that does not degrade as the file grows.
2) Space overhead is minimal.
3) The bucket address table contains only one pointer for each hash value of the current
prefix length.
Disadvantages:
1) An extra level of indirection: the bucket address table must be consulted before a bucket
can be accessed.
2) Added complexity.
Answers:
Clustered files