
WEB APPLICATION TECHNOLOGIES

Most modern business software is web-enabled. This article describes how such applications are implemented using the five main technology stacks.

Types of Web Applications
There are three main types of web applications:

Customer-facing applications are known as ecommerce or B2C sites and use the internet. These typically present a customer with choices of products or services to buy using a shopping cart and payment method. Examples are travel reservations, http://www.amazon.com and http://www.ebay.com.

Employee-facing applications use the intranet in a company. One example is a company's accounting application. Another might be employee expense reporting. A third might be the ERP (enterprise resource planning) system. These applications previously operated on an internal client-server network. They are now web-enabled to make them easier to use and deploy. Disparate applications, such as ERP and CRM (Customer Relationship Management) systems, are now being integrated using XML and web services.

Customer-supplier facing applications are known as B2B (Business to Business) sites and use the extranet (an extension of an intranet that allows outside companies to work in a password-protected space). B2B sites provide a secure means for sharing selected information. One example is supply chain software that allows all suppliers to see demand and inventory in the supply chain. Another example is procurement software that allows a customer to send RFQs and receive quotes over the web. A third example is collaboration software that allows companies to share product development and project management information.

Not all applications fit the above categories. For example, Yahoo! email is in none of them. However, the categories above are representative of the main types of applications, and the same technologies are used for the others.

The 3-tier Architecture
Web applications are built using a 3-tier architecture in which the client, server and database constitute the main elements. This is sometimes called an n-tier architecture because there can be multiple levels of servers. This architecture is distinct from the earlier mainframe (1-tier) and client-server (2-tier) architectures.

Technologies Used to Build Web Applications
Originally, the internet was designed to serve static pages. A rudimentary technology

based on CGI was developed to allow information to be passed back to a web server. During the last ten years, four main technologies have emerged to replace CGI, and the basic CGI technology has itself been further refined, using Perl as the primary programming language. This has led to five competing technology stacks that differ in the following attributes:

Programming languages (Lang)
Operating system (OS). This can be Linux (L), Unix (U) or Windows (W).
Web server (Server)
Database support (DB)
Sponsoring companies (Sponsor)

The following table summarizes these technology stacks.

Stack        Sponsor        OS      Server      DB          Lang
CGI          Open source    L/U     Apache      Varies      Perl
ColdFusion   Macromedia     W/L/U   ColdFusion  Varies      CFML
LAMP         Open source    L/W/U   Apache      MySQL       PHP
Java/J2EE    Sun, IBM       L/U     J2EE        Varies      Java
.NET         Microsoft      W       ASP.NET     SQL Server  VB.NET/C#

Note that variations have been left off to show the main combinations. These technologies are quite different, which means that someone who is familiar with one approach faces a steep learning curve to use a different one. Once an application is developed using one technology, it is difficult and expensive to convert it to a different one. As a result, many web application developers have a strong interest in promoting the technology they are familiar with. When choosing a technology for a web application, one typically chooses a whole stack; trying to use one piece from one stack (e.g. Java/J2EE) and another from a second (e.g. .NET) is possible, but uncommon. The following is a more detailed explanation of each of these technology stacks:

CGI/Perl. CGI is the granddaddy of interfaces for passing data from a submitted web page to a web server. Perl is an open-source language optimized for writing server-side applications. Together, CGI and Perl make it easy to connect to a variety of databases. Apache tends to be the web server used because it runs on all major operating systems and is highly reliable. Other open-source languages such as C and Python can also be used. This technology is used for high-end applications, especially e-commerce sites like amazon.com, because it is so powerful; however, other technology stacks can be implemented more easily and quickly. A minimal sketch of a CGI script follows.
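The sketch below shows the basic CGI flow: the web server passes submitted form data to the script, which prints an HTTP header and a page. It assumes the CGI.pm module is available (it shipped with Perl for many years and is now on CPAN); the form field name 'name' is hypothetical.

#!/usr/bin/perl
# Minimal CGI sketch: echo back a submitted form field.
use strict;
use warnings;
use CGI;

my $q    = CGI->new;                       # parses GET/POST data passed by the web server
my $name = $q->param('name') || 'world';   # 'name' is a hypothetical form field

print $q->header('text/html');             # emit the Content-Type header CGI requires
print "<html><body><h1>Hello, $name!</h1></body></html>\n";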

ColdFusion. Macromedia sells a collection of products that make it easy to build small and medium-sized web applications. The primary tools provided by Macromedia are ColdFusion, an engine that lets one program in CFML (ColdFusion Markup Language), and Dreamweaver, a development tool for building web applications. Because Macromedia is a smaller player, it has focused on making its products compatible with components from other technology stacks. Macromedia also sells Flash and has tools for using it in web applications.

Java/J2EE is a robust, well-developed method for creating medium to large web applications. It has support from a number of large industry players. Sun Microsystems provides Java itself. IBM (WebSphere) and BEA Systems (WebLogic) are two major suppliers of web application servers and associated software that make it easy to create and manage these applications. There is a large body of Java programmers available to write the code. This technology stack works with a variety of databases and is particularly well tuned to mainstream commercial databases like Oracle and DB2. IBM has developed a development environment called Eclipse that is making it easier to write applications, but in general, Java is associated with powerful applications built by capable programmers.

LAMP (Linux, Apache, MySQL, PHP) is a relatively new technology stack for building web applications that has been adopted for many small and medium-size web tasks because: (a) the entire technology stack is available through open source; (b) it works well; (c) it is easy to learn; (d) it allows one to build a web application quickly; and (e) there are many open-source code samples that can be bolted together to make a full solution. LAMP relies on CGI for data exchange between the server and browser, but the CGI commands are hidden from the developer. LAMP doesn't have all of the capabilities of J2EE, but it gains ground every year. Sites that use PHP can be recognized by ".php" as part of the page name in the URL. LAMP has become especially popular with ISVs (independent software vendors) because they can create an application and sell it without having to pay for the underlying (open source) software.

Microsoft .NET. Microsoft is using its .NET strategy to take over the server market the way Windows, Office, and Internet Explorer have taken over the desktop. The stack comprises a web server (ASP.NET) and two programming languages (VisualBasic.NET and C#.NET) that compete against PHP and Java respectively. Microsoft also has a database (SQL Server). Microsoft has done an excellent job of making its products easy to use, so a business analyst can create a web application without needing a programmer.

MS-DOS vs. Linux / Unix
The chart below lists common MS-DOS commands alongside their Linux / Unix counterparts.

MS-DOS           Linux / Unix
attrib           chmod
backup           tar
dir              ls
cls              clear
copy             cp
del              rm
deltree          rm -r, rmdir
edit             vi, pico
format           fdformat, mount, umount
move / rename    mv
type             less <file>
cd               cd, chdir
md               mkdir
win              startx
/////////////////////////////////
Posted: Thu May 06, 2010 7:12 am Subject description: Computer Science Post subject: Introduction to PERL Programming

INTRODUCTION
Perl and Networking
Perl (Practical Extraction and Report Language) was designed by Larry Wall. Why would you want to write networking applications in Perl? The Internet is based on Transmission Control Protocol/Internet Protocol (TCP/IP), and most networking applications are based on a straightforward application programming interface (API) to the protocol known as Berkeley sockets. The success of TCP/IP is due partly to the ubiquity of the sockets API, which is available for all major languages including C, C++, Java, BASIC, Python, COBOL, Pascal, FORTRAN, and, of course, Perl. The sockets API is similar in all these languages. There may be a lot of work involved in porting a networking application from one computer language to another, but porting the part that does the socket communications is usually the least of your problems. Perl is a good fit for networking because of the features below.

A Language Built for Interprocess Communication
Perl was built from the ground up to make it easy to do interprocess communication (the thing that happens when one program talks to another). As we shall see later in this chapter, in Perl there is very little difference between opening up a local file for reading and opening up a communications channel to read data from another local program. With only a little more work, you can open up a socket to read data from a program running remotely on another machine somewhere on the Internet. Once the communications channel is open, it matters little whether the thing at the other end is a file, a program running on the same machine, or a program running on a remote machine. Perl's input/output functions work in the same way for all three types of connections.
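A rough sketch of that point: reading from a file and reading from another program look almost identical in Perl. The file path and the date command here are illustrative stand-ins.

#!/usr/bin/perl
use strict;
use warnings;

# Open a local file for reading.
open(my $file, '<', '/etc/hostname') or die "open file: $!";
print while <$file>;
close $file;

# Open a pipe to another local program; only the open mode changes.
open(my $pipe, '-|', 'date') or die "open pipe: $!";
print while <$pipe>;
close $pipe;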

A Language Built for Text Processing
Another Perl feature that makes it good for network applications is its powerful integrated regular-expression matching and text-processing facilities. Much of the data on the Internet is text based (the Web, for instance), and a good portion of that is unpredictable, line-oriented data. Perl excels at manipulating this type of data, and is not vulnerable to the type of buffer overflow and memory overrun errors that make networking applications difficult to write (and possibly insecure) in languages like C and C++.

An Open Source Project
Perl is an Open Source project, one of the earliest. Examining other people's source code is the best way to figure out how to do something. Not only is the source code for all of Perl's networking modules available, but the whole source tree for the interpreter itself is available for your perusal. Another benefit of Perl's openness is that the project is open to any developer who wishes to contribute to the library modules or to the interpreter source code. This means that Perl adds features very rapidly, yet is stable and relatively bug free. The universe of third-party Perl modules is available via a distributed Web-based archive called CPAN, the Comprehensive Perl Archive Network. You can search CPAN for modules of interest, download and install them, and contribute your own modules to the archive.

Object-Oriented Networking Extensions
Perl 5 has object-oriented extensions, and although OO purists may express dismay over the fast and loose way in which Perl has implemented these features, it is inarguable that the OO syntax can dramatically increase the readability and maintainability of certain applications. Nowhere is this more evident than in the library modules that provide a high-level interface to networking protocols. Among many others, the IO::Socket modules provide a clean and elegant interface to Berkeley sockets; Mail::Internet provides cross-platform access to Internet mail; LWP gives you everything you need to write Web clients; and the Net::FTP and Net::Telnet modules let you write interfaces to these important protocols.

Security
Security is an important aspect of network application development, because by definition a network application allows a process running on a remote machine to affect its execution. Perl has some features that increase the security of network applications relative to other languages. Because of its dynamic memory management, Perl avoids the buffer overflows that lead to most of the security holes in C and other compiled languages. Of equal importance, Perl implements a powerful "taint" check system that prevents tainted data obtained from the network from being used in operations such as opening files for writing and executing system commands, which could be dangerous.

Performance
A last issue is performance. As an interpreted language, Perl applications run several times more slowly than C and other compiled languages, and about on par with Java and Python. In most networking applications, however, raw performance is not the issue; the I/O bottleneck is. On I/O-bound applications Perl runs just as fast (or as slowly) as a compiled program. In fact, it's possible for the performance of a Perl script to exceed that of a compiled program. If execution speed does become an issue, Perl provides a facility for rewriting time-critical portions of your application in C, using the XS extension system. Or you can treat Perl as a prototyping language, and implement the real application in C or C++ after you've worked out the architectural and protocol details.
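A minimal sketch of the IO::Socket interface mentioned above: a tiny TCP client that sends an HTTP request and reads the reply through the socket as if it were an ordinary filehandle. The host name is a placeholder, not a working endpoint.

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

my $sock = IO::Socket::INET->new(
    PeerAddr => 'example.com',   # placeholder server
    PeerPort => 80,
    Proto    => 'tcp',
    Timeout  => 10,
) or die "connect: $!";

print $sock "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n";
print while <$sock>;             # the socket reads like any other filehandle
close $sock;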

Running Perl Programs
To run a Perl program from the Unix command line:

perl progname.pl

Alternatively, put this as the first line of your script:

#!/usr/bin/env perl

and run the script as /path/to/script.pl. Of course, it'll need to be executable first, so chmod 755 script.pl (under Unix). (This start line assumes you have the env program. You can also put the path to your perl executable directly, as in #!/usr/bin/perl.)

Basic Syntax Overview
A Perl script or program consists of one or more statements. These statements are simply written in the script in a straightforward fashion. There is no need to have a main() function or anything of that kind.

Perl statements end in a semicolon:

print "Hello, world";

Comments start with a hash symbol and run to the end of the line:

# This is a comment

Whitespace is irrelevant:

print
    "Hello, world"
    ;

except inside quoted strings:

# this would print with a line break in the middle
print "Hello
world";

Double quotes or single quotes may be used around literal strings:

print "Hello, world";
print 'Hello, world';

Only double quotes "interpolate" variables and special characters such as newlines (\n):

print "Hello, $name\n";  # works fine
print 'Hello, $name\n';  # prints $name\n literally

Numbers don't need quotes around them:

print 42;

You can use parentheses around a function's arguments or omit them according to your personal taste. They are only required occasionally to clarify issues of precedence.

print("Hello, world\n");
print "Hello, world\n";

Example: Write a Perl script to print Hello World.

[student@localhost~]$ vi hello.pl

print "Hello World";

(press Esc, then type :wq to save and quit)

$ perl hello.pl
Output: Hello World

Perl Data Types

Variables

A variable is a container that holds one or more values that can change throughout a program. There are four types of variables in Perl:
1) Default variable
2) Scalar variable
3) Array
4) Associative array (hash/lookup table)

Default variable: Perl's default variable is $_, which many constructs (such as the loop below) read and write implicitly.

Example:
while (<>) {
    print $_;
}

Scalar
A scalar variable holds a single value, which can be a number or a character string. Scalar variables have a dollar sign ($) prefix.

Examples of scalar variables:
$name = 'priya';
$len = 5;

Strings
Strings are sequences of characters (like "hello").

Single-Quoted Strings
Text placed between a pair of single quotes is interpreted literally. To get a single quote into a single-quoted string, precede it by a backslash (\). To get a backslash into a single-quoted string, precede the backslash by a backslash.

Examples of strings:
'hello'          # hello
'can\'t'         # can't
'http:\\\\www'   # http:\\www

Double-Quoted Strings
The double quote interpolates variables between the pair of quotes, which means that the variable names within the string are replaced by their current values.

Examples

$x = 1;
print '$x';   # will print out $x
print "$x";   # will print out 1

There are several different escape characters that can be printed out:
\n   Newline
\\   Backslash
\t   Tab
\"   Double quote

Auto-increment and Auto-decrement Operators (++, --)
$a++;   # Equivalent to $a = $a + 1;
$a--;   # Equivalent to $a = $a - 1;
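The variable list earlier also names arrays and associative arrays (hashes), which these notes don't otherwise illustrate. A minimal sketch (the values are arbitrary):

# Arrays hold ordered lists; the sigil is @ (single elements use $).
my @fruits = ('apple', 'mango', 'banana');
print $fruits[0], "\n";          # prints: apple
print scalar(@fruits), "\n";     # prints: 3 (an array in scalar context gives its length)

# Hashes (associative arrays) map keys to values; the sigil is %.
my %marks = (priya => 85, ravi => 72);
print $marks{priya}, "\n";       # prints: 85
foreach my $student (keys %marks) {
    print "$student scored $marks{$student}\n";
}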

Numeric and String Comparison Operators

Comparison                  Numeric   String
Equal                       ==        eq
Not Equal                   !=        ne
Less Than                   <         lt
Greater Than                >         gt
Less than or equal to       <=        le
Greater than or equal to    >=        ge
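A small example of why the two operator families matter (the values here are arbitrary):

my $x = '10';
my $y = '10.0';
print "numerically equal\n" if $x == $y;   # true: both are 10 as numbers
print "string equal\n"      if $x eq $y;   # false: '10' ne '10.0' as strings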

Example: Write a Perl script to find the factorial of a number.

[student@localhost~]$ vi factorial.pl

# Factorial of a number
print "Enter number\n";
$a = <>;
for ($i = $a - 1, $f = $a; $i > 1; $i--) {
    $f = $f * $i;
}
print "factorial is $f\n";

Output:
Enter number
5
factorial is 120

Smart Dust

What is smart dust?
Smart dust is a hypothetical wireless network of tiny microelectromechanical sensors that can detect light, temperature and vibrations. The smart dust concept was introduced by Kristofer S. J. Pister in 2001. Smart dust devices are based on sub-voltage and deep sub-voltage nanoelectronics and include micro power sources with all-solid-state impulse supercapacitors.

Components of smart dust
A single smart dust mote has:
1. A semiconductor laser diode and MEMS beam-steering mirror for active optical transmission.
2. A MEMS corner-cube retroreflector (CCR) for passive optical transmission.
3. An optical receiver.
4. Signal processing and control circuitry.
5. A power source based on thick-film batteries and solar cells.
6. A photodetector and receiver.

Construction
A significant trend in electronics technology is the increasing ability to provide adaptive features in smaller and smaller electronic devices. An example of this trend is electronic motes. Electronic motes are devices that can:
Support the collection and integration of data from a variety of miniature sensors.
Analyze the sensor data as specified by system-level controls.
Wirelessly communicate the results of their analyses to other motes, system base stations and the internet, as specified by the automation system.

Motes are also sometimes referred to as smart dust. One mote is composed of a small, low-powered and cheap computer connected to several sensors and a radio transmitter capable of forming ad hoc networks. The computer monitors the different sensors in a mote. These sensors can measure light, acceleration, position, stress, pressure, humidity, sound and vibration. Data gathered are passed on to the radio link for transmission from mote to mote until the data reach the transmission node.

Working
Smart dust motes are run by microcontrollers.

The microcontrollers include tiny sensors for recording various types of data. The sensors are run by timers: a timer works for a specific period, powering up the sensors to collect data. The data obtained are stored in the mote's memory for further interpretation, or are sent to the base controlling stations.

The CCR comprises three mutually perpendicular mirrors of gold-coated polysilicon. It has the property that any incident ray of light is reflected back to the source, provided that it is incident within a certain range of angles centered about the cube's body diagonal. The microfabricated CCR includes an electrostatic actuator that can deflect one of the mirrors at kilohertz rates; thus the external light source can be transmitted back in the form of a modulated signal at kilobits per second. CCR-based optical links require an uninterrupted line-of-sight path, and a CCR can transmit to the base transceiver station (BTS) only when its body diagonal happens to point directly towards the BTS, within a few tens of degrees. A passive transmitter can be made more omnidirectional by employing several CCRs oriented in different directions, at the expense of increased dust mote size.

Applications
Environmental protection (identification and monitoring of pollution).
Habitat monitoring (observing the behavior of animals in their natural habitat).
Military applications (monitoring activities in inaccessible areas; accompanying soldiers and alerting them to any poisonous or dangerous biological substances in the air).
Indoor/outdoor environmental monitoring.
Security and tracking.
Health and wellness monitoring (entering human bodies and checking for physiological problems).
Factory and process automation.
Seismic and structural monitoring.
Monitoring traffic and redirecting it.

A typical application scenario is scattering a hundred of these sensors around a building or a hospital to monitor temperature or humidity, track patient movements, or warn of disasters such as earthquakes. In the military, they can perform as remote sensor chips to track enemy movements and detect poisonous gas or radioactivity.

Advantages of smart dust
Main benefits for organizations:
Dramatically reduced systems and infrastructure cost.
Increased plant/factory/office productivity.
Improved safety, efficiency and compliance.

Smart dust has many useful applications, such as:

Detecting corrosion in aging pipes before they leak and cost huge amounts of money to repair.
Automating many manual, error-prone tasks which involve calibration and monitoring.
Providing accurate data on motor health in order to perform more timely maintenance when needed.
Letting most instruments become wireless, and even allowing them to be recalibrated, reconfigured and upgraded wirelessly.
Monitoring power consumption to better understand where most energy is being used, which would help plants save money in the long run.

In an office environment, smart dust could also prove invaluable, and has the potential to be used in such applications as:

Eliminating wired routers entirely, and replacing them with a single smart dust chip which would handle all hardware and software functions for distributed networks, using five times less power than conventional networks.
Tracking the movements of visitors as they roam around the office, to see if they are going into any restricted locations, or to let the CEO know what they are up to.
Tracking important packages leaving the offices (smart dust nodes can even be equipped with GPS receivers!).

The uses of smart dust are so numerous and wide-ranging that it would take pages and pages to mention them all. It has much potential in the fields of medicine, agriculture, dentistry, etc., and we've only begun to tap into that potential.
///////////////////////////////////////////////////////////

GRID COMPUTING
Grid computing enables virtual organizations to share geographically distributed resources as they pursue common goals, in the absence of a central location, central control, omniscience, and an existing trust relationship.

HISTORY
The term originated in the early 1990s as a metaphor for making computer power as easy to access as an electric power grid. Ian Foster, Carl Kesselman and Steve Tuecke are regarded as the fathers of the grid.

OVERVIEW
A form of distributed computing.
Composed of many networked, loosely coupled computers acting together to perform large tasks.
Uses middleware to divide and apportion pieces of a program among several computers.
A special type of parallel computing that relies on complete computers connected to a network by a conventional network interface such as Ethernet.

HOW DOES A GRID WORK?
Each computer's resources are shared with every other computer in the network. Processing power, memory and data storage are all community resources that authorized users can tap into and leverage for specific tasks. This sharing turns a computer network into a powerful supercomputer.

APPLICATIONS
Drug discovery
Economic forecasting
Seismic analysis
Back-office data processing in support of e-commerce and web services
Biomedical applications

ADVANTAGES
No need to buy large six-figure SMP servers for applications that can be split up and farmed out to smaller commodity-type servers.
Grid environments are much more modular and don't have single points of failure: if one of the servers/desktops within the grid fails, there are plenty of other resources able to pick up the load.
Jobs can be executed in parallel, speeding performance. Grid environments are extremely well suited to running jobs that can be split into smaller chunks and run concurrently on many nodes (see the sketch after this list).

DISADVANTAGES
You may need a fast interconnect between compute resources (gigabit Ethernet at a minimum).
Grid environments include many smaller servers across various administrative domains, and good tools for managing change and keeping configurations in sync with each other can be hard to find in large environments.
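The following Perl sketch illustrates only the divide-and-run-concurrently idea behind grids. Real grid middleware distributes chunks across many machines; this hedged, local approximation merely forks worker processes on one machine.

#!/usr/bin/perl
use strict;
use warnings;

my @chunks = (1 .. 8);      # eight pieces of a larger task (illustrative)
my @pids;

foreach my $chunk (@chunks) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child process: do this chunk's share of the work.
        print "worker $$ processing chunk $chunk\n";
        exit 0;
    }
    push @pids, $pid;       # parent: remember the worker
}

# Wait for all workers to finish, then results could be combined.
waitpid($_, 0) for @pids;
print "all chunks done\n";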

GRID EXAMPLES

Network for Earthquake Engineering Simulation (NEESgrid)
Biomedical Informatics Research Network (BIRN)
+++++++++++++++++++++++++++++++++++++++++++++++++++++

Introduction to Shell Programming
We use shells because a shell is a simple way to string together a bunch of UNIX commands for execution at any time, without the need for prior compilation. It is also generally fast to get a script going, other scripters can easily read the code and understand what is happening, and scripts are generally completely portable across the whole UNIX world, as long as they have been written to a common standard.

The Shell History
The basic shells come in three main language forms. These are (in order of creation) sh, csh and ksh. Be aware that there are several dialects of these script languages which tend to make them slightly platform specific. The different dialects are due, in the main, to the different UNIX flavors in use on some platforms. All script languages, though, have at their heart a common core which, if used correctly, will guarantee portability.

Bourne Shell
Historically, the sh language was the first to be created and goes under the name of the Bourne shell. It has a very compact syntax which makes it obtuse for novice users but very efficient when used by experts. It also contains some powerful constructs built in. On UNIX systems, most of the scripts used to start and configure the operating system are written in the Bourne shell. It has been around for so long that it is virtually bug free.

C Shell
Next up was the C shell (csh), so called because of its syntactical similarity to the C language. The UNIX man pages contain almost twice as much information for the C shell as for the Bourne shell, leading many users to believe that it is twice as good. This is a shame, because there are several compromises within the C shell which make using the language for serious work difficult (check the list of bugs at the end of the man pages!). True, there are so many functions available within the C shell that if one should fail another could be found; the point is, do you really want to spend your time finding all the alternative ways of doing the same thing just to keep yourself out of trouble? The real reason the C shell is so popular is that it is usually selected as the default login shell for most users. The features that guarantee its continued use in this arena are aliases and history lists.

Korn Shell
Lastly we come to the Korn shell (ksh), made famous by IBM's AIX flavor of UNIX. The Korn shell can be thought of as a superset of the Bourne shell, as it contains the whole of the Bourne shell world within its own syntax rules. Its extensions over the Bourne shell exceed even the level of functionality available within the C shell (but without any of the compromises!), making it the obvious language of choice for real scripters. However, because not all platforms yet support the Korn shell, it is not fully portable as a scripting language at the time of writing.

This may change, however, by the time this book is published. The Korn shell does contain aliases and history lists aplenty, but C shell users are often put off by its dissimilar syntax. Persevere; it will pay off eventually. Any sh syntax element will work in ksh without change.

Example 1: Write a shell script to add two numbers.

echo "Enter First value : "
read a
echo "Enter Second value : "
read b
echo "Addition : `expr $a + $b`"

Example 2: Write a shell script to perform the arithmetic operations.

echo "Enter First value : "
read a
echo "Enter Second value : "
read b
echo "Addition : `expr $a + $b`"
echo "Subtraction : `expr $a - $b`"
echo "Multiplication : `expr $a \* $b`"
echo "Division : `expr $a / $b`"
echo "Modulus : `expr $a % $b`"

Example 3: Write a shell script to check whether the given number is odd or even.

echo "Enter the number to be checked : "
read n
if [ `expr $n % 2` -eq 0 ]
then
    echo "Number is EVEN"
else
    echo "Number is ODD"
fi

Example 4: Write a shell script for grade calculation.

echo "Enter Marks of Subject1 : "
read s1
echo "Enter Marks of Subject2 : "
read s2
echo "Enter Marks of Subject3 : "
read s3
total=`expr $s1 + $s2 + $s3`
per=`expr $total / 3`
if [ $per -gt 75 ]

then
    echo "Result : Honours"
elif [ $per -gt 60 ]
then
    echo "Result : First Division"
elif [ $per -gt 50 ]
then
    echo "Result : Second Division"
elif [ $per -gt 36 ]
then
    echo "Result : PASS only"
else
    echo "Result : FAIL"
fi

Example 5: Write a script to print numbers as 5, 4, 3, 2, 1 using a while loop.

echo "Enter the number : "
read n
while [ $n -gt 0 ]
do
    echo $n
    n=`expr $n - 1`
done

Posted: Fri Jul 23, 2010 3:14 pm Post subject: Subject description: Computer Science

Difference between Linux and Windows

Linux Vs Windows
Linux is an open-source operating system. People can change the code and add programs to Linux, which helps them use their computers better. Linux evolved as a reaction to the monopoly position of Windows: you can't change any code of the Windows OS, and you can't even see which processes do what and build your own extensions. Linux encourages programmers to extend and redesign its OS; Linux users can edit the OS and design new versions of it.

All flavors of Windows come from Microsoft. Linux comes from different companies such as Lindows, Lycoris, Red Hat, SuSE, Mandrake, Knoppix and Slackware.

Linux is customizable but Windows is not. For example, NASLite is a version of Linux that runs off a single floppy disk and converts an old computer into a file server. This ultra-small edition of Linux is capable of networking, file sharing and being a web server.

Linux is freely available for desktop or home use, but Windows is expensive. For server use,

Linux is cheap compared to Windows. Microsoft allows a single copy of Windows to be used on only one computer, whereas you can run Linux on any number of computers.

Linux has high security. You have to log on to Linux with a userid and password, and you can log in as root or as a normal user; the root user has full privileges. Linux also has a reputation for fewer bugs than Windows.

Windows must boot from a primary partition; Linux can boot from either a primary partition or a logical partition inside an extended partition. Windows must boot from the first hard disk, while Linux can boot from any hard disk in the computer.

Windows uses a hidden file for its swap file. Typically this file resides in the same partition as the OS (advanced users can opt to put the file in another partition). Linux uses a dedicated partition for its swap space.

Windows separates directories with a backslash, while Linux uses a forward slash. Windows file names are not case sensitive; Linux file names are. For example, "abc" and "aBC" are different files in Linux, whereas in Windows they would refer to the same file.

Windows and Linux have different concepts for their file hierarchies. Windows uses a volume-based file hierarchy, while Linux uses a unified scheme. Windows uses letters of the alphabet to represent different devices and different hard disk partitions (e.g. C:, D:, E:), while in Linux "/" is the root of the main directory tree.

Both Linux and Windows support the concept of hidden files. In Linux, hidden files begin with ".", e.g. .filename.

In Linux, each user has a home directory and all of their files are saved under it, while in Windows users save their files anywhere on the drive. This makes it difficult to back up their content under Windows; in Linux, backups are easy.

++++++++++++++++++++++++++++++++++++++++++++++++++++++


1 Write a script to print a given number in reverse order; for example, if the number is 123, it must print 321.

echo "Enter the number : "
read n
while [ $n -gt 0 ]
do
    temp=`expr $n % 10`
    echo -n $temp
    n=`expr $n / 10`
done
echo

2 Write a shell script to check whether a particular user is logged in or not. Continue checking every 60 seconds until success.

c=1
while [ $c -eq 1 ]
do
    who | grep jietsetg
    if [ $? -eq 0 ]
    then
        echo "User jietsetg is logged in"
        break
    else
        echo "waiting..."
        sleep 60
    fi
done

3 Write a shell script to find the factorial of a given number.

echo "Enter the number : "
read n
f=1
i=$n
while [ $i -gt 1 ]
do
    f=`expr $f \* $i`
    i=`expr $i - 1`
done
echo "Factorial of $n = $f"

4 Write a shell script to print the following pattern:
1
1 2
1 2 3

echo "Enter the number : "
read n
i=1
while [ $i -le $n ]
do
    j=1
    while [ $j -le $i ]
    do
        echo -n "$j "
        j=`expr $j + 1`
    done
    echo

    i=`expr $i + 1`
done

5 Write a shell script to display the complete file and directory structure. Check whether each entry is a file or a <Directory>. If it is a file, also specify its file access permissions.

echo "Enter a directory name: "
read dn
echo "Files in the directory $dn are: "
for fn in `ls $dn`
do
    if [ -d "$dn/$fn" ]
    then
        echo "<$fn> Directory"
    else
        echo "$fn File"
        if [ -r "$dn/$fn" ]
        then
            echo "Read Permission"
        fi
        if [ -w "$dn/$fn" ]
        then
            echo "Write Permission"
        fi
        if [ -x "$dn/$fn" ]
        then
            echo "Execute Permission"
        fi
    fi
done

6 Write a shell script which receives two file names as arguments. It should check whether the two files' contents are the same or not. If they are the same, the second file should be deleted.

echo "Enter I File Name:"
read f1
echo "Enter II File Name:"
read f2
if cmp -s "$f1" "$f2"
then
    echo "Two Files are similar and $f2 is deleted"
    rm "$f2"
else

echo "Two Files differ each other" fi ++++++++++++++++++++++++++++


Posted: Tue Jul 13, 2010 3:45 pm Subject description: New Technology Post subject: Cloud Computing

INTRODUCTION
Imagine yourself in a world where the users of today's internet don't have to run, install or store their applications or data on their own computers; imagine a world where every piece of your information or data resides on the Cloud (the Internet).

As a metaphor for the Internet, "the cloud" is a familiar cliché, but when combined with "computing", the meaning gets bigger and fuzzier. Some analysts and vendors define cloud computing narrowly as an updated version of utility computing: basically virtual servers available over the Internet. Others go very broad, arguing that anything you consume outside the firewall is "in the cloud", including conventional outsourcing.

Cloud computing comes into focus only when you think about what we always need: a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing new software. Cloud computing encompasses any subscription-based or pay-per-use service that, in real time over the Internet, extends ICT's existing capabilities.

Cloud computing is Internet ("cloud") based development and use of computer technology ("computing"). It is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them.

Comparison
Cloud computing is often confused with grid computing ("a form of distributed computing whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks"), utility computing (the "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility such as electricity") and autonomic computing ("computer systems capable of self-management").

Implementation
The majority of cloud computing infrastructure as of 2009 consists of reliable services delivered through data centers and built on servers with different levels of virtualization technologies. The services are accessible anywhere that has access to networking infrastructure. The cloud appears as a single point of access for all the computing needs of consumers. Commercial offerings need to meet the quality-of-service requirements of customers and typically offer service level agreements. Open standards are critical to the growth of cloud computing, and open source software has provided the foundation for many cloud computing implementations.

Characteristics

As customers generally do not own the infrastructure, they merely access or rent it; they can avoid capital expenditure and consume resources as a service, paying instead for what they use. Many cloud-computing offerings have adopted the utility computing model, which is analogous to how traditional utilities like electricity are consumed, while others are billed on a subscription basis. Sharing "perishable and intangible" computing power among multiple tenants can improve utilization rates, as servers are not left idle, which can reduce costs significantly while increasing the speed of application development. A side effect of this approach is that "computer capacity rises dramatically", as customers do not have to engineer for peak loads. Adoption has been enabled by "increased high-speed bandwidth", which makes it possible to receive the same response times from centralized infrastructure at other sites.

Companies
Providers including Amazon, Microsoft, Google, Sun and Yahoo exemplify the use of cloud computing. It is being adopted by users ranging from individuals through large enterprises including General Electric, L'Oréal, and Procter & Gamble.

KEY CHARACTERISTICS
1. Cost is greatly reduced and capital expenditure is converted to operational expenditure. This lowers barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained with usage-based options, and minimal or no IT skills are required for implementation.
2. Device and location independence enable users to access systems using a web browser regardless of their location or what device they are using, e.g., PC, mobile. As infrastructure is off-site (typically provided by a third party) and accessed via the Internet, users can connect from anywhere.
3. Multi-tenancy enables sharing of resources and costs among a large pool of users, allowing for: centralization of infrastructure in areas with lower costs (such as real estate, electricity, etc.); peak-load capacity increases (users need not engineer for the highest possible load levels); and utilization and efficiency improvements for systems that are often only 10-20% utilized.
4. Reliability improves through the use of multiple redundant sites, which makes cloud computing suitable for business continuity and disaster recovery. Nonetheless, most major cloud computing services have suffered outages, and IT and business managers are able to do little when they are affected.
5. Scalability via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real time, without users having to engineer for peak loads. Performance is monitored, and consistent, loosely coupled architectures are constructed using web services as the system interface.
6. Security typically improves due to centralization of data, increased security-focused resources, etc., but raises concerns about loss of control over certain sensitive data. Security is often as good as or better than in traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford. Providers typically log accesses, but accessing the audit logs themselves can be difficult or impossible.

7. Sustainability comes about through improved resource utilization, more efficient systems, and carbon neutrality. Nonetheless, computers and associated infrastructure are major consumers of energy.

CLOUD COMPUTING ARCHITECTURE
Cloud services are broadly divided into three categories:
Software as a Service (SaaS)
Platform as a Service (PaaS)
Infrastructure as a Service (IaaS)

SOFTWARE AS A SERVICE
Cloud application services, or "Software as a Service (SaaS)", deliver software as a service over the Internet, eliminating the need to install and run the application on the customer's own computers and simplifying maintenance and support. Key characteristics include:
Network-based access to, and management of, commercially available (i.e., not custom) software.
Activities that are managed from central locations rather than at each customer's site, enabling customers to access applications remotely via the Web.
Application delivery that typically is closer to a one-to-many model (single instance, multi-tenant architecture) than to a one-to-one model, including architecture, pricing, partnering, and management characteristics.
Centralized feature updating, which obviates the need for downloadable patches and upgrades.

PLATFORM AS A SERVICE
Platform as a Service encapsulates a layer of software and provides it as a service that can be used to build higher-level services. Someone might produce a platform by integrating an OS, middleware, application software, and even a development environment, which is then provided to a customer as a service. The customer sees an encapsulated service that is presented to them through an API. The customer interacts with the platform through the API, and the platform does what is necessary to manage and scale itself to provide a given level of service.

INFRASTRUCTURE AS A SERVICE
Infrastructure as a Service delivers basic storage and compute capabilities as standardized services over the network. Servers, storage systems, switches, routers, and other systems are pooled and made available to handle workloads that range from application components to high-performance computing applications.

Cloud computing is a vast topic, and the above report gives only a high-level introduction to it; it is certainly not possible in the limited space of a report to do justice to these technologies. What is in store for this technology in the near future? Cloud computing is leading the industry's endeavor to bank on this revolutionary technology.

Cloud Computing Brings Possibilities...
Increases business responsiveness
Accelerates creation of new services via rapid prototyping capabilities
Reduces acquisition complexity via a service-oriented approach

Uses IT resources efficiently via sharing and higher system utilization
Reduces energy consumption
Handles new and emerging workloads
Scales to extreme workloads quickly and easily
Simplifies IT management
Provides a platform for collaboration and innovation
Cultivates skills for the next generation workforce

Posted: Sat May 29, 2010 12:01 pm Post subject: Subject description: Computer Science (Basics)

Chromium Operating System

INTRODUCTION
People at Google spend much of their time working inside a browser. They search, chat, email and collaborate in a browser. And in their spare time, they shop, bank, read news and keep in touch with friends -- all using a browser. Because they spend so much time online, they began seriously thinking about what kind of browser could exist if one started from scratch and built on the best elements out there. They realized that the web had evolved from mainly simple text pages to rich, interactive applications, and that the browser needed to be completely rethought. What was really needed was not just a browser, but also a modern platform for web pages and applications, and that is what they set out to build: Chrome OS.

Google Chrome OS is an upcoming open source operating system designed by Google to work exclusively with web applications. Announced on July 7, 2009, Chrome OS is set to have a publicly available stable release during the second half of 2010. The operating system is based on Linux and will run only on specifically designed hardware.

What is the Chromium Operating System?
Chromium OS is an open-source project that aims to build an operating system that provides a fast, simple, and more secure computing experience for people who spend most of their time on the web. Here you can review the project's design docs, obtain the source code, and contribute.

HARDWARE SPECIFICATION
Google Chrome OS is initially intended for secondary devices like netbooks, not a user's primary PC, and will run on hardware incorporating an x86 or ARM-based processor. While Chrome OS will support hard disk drives, Google has requested that its hardware partners use solid-state drives due to their higher performance and reliability, as well as the lower capacity requirements inherent in an operating system that accesses applications and most user data on remote servers. Google Chrome OS consumes one-sixtieth as much drive space as Windows 7. Companies developing hardware for the operating system include Hewlett-Packard, Acer, Adobe, Asus, Lenovo, Texas Instruments, Freescale, Intel, Samsung and Qualcomm.

Solid-State Drive (SSD)
The Google Chrome operating system will be installed only on systems with solid-state drives. SSDs allow a faster boot time as well as a faster write time. The cost difference

between SSDs and HDDs is pretty evident: SSDs cost up to $3 per GB, while HDDs cost only 20-30 cents per GB.

ARCHITECTURE
In preliminary design documents for the Chromium OS open source project, Google describes a three-tier architecture: firmware; browser and window manager; and system-level software and userland services.

The firmware contributes to fast boot time by not probing for hardware, such as floppy disk drives, that is no longer common on computers, especially netbooks. It also contributes to security by verifying each step in the boot process and incorporating system recovery. System-level software includes a Linux kernel that has been patched to improve boot performance. Userland software has been trimmed to essentials, with management by Upstart, which can launch services in parallel, re-spawn crashed jobs, and defer services in the interest of faster booting. The window manager handles user interaction with multiple client windows, much like other X window managers.

Firmware
The firmware plays a key part in making booting the OS faster and more secure. To achieve this goal we are removing unnecessary components and adding support for verifying each step in the boot process. We are also adding support for system recovery into the firmware itself. We can avoid the complexity that's in most PC firmware because we don't have to be backwards compatible with a large amount of legacy hardware. For example, we don't have to probe for floppy drives. Our firmware will implement the following functionality:

System recovery: The recovery firmware can re-install Chromium OS in the event that the system has become corrupt or compromised.
Verified boot: Each time the system boots, Chromium OS verifies that the firmware, kernel, and system image have not been tampered with or become corrupt. This process starts in the firmware.
Fast boot: We have improved boot performance by removing a lot of the complexity that is normally found in PC firmware.

System-level and Userland Software
From here we bring in the Linux kernel, drivers, and userland daemons. Our kernel is mostly stock except for a handful of patches that we pull in to improve boot performance. On the userland side of things we have streamlined the init process so that we're only running services that are critical. All of the userland services are managed by Upstart. By using Upstart we are able to start services in parallel, re-spawn jobs that crash, and defer services to make boot faster. Here's a quick list of things that we depend on:

D-Bus: The browser uses D-Bus to interact with the rest of the system. Examples of this include the battery meter and network picker.
Connection Manager: Provides a common API for interacting with network devices, provides a DNS proxy, and manages network services for 3G, wireless, and Ethernet.
WPA Supplicant: Used to connect to wireless networks.
Auto-update: Our auto-update daemon silently installs new system images.
Power Management: (ACPI on Intel) Handles power management events like closing the

lid or pushing the power button.
XScreenSaver: Handles screen locking when the machine is idle.
Standard Linux services: NTP, syslog, and cron.

Window Manager
The window manager is responsible for handling the user's interaction with multiple client windows. It does this in a manner similar to that of other X window managers, by controlling window placement, assigning the input focus, and exposing hotkeys that exist outside the scope of a single browser window. Parts of the ICCCM (Inter-Client Communication Conventions Manual) and EWMH (Extended Window Manager Hints) specifications are used for communication between clients and the window manager where possible. The window manager also uses the XComposite extension to redirect client windows to offscreen pixmaps so that it can itself draw a final, composited image incorporating their contents. This lets windows be transformed and blended together. The Clutter library is currently used to animate these windows and to render them via OpenGL or OpenGL|ES.

SECURITY
Chromium OS has been designed from the ground up with security in mind. Security is not a one-time effort, but rather an iterative process that must be focused on for the life of the operating system. The goal is that, should either the operating system or the user detect that the system has been compromised, an update can be initiated, and after a reboot the system will have been returned to a known good state. Chromium OS security strives to protect against an opportunistic adversary through a combination of system hardening, process isolation, continued web security improvements in Chromium, secure auto-update, verified boot, encryption, and intuitive account management.

In computer security, a sandbox is a security mechanism for separating running programs. It is often used to execute untested code, or untrusted programs from unverified third parties, suppliers and untrusted users. The sandbox typically provides a tightly controlled set of resources for guest programs to run in, such as scratch space on disk and in memory. Network access, the ability to inspect the host system, and reading from input devices are usually disallowed or heavily restricted. In this sense, sandboxes are a specific example of virtualization.

In March 2010, Google software security engineer Will Drewry discussed Chrome OS security. Drewry described Chrome OS as a "hardened" operating system featuring auto-updating and sandbox features that will reduce malware exposure. He said that Chrome OS netbooks will ship with a Trusted Platform Module, and include both a "trusted boot path" and a physical switch under the battery compartment that actuates a developer mode. That mode drops some specialized security functions but increases developer flexibility. Drewry also emphasized that the open source nature of the operating system will contribute greatly to its security by allowing constant developer feedback.
