Education

SEMINAR TOPICS

1) WEB HOSTING

One of the easiest ways to offer information on the Internet is to host a web site: pick a good host, create and upload our pages, and submit our site to search engines.

INTRODUCTION:
The main advantage of a web site is that it lets us share our ideas and views with the world. Creating web pages is easy. However, to make them accessible to everyone, we need to host them on a server that is available to everyone. There are hundreds of hosts, both paid and free, that provide space on their servers to serve our pages. Different hosts offer different services, technologies, requirements and so on. Let us first look at what we need to consider when choosing a host, and then discuss a few hosts and their features.

SELECTING OUR HOST:-

The life of a site depends on its quality. One of the factors that affects the quality of the site is the host. When choosing a host, we need to consider the following.

 

COST:- If a personal web site is all we are looking for, cost will definitely be the factor to consider first. If we are building a service-based site, or if we expect more than ten thousand hits a day, we may want to consider a paid server, since free servers tend to be heavily trafficked merely because of the volume of sites they host. Otherwise, a free service provider is an ideal choice.

 

SPEED:- Speed is very important in ensuring the stickiness of our site. Remember that there are hundreds of sites on every topic; if a visitor has a bad experience with our site, he may never come back. Check the kind of speeds available on a server by trying out other sites hosted on the same server.
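As a rough, illustrative way of checking this, the short Python sketch below times how long a page hosted on the candidate server takes to download. The URL is a placeholder; substitute any site already hosted on the server we are evaluating.

import time
import urllib.request

def response_time(url):
    """Return the number of seconds taken to fetch the full page."""
    start = time.time()
    with urllib.request.urlopen(url, timeout=10) as page:
        page.read()                     # download the whole page body
    return time.time() - start

# Placeholder URL: replace with a site hosted on the server under test.
print(response_time("http://example.com/"))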


FREE SPACE:- We must consider the amount of free space available on the server before choosing one. Most personal sites will fit in less than 5 MB. However, if we want to host a multimedia-intensive site, we will need much more space. A minimum of 15 MB is recommended for discussion-based sites. Some hosts also provide additional space in standard lots for a small fee.

 SERVER PLATFORM:-


The operating system running on the server is very important when considering a web host. It indicates the reliability of the server and determines the technologies available to us, the security features, and the kind of support we can expect. Before we decide on this, we first need to consider our requirements. A static site with HTML pages and images will run well on almost any server.
 TECHNOLOGIES SUPPORTED:-

This is another consideration that is not really a choice. We would be using the web development software and scripts we are most comfortable with, so the server should support all the resources we use for us to be able to implement our site with the least hassle.
Server-side scripting has its own fickleness. An ASP site, by default, will run on Windows NT. If we are using a scripting language other than VBScript or JavaScript for our ASP pages, we need to confirm that the server supports it.
If our site involves databases, the database connector (the driver used to connect the database to the front end) will also affect the server configuration. Currently most databases have better connectors for the Windows platform. The connector may also depend on the front end; for example, Oracle may have stable connectors for both Windows and Linux, yet the connector used with PHP would be different from the connector used with another front end.
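To make the idea of a connector concrete, here is a minimal, illustrative Python sketch in which the standard sqlite3 module plays the role of the driver between the front end and the database; a PHP front end talking to the same database would use a different connector. The file and table names are made up for the example.

import sqlite3

# The sqlite3 module is the "connector" here; the database file is hypothetical.
conn = sqlite3.connect("guestbook.db")
conn.execute("CREATE TABLE IF NOT EXISTS entries (name TEXT, message TEXT)")
conn.execute("INSERT INTO entries VALUES (?, ?)", ("Asha", "Nice site!"))
conn.commit()

for name, message in conn.execute("SELECT name, message FROM entries"):
    print(name, message)

conn.close()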

SECURITY:-
As beginners, we may not be bothered too much about security on the server. However, this is an important issue for many. Try to find out which security features are implemented on the server. We may also want to check recent reports on hacking to see if our host figures anywhere.
If we can get our hands on it, we should gather data about how often the servers have been down and how long the average downtime is. This will give us an indication of the security as well as the recovery measures implemented by the administrator.

TERMS AND CONDITIONS:-
Read the terms and conditions for using the server very carefully, and take note of all the fine print. For example, some hosts limit the amount of data we can serve, while others limit how long an account remains valid, anywhere from one month to two years. Some even require us to log in to our account regularly. Some hosts provide services without any such limitations.

FREE WEB SERVICES:
There are lots of services on the Web that provide ready-to-use applications to add to our site. Some of these can be very useful in generating traffic, adding value or even tracking our visitors. Note that to use most of these, our host will have to allow the scripts used to create the application; others will run off the service provider's site.
These services not only add value to our Web site, they also cut down on a lot of development time. All we need to do is place their code (or files) on our site and it is ready. However, most of them carry some kind of advertisement, so they are appropriate only when our host allows third-party advertising.

WEB DESIGN GUIDE:
Choosing a good server and useful Web services is crucial for driving traffic to our site. A knowledge of HTML will help us create crisp designs for our pages. However, there are other factors that influence traffic as well as stickiness, many of which are design-related issues. In many ways, good design exists only in the eyes of the beholder, in this case the visitors to our site. We discuss some of these issues here. Keep in mind that Web development is a subjective process and that any one opinion may differ from the next. The idea is not to establish what is right or wrong, but to lay down some guidelines and give us a place to start.

FOLLOW HTML STANDARDS:
Make sure we follow all the HTML standards. The more exactly we spell out the content, the better the browser will be able to display it. Some common HTML omissions include <HTML> and <HEAD> tags, HEIGHT and WIDTH attributes for <IMG> tags, ALT attributes with alternate text for <IMG> tags, closing tags for <HEAD>, <BODY>, <P> and <CENTER>, HTML codes for extended characters, etc.
Try to be fully HTML-compliant, as some browsers are more efficient when proper HTML code is used. As an example, newer releases of Netscape and Microsoft browsers will insert placeholders for images if we have specified the HEIGHT and WIDTH attributes. This allows the browser to display the rest of the page even if the images have not fully loaded. Many people also browse without loading images.
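As a small illustration of checking for the omissions listed above, the Python sketch below scans a page with the standard html.parser module and reports IMG tags that lack ALT, HEIGHT or WIDTH attributes. The file name is a placeholder.

from html.parser import HTMLParser

class ImgChecker(HTMLParser):
    """Report <IMG> tags that are missing ALT, HEIGHT or WIDTH attributes."""
    def handle_starttag(self, tag, attrs):
        if tag == "img":
            present = {name for name, _ in attrs}
            missing = {"alt", "height", "width"} - present
            if missing:
                print("IMG tag missing:", ", ".join(sorted(missing)))

# Placeholder file name: replace with any page of our site.
with open("index.html", encoding="utf-8") as f:
    ImgChecker().feed(f.read())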

EMBEDDING OBJECTS:
Every week there is a hot new technology on the Web, but if we want to reach a wider audience, we should wait until something is generally accepted by users and developers. Somebody needs to push the envelope by testing new products, but this has to be balanced against the willingness and ability of our visitors to download components just to view our site. Some examples of this are Java applets, ActiveX controls, Shockwave objects, specialised document formats, etc.


UPDATES: 
Make an effort to keep our Web site fresh, especially if we want our visitors to keep returning. If a site remains stagnant, people will stop visiting it, but if there is always something new, people will often drop by just to see what has changed. We might want to think about doing a complete site redesign every few months; it not only keeps the site fresh, but also gives us the opportunity to take advantage of newer technologies as they evolve. This also helps to improve our site over time, as we will invariably learn better techniques through each iteration.



LAYOUT:
Layout and design are very subjective, but the important thing is to make sure that there is a layout. Rather than just putting information up, make an effort to display it aesthetically. The Web makes it possible to control how our information is presented, and there is no reason not to do so. Some general guidelines we can follow include splitting our information into logical sections, making sure our starting page is attractive and well laid out, having a consistent theme throughout the entire site, and using colors, styles and fonts that complement each other.

SUBMITTING TO SEARCH ENGINES:
Most visitors to our site will come from search engines. This makes it very important for us to submit our pages to search engines, to understand how search engines work, and to design our pages for the best rankings.
Search engines are always changing their parameters. There are no secret tricks that will magically put us on top and keep us there. What is needed is rich content, with a title, tags, text and links that contain our relevant keywords, the words people will use to search for us in the search engines. Another factor of growing importance is the number of reputed sites that link to our site, so work on getting others to link to us. Having one main theme per site is also beneficial, and it is even better if our domain name contains that main theme keyword.
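As a rough illustration, the Python sketch below checks whether our chosen keywords appear in the page title and counts how often they occur in the page. The keywords and file name are placeholders.

import re

# Placeholder keywords: replace with the terms people would search for.
keywords = ["web hosting", "free web space"]

with open("index.html", encoding="utf-8") as f:
    html = f.read().lower()

match = re.search(r"<title>(.*?)</title>", html, re.S)
title = match.group(1) if match else ""

for kw in keywords:
    print(kw, "| in title:", kw in title, "| occurrences in page:", html.count(kw))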

HOW SEARCH ENGINES WORK:
Search engines build their databases automatically by sending out software agents across the Web.  These agents, known as robots, Web crawlers or spiders, are programs that follow links from page to page, gathering information about those pages on the way.  If we understand a little about how these robots work and the kind of information they gather, we can influence how our Web pages are indexed and so improve the chances that we reach the right audience.
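The toy Python spider below illustrates the idea under discussion: it fetches a page, collects the links on it, and could then follow them in the same way a search-engine robot does. The start URL is a placeholder, and real robots also honour robots.txt, which this sketch omits.

import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Gather the href targets of every <A> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def fetch_links(url):
    with urllib.request.urlopen(url, timeout=10) as page:
        parser = LinkCollector()
        parser.feed(page.read().decode("utf-8", errors="replace"))
    return [urljoin(url, link) for link in parser.links]

# Placeholder start page; a real spider would keep following the links it finds.
for link in fetch_links("http://example.com/"):
    print("found page:", link)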

IMPROVING PRESENTATION:
A very important part of the recipe for improving our search engine performance involves improving the presentation of our search listing. The search engine may display our site among the top ten results for a particular term, yet the user still has to be attracted enough to click on it.
As we have seen, our page title should include keywords wherever possible.  The title should also be meaningful when listed out of context on a search results page.  If our page title is something like ‘HOME PAGE’ that’s exactly how it will be listed in the search results. If the page has a relevant and descriptive title, we are far more likely to reach our target audience.

2) 4G MOBILE COMMUNICATION


1. INTRODUCTION

Communication is one of the important areas of electronics and has always been a focus for the exchange of information among parties at physically separate locations. There are different modes of communication: the link between two parties may be wired or wireless. Initially, mobile communication was limited to one pair of users on a single channel pair. Mobile communication has since undergone many generations. The first generation of RF cellular systems used analog technology; the modulation was FM and the air interface was FDMA. The second generation was an offshoot of the Personal Land Mobile Telephone System (PLMTS) and used Gaussian Minimum Shift Keying (GMSK) modulation. These systems had practically no technology in common: the frequency bands, air interface protocols, data rates, number of channels and modulation techniques were all different. Dynamic Quality of Service (QoS) was always at the top of the priority list, and higher transmission bandwidth and more efficient usage had to be targeted. Against this background, the development of 3G mobile communication systems took place, using Time Division Duplex (TDD) mode technology with 5 MHz channels and no backward compatibility with any of its predecessors. However, 3G appeared to be a somewhat unstable technology due to a lack of standardization, licensing procedures, and terminal and service compatibility. The biggest single inhibitor of any new technology in mobile communication is the availability of mobile terminals in the required quantity, with the highest QoS and better battery life. The future of mobile communication is FAMOUS (Future Advanced Mobile Universal Systems); wideband TDMA and wideband CDMA are some of the candidate technologies, and the targeted data rate is 20 Mbps. That will be 4G in mobile communication. 4G must be hastened, as some video applications cannot be contained within 3G.




2. DEVELOPMENT OF MOBILE COMMUNICATION

The communication industry is undergoing cost-saving programs, reflected in a slowdown in the upgrade or overhaul of infrastructure, while looking for new ways to provide third generation (3G) like services and features with existing infrastructure. This has delayed the large-scale deployment of 3G networks and given rise to talk of 4G technologies. Second generation (2G) mobile systems were very successful in the previous decade, and their success prompted the development of third generation (3G) mobile systems. While 2G systems such as GSM and IS-95 were designed to carry speech and low bit-rate data, 3G systems were designed to provide higher data-rate services. During the evolution from 2G to 3G, a range of wireless systems, including GPRS, IMT-2000, Bluetooth, WLAN and HiperLAN, have been developed. All these systems were designed independently, targeting different service types, data rates and users. As each system has its own merits and shortcomings, there is no single system good enough to replace all the other technologies. Instead of putting effort into developing new radio interfaces and technologies, integrating the existing systems to form 4G systems is believed to be a more feasible option.


3. ARCHITECTURAL CHANGES IN 4G TECHNOLOGY

In the 4G architecture, the focus is on enabling multiple networks to function in such a way that their interfaces are transparent to users and services. A multiplicity of access and service options is another key part of the paradigm shift. In the present scenario, with the growing popularity of the Internet, a shift is needed from circuit-switched to packet-switched transmission. In 3G networks and a few others, however, packet switching is employed only for delay-insensitive data transmission services. Assigning packets to virtual channels, and then to multiple physical channels, becomes possible when access options are expanded, permitting better statistical multiplexing. One would be looking for universal access and ultra connectivity, which could be enabled by:
(a)   Integration of wireless networks with wire-line networks.
(b)   Emergence of a true IP-over-the-air technology.
(c)   Highly efficient use of wireless spectrum and resources.
(d)   Flexible and adaptive systems and networks.


4. SOME KEY FEATURES OF 4G TECHNOLOGY

Some key features (mainly from the user's point of view) of 4G networks are:
1. High usability: anytime, anywhere, and with any technology
2. Support for multimedia services at low transmission cost
3. Personalization
4. Integrated services

First, 4G networks are all-IP-based heterogeneous networks that allow users to use any system at any time and anywhere. Users carrying an integrated terminal can use a wide range of applications provided by multiple wireless networks.

Second, 4G systems provide not only telecommunications services, but also data and multimedia services. To support multimedia services high data-rate services with good system reliability will be provided. At the same time, a low per-bit transmission cost will be maintained.

Third, personalized service will be provided by the new generation network.
 Finally, 4G systems also provide facilities for integrated services. Users can use multiple services from any service provider at the same time.

To migrate current systems to 4G with the features mentioned above, we have to face a number of challenges. Some of them are discussed below.


4.1   MULTIMODE   USER   TERMINALS

In order to use the large variety of services and wireless networks in 4G systems, multimode user terminals are essential, as they can adapt to different wireless networks by reconfiguring themselves. This eliminates the need to use multiple terminals (or multiple hardware components in a terminal). The most promising way of implementing multimode user terminals is to adopt the software radio approach. Figure 1 shows the design of an ideal software radio receiver.


Figure 1: An ideal software radio receiver (analog section: antenna, BPF and LNA; ADC; digital section: baseband DSP)

The analog part of the receiver consists of an antenna, a band pass filter (BPF), and a low noise amplifier (LNA). The received analog signal is digitized by the analog to digital converter (ADC) immediately after the analog processing. The processing in the next stage (usually still analog processing in the conventional terminals) is then performed by a reprogrammable base band digital signal processor (DSP). The Digital Signal Processor will process the digitized signal in accordance with the wireless environment.
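The following Python sketch mimics, in a highly simplified and purely illustrative way, the chain in Figure 1: samples from the analog front end are digitized by a toy ADC and then handed to a "reprogrammable DSP", here just a function selected at run time for the active air interface. The quantization levels and the placeholder demodulators are invented for the example.

def adc(samples, levels=256):
    """Quantize analog samples (floats in the range -1..1) to 8-bit values."""
    return [int((s + 1.0) / 2.0 * (levels - 1)) for s in samples]

def demodulate_gsm(digital):       # placeholder for a GMSK demodulator
    return ["GSM symbol"] * len(digital)

def demodulate_wcdma(digital):     # placeholder for a WCDMA demodulator
    return ["WCDMA symbol"] * len(digital)

# "Reprogramming" the DSP amounts to choosing a different processing function.
DSP_PROFILES = {"GSM": demodulate_gsm, "WCDMA": demodulate_wcdma}

def receive(analog_samples, standard):
    digital = adc(analog_samples)             # digitize right after the analog stage
    return DSP_PROFILES[standard](digital)    # baseband DSP for the chosen standard

print(receive([0.1, -0.4, 0.9], "GSM"))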

4.2. TERMINAL MOBILITY  

In order to provide wireless services at any time and anywhere, terminal mobility is a must in 4G infrastructures. Terminal mobility allows a mobile client to roam across the boundaries of wireless networks. There are two main issues in terminal mobility: location management and handoff management. With location management, the system tracks and locates a mobile terminal for possible connection. Location management involves handling all the information about the roaming terminal, such as the original and currently located cells, authentication information, and Quality of Service (QoS) capabilities. Handoff management, on the other hand, maintains ongoing communications when the terminal roams. Mobile IPv6 (MIPv6) is a standardized IP-based mobility protocol for IPv6 wireless systems. In this design, each terminal has an IPv6 home address. Whenever the terminal moves outside its home network, the home address becomes invalid and the terminal obtains a new IPv6 address (called a care-of address) in the visited network. A binding between the terminal's home address and its care-of address is registered with its home agent in order to support continuous communication.
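A toy Python model of this binding idea is sketched below, only to illustrate the mechanism described above: a home agent keeps a binding between each terminal's permanent home address and its current care-of address and forwards traffic accordingly. The addresses are made-up examples.

binding_cache = {}   # home address -> current care-of address

def binding_update(home_address, care_of_address):
    """Sent by the terminal when it obtains a care-of address in a visited network."""
    binding_cache[home_address] = care_of_address

def deliver(home_address, packet):
    """The home agent forwards traffic to the care-of address if a binding exists."""
    destination = binding_cache.get(home_address, home_address)
    print("forwarding to", destination, ":", packet)

binding_update("2001:db8:home::10", "2001:db8:visited::77")
deliver("2001:db8:home::10", "incoming call setup")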
                                                                                                                                                                                                                                                      
Figure 2: Vertical and horizontal handoff of a mobile terminal (overlapping UMTS, GSM and WLAN coverage areas)
                                                                                                           
Figure 2 shows an example of horizontal and vertical handoff. A horizontal handoff is performed when the terminal moves from one cell to another cell within the same wireless system. A vertical handoff, however, handles the terminal's movement between two different wireless systems (e.g., from WLAN to GSM).
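The distinction can be captured in a few lines of illustrative Python: a handoff is horizontal when the old and new cells belong to the same system and vertical otherwise. The system and cell names are examples only.

def handoff_type(old_cell, new_cell):
    """Each cell is a (system, cell_id) pair, e.g. ("GSM", "cell-12")."""
    old_system, _ = old_cell
    new_system, _ = new_cell
    return "horizontal" if old_system == new_system else "vertical"

print(handoff_type(("GSM", "cell-12"), ("GSM", "cell-13")))   # horizontal
print(handoff_type(("WLAN", "ap-3"), ("GSM", "cell-13")))     # vertical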



4.3  PERSONAL   MOBILITY

In addition to terminal mobility, personal mobility is another concern in mobility management. Personal mobility concentrates on the movement of users instead of their terminals, and involves the provision of personal communications and personalized operating environments.

A personal operating environment, on the other hand, is a service that enables adaptable service presentations in order to fit the capabilities of the terminal in use, regardless of network type. Currently, there are several frameworks for personal mobility in the literature. A mobile-agent-based infrastructure is one widely studied solution. In this infrastructure, each user is assigned a unique identifier and served by personal mobile agents (specialized computer programs running on some servers). These agents act as intermediaries between the user and the Internet. A user also belongs to a home network that has servers holding the up-to-date user profile (including the current location of the user's agents, the user's preferences, and descriptions of the devices currently in use). When the user moves from his/her home network to a visited network, his/her agents migrate to the new network. For example, when somebody makes a call request to the user, the caller's agent first locates the user's agent by making a location request to the user's home network. By looking up the user's profile, the home network sends back the location of the user's agent to the caller's agent. Once the caller's agent knows the user's location, it can communicate directly with the user's agent. Different agents may be used for different services.
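The call set-up just described can be illustrated with a small Python sketch: the caller's agent sends a location request to the user's home network, which looks up the profile and returns where the user's agent currently is. All names, locations and profile fields are invented for the example.

# Hypothetical home-network profile store.
home_network_profiles = {
    "alice": {"agent_location": "visited-network-B", "device": "phone"},
}

def locate_agent(user):
    """The home network answers a location request for the user's agent."""
    return home_network_profiles[user]["agent_location"]

def call(caller, user):
    location = locate_agent(user)   # step 1: location request to the home network
    # step 2: the caller's agent now talks to the user's agent directly
    print(caller + "'s agent contacts " + user + "'s agent at " + location)

call("bob", "alice")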

4.4   SECURITY  AND    PRIVACY

Security requirements of 2G and 3G networks have been widely studied in the literature. Different standards implement security to meet their own unique requirements; for example, GSM provides highly secure voice communication among users. However, the existing security schemes for wireless systems are inadequate for 4G networks. The key concern in security designs for 4G networks is flexibility. As the existing security schemes are mainly designed for specific services, such as voice, they may not be applicable to 4G environments that will consist of many heterogeneous systems. Moreover, the key sizes and the encryption and decryption algorithms of existing schemes are fixed, which makes them inflexible when applied to different technologies and devices (with varied capabilities, processing powers and security needs). As an example of a more flexible approach, Tiny SESAME is a lightweight, reconfigurable security mechanism that provides security services for multimode or IP-based applications in 4G networks.

5. CONCLUSIONS

The future of mobile communication is FAMOUS (Future Advanced Mobile Universal Systems). The targeted data rate is 20 Mbps; that will be the fourth generation (4G) of mobile communication technology. 4G must be hastened, as some video applications cannot be contained within 3G. This paper highlights that current systems must be implemented with a view to facilitating their seamless integration into the 4G infrastructure. In order to cope with the heterogeneity of network services and standards, intelligence close to the end system is required to map user application requests onto the network services that are currently available. This requirement for horizontal communication between different access technologies has been regarded as a key element of 4G systems. Finally, this paper describes how 4G mobile communication can be used in any situation where an intelligent solution is required for interconnecting different clients to networked applications over heterogeneous wireless networks.

3) CYBER CRIMES
     

1. INTRODUCTION:

                        Today an increasing number of companies are connecting to the Internet to support sales activities or to provide their employees and customers with faster information and services.

The virtual world has taken over the real one. E-business and e-commerce are the new mantras, and electronic transactions dominate the overall business paradigm. In this rapidly evolving e-world that depends on free-flowing information, security is the major problem to be considered.

Security on the Internet is challenging, and it is important because information has significant value. Implementing security involves assessing the possible threats to one's network, servers and information. The goal is then to minimize those threats as much as possible.

                        This developing world of information technology has a negative side effect. It has opened the door to antisocial and criminal behavior.

1.3 Definition of computer crimes:

Experts have debated what exactly constitutes a computer crime or a computer-related crime, and even after several years there is no internationally recognized definition of these terms. Computer crime has been defined as “any illegal, unethical or unauthorized behavior involving automatic processing or transmission of data”.

                        Threats come in two categories:

1. Passive threats.
2. Active threats.
               
Passive threats:               

This involves monitoring the data transmissions of an organization. Here the goal of the attacker is to obtain information that is being transmitted. Passive threats are difficult to detect because they do not involve any alteration of the data. These are of two types:

a. Release of message content.
b. Traffic analysis.

Active threats:                  

These threats involve some modification of the data stream or the creation of a false stream. These are of three types:

a. Modification.
b. Denial of message service.
c. Masquerade.


2. TYPES OF CYBER CRIMES:

         2.1 Fraud by computer manipulation:              

Intangible assets represented in data format, such as money on deposit or hours of work, are the most common targets of computer-related fraud.

Modern business is quickly replacing cash with deposits transacted on computer systems, creating the opportunity for computer fraud. Credit card information, as well as personal and financial information stored on credit cards, has frequently been targeted by organized crime. Assets represented in data format often have a considerably higher value than traditional economic assets, resulting in potentially greater economic loss.

2.2 Computer Forgery:

This happens when data stored in computerized documents is altered. Computers, however, can also be used as instruments for committing forgery. A new generation of fraudulent alteration or duplication emerged when computerized color laser copiers became available.

These copiers are capable of high-resolution copying and modification of documents, and can even create false documents without the benefit of an original. They produce documents of a quality that is indistinguishable from that of original documents; only experts can tell the difference.

                   

Behind the widespread growth of computer networks is the need for people with common and shared interests to communicate with each other. Information can easily be represented and manipulated in electronic form, and to meet this need for sharing and communicating information, computers are connected together in what is called a data communication network.

2.3 Damage to Data/Programs:

This category of criminal activity involves either direct or covert unauthorized access to a computer system by introducing new programs known as viruses, worms or logic bombs. The unauthorized modification, suppression or erasure of computer data or functions, with the intent to hinder the normal functioning of the system, is clearly a criminal activity and is commonly referred to as computer sabotage.

VIRUS: (Vital information resources under seize).

A virus is a series of program codes with the ability to attach itself to legitimate programs and propagate itself to other computer programs. Viruses are of two types: file viruses and boot-sector viruses. A virus may attack the FAT (file allocation table) so that the sequence of the file contents is lost, destroying the data.

WORMS: (Write Once Read Many).

Worms are simply added to files and do not manipulate them. A worm differs from a virus in that it does not have the ability to replicate itself.


LOGIC BOMB:

A logic bomb involves programming the destruction or modification of data to occur at a specific time in the future.

2.4 Unauthorized access:

The desire to gain unauthorized access to computer systems can be prompted by several motives, ranging from simple curiosity to outright computer sabotage.

Intentional, unjustified access by a person not authorized by the owners or operators of a system may often constitute criminal behavior.

Unauthorized access creates the opportunity to cause additional unintended damage to data and system crashes. Access is often accomplished from a remote location over a telecommunication network by one of several means. The intruder may take advantage of lax security measures to gain access, or may find loopholes in existing security measures or system procedures. Frequently, hackers gain access by impersonating legitimate users.


3. PRECAUTIONS TO PREVENT COMPUTER HACKING:   

                        Nobody’s data is completely safe. But everybody’s computers can still be protected against would-be hackers. Here is your defense arsenal.

3.1 Firewalls:

These are the gatekeepers between a network and the outside world. A firewall should be installed at every point where the computer system comes in contact with other networks, including the Internet, a separate local area network at a customer's site, or a telephone company switch.

3.2 Password protection:

At a minimum, each time they log on, all PC users should be required to type in a password that only they and the network administrator know. PC users should avoid picking words, phrases or numbers that anyone can guess easily, such as birth dates, a child's name or initials. Instead, they should use cryptic phrases or numbers that combine uppercase and lowercase letters, such as “The Moon Also Rises”. In addition, the system should require all users to change their passwords every month or so, and should lock out prospective users if they fail to enter the correct password three times in a row.
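A minimal Python sketch of these rules is shown below: the password must be reasonably long, mix uppercase and lowercase letters, include a digit, and not be an easily guessed word. The banned list is only an example.

TOO_GUESSABLE = {"password", "letmein", "12345678"}   # illustrative examples only

def acceptable(password):
    return (
        len(password) >= 8
        and any(c.isupper() for c in password)
        and any(c.islower() for c in password)
        and any(c.isdigit() for c in password)
        and password.lower() not in TOO_GUESSABLE
    )

print(acceptable("TheMoonAlsoRises1"))   # True: cryptic phrase, mixed case, digit
print(acceptable("password"))            # False: easily guessed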

3.3 Viruses:

Viruses generally infect local area networks through workstations, so anti-virus software that works only on the server isn't enough to prevent infection.

You cannot get a virus or any other system-damaging software merely by reading the text of an e-mail message: viruses and other system-destroying bugs can only exist in files, and they spread through file attachments rather than through the message text itself. Viruses are almost always specific to the operating system involved, meaning that viruses created to infect DOS applications can do no damage to Mac systems, and vice versa. The only exception to this is the Microsoft Word “macro virus”, which infects documents instead of programs.

3.4 Encryption:

Even if intruders manage to break through a firewall, the data on a network can be kept safe if it is encrypted. Many software packages and network programs, Microsoft Windows NT, Novell NetWare and Lotus Notes among others, offer add-on encryption schemes that encode all the data sent over the network. In addition, companies can buy stand-alone encryption packages to work with individual applications. Almost every encryption package is based on an approach known as public-private key.
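As a hedged illustration of the public-private key idea, the Python sketch below uses the third-party cryptography package (an assumption; it is not part of the standard library) to generate an RSA key pair, encrypt a message with the public key, and decrypt it with the private key.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generate a key pair: the public key can be shared, the private key is kept secret.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"confidential network data", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)
print(plaintext)   # b'confidential network data'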

6. CONCLUSION:

                        The issue of network and Internet security has become increasingly more important as more and more business and people go on-line.

To protect our information from hackers, we should keep our passwords secret and change them regularly. We should not use our names or initials as passwords, since they are easily guessed. We should not download executable files from unknown sources, or information from any source, without checking for viruses, and we should use licensed anti-virus software. Teams such as CERT and FIRST also assist in handling hacker attacks and disseminate information on security.