Ans : IP is the transmission mechanism used by the TCP/IP protocols for host-to-host
communication. Packets at the IP layer are called datagrams. Figure 5 shows the IP
datagram format:

A brief description of header fields in order is given below:

• Version (4 bits): It defines the version of the IP protocol. Currently the version is
4 (IPv4), indicated by the value 4; for IPv6 this field contains the value 6.

• HLEN (4 bits): It is needed because the length of the header is variable. Each value
represents the number of 32-bit (4-byte) words in the header. When the header is 20
bytes, its value is 5 (5 × 4 = 20); with options, the maximum header size is 60 bytes,
when the value is 15 (15 × 4 = 60).

• Service Type (8 bits): It is used to define the type of service in terms of
reliability, precedence, delay and cost.

• Total length (16 bits): It defines the total length of the IP datagram in bytes. The
maximum value is 2^16 - 1 = 65,535 bytes.

• Identification (16 bits): This field is used to uniquely identify a datagram. All
fragments of a datagram carry the same value in the identification field, so that the
receiver can recognize which fragments belong to the same datagram and put them
together to reassemble it.

• Flags (3 bits): This field controls fragmentation. One bit specifies whether the
datagram may be fragmented (the "do not fragment" bit), and another specifies whether
more fragments of the same datagram follow (the "more fragments" bit).

• Fragmentation Offset (13 bits): It is a pointer that indicates the offset of the
fragment in the original datagram before fragmentation, measured in units of 8 bytes.

• Time to Live (8 bits): It is used to control the maximum number of hops visited by
the datagram. A datagram that never reaches its destination must be kept from
travelling in an infinite loop, since such looping may cause network congestion. This
field therefore limits the lifetime of a datagram; when it expires, the packet is
discarded. An illustrative sketch of this rule is given below.
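
The following minimal sketch (all names are illustrative assumptions, not a standard
API) shows the TTL rule a router applies at each hop:

// Minimal sketch of the per-hop TTL rule (illustrative names).
class TtlSketch {
    static class Packet { int ttl; }

    // Returns true if the packet may be forwarded, false if it must be discarded.
    static boolean decrementTtl(Packet pkt) {
        pkt.ttl--;                  // each hop decrements TTL by one
        if (pkt.ttl <= 0) {
            // Discard: prevents the datagram from looping forever.
            // (A real router would also send an ICMP Time Exceeded message.)
            return false;
        }
        return true;
    }
}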

• Protocol (8 bits): An IP datagram may encapsulate data from various higher-level
protocols like TCP, UDP, ICMP, and IGMP. This field specifies the final destination
protocol to which the IP datagram should be delivered. Each protocol (TCP, UDP, etc.)
is identified by a unique number; for example, TCP is 6 and UDP is 17.

• Source Address (32 bits): It stores the IP address of the source.

• Destination Address (32 bits): It stores the IP address of the final destination.

• Options: This field contains optional information such as routing details,
timestamps, etc. For instance, it can store the route of a datagram in the form of the
IP addresses of intermediate routers and, optionally, the time when the datagram passes
through each router.
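
To make the header layout concrete, the following is a minimal sketch (an illustration
under assumed names, not part of the original text) that extracts the fields described
above from a raw IPv4 header held in a byte array:

import java.nio.ByteBuffer;

// Sketch: read the fixed IPv4 header fields from a raw header (big-endian, per RFC 791).
public class Ipv4HeaderSketch {
    public static void parse(byte[] header) {
        ByteBuffer buf = ByteBuffer.wrap(header);        // network byte order by default
        int firstByte = buf.get() & 0xFF;
        int version = firstByte >> 4;                    // Version: upper 4 bits
        int hlen = firstByte & 0x0F;                     // HLEN: lower 4 bits, in 32-bit words
        int serviceType = buf.get() & 0xFF;              // Service Type
        int totalLength = buf.getShort() & 0xFFFF;       // Total length in bytes
        int identification = buf.getShort() & 0xFFFF;    // Identification
        int flagsAndOffset = buf.getShort() & 0xFFFF;
        int flags = flagsAndOffset >>> 13;               // Flags: upper 3 bits
        int fragOffset = flagsAndOffset & 0x1FFF;        // Fragmentation offset: lower 13 bits
        int ttl = buf.get() & 0xFF;                      // Time to Live
        int protocol = buf.get() & 0xFF;                 // Protocol (e.g., 6 = TCP, 17 = UDP)
        int checksum = buf.getShort() & 0xFFFF;          // Header checksum (follows Protocol)
        int srcAddr = buf.getInt();                      // Source address (32 bits)
        int dstAddr = buf.getInt();                      // Destination address (32 bits)
        System.out.printf("version=%d headerBytes=%d totalLength=%d ttl=%d protocol=%d%n",
                version, hlen * 4, totalLength, ttl, protocol);
    }
}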

 Ans :


Communication

Communication is the process of sharing ideas, information, and messages with others
at a particular time and place. Communication is a vital part of personal life and is
also important in business, education, and any other situation where people encounter
each other. Communication between two people is an outgrowth of methods
developed over centuries of expression. Gestures, the development of language, and
the necessity to engage in joint action all play a part. Communication, as we see it
today, has evolved a long way. We will discuss the primitive modes of
communication briefly.

i) Early Methods
Early societies developed systems for sending simple messages or signals that could
be seen or heard over a short distance, such as drumbeats, fire and smoke signals, or
lantern beacons. Messages were attached to the legs of carrier pigeons that were
released to fly home (this system was used until World War I, which started in 1914).
Semaphore systems (visual codes) of flags or flashing lights were employed to send
messages over relatively short but difficult-to-cross distances, such as from hilltop to
hilltop, or between ships at sea.

ii) Postal Services
The postal system is a system by which written documents normally enclosed in
envelopes, and also small packages containing other matter, are delivered to destinations
around the world. Anything sent through the postal system is called post.
In India, the East India Company introduced the postal system in Mumbai, Chennai and
Calcutta in 1766; these postal services later became available to the general public.
Even after the introduction of various electronic communication media, the postal
system remains one of the popular communication systems available.

iii) Telegraph
The first truly electronic medium for communication was the telegraph, which sent
and received electrical signals over long-distance wires. The first practical
commercial systems were developed by the physicist, Sir Charles Wheatstone and the
inventor Sir William F. Cooke in Great Britain, and by the artist and inventor Samuel
F. B. Morse in the United States. Morse demonstrated the first telegraph system in
New York in 1837. But regular telegraph service, relaying Morse code (system of
code using on and off signals), was not established until 1844. Telegraphers would
translate the letters of the alphabet into Morse code, tapping on an electrical switch,
or key. The telegrapher at the other end of the line would decode the tapping as it
came in, write down the message, and send it to the recipient by messenger. The
telegraph made it possible for many companies to conduct their business globally for
the first time.

iv) Telephone
Early devices capable of transmitting sound vibrations and even human speech
appeared in the 1850s and 1860s. The first person to patent and effectively
commercialize an electric telephone was Scottish-born American inventor Alexander
Graham Bell. Originally, Bell thought that the telephone would be used to transmit
musical concerts, lectures, or sermons.
The telephone network has also provided the electronic network for new
computer-based systems like the Internet, facsimile transmission, and the World
Wide Web. The memory and data-processing power of individual computers can be
linked together to exchange data transmitted over telephone lines, by connecting
computers to the telephone network through devices called modems
(modulator-demodulators).

v) Computers and Internet
The earliest computers were machines built to make repetitive numerical calculations
that had previously been done by hand. While computers continued to improve, they
were used primarily for mathematical and scientific calculations, and for encoding
and decoding messages. Computer technology was finally applied to printed
communication in the 1970s when the first word processors were created.

At the same time computers were becoming faster, more powerful, and smaller, and
networks were developed for interconnecting computers. In the 1960s the Advanced
Research Projects Agency (ARPA) of the U.S. Department of Defense, along with
researchers working on military projects at research centers and universities across
the country, developed a network called the ARPANET for sharing data and the
processing time of connected computers over specially equipped telephone
lines and satellite links. The network was designed to survive an attack on, or the
destruction of, some of its parts and continue to work.

Soon, however, scientists using the ARPANET realized that they could send and
receive messages as well as data and programs over the network. The ARPANET
became the first major electronic-mail network; soon thousands of researchers all
over the world used it. Later on the National Science Foundation (NSF) helped
connect more universities and non-military research sites to the ARPANET, and
renamed it the Internet because it was a network of networks among many different
organizations.

Today, the Internet is the most widely known computer network. It interconnects
computer systems through both wired and wireless links. Smaller networks of computers,
called Local Area Networks (LANs), can be installed in a single building or for a whole
organization, such as a single college campus. Wide Area Networks (WANs) can be used
to span a large geographical area. LANs and WANs use telephone lines, computer cables,
and microwave and laser beams to carry digital information. The Internet can carry any
digital signal, including video images, sounds, graphics, animations, and text; it has
therefore become a very popular communication tool.

 High Level Characteristics

The following are the high level characteristics of a good function-oriented design:

Functional Independence
The modeled functions should be independent in terms of the tasks they perform.
This helps in:

• Reusability: The function can be more reusable if it performs a single task.

• Maintainability: Maintainability is increased due to single-task execution and loose coupling.

• Troubleshooting: Helps in quicker and easier debugging in case of exception scenarios.

• Understandability: The function is more easily understood.

Adherence to Key Design Criteria

The functions should adhere to other important design criteria:

• Completeness: The functions should implement all the requirements
specified in the requirements document completely.

• Correctness: The functions should implement all the requirements as
per their specifications correctly.

• Efficiency: The functions should efficiently use resources such as database
connections, file connections, etc.

• Functional cohesion: This is the strongest type of cohesion wherein,
all functions within a module are designed to achieve a single
functionality.

• Cost: The functions should aim to reduce the overall cost in terms of
maintainability and extensibility.

Module Level Characteristics

In this section, we shall look at the main characteristics that should be possessed by
a good design.

Cohesion
Cohesion is the measure of how well the internal elements of a module or a
function are connected to each other. In other words, the function should perform
only a single task. The logic and variables present in the function should serve that
single activity only, in the most optimal fashion. This creates a strong
internal relationship between the internal elements of a function.

Types of Cohesion
The following are the different types of Cohesion:

• Coincidental cohesion: In this type of cohesion, the internal
functions in a given module are loosely correlated.

• Communicational cohesion: In this type of cohesion, the functions
within the module operate on the same data.

• Sequential cohesion: The functions of a module are said to have sequential cohesion if the individual functions of the module form a sequence for executing a given functionality; for instance, the output of the first function is fed as input to the second function.

• Functional cohesion: This is the strongest type of cohesion wherein,
all functions within a module are designed to achieve a single
functionality.

Examples of Cohesion
The following is a function with weak cohesion; it mixes several tasks (creating a
connection, executing an update, and logging) in a single function:

Function updateEmployee(int empId, String newName) {
    Logger log = new Logger();
    Connection con = new Connection();     // task 1: create a connection
    Statement stmt = new Statement();
    stmt.executeQuery("UPDATE employee SET name = '" + newName
                      + "' WHERE empno = " + empId);   // task 2: update the record
    log.log("Statement executed successfully");        // task 3: log the outcome
}

The following is a function with strong cohesion:

Function Connection createConnection() {
    if (con == null)
        con = new Connection();    // create the connection only if one does not exist
    return con;
}

The above function performs the single task of creating a connection. The internal
variables and logic are completely concerned with opening a connection. Strong
internal cohesion is always considered a best practice, as it helps in reusability and
maintainability.

Coupling
Coupling indicates the dependencies across different modules. If a module is
dependent on multiple functions in another module, it is known as strong
coupling; a smaller number of dependencies indicates loose coupling. A module with
loose coupling to other modules usually also has strong cohesion among its internal
functions, wherein the internal functions coordinate to implement the module's
behavior.

The design rule of thumb is to have loose coupling so that each of the individual
modules and its internal structure can be changed without great impact on other
modules.

Types of Coupling

• Content coupling: In this type of coupling, one module depends on
and updates the internal state of another module. This is a very tight
form of coupling.

• Common coupling: If modules share the same global data, it is
known as common coupling. For instance, if all modules act on a
common shared persistent store for their functionality, that is an
example of common coupling.

• Control coupling: If a function argument passed from the first
module controls the logic and order of instructions in another module,
it forms control coupling. For instance, passing control flags and
switches from one module to a function in another module, through
which the sequence of steps and branching can be varied, is an
illustration of control coupling.

• Data coupling: If two modules are coupled only by function parameters, it
is said to be data coupling. This is an example of loose coupling. For
instance, a function call that passes just the specified arguments forms
standard data coupling. A sketch contrasting data and control coupling is
given below.
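
The following minimal sketch (class and method names are hypothetical, not from the
text) contrasts data coupling, where the caller passes only the data the function needs,
with control coupling, where a flag from the caller steers the callee's branching:

// Illustrative sketch: data coupling vs. control coupling (names assumed).
class ReportModule {
    // Data coupling: the caller passes only the data the function needs.
    String formatName(String firstName, String lastName) {
        return lastName + ", " + firstName;
    }

    // Control coupling: a flag from the caller steers the internal branching.
    String format(String firstName, String lastName, boolean lastNameFirst) {
        if (lastNameFirst) {
            return lastName + ", " + firstName;
        }
        return firstName + " " + lastName;
    }
}

Passing only data keeps the callee free to change its internal logic; passing a control
flag ties the caller to the callee's internal branching, which is why data coupling is
the looser of the two.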

Connascence

Two modules are said to be in connascence if a change in the first module requires
a mandatory change in the second module to preserve the overall functionality. This
is also a form of very tightly coupled modules.

Identification Process

  1. Functionality modeling: Component identification always starts by decomposing
    the business domain problem structure into a well-defined visual model. The data
    flow diagram (DFD) is one of the most popular visual models to depict
    function-oriented design.
  2. Design module development: The next step is to convert individual elements in
    the DFD into design modules. During this process, the design modules need to be
    designed to ensure high cohesion and loose inter-module coupling. The modules
    that perform similar kinds of functionalities and processes qualify for the main
    design module. Design modules then become the key function components.
  3. Sub function development: The main functions need to be broken down into sub
    functions. Utilities such as data validation, data conversion, and information
    logging would be good candidates for sub-functions (see the sketch after this list).
  4. Interaction modeling: The input and output elements from the DFD can be used to
    design the interactions between functions.
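
As a hedged illustration of step 3 (all names here are invented for the example, not
taken from the text), a main design-module function can delegate to sub-functions for
validation, conversion, and logging:

// Sketch: a main function broken down into sub-functions (illustrative names).
class OrderProcessor {
    void processOrder(String rawOrder) {           // main design-module function
        validate(rawOrder);                        // sub-function: data validation
        Order order = convert(rawOrder);           // sub-function: data conversion
        logInfo("Order processed: " + order.id);   // sub-function: information logging
    }

    void validate(String raw) {
        if (raw == null || raw.isEmpty())
            throw new IllegalArgumentException("empty order");
    }

    Order convert(String raw) {
        return new Order(raw.trim());
    }

    void logInfo(String msg) {
        System.out.println(msg);
    }

    static class Order {
        final String id;
        Order(String id) { this.id = id; }
    }
}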

    Component and Stage Mapping
    The following table indicates the component/association identified at each level of the
    process:
Stage                         Component/Association
Functional modeling           Data flow diagram
Design module development     High level function
Sub function development      Sub functions
Interaction modeling          Input and output data

 i) Hardware Issues

At the hardware level, an additional component called a router is used to connect
physically distinct networks, as shown in Figure 1. A router connects to the network
in the same way as any other computer. Any computer connected to the network has a
Network Interface Card (NIC), which has the address (network id + host id) hard-coded
into it. A router is a device with more than one NIC. A router can connect
incompatible networks, as it has the necessary hardware (NICs) and protocols
(TCP/IP).

ii) Software Issues
The routers must agree about the way information is transmitted to the
destination computer on a different network. Since the information is likely to travel
through different routers, there must be a predefined standard to which routers must
conform. The packet formats and addressing mechanisms used by the networks may
differ. One approach could be to perform conversion and reconversion for the
different networks, but this approach is difficult and cumbersome. Therefore,
Internet communication follows one protocol suite, TCP/IP. The basic idea is that it
defines packet sizes, routing algorithms, and error control and flow control methods
universally.

It would be unwise to club all these features into a single piece of software, as that
would make it very bulky. Therefore, all these features are logically sub-grouped, and
then the sub-groups are further combined into groups called layers. Each layer has an
interface with the adjacent layers and performs specific functions.

Need for Layering
Since it is difficult to deal with the complex set of rules and functions required for
computer networking, these rules and functions are divided into logical groups called
layers. Each layer can be implemented independently, with an interface to the adjacent
layers that provide services to it or take its services. For example, flow control and
error control functions are grouped together into the layer called the data link layer.
Speech in a telephone conversation is translated into electrical signals and vice
versa; similarly, in a computer system, data patterns are converted into signals before
transmission and back after reception. These functions and rules are grouped together
in a layer called the physical layer.

 Ans : Identification Process

  1. Functionality modeling: Component identification always starts by decomposing
    the business domain problem structure into a well-defined visual model. The data
    flow diagram (DFD) is one of the most popular visual models to depict
    function-oriented design.
  2. Design module development: The next step is to convert individual elements in
    the DFD into design modules. During this process, the design modules need to be
    designed to ensure high cohesion and loose inter-module coupling. The modules
    that perform similar kinds of functionalities and processes qualify for the main
    design module. Design modules then become the key function components.
  3. Sub function development: The main functions need to be broken down into sub
    functions. Utilities such as data validation, data conversion, and information
    logging would be good candidates for sub-functions.
  4. Interaction modeling: The input and output elements from the DFD can be used to
    design the interactions between functions.


Component and Stage Mapping
The following table indicates the component/association identified at each level of the
process:

Stage                         Component/Association
Functional modeling           Data flow diagram
Design module development     High level function
Sub function development      Sub functions
Interaction modeling          Input and output data

We have seen the positive benefits of an SRS in the previous section. Let us now look at a scenario wherein the SRS is not properly defined, and its impact on the project. This will enable us to understand the importance of the SRS to the project:


• Impact on cost and schedule: Without a complete and accurate SRS, it
would be difficult to properly estimate and plan the overall cost of the
project. This would have a ripple effect on resource staffing, milestone
planning and the overall project budget. As a result, the entire project
schedule will be in jeopardy.

• Quality Impact: An incomplete requirements specification would manifest
itself in an incomplete test plan and impact the quality of all project
deliverables. This negatively impacts the project through re-testing, re-coding
and re-design efforts, leading to cost and effort overruns.

• Impact on overall customer/user satisfaction: Improperly translated
user requirements would damage the customer's confidence in the software
product and reduce its usability and overall satisfaction index.

• Impact on maintenance: Without proper traceability, it would be difficult
to extend the software, enhance it, and fix issues.

• Correct: The SRS should specify the functionality correctly from all aspects. It
should also be continually updated to reflect all software updates and
enhancements.

• Unambiguous: As the SRS is written in natural language, it is possible for it to
be interpreted in multiple ways based on context, cultural background,
etc. So, the SRS should take these factors into account and define and refine the
requirements in as unambiguous a fashion as possible. This includes providing
references and elaborating any abstract requirement with example scenarios. It is
a good practice to have the SRS proofread by another person to weed out any
ambiguous descriptions.

• Precise: The description should not contain fuzzy words, so as to keep it
precise.

• Complete: The SRS should provide all the details required by software
designers for the design and implementation of the intended software.

• Consistent: The terminology, definitions, and other conventions used throughout
the SRS should be consistent. It is a good practice to pre-define all definitions
and abbreviations and refer to them consistently throughout the SRS.

• Verifiable: This supplements the unambiguous characteristic. All
requirements should be quantified with exact and verifiable numbers. For
instance, "The home page should load quickly" is non-verifiable, as "quickly"
is subjective; it is also not mentioned whether the page should load quickly
across all geographies. Instead of such subjective terms, the requirement should
quantify the exact response time: "The home page should load within
2 seconds in the North America region".


• Modifiable: Each requirement should be detailed only once throughout the
document, so that it is easy to modify and maintain the document in the long run.
To ensure that the SRS is modifiable, it should:
1. Be coherent, well-organized, and contain cross-referencing
2. Avoid redundancy
3. State each requirement separately

• Traceable: The SRS should map the requirements to other business/user
requirement documents so that it is possible to trace the requirements. It
should support both backward and forward traceability.

• Ranked for importance/stability: The requirements should be ranked
based on their deemed business/user importance. Ranking is done based on:
1. Degree of stability: Stability is related to the number of changes required
for implementing a functionality.
2. Degree of importance: In this case, the requirements are classified
into categories such as essential, conditional, and optional.

A few other characteristics of a good SRS are that it should be understandable by
people of varied backgrounds and that it should be design-independent, that is, it
should not favor any particular design.

• Functionality: Complete details of the software.


• External Interfaces: Details of how the software interacts with external
systems, and end users.

• Performance: Provides details of transaction speed, software availability,
response time, failover conditions, disaster recovery scenarios, etc.

• Attributes: Provides details about portability, correctness, maintainability,
security, extensibility, flexibility, etc.

• Constraints: All applicable architecture and design constraints including
the maximum load supported, supported browsers, JavaScript dependency
and others should be detailed.

• Forms the basis of agreement between customers and suppliers about
the software functionality: The SRS serves as a structured contract between these parties, specifying all functionalities along with constraints and describing the behavior of the intended software. The end user/customer can verify whether the intended software meets all the needs and requirements stated in the user requirements document.

• Optimizes development effort:
As the requirements are fully specified beforehand, the implementation team can design the system accurately,
thereby reducing the effort spent on re-design, re-work, re-testing and defect fixing.

• Forms basis for cost and schedule estimation: Using the functional and non-functional requirements specified in the SRS, the project management team can estimate the overall project cost and schedule in a more accurate fashion
and make informed decisions about risk identification and mitigation.


• Forms basis for verification and validation: Quality team can design the validation and testing strategy including various kinds of test cases based on the requirements specified in SRS.

• Helps software portability and installation: The software usability information contained in the SRS helps in transferring the software across various locations, including multiple inter-company departments and other external customers.

• Helps in enhancement: As the SRS specifies each requirement in the fullest detail, it is easier to assess the impact of any planned enhancement and provide cost and schedule estimates for the enhancement.