CIS 505 Full Course Discussions STR
Just click on the link below to download this course:
https://bit.ly/2RnKsFi
CIS 505 Week 1 Discussion 1
I will be totally honest: this week's discussion caught me off guard, especially with the Fourth of July just passing. But when I think of networking trends, the first things that come to mind, and I am sure this is the same for many of you too, are trends like cloud computing, virtualization, and software-defined networking (SDN). With software-defined networks, organizations can do things like move intelligence out to the software layer, making the hardware much simpler.
CIS 505 Week 1 Discussion 2
- Identify an acceptable system response time for interactive applications. Compare how this response time relates to an acceptable response time for websites.

This is a question that I have heard many times, and to be honest I really don't think there is any one specific time. I think you must analyze your site in terms of who your customers and visitors are. Where are they located? What are their equipment and connection needs, etc.?

A frequently adopted definition of a real-time system is that not only the result of a computation matters, but also the time at which that result is delivered. For example, in an antilock braking system (ABS), not only must the braking pressure be calculated, but the time of its application is also critical for the ABS to function.
CIS 505 Week 2 Discussion 1
Take a position on the following statement: "Mainframe computers are still needed even though personal computers and workstations have increased in capabilities." Defend your position by providing at least one example to support it.

Are mainframe computers still needed? You bet they are, and I will give you one really good reason why. The increasing number of people online, along with increasing speeds and bandwidths, drives an ever-growing volume of requests to enterprises. For example, look at credit card giant Visa, which handles tens of thousands of transactions every second. It is imperative that governments, universities, and commercial businesses of all sizes keep up with this demand, and doing so with the number of small servers it would require just would not be feasible. Modern mainframes not only run their own environment, but they can also virtualize hundreds or even thousands of smaller servers at the same time. Just food for thought.
- Analyze the differences between distributed data
processing and centralized data processing. Provide an example of each.
Then compare each to the processing used in cloud computing.
A centralized cloud storage data-center topology is defined as a topology in which the cloud storage provider has one or a few data centers located in a small geographical area, so the distance between the end user and the data center can potentially be large. An advantage of a centralized data-center topology is the economies of scale in operational expenses; a disadvantage is the higher risk of a single point of failure. A distributed cloud storage data-center topology is a topology in which the cloud storage provider has multiple data centers spread over a large geographical area and in which the user stores and retrieves data from the data center closest to it. Testa et al. explained that DNS-based load-balancing algorithms are primarily used to determine the data center to which traffic has to be directed; the algorithm determines how the clients are distributed over the data centers. A content switch constantly monitors the traffic load of the data centers, and client requests are resolved through DNS servers, whose returned IP addresses give an indication of the data-center location.
An example of distributed data processing: an organizational distributed network comprising three computers can have each machine in a different branch. The three machines are interconnected via the Internet and are able to process data in parallel, even while at different locations. This makes distributed data-processing networks more flexible. An example of a centralized data processing system is processing performed on one computer, or on a cluster of coupled computers, in a single location. Access to the computer is via "dumb terminals," which send only input and receive output, or "smart terminals," which add screen formatting. All data processing is performed on the central computer.
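To make the contrast concrete, here is a minimal Python sketch (purely illustrative, not from the course materials): one function processes every record on a single central machine, while another splits the records across three hypothetical "branch" workers that run in parallel.

```python
# Minimal sketch: centralized processing (one machine handles every record)
# versus distributed processing (three branch workers each handle a partition).
from concurrent.futures import ProcessPoolExecutor

def process(records):
    """Stand-in for whatever work each site performs on its data."""
    return sum(r * 2 for r in records)

def centralized(records):
    # One central computer processes everything; terminals only send input.
    return process(records)

def distributed(records, branches=3):
    # Each branch processes its own slice; results are combined afterwards.
    slices = [records[i::branches] for i in range(branches)]
    with ProcessPoolExecutor(max_workers=branches) as pool:
        return sum(pool.map(process, slices))

if __name__ == "__main__":
    data = list(range(1_000))
    assert centralized(data) == distributed(data)
    print("both approaches produce the same total")
```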
CIS 505 Week 2 Discussion 2
By definition, a packet-switched network moves data in separate, small blocks (packets) based on the destination address in each packet. When received, packets are reassembled in the proper sequence to make up the message. Circuit-switched networks require dedicated point-to-point connections during calls. Circuit-switched networks and packet-switched networks have traditionally occupied different spaces within corporations: circuit-switched networks were used for phone calls, and packet-switched networks handled data. But because of the reach of phone lines and the efficiency and low cost of data networks, the two technologies have shared chores for years (L. Copeland, 2000). The reason packet switching is more appropriate for the Internet is that, with the Internet being packet switched, your computer can send packets to one host and then send packets to another host before receiving a response from the first host. It can even receive response packets from the first host while it is sending packets to the second host.
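As a rough illustration of the packet idea, here is a minimal Python sketch (the packetize/reassemble helpers are hypothetical, not part of any real protocol stack): a message is split into packets carrying a destination address and sequence number, shuffled to simulate out-of-order arrival, and reassembled by sequence at the receiver.

```python
# Minimal sketch of packet switching: split, shuffle, reassemble by sequence.
import random

def packetize(message: bytes, dest: str, size: int = 8):
    return [
        {"dest": dest, "seq": i, "payload": message[i:i + size]}
        for i in range(0, len(message), size)
    ]

def reassemble(packets):
    # The receiver sorts by sequence number to restore the original message.
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

msg = b"packets can take different routes and still arrive intact"
pkts = packetize(msg, dest="203.0.113.7")   # example destination address
random.shuffle(pkts)                        # simulate out-of-order arrival
assert reassemble(pkts) == msg
```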
CIS 505 Week 3 Discussion 1
IPv6 (Internet Protocol version 6) is a set of specifications from the Internet Engineering Task Force (IETF) that is essentially an upgrade of IP version 4 (IPv4). IPv6 addresses are 128 bits, while IPv4 addresses are 32 bits. IPv6 was designed with security in mind, while IPv4 had no built-in security. Both IPv6 and IPv4 define a network-layer protocol and how data is sent from one computer to another over packet-switched networks such as the Internet. IPv6 contains addressing and control information to route packets for the next-generation Internet. Network address translation (NAT), which caused several networking problems, is eliminated. This age of video, audio, interactive games, and e-commerce needs the capabilities that IPv6 brings. QoS, a set of service requirements for delivering performance guarantees while transporting traffic over the network, is built into IPv6.
IPv6 is stronger in security for mobile devices because each device gets a reliable IP address, which permits businesses to define a security policy for each device that applies wherever that device is used. There is a higher level of data protection because encryption support is mandatory; IPv6 was built to support encryption from end to end. An issue with IPv6 is that possible hackers can switch IP addresses, which makes it difficult to track and trace criminals.
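A quick way to see the 32-bit versus 128-bit difference is Python's standard ipaddress module; the addresses below are documentation examples, not real hosts.

```python
# Minimal sketch: compare IPv4 and IPv6 address sizes with the stdlib.
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")
v6 = ipaddress.ip_address("2001:db8::1")

print(v4.version, v4.max_prefixlen)   # 4 32  -> 2**32 (~4.3 billion) addresses
print(v6.version, v6.max_prefixlen)   # 6 128 -> 2**128 addresses
print(2 ** 32, 2 ** 128)
```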
CIS 505 Week 3 Discussion 2
A fat server places more function on the server, while a fat client does the opposite. Groupware, transaction, and web servers are examples of fat servers; database and file servers are examples of fat clients. Distributed objects can be either. In most scenarios, the client machines in a fat-server-based client/server environment are thin clients; that is, they have very limited processing capabilities and rely principally on the fat server. Transaction and object servers encapsulate the database. Fat servers try to minimize network interchanges by creating more abstract levels of service. These applications are easier to manage and deploy on the network because most of the code runs on servers.
Fat clients are used for decision support and personal software. The more traditional form of client/server is the fat client. Fat clients offer versatility and the ability to build front-end tools that let end users create their own applications. Fat clients can work independently, and operation is smooth because there is no load on the server. Software licensing costs are decreasing, and as a result thick clients seem to be becoming more popular.
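The point about fat servers minimizing network interchanges can be sketched as follows; the FatServer and FileServer classes are hypothetical stand-ins, not a real middleware API, and the sketch just counts round trips under each style.

```python
# Minimal sketch: a fat server exposes one abstract call, while a fat client
# pulls raw rows from a file server and does the work itself (more round trips).
class FatServer:
    def __init__(self, orders):
        self._orders = orders
        self.round_trips = 0

    def monthly_total(self, month):   # one abstract, high-level service call
        self.round_trips += 1
        return sum(o["amount"] for o in self._orders if o["month"] == month)

class FileServer:
    def __init__(self, orders):
        self._orders = orders
        self.round_trips = 0

    def fetch_row(self, i):           # low-level access; logic lives on the client
        self.round_trips += 1
        return self._orders[i]

orders = [{"month": "06", "amount": 10.0} for _ in range(5)]

fat_server = FatServer(orders)
total = fat_server.monthly_total("06")                      # 1 round trip

file_server = FileServer(orders)
rows = [file_server.fetch_row(i) for i in range(len(orders))]  # many round trips
total2 = sum(r["amount"] for r in rows if r["month"] == "06")

assert total == total2 and fat_server.round_trips < file_server.round_trips
```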
An intranet is a private computer network that operates within an organization and facilitates internal communication and information sharing using the same technology as the Internet. The major difference is that an intranet is confined to an organization, while the Internet is a public network that operates between organizations.
The Internet serves businesses by creating opportunities for networking, information retrieval, communications, marketing, and sales. The Internet is used by companies as a tool to sell products to consumers all around the globe; companies such as eBay or Amazon are online stores that sell a variety of products. Companies can also use the Internet for internal communications and other electronic activities, which many small businesses do in lieu of developing their own networks.
Pros of an intranet are increased employee productivity, greater collaboration, a social networking platform, simplified decision making, and streamlined data management. Cons are that a company could be at risk without a strong security foundation, and an intranet can be time-consuming and expensive to build and can even be counterproductive.
The Internet is an accessible, unrestricted space, while an intranet is designed to be a private space. An intranet may be reachable from the Internet, but as a rule it is protected by a password and open only to employees or other approved users. If we implement a single integrated system, we only need one development organization. From within a business, an intranet server may respond much more quickly than a regular Web site. This is because the public Internet is at the mercy of traffic spikes, server failures, and other obstacles that may slow the network. Within an organization, however, users have much more bandwidth, and the network hardware may be more stable. This makes it simpler to serve high-bandwidth content, such as audio and video, over an intranet.
CIS 505 Week 4 Discussion 1
Hypertext Transfer Protocol (HTTP) and Hypertext Transfer Protocol Secure (HTTPS): basically, HTTPS is the secure version of HTTP. The secure form encrypts all communication between the browser and the website. HTTPS is used for highly confidential online transactions such as banking and shopping. The padlock icon in the web browser is an indication that HTTPS is in effect. HTTP is a set of standards that allows users of the web to exchange information found on web pages. HTTPS provides confidentiality, integrity, and identity. I can see HTTP soon disappearing, since HTTPS provides a more secure way of communicating. Sites that do not use HTTPS judiciously are crippling the privacy controls you thought were protecting your data. HSTS (HTTP Strict Transport Security) makes sure communication happens only over HTTPS, which is needed to prevent downgrade attacks and cookie hijacking.
HTTP came about when Internet protocols were simple and text based. Hackers can easily obtain information sent over HTTP because it is in plain-text form. HTTPS sends information in encrypted form, which is hard for hackers to read. The protocols behind HTTPS are SSL and TLS. SSL (Secure Sockets Layer) and TLS (Transport Layer Security) use an asymmetric public key infrastructure (PKI), which uses two keys to encrypt communication: a public key and a private key. An example of a private-key scheme would be replacing letters with numbers; the only way the message can be deciphered is if the receiver knows the key pattern. A public key is established by the sharing of a secret between two entities, so if person A has a message for person B, the keys are exchanged and the encryption can only be decoded by these two people.
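As a hands-on illustration of what sits behind the padlock, here is a minimal sketch using Python's standard ssl and socket modules to open a TLS connection the way HTTPS does and print the negotiated protocol version and the server certificate's subject; the host name is just an example.

```python
# Minimal sketch: open a TLS connection and inspect the server certificate.
import socket
import ssl

host = "www.python.org"                      # example host, not part of the course
ctx = ssl.create_default_context()           # verifies the certificate chain

with socket.create_connection((host, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print("negotiated:", tls.version())  # e.g. TLSv1.3
        cert = tls.getpeercert()
        print("issued to:", dict(x[0] for x in cert["subject"]))
```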
CIS 505 Week 4 Discussion 2
Inelastic traffic does not adjust its throughput in response to network conditions. Apps such as Skype, Facebook Messenger, or Google Hangouts are examples of inelastic traffic; these apps offer free voice and video calls.
Elastic traffic adjusts its throughput between end hosts in response to network conditions. Elastic traffic is TCP-friendly. The cloud is used in many ways, and elastic computing means that resources can be scaled up and down effortlessly by the cloud service provider. Cloud computing is about providing on-demand computing resources with the simplicity of a mouse click. The resources that can be sourced through cloud computing cover almost all facets of computing, from raw processing power to massive storage space. It is the ability of a service provider to supply flexible computing capacity whenever and wherever it is required; that flexibility can be in terms of processing power, storage, and bandwidth.
Elastic traffic can adjust, over wide ranges, to changes in delay and throughput across the Internet and still meet the needs of its applications. Elastic traffic was created for the Internet due to its support of TCP/IP. Organizations thrive on building innovative products and services to satisfy customer requirements, but prices decide whether a corporation earns a profit. If prices are too low, businesses can't cover their liabilities, and if prices are too high, they may not attract enough shoppers to break even. Elastic demand describes a situation where the quantity of a product consumers want to buy is very sensitive to its price, which can be an important consideration when setting or changing prices.
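For the price-elasticity point the paragraph closes with, here is a minimal sketch of the usual formula, elasticity = (% change in quantity) / (% change in price); the numbers are made up purely for illustration.

```python
# Minimal sketch of price elasticity of demand with illustrative numbers.
def elasticity(q_old, q_new, p_old, p_new):
    pct_q = (q_new - q_old) / q_old
    pct_p = (p_new - p_old) / p_old
    return pct_q / pct_p

# A 10% price increase that cuts demand by 30% -> elasticity of -3 (elastic).
print(elasticity(q_old=1000, q_new=700, p_old=10.0, p_new=11.0))
```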
CIS 505 Week 5 Discussion 1
Routing and Switching Selection Criteria. Please respond
to the following:
Compare distance-vector and link state routing and
analyze the limitations that would prevent the usage of each.
Distance vector routing protocols are
designed to run on small networks (usually fewer than 100 routers). Examples of
distance vector routing protocols include RIP and IGRP. Distance vector
protocols are generally easier to configure and require less maintenance than
link-state protocols. On the downside, distance vector protocols do not scale
well because they require higher CPU and bandwidth utilization. They also take
longer to converge than do link-state protocols. Distance vector protocols
always choose the route with the fewest number of hops as the best route. This
can be a problem when the best route to a destination is not the route with the
least number of hops.
Link state routing protocols are designed to
operate in large, enterprise-level networks. Examples of link state protocols
include OSPF and NLSP. Link state routing protocols are very complex and are
much more difficult to configure, maintain, and troubleshoot than distance
vector routing protocols. Additionally, link state convergence occurs faster
than distance vector convergence. This is because link state establishes a
neighbor relationship with directly connected peers and shares routing
information with its neighbors only when there are changes in the network
topology. Link state routing protocols can be difficult to configure and
maintain.
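A minimal sketch of the distance-vector behavior described above (not a real routing daemon): each router keeps a table of hop counts and repeatedly merges its neighbors' advertised tables, RIP-style, until the tables stop changing. The four-router topology is hypothetical.

```python
# Minimal sketch: distance-vector routing by iterated neighbor exchange.
INF = float("inf")
neighbors = {                 # hypothetical four-router chain A-B-C-D
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B", "D"],
    "D": ["C"],
}
routers = list(neighbors)
# distance[r][d] = current best hop count from router r to destination d
distance = {r: {d: (0 if d == r else INF) for d in routers} for r in routers}

changed, rounds = True, 0
while changed:                               # iterate until convergence
    changed, rounds = False, rounds + 1
    for r in routers:
        for n in neighbors[r]:               # consider each neighbor's vector
            for dest, cost in distance[n].items():
                if cost + 1 < distance[r][dest]:
                    distance[r][dest] = cost + 1
                    changed = True

print(rounds, distance["A"])   # A reaches D in 3 hops after a few rounds
```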
Choose the method best suited for designing a new routing
protocol for a LAN architecture. Justify your decision.
The best method for designing a new routing protocol for a LAN is to employ two or three tiers of LANs. Within a department, a low-cost, moderate-speed LAN supports a cluster of personal computers and workstations. These departmental LANs are then linked together by a higher-capacity backbone LAN. The architecture of a LAN is best described in terms of a layering of protocols that organize the basic functions of a LAN.
CIS 505 Week 5 Discussion 2
Suggest a way to improve the way LLC and MAC are used for
LAN operation.
The MAC layer is responsible for detecting
errors and discarding any frames that contain errors. The LLC layer optionally
keeps track of which frames have been successfully received and retransmits
unsuccessful frames. The relationship between LLC and the MAC protocol can be
seen by considering the transmission formats involved. User data are passed
down to the LLC layer, which prepares a link-level frame, known as an LLC
protocol data unit (PDU). This PDU is then passed down to the MAC layer where
it is enclosed in a MAC frame. In a wireless environment, data packets may be
lost entirely, partially lost due to truncation, or corrupted by bit errors.
However, adaptive packet shrinking and forward error correction (FEC) policies significantly increase the useful throughput of a wireless LAN, while fixed policies are effective only in some environments; such adaptive policies can be added fairly easily to the MAC and LLC implementations.
Stallings, William. (10/2008). Business Data
Communications, 6th Edition. [VitalSource Bookshelf Online]. Retrieved
from https://strayer.vitalsource.com/#/books/9781323079324/
Eckhardt, D. A., & Steenkiste, P. (1998, October).
Improving Wireless LAN Performance via Adaptive Local Error Control. Retrieved
from http://repository.cmu.edu/cgi/viewcontent.cgi?article=1526&context=compsci
Evaluate guided and unguided transmission medium to
determine which you would use to design a new facility.
Transmission media can be classified as guided or unguided. With guided media, the waves are guided along a solid medium, such as copper twisted pair, copper coaxial cable, or optical fiber. The atmosphere and outer space are examples of unguided media, which provide a means of transmitting electromagnetic signals but do not guide them; this form of transmission is usually referred to as wireless transmission. For a new facility I would initially go with guided transmission media and then add unguided transmission media. Guided transmission capacity depends on the distance and on whether the medium is point-to-point or multipoint. Unguided transmission and reception are achieved by means of an antenna: a directional antenna puts out a focused beam, so the transmitter and receiver must be aligned, while an omnidirectional antenna spreads the signal out in all directions, where it can be received by many antennas.
CIS 505 Week 6 Discussion 1
Asynchronous Transfer Mode (ATM) was made for multimedia applications and similar workloads because of the large bandwidth they require. ATM is better suited for applications using real-time transmission of video signals. If ATM is preferred, a significant overhead is involved in integrating into the ATM layers. The fastest ATM products at the moment operate at 622 Mbit/s.
Gigabit Ethernet offers performance improvements for existing networks without having to change the cables, protocols, and applications already in use. Gigabit Ethernet's advantage is basically its origin in the proven technology of Ethernet, meaning it is fully compatible with existing Ethernet technologies. Users can therefore work on the basis that migration from Ethernet to Gigabit Ethernet will be very easy and transparent. Applications that run over Ethernet will also work over Gigabit Ethernet. At 1000 Mbit/s, Gigabit Ethernet is almost twice as fast as ATM, and traffic is carried seamlessly at the higher rate.
Compare and contrast the advantages of Fast
Ethernet, Gigabit Ethernet, and 10-Gbps Ethernet.
Fast Ethernet carries traffic at a rate of 100 Mbit per second. Fast Ethernet upgraded standard Ethernet by improving the speed and reducing the bit transmission time: in standard Ethernet a bit is transmitted in 0.1 microseconds, while in Fast Ethernet it takes 0.01 microseconds for one bit to transmit.
The 10-Gbps advantage is speed. It offers better hardware and rack-space efficiency, simpler virtualization, and greater scalability.
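The bit-time arithmetic behind that comparison is simple enough to check in a few lines of Python (a minimal sketch: the time to put one bit on the wire is just 1 divided by the data rate).

```python
# Minimal sketch: bit time = 1 / data rate for each Ethernet generation.
rates = {
    "Ethernet (10 Mbit/s)": 10e6,
    "Fast Ethernet (100 Mbit/s)": 100e6,
    "Gigabit Ethernet (1000 Mbit/s)": 1000e6,
    "10-Gbps Ethernet": 10e9,
}
for name, rate in rates.items():
    print(f"{name}: {1 / rate * 1e6:.4f} microseconds per bit")
# Ethernet: 0.1000, Fast Ethernet: 0.0100, Gigabit: 0.0010, 10G: 0.0001
```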
Gigabit Ethernet is another form of Ethernet in computer networking, carrying traffic at the rate of 1 billion bits per second. This method is specified in the IEEE 802.3 standard and is presently being used as the backbone in many enterprise networks. Gigabit Ethernet is carried primarily on optical fiber, a medium that transmits information as light pulses along a glass or plastic strand.
Connecting Gigabit Ethernet to lower-speed Ethernet equipment is easy because one can use LAN switches or routers to adapt one physical line speed to the other. Gigabit Ethernet uses the same 64- to 1514-byte packets and IEEE 802.3 frame format found in Ethernet and Fast Ethernet, so changes are not necessary because the frame format and size are the same. This evolutionary upgrade path permits Gigabit Ethernet to be seamlessly integrated into existing Ethernet and Fast Ethernet networks.
CIS 505 Week 6 Discussion 2
Wireless LAN. Please respond to the
following:
- Analyze the characteristics of wireless LANs and assess the
security concerns of this technology in organizations such as universities
or hospitals. Identify additional areas of concern for organizations that
implement a wireless LAN. Then, explain whether the implementation of a
WAN would solve these problems. Explain your rationale.
- Rank the following IEEE 802.11 standard addresses in order of
importance with the first one being the most important. Justify the reason
for your chosen order.
- Association
- Re-association
- Disassociation
- Authentication
A wireless LAN is a flexible data communication system implemented as an extension to, or an alternative for, a wired LAN within a building or campus. WLANs use radio frequency to transmit and receive data over the air, minimizing the need for wired connections.
Wireless security is vital. Because wireless signals often travel beyond physical boundaries, the risk of someone attempting to break in through the wireless infrastructure is higher than the risk of someone gaining physical access to a wired port.
Guests and personnel should be placed on different Wi-Fi networks so that important information cannot fall into the wrong hands. The goal is to provide sufficient Wi-Fi signal only to the areas where it is required. If the Wi-Fi signal extends outside the building walls and into public spaces, there is a greater chance of attracting potential threats from criminals who may try to intercept the network or interfere with the wireless signal. Limiting the reach of the Wi-Fi signal would solve this issue.
Association: Once authentication is complete,
mobile devices can associate (register) with an AP/router to gain full access
to the network. Association allows the AP/router to record each mobile device
so that frames are properly delivered. Association only occurs on wireless infrastructure networks, not in peer-to-peer mode. A station can only associate with one AP/router at a time.
http://www.intel.com/content/www/us/en/support/network-and-i-o/wireless-networking/000006508.html
Disassociation: A station or access point may
invoke the disassociation service to terminate an existing association. This
service is a notification; therefore, neither party may refuse termination.
Stations should disassociate when leaving the network. An access point, for
example, may disassociate all its stations if being removed for maintenance.
http://www.informit.com/articles/article.aspx?p=24411&seqNum=7
Re-association: The re-association service
enables a station to change its current state of association. Re-association
provides additional functionality to support BSS-transition mobility for
associated stations. The re-association service enables a station to change its
association from one access point to another. This keeps the distribution
system informed of the current mapping between access point and station as the
station moves from one BSS to another within an ESS. Re-association also
enables changing association attributes of an established association while the
station remains associated with the same access point. The mobile station
always initiates the association service.
http://www.informit.com/articles/article.aspx?p=24411&seqNum=7
Authentication: 802.11 authentication is
the first step in network attachment. 802.11 authentication requires a mobile
device (station) to establish its identity with an Access Point (AP) or
broadband wireless router. No data encryption or security is available at this
stage.
The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 802.11 standard defines two link-level types of authentication:
- Open System
- Shared Key
http://www.intel.com/content/www/us/en/support/network-and-i-o/wireless-networking/000006508.html
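To tie the four ranked services together, here is a minimal Python sketch of the state machine they imply (a hypothetical Station class, not an actual 802.11 implementation): a station must authenticate before it can associate, may re-associate to a different AP within the same ESS, and disassociation is a notification that cannot be refused.

```python
# Minimal sketch of the 802.11 authentication/association state machine.
class Station:
    def __init__(self):
        self.authenticated = False
        self.ap = None                      # at most one AP at a time

    def authenticate(self, ap):
        self.authenticated = True           # no data protection at this stage

    def associate(self, ap):
        if not self.authenticated:
            raise RuntimeError("must authenticate before associating")
        self.ap = ap                        # AP records the station for delivery

    def reassociate(self, new_ap):
        if self.ap is None:
            raise RuntimeError("re-association requires an existing association")
        self.ap = new_ap                    # move between APs within the same ESS

    def disassociate(self):
        self.ap = None                      # notification only; cannot be refused

sta = Station()
sta.authenticate("AP-1")
sta.associate("AP-1")
sta.reassociate("AP-2")
sta.disassociate()
```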
CIS 505 Week 7 Discussion 1
The advantages of VPNs for an organization include cost savings and network scalability. A VPN can save organizations money in several situations: eliminating the need for expensive long-distance leased lines, reducing long-distance telephone charges, and offloading support costs. The cost to an organization of building a dedicated private network may be reasonable at first but increases exponentially as the organization grows. Internet-based VPNs avoid this scalability problem by simply tapping into the public lines and network capability readily available, and they offer superior reach and quality of service.
From the e-Activity, compare three VPN services available
to organizations to determine the primary differences among each. Discuss the
pros and cons of each VPN service and suggest the type of organization that
would best fit each network provider.
PPTP VPN – This is the most common and widely used VPN protocol. PPTP VPNs enable authorized remote users to connect to the VPN network using their existing Internet connection and then log on to the VPN using password authentication. They don't need extra hardware, and the features are often available as inexpensive add-on software. PPTP stands for Point-to-Point Tunneling Protocol. The disadvantage of PPTP is that it does not itself provide encryption; it relies on PPP (Point-to-Point Protocol) to implement security measures.
SSL – SSL, or Secure Sockets Layer, is a VPN accessible via HTTPS over a web browser. SSL creates a secure session from your PC browser to the application server you're accessing. The major advantage of SSL is that it doesn't need any software installed, because it uses the web browser as the client application. A disadvantage is its optional (as opposed to built-in) user authentication, which is a major security weakness. Also, SSL tunneling (which basically mimics IPsec) is not supported on Linux or other non-Windows operating systems.
MPLS VPN – MPLS (Multi-Protocol Label Switching) VPNs are no good for remote access by individual users, but for site-to-site connectivity they are the most flexible and scalable option. These systems are essentially ISP-tuned VPNs, where two or more sites are connected to form a VPN using the same ISP. An MPLS network isn't as easy to set up or add to as the others, and hence is bound to be more expensive.
For individual users, PPTP VPNs offer the best deal, but for large offices or ones with complex connectivity requirements, MPLS VPNs might be the best option.
http://techpp.com/2010/07/16/different-types-of-vpn-protocols/
http://www.addictivetips.com/windows-tips/what-is-vpn-how-to-create-and-connect-to-vpn-network/
CIS 505 Week 7 Discussion 2
Frame Relay is a packet-switching technology for connecting network points in wide area networks (WANs). It is a connection-oriented data service and establishes a virtual circuit between two end points. Data transfer is done in packets of data known as frames. These frames are variable in size, which makes transfers more flexible and efficient. Frame Relay was originally introduced for ISDN interfaces, though it is currently used over a variety of other network interfaces as well.
ATM is a network switching technology that uses a cell-based methodology to quantize data. ATM data communication consists of fixed-size cells of 53 bytes; an ATM cell contains a 5-byte header and 48 bytes of payload. These smaller, fixed-length cells are good for transmitting voice, image, and video data, as the delay is minimized. ATM is a connection-oriented protocol, so a virtual circuit must be established between the sending and receiving points; it establishes a fixed route between two points when the data transfer starts. Another important aspect of ATM is its asynchronous operation in time-division multiplexing. ATM provides good quality of service in networks where different types of information, such as data, voice, and video, are supported. With ATM, each of these information types can pass through a single network connection.
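The cell arithmetic above is easy to check; a minimal sketch follows, including the short-transaction overhead point discussed under Advantages & Disadvantages below.

```python
# Minimal sketch of ATM cell overhead: 53-byte cell, 5-byte header.
import math

CELL, HEADER = 53, 5
PAYLOAD = CELL - HEADER                      # 48 bytes of payload per cell

print(f"per-cell overhead: {HEADER / CELL:.1%}")             # ~9.4%

# A short 100-byte message still consumes three whole cells (with padding),
# which illustrates the overhead on short transactions.
message = 100
cells = math.ceil(message / PAYLOAD)
print(cells, "cells =", cells * CELL, "bytes on the wire for", message, "bytes of data")
```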
Advantages & Disadvantages
While ATM uses fixed-size cells (53 bytes) for data communication, Frame Relay uses variable packet sizes depending on the type of information to be sent. Both information blocks have a header in addition to the data block, and transfer is connection oriented. Frame Relay is used to connect local area networks (LANs) and is not implemented within a single LAN, in contrast to ATM, where data transfers can also occur within a single LAN. ATM is designed to be convenient for hardware implementation, so its cost is higher compared to Frame Relay, which is software controlled; Frame Relay is therefore less expensive and easier to upgrade. Frame Relay's variable packet size gives it low overhead within the packet, which makes it an efficient method for transmitting data. Although the fixed packet size in ATM can be useful for handling video and image traffic at high speeds, it leaves a lot of overhead within the packet, particularly in short transactions.
Suppose you are in charge of selecting ATM or frame relay
as a WAN alternative for your technological needs of a communication technology
organization. Choose one of the WAN alternatives and justify your decision.
The WAN alternative I would choose for the technological needs of a communication technology organization would be Frame Relay. Frame Relay networking services deliver a permanent virtual circuit (PVC), which means that customers benefit from what looks like a continuous, dedicated connection without having to pay for a full-time leased line. At the service provider level, the route each frame travels to its destination is allocated dynamically and can be charged based on actual usage. It is a proven technology, it is resilient, it is more cost effective than leased lines, and it is more scalable than a network of private circuits.
CIS 505 Week 8 Discussion 1
The three types of satellites mentioned here are used for communications, whether commercial or where the customary terrestrial infrastructure is unavailable or unreliable, and they circle the earth in three orbits. Satellites are used as a way to communicate around the globe. The broadcast industry, as well as telecommunication companies and weather services, uses GEO satellites as its main means of communication.
The advantage of a GEO configuration is that the satellite is stationary relative to the earth, so there are no frequency shifts due to relative motion between the satellite and antennas on earth, and tracking by earth stations is simplified. At 35,838 km above the earth, the satellite can communicate with roughly one-fourth of the earth; three satellites in geostationary orbit separated by 120° cover most of the inhabited portions of the entire earth, excluding only the areas near the north and south poles.
LEO or the original AT&T satellite proposal was for
low earth orbititng satellites, but most of the early commercial satellites
were geostationary. The idea of LEO sattelites is to use constellations of
inexpensive. A LEO satellite can be “seen” by a point on earth on the
order of minutes before the satellite passes out of sight. If intermediate
orbits are used—higher than the LEOS and lower than GEOS—a point on earth can
see the satellite for periods on the order of hours. Such orbits are
called medium-earth-orbiting
satellites(MEOS).
These orbits are on the order of 10,000 km above the earth, and require fewer
handoffs. While propagation delay to earth from such satellites (and the power
required) is greater than for LEOS, they are still substantially less than for
GEOS. ICO Global Communications, established in January 1995, proposed a MEO
system. Launches began in 2000; 12 satellites, including two spares, are
planned in 10,400 km orbits. The satellites will be divided equally between two
planes tilted 45x to equator. Proposed applications are digital voice, data,
facsimile, high-penetration notification, and messaging services.
http://www.informit.com/articles/article.aspx?p=23761
http://www.harriscaprock.com/blog/high-throughput-satellite-communications-systems-meo-vs-leo-vs-geo/
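A minimal sketch of the propagation-delay comparison made above, using the altitudes quoted for MEO and GEO and a rough, assumed 800 km figure for LEO.

```python
# Minimal sketch: one-way propagation delay to each orbit class.
SPEED_OF_LIGHT_KM_S = 299_792

orbits_km = {"LEO": 800, "MEO": 10_400, "GEO": 35_838}   # LEO altitude is assumed
for name, alt in orbits_km.items():
    one_way_ms = alt / SPEED_OF_LIGHT_KM_S * 1000
    print(f"{name}: ~{one_way_ms:.0f} ms one-way propagation delay")
# LEO: ~3 ms, MEO: ~35 ms, GEO: ~120 ms
```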
“VSAT systems provide high speed, broadband
satellite communications for Internet or private network communications.
VSAT is ideal for mining camps, vessels at sea, satellite news gathering,
emergency responders, oil & gas camps or any application that requires a
broadband Internet connection at a remote location. VSAT is an excellent way to
connect your remote sites and workers with Internet communications for email,
web access, video transmissions, Voice over IP telephone services, or other IP
applications for your field operations. VSAT enables you to expedite your
business processes by integrating field operations with your corporate wide
area network.”
Coverage for the USA is provided by the Galaxy 16, Galaxy 18, and AMC 9 satellites, which offer options for connectivity across the USA. Coverage at sea is provided by maritime VSAT services.
The business applications currently supported by satellite include television distribution, long-distance telephone transmission, and private business networks. Because of their broadcast nature, satellites are well suited to television distribution and are being used extensively for this purpose in the United States and throughout the world. In its traditional use, a network provides programming from a central location; programs are transmitted to the satellite and then broadcast down to a number of stations, which then distribute the programs to individual viewers. A more recent application of satellite technology to television distribution is direct broadcast satellite (DBS), in which satellite video signals are transmitted directly to the home user. GEOs (geosynchronous earth orbit satellites) are used mainly for communications and broadcasts. Business applications include satellite broadcasts, maritime phone calls, meteorological applications, and cable and satellite TV as well.
LEOs (low earth orbit satellites) are also used in business for communications, but on a more personal scale. LEOs are used for email and mobile phone networks, video conferencing, and high-bandwidth data connections, and in government or business they can be used for spying and espionage operations.
MEOs (medium earth orbit satellites) are used in business for navigation; MEOs make up the backbone of the GPS-enabled applications that are out there.
VSAT (Very Small Aperture Terminals) is being
used to support businesses in a couple different ways. First, it allows for
businesses in remote locations to have access to Internet resources. Laying
cable is time consuming and very costly for rural areas. VSAT technology allows
businesses to remain competitive no matter where they are. VSAT allows for high QoS, which makes it a good partner for VPN access for businesses. The QoS is also improved because most VSAT connections are only a single hop, resulting in less lag and loss.
References:
http://old.repertoiremag.com/Article.asp?Id=524
CIS 505 Week 8 Discussion 2
- Compare the four items related to channel capacity:
data rate, bandwidth, noise, and error rate. Determine the most important
and justify its significance over the remaining items.
- Describe real-world examples of attenuation and white noise. Examine the effect on the information-carrying capacity of the link and present a way to avoid these types of interruptions.

Data transmission is the process of sending digital or analog data over a communication medium to one or more computing, network, communication, or electronic devices. Data rate is the term associated with the rate of data transferred between two or more computing and telecommunication devices or systems. Bandwidth is a wider term, basically associated with computer networking and digital technologies, that measures the bit rate of the communication resources available or consumed. Noise is unwanted electrical or electromagnetic energy that degrades the quality of signals and data. Error rate is the degree of errors encountered during data transmission over a communications or network connection.
Noise is the most important component in the
channel capacity. Noise can disrupt the flow of information. There are so many
different types of noise one could face in a business setting: Environmental,
Physiological-Impairment, Semantic, Syntactical, Organizational, Cultural, and
Psychological. Issues in communication that derive from the above-mentioned types of noise could affect the sender, the message itself, the channel it is being sent through, or the recipient of that message.
Attenuation is a loss of communication signal strength, measured in decibels (dB). Amplification is used to boost signal strength. Range, interference, and wire size are the reasons attenuation happens. Attenuation is a standard measurement on DSL lines: typical values for line attenuation on a DSL connection are between 5 dB and 50 dB.
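For a feel of what those decibel figures mean, here is a minimal sketch of the dB arithmetic (values are illustrative): attenuation in dB compares input power to output power, so 50 dB of line attenuation leaves only one hundred-thousandth of the power.

```python
# Minimal sketch of decibel attenuation arithmetic.
import math

def attenuation_db(p_in_watts, p_out_watts):
    return 10 * math.log10(p_in_watts / p_out_watts)

def remaining_fraction(db):
    return 10 ** (-db / 10)

print(attenuation_db(1.0, 0.001))        # 30.0 dB for a 1000x power loss
for db in (5, 50):                        # the typical DSL range quoted above
    print(f"{db} dB -> {remaining_fraction(db):.6f} of the power remains")
```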
White noise is a type of noise produced by combining sounds of all different frequencies together; taking all the imaginable tones the human ear can hear (the audible frequency range of 20-20,000 hertz) and combining them would form white noise. White noise is random noise that has a flat spectral density. An example of white noise is the sound of the ocean, or the sound a train or subway makes as it moves across the tracks. Ambulances, fire trucks, and police vehicles all use white-noise-like sirens; the sound is heard over traffic and makes these emergency vehicles more noticeable. When looking at the four items, I would say that error rate is the most important, because it is great that you may get more data transferred faster, but if it contains a lot of errors, then it is not as useful.
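One way to tie the four items together is Shannon's capacity formula, C = B log2(1 + S/N): bandwidth and noise (through the signal-to-noise ratio) bound the error-free data rate a channel can support. A minimal sketch with illustrative numbers:

```python
# Minimal sketch of Shannon channel capacity: C = B * log2(1 + S/N).
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 1 MHz channel at 30 dB SNR supports roughly 10 Mbit/s; more noise
# (lower SNR) drags the achievable data rate down.
for snr in (30, 20, 10):
    print(snr, "dB ->", round(shannon_capacity_bps(1e6, snr) / 1e6, 2), "Mbit/s")
```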
Techopedia explains Attenuation
Attenuation can relate to both hard-wired
connections and to wireless transmissions.
There are many instances of attenuation in
telecommunications and digital network circuitry.
Inherent attenuation can be caused by a number of signaling issues including:
- Transmission medium – All electrical signals transmitted down electrical conductors cause an electromagnetic field around the transmission. This field causes energy loss down the cable, and the loss gets worse depending upon the frequency and length of the cable run.
- Crosstalk – Crosstalk from adjacent cabling causes attenuation in copper or other conductive metal cabling.
- Conductors and connectors – Attenuation can occur as
a signal passes across different conductive mediums and mated connector
surfaces.
Repeaters are used in attenuating circuits to
boost the signal through amplification (the opposite of attenuation). When
using copper conductors, the higher the frequency signal, the more attenuation
is caused along a cable length. Modern communications use high frequencies, so other media that have a flat attenuation across all frequencies, such as fiber optics, are used instead of traditional copper circuits.
Different types of attenuation include:
- Deliberate attenuation can occur for example where a volume control
is used to lower the sound level on consumer electronics.
- Automatic attenuation is a common feature of televisions and other
audio equipment to prevent sound distortion by automatic level sensing
that triggers attenuation circuits.
- Environmental attenuation relates to signal power loss due to the
transmission medium, whether that be wireless, copper wired or fiber optic
connected.
CIS 505 Week 9 Discussion 1
Digital Transmission. Please respond to the
following:
Compare the data communication technologies of guided
media and unguided media. This should include transmission media, data link
control protocols, and multiplexing.
Guided media provide a physical path along which the signals propagate: twisted pair, coaxial cable, and optical fiber. Unguided media usually employ an antenna for transmitting through air, vacuum, or water; this is the reason unguided transmission techniques are usually used for broadcast radio, terrestrial microwave, and satellite. Such network environments are very costly due to the effort needed to protect the transmission. High-Level Data Link Control (HDLC) is a group of protocols, or rules, for transmitting data between network points; it is one of the most widely used protocols. Data link control helps with the flow and error control of the transmission, and multiplexing aids in achieving transmission efficiency.
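A minimal sketch of the multiplexing idea (not a real link layer): synchronous time-division multiplexing interleaves bytes from several input streams into one stream and splits them back apart at the far end.

```python
# Minimal sketch of synchronous time-division multiplexing (TDM).
def tdm_multiplex(streams):
    # Round-robin one byte from each stream per frame; pad shorter streams.
    length = max(len(s) for s in streams)
    padded = [s.ljust(length, b"\x00") for s in streams]
    return b"".join(bytes(s[i] for s in padded) for i in range(length))

def tdm_demultiplex(frame_stream, n):
    # Every n-th byte belongs to the same original stream.
    return [frame_stream[i::n] for i in range(n)]

# Demo with equal-length streams, so no padding is left over after the round trip.
a, b, c = b"AAAA", b"BBBB", b"CCCC"
muxed = tdm_multiplex([a, b, c])            # b"ABCABCABCABC"
assert tdm_demultiplex(muxed, 3) == [a, b, c]
```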
An analog wave form (or signal) is
characterized by being continuously variable along amplitude and frequency. In
the case of telephony, for instance, when you speak into a handset, there are
changes in the air pressure around your mouth. Those changes in air pressure
fall onto the handset, where they are amplified and then converted into
current, or voltage fluctuations. Those fluctuations in current are an analog
of the actual voice pattern—hence the use of the term analog to describe these
signals. Digital transmission is quite different from analog transmission. For
one thing, the signal is much simpler. Rather than being a continuously
variable wave form, it is a series of discrete pulses, representing one bits
and zero bits. Each computer uses a coding scheme that defines what
combinations of ones and zeros constitute all the characters in a character set
(that is, lowercase letters, uppercase letters, punctuation marks, digits, keyboard
control functions).
Digital transmission has several advantages
over analog transmission. Analog circuits require amplifiers, and each
amplifier adds distortion and noise to the signal. In contrast, digital
amplifiers regenerate an exact signal, eliminating cumulative errors. An
incoming (analog) signal is sampled, its value is determined, and the node then
generates a new signal from the bit value; the incoming signal is discarded.
With analog circuits, intermediate nodes amplify the incoming signal, noise and all. Voice, data, video, etc. can all be carried by digital circuits.
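A minimal sketch of that regeneration point (illustrative values): a digital repeater decides whether each noisy sample is a one or a zero and emits a clean pulse, while an analog amplifier scales the noisy signal, noise included.

```python
# Minimal sketch: digital regeneration vs. analog amplification.
import random

clean = [1, 0, 1, 1, 0, 0, 1, 0]
noisy = [bit + random.uniform(-0.3, 0.3) for bit in clean]   # add channel noise

regenerated = [1 if sample > 0.5 else 0 for sample in noisy] # decide, then re-emit
amplified = [2 * sample for sample in noisy]                 # the noise is doubled too

assert regenerated == clean
print(amplified[:3])
```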
CIS 505 Week 9 Discussion 2
T-1 Lines. Please respond to the following:
From the e-Activity, analyze the multiplexing techniques
of DSL and cable modem Internet and suggest the one you prefer. Explain your
decision.
Multiplexing takes multiple signals and merges them into one complex signal to be transmitted. Once the receiving device catches the multiplexed signal, it is divided back into its original, multiple form. Cable modems use a newer specification, DOCSIS 3.1, to remain competitive and add multiple enhancements. One of these enhancements is Orthogonal Frequency Division Multiplexing (OFDM). This transmission technique divides bandwidth into smaller sub-carriers, which helps decrease interference and allows for better data transmission on parallel channels. The standard provides the ability to define multiple downstream profiles on a single channel. The DOCSIS 3.1 specification also uses time and frequency multiplexing to transmit signals through a single channel, which allows OFDM and legacy channels to operate simultaneously on separate or on the same frequencies.
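A minimal sketch of the OFDM sub-carrier idea described above; the channel width and sub-carrier spacing are assumed round numbers for illustration, not taken from the DOCSIS 3.1 specification.

```python
# Minimal sketch: one wide channel divided into many narrow sub-carriers.
channel_hz = 192e6          # assumed wide downstream channel
spacing_hz = 50e3           # assumed sub-carrier spacing

subcarriers = int(channel_hz // spacing_hz)
print(subcarriers, "sub-carriers of", spacing_hz / 1e3, "kHz each")

# Each sub-carrier carries a slice of the data in parallel; interference on a
# few sub-carriers affects only those slices rather than the whole channel.
```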
Compare and contrast the use of leased lines in a WAN or
LAN setting. Then recommend what you would use if you were a CIO. Support your
response with evidence or examples.
WANs are often built using leased lines.
These leased lines involve a direct point-to-point connection between two
sites. Point-to-point WAN service may involve either analog dial-up lines or
dedicated leased digital private lines. An analog signal is a continuously
varying electromagnetic wave that may be transmitted over a variety of media,
depending on frequency. The principal advantages of digital signaling are that
it is generally cheaper than analog signaling and is less susceptible to noise
interference. Therefore, my recommendation would be in support of leased lines in a WAN setting. A wide area network allows companies to make use of common resources
in order to operate. Internal functions such as sales, production and
development, marketing, and accounting can also be shared with authorized
locations through this sort of network. The wide area network has made it
possible for companies to communicate internally in ways never before possible.