							Big Security News
 
Symantec, the largest security software vendor in the world, has just "discontinued" its most popular product, Symantec Antivirus.  Symantec had been listening to its customers and found that they wanted a "bag-o-security" that would include all of the security products they need to remain safe in one integrated bundle.  That is not possible, but Symantec met its customers partway by replacing Symantec Antivirus 10 with Symantec Endpoint Protection 11 (https://www.symantec.com/products/endpoint-security).  Symantec Endpoint Protection includes the protections against viruses, spyware, and malware found in Symantec Antivirus, but adds a strong firewall, intrusion protection, and control over copying to removable media.  An enhanced version can also ensure policies are met before allowing a connection to the network.  Symantec completely rewrote its software so that it requires 80% less memory and demands less processor time as well.  Competitors are offering similar suites of products, but the level of integration and the manageability of Symantec Endpoint Protection are top notch.
 
Please call us if you are interested in learning more about Symantec Endpoint Protection IN PERSON, with Symantec engineers presenting the product at a gala dinner at Fleming's Steakhouse, Tysons Corner, Virginia, on Thursday, December 6, 2007.  If you can't wait, please call us about the limited number of seats we have left for a presentation hosted by Symantec at Dave & Buster's of North Bethesda, 11301 Rockville Pike, Kensington, MD 20895, on Wednesday, November 14, 2007.
						
						
 
							Tom Turns 10
 
							The next time you talk to Tom 
							Sparks here at Iron Horse, congratulate him.  He 
							celebrated his 10th anniversary with the company!  
Tom's computing, design, sales, and marketing skills are excellent.  We are a small company, so
							Tom wears a lot of hats.  He purchases most of the 
							products our customers order from manufacturers and 
							distributors. He is also responsible for most of the 
							marketing you see, including our web sites, 
							brochures, and dog pictures.  He also helps people 
							get what they want.  Yes, that means he is a (gasp) 
							Sales Consultant.  Before Iron Horse, Tom was a 
							substitute high school teacher and a waiter in a 
							pizza joint.  He has a B.A. in Biology from Mary 
							Washington College (now a university).  Tony has a 
							B.S. in Chemistry from the University of Virginia 
							and an M.S. in Biological Chemistry from Caltech, so 
							both of them understand real viruses!   Tony hired 
							him because of their similar backgrounds, because of 
							Tom's success in difficult sales and customer 
							service positions (substitute teachers deserve 
							combat pay), because Tom didn't have to unlearn any 
							bad training about computers or sales, and because 
							Tom is thoughtful, kind, caring, intelligent, and, 
							best of all, a good listener.  A good salesman helps 
							clients get what they want by actively listening to 
							them.  He shows them how much he cares about them as 
							people and about their success.  Like all computer 
							industry professionals, Tom is continually 
							training.  He has become quite expert in the various 
							technical and non-technical roles he fills at Iron 
Horse.  Over these 10 years, he married his longtime girlfriend, continued honing his skills as an artist and as a competitive volleyball player, bought a house, and became the proud father of a two-year-old girl and two dogs.  Call or write him and
							congratulate him.  Better yet, do business with him 
							and see why we all think so highly of him.
						
						
 
							Tony's Name in Lights
 
Tony Stirk was interviewed as an expert on computer security by the financial publication Advisor Today for an article entitled "Feeling Insecure?" in its September 2007 issue.
 
Tony has also just been asked to serve on the Advisory Board of the ASCII Group, an international organization of 2,000 dealers (http://www.ascii.com).
 
							In case you were wondering, these 
							activities are normal for Tony.  He wrote a column 
							for another financial computing magazine many years 
							ago.  Horse Sense has been republished, with 
							permission, by the Institute of Supply Management.  
							He also has been interviewed and quoted or used for 
							background by such publications as Computer Reseller 
							News, Government Computer News, E Week, and USA 
							Today.  He has also consulted with top policy makers 
							in government and even been invited to the White 
							House's Rose Garden to represent small businesses.  
							He recently gave a speech at ITT Technical Institute 
							on why non-technical skills are more important in 
							getting and keeping a technical job than technical 
							certifications and presented similar views as an 
							invited member of an industry advisory committee.  
							He reviewed a book entitled "Successful Proposal 
							Strategies for Small Business: Winning Government, 
							Private Sector, and International Contracts" for the 
National Association of Contract Management.  Tony is an award-winning speaker and has given presentations on federal government contracting and other topics to a group of computer value-added resellers called TechSelect.  Tony and Iron Horse
							are former members of the Fairfax Chamber of 
							Commerce and current members of the Greater 
							Springfield Chamber of Commerce.  Tony is quite 
							active in the Springfield Chamber, their Marketing 
							Committee, and their Gateway to Government 
							Committee.  He and Iron Horse are also members of 
							ASCII, TechSelect, NASBA, CompTIA, NFIB, and the US 
							Chamber of Commerce.  Tony belongs to a business 
							networking group called The Networking Community.  
Tony beta tests a unified secure service appliance called the IPAD, which offers firewall, SMTP/POP mail, FTP, DNS, web services, spam blocking, and other functions, and he has been a speaker at its IPADCON conferences.  Tony also holds and is working on the
							many dealer and personal certifications needed to 
							properly represent manufacturers and support his 
							clients.  He attends many conferences each year for 
							business and technical training.  Tony works 
							directly with customers as a salesman, consultant, 
and technician.  Tony also has a house, a wife, a two-year-old boy, and two dogs.  He has played on
							the same soccer team for 20 years.  Now you know why 
							he might sound tired sometimes!
						
						
 
							Why is My Internet Connection 
							Slow?
 
							Not all of the answers are 
							obvious.
 
							The obvious answer is that maybe 
							the connection really is slow, intentionally or 
							unintentionally.  Dialup access is slow, but 
							broadband or dedicated (non-dialup) access isn’t 
always blazingly fast.  You may have contracted for a low speed.  We frequently find that people aren't getting the speed they paid for.  Speed-testing web sites can help you determine if you are getting what you paid for.  www.speedtest.net, www.earthlink.com/speedtest, and www.dslreports.com all have nice speed tests.  Be sure to use a test location close to you to see what the highest speed is.  Do not expect the test to completely fill your line.  If you are within 20% or so of what your Internet Service Provider (ISP) says you should have, take their word for it.  Some speed tests only work in certain browsers and others require Java or helper programs.  The most reliable measurement is to time the download or upload of a very large file from the ISP's local servers a couple of times; these speed-testing sites only simulate downloads and uploads.
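For the technically inclined, here is a minimal Python sketch of that "time a big download" measurement.  The URL is only a placeholder; point it at a large file hosted close to you, ideally on your ISP's own servers, and run it a couple of times at different hours.

    import time
    import urllib.request

    TEST_URL = "http://example.com/largefile.zip"    # placeholder; use a real large file near you

    start = time.perf_counter()
    with urllib.request.urlopen(TEST_URL, timeout=60) as response:
        data = response.read()                       # pull down the whole file
    elapsed = time.perf_counter() - start

    megabits = len(data) * 8 / 1_000_000             # bytes -> megabits
    print(f"{len(data):,} bytes in {elapsed:.1f} s = {megabits / elapsed:.1f} Mbps")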
 
							Most home and some commercial 
							broadband connections allow much faster downloads 
							than uploads.  These asymmetric links work fine 
							because most people want to bring information in 
							rather than push it out.  When surfing the web and 
							downloading files, the instructions that move 
							upstream are usually much smaller than the replies 
							that come downstream.  Upload bandwidth doesn’t 
							limit your download speed very much.  So, why don't 
							you tend to get the full download bandwidth out of 
							your connection?
 
							Now for the non-obvious reasons:
 
							(1) Latency causes slower 
							than expected connections.  Each TCP/IP 
							(Transmission Control Protocol/Internet 
							Protocol--the dominant Internet "language") 
							connection is set up by building a circuit.  The 
							requestor asks for a connection, the server responds 
							with connection information, and the requestor 
							agrees to it.  After this three way handshake, the 
							information is then sent by the server using this 
							"circuit."  Many downloads actually involve building 
							a number of circuits (like one circuit for each 
							graphic in an HTML page).  Traversing long distances 
							back and forth to build the circuit limits how fast 
							you can download. And, the more circuits you have to 
							build to download the page, the longer it will 
							take.  If you talk to a server that responds to your 
							requests slowly, communicates ineffectively (lots of 
							small conversations), or is a long way away, then 
							your latency can be high and you may not see your 
							web page for a while.  Many content providers try to 
							alleviate this issue somewhat by placing a server 
							that can handle your request for information closer 
							to you on the Internet.
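If you want to see your own latency, a quick sketch like the following (Python, with a placeholder host) times just the connection setup, before any data moves:

    import socket
    import time

    host, port = "www.example.com", 80               # placeholder host

    start = time.perf_counter()
    sock = socket.create_connection((host, port), timeout=10)
    setup_ms = (time.perf_counter() - start) * 1000
    sock.close()

    print(f"Connection setup to {host}: about {setup_ms:.0f} ms")
    print(f"A page built from 20 such circuits spends roughly {20 * setup_ms / 1000:.1f} s just setting up")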
 
							Think of getting information as 
							delivering water through a hose.  In the TCP/IP 
							world, I roll a ball down the pipe (circuit being 
							set up) towards you at the spigot (server).  You 
							mark the ball to say you are ready and send it 
							back.  At the other end of the pipe, I mark the ball 
							and say, “OK, turn the water on.”  You connect the 
							hose (connection) and turn on the water.  The 
							effective latency includes not only the time it 
							takes the first drop of water to transit the pipe, 
							but how long it takes to set it up to get the water 
							flowing (building the circuit).  Bandwidth is how 
							much water is delivered out the end.  It can be a 
							slow trickle or a gusher.
 
							Your observable speed is a 
							combination of both latency and bandwidth.  A great 
							connection is one with low latency and high 
bandwidth.  This is typically true of a local area network (LAN) connection.  Wide area network (WAN) and Internet connections have higher latencies and lower bandwidths.
 
							Latency is such a problem in the 
							computing world that information that might be 
							repetitively requested is often stored closer to 
							where it might be requested in an area called a 
							cache.  Cached information can be delivered much 
							more quickly than having to travel all the way to 
							the source to get the information.  Your Central 
							Processing Unit (CPU) has cache in it so it doesn't 
							have to go out to slower main memory to retrieve 
							information.  Likewise, when you are surfing, your 
							browser will keep a cache of recently requested 
							information, just in case you ask for the same 
							information again.  If you do, it delivers the 
							information immediately, rather than building 
							circuits and retrieving it from the Internet.  Even 
							Internet Service Providers (ISPs) use caching.  They 
							will often use devices to cache the requests of 
							thousands of customers so they can get this 
							information from the local ISP's servers and not 
							have to go farther out on the Internet to get it.
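The idea behind a cache fits in a few lines of Python; the fetch function below is just a stand-in for the real, slow trip across the Internet:

    cache = {}

    def fetch_from_internet(url):
        # stand-in for the slow, long-distance retrieval
        return f"contents of {url}"

    def get(url):
        if url in cache:                 # cache hit: answer immediately
            return cache[url]
        data = fetch_from_internet(url)  # cache miss: go all the way to the source
        cache[url] = data                # remember it for next time
        return data

    get("http://www.example.com/")       # slow: goes to the source
    get("http://www.example.com/")       # fast: served from the local cache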
						
 
							(2) The lowest speed wins, 
							like the speed of the server and/or its link.  A 
							server may not be able to deliver formatted data at 
							the top rate of your connection.  It may be at the 
							end of a very slow link.  You will always be limited 
							by the slowest factor in your connection.  In our 
							water analogy, if you link together different 
							diameter hoses, the maximum flow will be determined 
							by the smallest hose, the ability of the spigot to 
deliver the water, and the receiver's ability to receive the water (do I have my thumb over the outlet?).
						
 
							(3) Contention for the 
							same resource causes slowdowns.  The Internet, your 
							connection to your local area network, and your 
							Digital Subscriber Line (DSL) connection are shared 
							by (potentially) lots of users. Oversubscription is 
							a necessary evil.  The idea is that not everyone 
							will need information at the same time, so rather 
							than build 8 dedicated connections to a server that 
							are 100 Mbps (Megabits per second) each, you put in 
							only one connection to a switch that delivers 100 
							Mbps to 8 PCs on your network.  Each PC can talk to 
							the server at full wire speed.  Most of the time, 
							however, a particular PC won’t need anything from 
							the server and someone else can talk to it at wire 
							speed.  In the (hopefully rare) circumstance that 
							two PCs need information at the same time, they can 
							both talk at half the line rate.  Oversubscription 
							allows for reasonable performance at low cost.  It 
							works best when conversations are short and rarely 
							overlap.  In this example, the oversubscription rate 
							is 8 to 1 and many phone systems are built with this 
							oversubscription rate in mind, so you can see that 
							even critical networking systems use 
oversubscription.  One defunct DSL ISP used to oversubscribe its bandwidth by a factor of 200 or more.  Although the line rates to each individual user were good, the chances of running into many others wanting to use the uplink at the same time were very high, so the effective line rate was many times less than what was quoted.  Performance was terrible; in fact, using an analog modem (dial up) was much faster.  Companies often don't publish their oversubscription rates but will tell you if you ask.  Consumer-grade connections tend to have much higher oversubscription rates than commercial-grade connections.  For example, a typical consumer oversubscription rate might be 4:1 to 20:1, whereas a commercial rate is usually less than 8:1, and there is often no oversubscription on the link between the customer and the ISP's network.  Once the connection is made into the ISP's network, bandwidth will again be shared, but commercial links tend to be less oversubscribed.  Commercial links also tend to cost more, be more reliable, and offer additional features and assurances.
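To put rough numbers on it, here is the 8-to-1 example above, plus the worst case for a line oversubscribed 200-to-1 (illustrative figures only):

    link_mbps = 100
    for active in (1, 2, 4, 8):
        print(f"{active} of 8 PCs active: about {link_mbps / active:.1f} Mbps each")

    # the defunct DSL ISP: a 1.5 Mbps line oversubscribed roughly 200-to-1
    print(f"worst case: {1.5 * 1000 / 200:.1f} kbps each if every subscriber is busy at once")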
 
							You also have to compete with 
							others for the use of the server on the other end of 
							your connection and all the links to it as well.  
							You can expect that a server with no other users 
							will be able to deliver its web pages to you at top 
							speed.  However, one with hundreds of users may be 
							much slower.  Many router and server processes work 
							on a first in/first out basis.  Requests are placed 
							in a queue and serviced in order.
 
							It can be difficult to determine 
							how well you are using your bandwidth and whether 
							there is contention on your network between users or 
							between applications.  One of the most difficult 
							networking tasks is to ensure that when a user is 
							performing a certain task, they can expect a 
							predictable result.  Most network connections have 
no quality of service whatsoever.  This isn't really an issue if you can throw enough bandwidth at the problem.  If there is enough bandwidth, your users and applications will always get what they need.  In effect, the pipes are so big, and answers come back so quickly, that there is no oversubscription issue to worry about. This isn't true on slower
							Internet and wide area networking links.  A typical 
							LAN speed is 100 to 1000 Mbps.  A typical WAN speed 
							is 1 Mbps. It is easy for a large download to soak 
							up all of your bandwidth and make surfing the web or 
							a voice over IP call all but impossible, even though 
							you probably don’t care if the download takes a 
little longer. Quality of service classifies users and data so that more important communications take priority, by limiting connection speeds and prioritizing certain users or applications.  A good example of a device that can monitor and enforce quality of service is the Cymphonix Network Composer.  With such a device, you could easily
							see that your bandwidth isn’t as big as you thought 
							it was because you have multiple people listening to 
							Internet radio, for example.
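A rough worked example (made-up but typical numbers) shows why one big download on a slow WAN link can crowd out a voice call without quality of service:

    file_bits = 100 * 8 * 1_000_000                  # a 100 MB file, in bits
    lan_mbps, wan_mbps, voip_mbps = 100, 1, 0.09     # LAN, WAN link, and a VoIP call

    print(f"LAN transfer: {file_bits / (lan_mbps * 1_000_000):.0f} seconds")
    print(f"WAN transfer: {file_bits / (wan_mbps * 1_000_000) / 60:.0f} minutes")
    print(f"The VoIP call only needs {voip_mbps / wan_mbps:.0%} of the WAN link,")
    print("but without quality of service the download can squeeze it out anyway.")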
						
 
							(4) The quality of the 
connection can vary.  If packets (TCP/IP data bundles) get dropped, you need to figure out which
							ones are missing.  The missing data must then be 
							requested again and be retransmitted.  Think of this 
							as a cell phone call where the message is garbled or 
							blanks out and you have to say “Can you repeat 
							that?”  This can take a long time to do.  Any link 
							which drops packets can cause your download speed to 
							nose dive.  In fact, a 1% packet loss can drop your 
							observed speed to less than half of what you expect.
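One widely quoted rule of thumb (the Mathis formula) estimates the ceiling a lossy link puts on TCP throughput.  The numbers below are illustrative, not a measurement of any particular line:

    import math

    mss_bytes = 1460      # typical data carried per packet
    rtt_s = 0.05          # 50 ms round trip
    loss = 0.01           # 1% packet loss

    ceiling_bps = (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss))
    print(f"Estimated ceiling: {ceiling_bps / 1_000_000:.1f} Mbps")
    # roughly 2.8 Mbps -- well under half of, say, a 6 Mbps connection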
						
 
							(5) Overhead costs you a 
							lot.  When you are sending the data back and forth, 
							you have to assemble it into data packets that the 
							other end will understand.  While each packet has a 
							payload consisting of a portion of the data you 
requested, it also contains things like which piece of the data is being sent, the
							size and nature of the payload, and a mathematical 
							validation sequence (checksum) to show that the 
							packet didn't change during transfer.  Each packet 
							must be deconstructed and the checksum validated.  
							The construction/deconstruction process, validation, 
							and simply sending lots of extra information other 
							than just the payload lowers your effective 
							throughput, sometimes by quite a bit.  Instead of 
							the water analogy above, think about using tanker 
							trucks of water that are checked out at the spigot 
							and delivery ends of the hose to make sure the water 
							is pure.  All communications protocols have some 
							amount of overhead.  And, it is worse than it 
							sounds.  TCP/IP typically rides on top of Ethernet 
							or ATM links which use their own communication 
							protocols.  ATM, which is widely used between ISPs, 
							has a very high protocol overhead.
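Some back-of-the-envelope arithmetic (typical header sizes; exact figures vary with configuration) shows how much of each packet is wrapper rather than payload:

    tcp_payload = 1460                    # bytes of your data in a full packet
    ip_tcp_headers = 20 + 20              # IPv4 header + TCP header
    ethernet_extra = 14 + 4 + 8 + 12      # frame header, checksum, preamble, inter-frame gap

    on_the_wire = tcp_payload + ip_tcp_headers + ethernet_extra
    print(f"Ethernet efficiency: {tcp_payload / on_the_wire:.1%}")   # about 95%

    # ATM moves everything in 53-byte cells holding only 48 bytes of payload,
    # so it gives up roughly 9% before any higher-level headers are counted.
    print(f"ATM cell efficiency: {48 / 53:.1%}")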
						
 
							(6) Flow control in TCP/IP 
							can limit your throughput.  Small files are a real 
							bugaboo.  They really keep you from reaching your 
							potential. First, latency kills you as you usually 
							have to set up at least one circuit for each file 
							being transferred.  Second, TCP/IP is pretty 
							conservative.  It will start out sending a single 
							segment (a small amount of data) and then will keep 
							increasing by one segment each time the other end 
							successfully acknowledges receipt.  Of course, the 
							maximum rate is the smaller of what the sender and 
							receiver will allow.  If data doesn't get 
							acknowledged (you drop a packet), the number of 
							segments drops by HALF and starts counting up 
							again.  Each transmission needs to be acknowledged 
							by the other end.  If the connection is reliable, 
							larger amounts of data can be sent before the 
							receiving end acknowledges that all of the data has 
							been received. If the connection is less reliable, 
							then TCP/IP will require more frequent 
							acknowledgements and higher overhead, hurting your 
							throughput.  Typically, local area connections are 
							highly reliable and allow for large numbers of 
							segments to be sent safely.  The Internet is much 
							less reliable, and its acknowledgements take longer 
							periods of time to move back and forth.
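Here is a deliberately simplified sketch of that ramp-up behavior.  Real TCP is more involved (slow start, congestion avoidance, fast retransmit), but the shape is the same: the window climbs while data is acknowledged and is cut in half when a packet is lost.

    def simulate(rounds, loss_rounds):
        window = 1                               # segments in flight
        for r in range(1, rounds + 1):
            if r in loss_rounds:
                window = max(1, window // 2)     # a drop: cut the window in half
            else:
                window += 1                      # acknowledged: grow by one segment
            print(f"round {r:2d}: window = {window} segments")

    simulate(rounds=12, loss_rounds={7})         # one packet lost in round 7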
						
 
							(7) Misconfiguration or 
							sub-optimal configurations can kill your 
							throughput.  You can often tune a server, 
							workstation, firewall, switch, or router for better 
							performance.  Not long ago, a customer was having 
							problems with the speed of their LAN backups.  I 
							looked at their configurations and found that they 
							had manually set their communication speed at a 100 
							Mbps rate.  When I advised them to change their 
							configuration to automatically negotiate the highest 
							possible speed and to connect the server to higher 
bandwidth ports, they got 1000 Mbps throughput, a tenfold increase, without changing any hardware or
							software on their network!
 
							Many companies have errors in 
							their public DNS, servers that perform poorly, or a 
							lack of redundancy in their servers.  This can mean 
that when you look up a name like www.ih-online.com you get wrong answers or no
							answers at all.  Your computer will keep trying the 
							servers it can find in an effort to get through.  If 
							it doesn’t get an answer in time the first time, it 
							must try another server.  Sometimes you can’t get a 
							response in time and the connection will fail 
							entirely.  Upwards of 90% of the domains that I have 
							inspected have one or more DNS errors.  I’d say 75% 
of those are serious enough to cause e-mail delivery
							failures or other problems. 
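A quick Python check (not a full DNS audit) shows whether a name resolves at all and how long the lookup takes:

    import socket
    import time

    name = "www.ih-online.com"

    start = time.perf_counter()
    try:
        addresses = socket.gethostbyname_ex(name)[2]
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{name} -> {addresses} in {elapsed_ms:.0f} ms")
    except socket.gaierror as err:
        print(f"Lookup for {name} failed: {err}")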
 
							There are also more subtle 
							problems which I've seen frequently.  Bad cabling 
can cause a disconnection, but it can also cause dropped packets. I've seen 10 Mbps connections run
							faster than 100 Mbps connections simply because the 
							100 Mbps packets frequently didn't make it to the 
							other end intact.  Because they had to be resent, 
							the speed was actually slower than the 10 Mbps 
							connections!  We commonly see this problem when 
							clients make their own cables, use cables that are 
							too long, run cables near power sources, etcetera.  
							We don't advise anyone to make their own cables or 
							to cable inside walls or ceilings without the proper 
							training and tools.  Always buy your patch cables 
							pre-made and pre-tested.
 
							A malfunctioning network card or 
							switch can degrade a large portion of a network.  
							I've often seen switches designed for small office 
							environments catastrophically fail, reset 
							themselves, or drop connections because they can't 
							handle the work of a larger network.  We regularly 
							see routers and firewalls drop connections entirely 
							or drop back dramatically in speed because they 
							don't have the capacity to handle a larger 
							workload.  A firewall that you might use in your 
							home is NOT appropriate for most business 
							environments.
 
							Some problems occur simply 
							because of the size or nature of the connections 
							themselves.  I've seen large Ethernet networks that 
							have trouble doing normal work because the 
							housekeeping functions of their protocols take up so 
							much bandwidth.  We saw one network where the 
							workstations were performing at 70% or less of their 
							capability and were only able to use 70% of their 
							bandwidth because of poor network design.
 
It isn't unusual for us to see very powerful servers and workstations hobbled by unnecessary or old software, unnecessary processes, unnecessary or poorly performing drivers, unnecessary files, fragmented files, and other configuration issues.  Fixing these issues often yields a 400% speed improvement and better reliability.
							These sub-optimal conditions can develop slowly over 
							time.  Often, we are brought in to replace a piece 
							of equipment that only needs to be "de-gunked."
						
 
							(8) Poor connection and 
							communication methods drag everything down. Some 
							communication protocols, like HTTP 1.0 (used for web 
							requests), make poor use of a TCP/IP connection 
							because HTTP 1.0 tends to send out many small 
							requests for information.  If you have a server and 
							a browser that understand later versions of HTTP, 
							then you can get much faster page loading because 
							the requests will be fewer in number and larger. 
							Newer browsers make use of these newer protocols and 
can be much faster than older browsers.  Some people claim that Firefox is faster than Internet Explorer, and others prefer yet another browser.  We
							think you will find that each browser has its 
							trade-offs.  We like Firefox for its speed and 
							security, but use Internet Explorer at many sites 
							because those sites were developed with only that 
browser in mind and Firefox won't work.  We have a saying here that applies to badly written software: no amount of good network engineering can "fix" bad programming.  On the other hand, well
							written software will run better on the same 
							resources.
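A small Python sketch makes the point about connection reuse.  The host and path are placeholders, and the server has to allow keep-alive, but on most sites the reused connection wins easily:

    import http.client
    import time

    host, path, count = "www.example.com", "/", 5    # placeholders

    start = time.perf_counter()
    for _ in range(count):                           # a fresh connection per request (HTTP/1.0 style)
        conn = http.client.HTTPConnection(host, 80, timeout=10)
        conn.request("GET", path)
        conn.getresponse().read()
        conn.close()
    print(f"New connection each time: {time.perf_counter() - start:.2f} s")

    start = time.perf_counter()
    conn = http.client.HTTPConnection(host, 80, timeout=10)
    for _ in range(count):                           # one reused keep-alive connection
        conn.request("GET", path)
        conn.getresponse().read()
    conn.close()
    print(f"Single reused connection: {time.perf_counter() - start:.2f} s")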
 
Software now exists that can automatically compress your data, cache redundant requests, optimize TCP/IP communications, and even remove redundant data from a file transfer.  This
							can make a slow WAN connection operate as if it were 
							a connection 5-150 times as large.  The size of the 
							pipe hasn’t changed, but it can “magically” deliver 
							more water when needed, at least between two sites 
							that use the software.
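The compression half of that trick is easy to demonstrate; repetitive data like the web traffic below is a best case, and already-compressed files won't shrink at all:

    import zlib

    sample = ("GET /index.html HTTP/1.1\r\nHost: www.example.com\r\n\r\n" * 200).encode()
    compressed = zlib.compress(sample)

    print(f"{len(sample):,} bytes shrink to {len(compressed):,} bytes "
          f"(about {len(sample) / len(compressed):.0f}x smaller)")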
						
 
							(9)  People, networks, and 
							web sites often make poor use of their connections.  
							When you are delivering pictures over the web for 
display on a monitor, high-resolution images are of little use because a desktop monitor can't display that much detail.  Web sites that use lower resolutions and higher levels of compression will speed delivery to the recipient, who will see the same picture in either case.  Yet many sites don't optimize their graphics.  Similarly, many people use e-mail to transfer large files.  E-mail is a particularly inefficient way to do this because the encoding method e-mail uses actually makes these files much larger.  Sending large files to multiple people is even worse, as one copy has to be sent to each e-mail address.  Posting a single copy of the file at a
							location everyone can download from is very 
							efficient by comparison.
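The arithmetic behind the e-mail complaint: attachments are typically base64-encoded, which inflates them by about a third before any per-recipient copies are made (real messages add a bit more for line breaks and headers):

    import base64

    attachment = bytes(3_000_000)                    # stand-in for a 3 MB file
    encoded = base64.b64encode(attachment)

    print(f"{len(attachment):,} bytes become {len(encoded):,} bytes "
          f"({len(encoded) / len(attachment):.0%} of the original)")
    print(f"Sent to 10 recipients: {len(encoded) * 10 / 1_000_000:.0f} MB on the wire")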
 
							Frequently, organizations have 
							remote sites connect via an encrypted virtual 
							private network link to a central site and from 
							there go out to the Internet.  Those on the remote 
							site have much higher latency connections because of 
							the extra hops they need to take through routers to 
							reach the Internet.  The encryption process also 
							adds greatly to latency.  Encryption can also lower 
							the effective bandwidth because turning on extra 
							security options like encryption means less central 
							processing unit time will be available to route 
							packets and packets may back up or even be dropped 
							as the load increases.  In addition, remote sites 
							are often using large amounts of WAN bandwidth to 
							contact resources on the internal corporate 
							network.  Low throughputs, high latencies, and 
							dropped packets are very common in many remote 
							office scenarios.  Proper design and provisioning 
							can help minimize these issues.
 
							Caching helps, but not as much as 
							it used to.  Your browser should maintain a small 
cache.  16 MB is usually enough, as very large caches can slow web browsing.  By default, Internet
							Explorer tends to use caches that are much too 
							large, in our opinion.  Your business may also have 
							a caching server to cache web pages and common FTP 
							downloads, so multiple people needing the same 
							information can get it without having to go out to 
							the Internet.  Finally, your ISP may cache web pages 
							and FTP downloads on servers within its network that 
							many of its customers might want so they don't have 
							to go to the larger Internet to get it. Intelligent 
							caching lowers response time and the amount of 
							bandwidth you need to the Internet.  However, more 
							and more Internet content is becoming dynamic.  Web 
							pages are built as needed, defeating the ability of 
							caching software to cache common elements.
						
 
							(10) Inappropriate 
expectations.  Buying a faster computer doesn't make your connection faster.
							The slowest link wins.  Nothing is instant.  Relax.  
							Have a cappuccino.
 
							You will never fully utilize your 
							inbound or outbound link to the Internet.  Don't be 
							too surprised when many operations you perform on 
							the Internet don't seem to be all that different 
							when you get a higher speed connection.  One of the 
							reasons above is probably the culprit.
 
							If you suspect you are not 
							getting all that you could out of your network, WAN 
							links, or Internet connections, call Iron Horse.  We 
							can help you get what you have paid for!