Chapter 3
In this section, one of the things I learned about was web robots. I never even knew these things existed. When searching the web on Chrome or Firefox, I never knew how the results were actually found. It is cool to know the name of the utility working behind the scenes. It is strange to learn that the spider deletes its own data on old sites, since site managers do not usually send a dead-link removal request to a search engine company.
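To picture what a spider does, here is a minimal sketch of one step of its job: parsing a page it has downloaded and collecting the links it would visit next. This is my own illustration, not any real search engine's code; a real robot would also fetch pages over HTTP, respect robots.txt, and revisit pages to drop dead links.

```python
from html.parser import HTMLParser

# Hypothetical sketch of one crawler step: gather links from a fetched page.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Anchor tags hold the URLs the spider would queue up next.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Example page the spider might have downloaded (made-up content).
page = '<a href="https://example.com/about">About</a> <a href="/news">News</a>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # the links the spider would crawl next
```

Running this prints the two links found on the page, which the spider would add to its queue of pages to visit.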
I always knew that different search sites give different results, but I didn't know why. I thought each company just paid for the ability to find results; it turns out they sometimes build their own crawlers and apply their own ranking rules. It is strange that websites with more links, and higher-quality links at that, get priority. Even so, the results seem relatively similar for most searches.
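The idea that pages with more inbound links rank higher can be sketched with a toy example. This is an assumption-laden simplification (real engines use far more signals, like PageRank's weighting of who links to you), but it shows the basic "more links means higher priority" rule the chapter describes.

```python
# Toy link graph (made-up sites): each key lists the pages it links OUT to.
links = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": [],
}

# Score each page by counting its inbound links.
score = {page: 0 for page in links}
for outgoing in links.values():
    for target in outgoing:
        score[target] += 1

# Sort pages so the most-linked-to page comes first.
ranked = sorted(score, key=score.get, reverse=True)
print(ranked)  # c.com ranks first: two pages link to it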
I didn't know about restricting searches to certain sites. Being able to narrow results to a particular region is a cool trick. If I want to see certain events in Germany, I can easily do it. I would always just type the location into my query, but now I know I can use location:de to find it. This will come in handy for everyday searches or travel planning.
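Operators like this are just extra text appended to the query string. As a hedged sketch (the URL format and operator here are assumptions for illustration; the chapter mentions location:de, and engines also document operators like site:de for .de sites), here is how such a query could be composed:

```python
from urllib.parse import urlencode

def build_search_url(terms, operator=""):
    # The operator is simply appended to the search terms.
    query = f"{terms} {operator}".strip()
    return "https://www.google.com/search?" + urlencode({"q": query})

# Restrict a search about events in Germany using the chapter's operator.
url = build_search_url("Oktoberfest events", "location:de")
print(url)
```

The operator travels inside the `q` parameter like any other word, which is why typing it straight into the search box works too.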