
VIDEO: Shocking Secrets on the Deep Web!

You may not know this, but an ordinary search engine only surfaces around 5-10% of the internet. The rest is called the “Deep Web”, and it is full of dark information and terrifying secrets. This video presents some of them.

According to Wikipedia, the deep web, invisible web, or hidden web consists of the parts of the World Wide Web whose contents are not indexed by standard search engines for any reason. Much of this content sits behind HTML forms, such as search boxes and login pages. It is estimated that the deep web makes up 96% of the whole internet. Computer scientist Michael K. Bergman is credited with coining the term “deep web” in 2001 as a search-indexing term.
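To give a rough sense of why form-gated content goes unindexed, here is a minimal, illustrative Python sketch (not taken from the video or the article) of a link-following crawler. It only discovers pages reachable through ordinary hyperlinks and never fills in or submits forms, so database results served only in response to a form query are never fetched. The start URL is just a placeholder.

# Illustrative sketch: a crawler that follows <a href> links only.
# Because it never submits <form> queries, anything served only in
# response to a form submission stays invisible to it -- one common
# reason deep web content is not indexed.
from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.parse import urljoin


class LinkCollector(HTMLParser):
    """Collects hyperlink targets; <form> elements are ignored entirely."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)
        # No handling of "form": the crawler cannot guess query
        # parameters, so form-gated pages are simply never requested.


def crawl(start_url, max_pages=10):
    """Breadth-first crawl that follows plain hyperlinks only."""
    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue
        parser = LinkCollector()
        parser.feed(html)
        queue.extend(urljoin(url, link) for link in parser.links)
    return seen


if __name__ == "__main__":
    # Placeholder URL, used here only for illustration.
    print(crawl("https://example.com/"))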

The first conflation of the terms “deep web” and “dark web” came about in 2009, when deep web search terminology was discussed alongside the illegal activities taking place on the Freenet darknet.


Since then, following its use in media reporting on the Silk Road, many people and media outlets have taken to using “deep web” synonymously with the dark web or darknet, a comparison many reject as inaccurate and which has consequently become an ongoing source of confusion. Wired reporters Kim Zetter and Andy Greenberg recommend the terms be used in distinct fashions: while the deep web refers to any site that cannot be accessed through a traditional search engine, the dark web is a small portion of the deep web that has been intentionally hidden and is inaccessible through standard browsers and methods.

In 2001, Michael K. Bergman compared searching on the Internet to dragging a net across the surface of the ocean: a great deal may be caught in the net, but there is a wealth of information that lies deep and is therefore missed. Most of the web’s information is buried far down on sites, and standard search engines do not find it; traditional search engines cannot see or retrieve content in the deep web. The portion of the web that is indexed by standard search engines is known as the surface web. As of 2001, the deep web was several orders of magnitude larger than the surface web. Denis Shestakov used the analogy of an iceberg to represent the division between the surface web and the deep web:

“It is impossible to measure, and hard to put estimates on, the size of the deep web, because the majority of the information is hidden or locked inside databases. Early estimates suggested that the deep web is 400 to 550 times larger than the surface web. However, since more information and sites are always being added, it can be assumed that the deep web is growing exponentially at a rate that cannot be quantified.

Estimates based on extrapolations from a study done at the University of California, Berkeley, in 2001 speculate that the deep web consists of about 7.5 petabytes. More accurate estimates are available for the number of resources in the deep web: research by He et al. detected around 300,000 deep web sites in the entire web in 2004, and, according to Shestakov, around 14,000 deep web sites existed in the Russian part of the Web in 2006.”


Joanna Grey
