
VIDEO: Scariest Internet Myths

We’ve all heard the urban legends about creatures like the Hook, Slender Man or Bloody Mary. Most people believe they are nothing but stories and myths. Whatever the case may be, they are pretty creepy.


What’s even creepier is the thought of such myths circulating around the web, as we are ALWAYS connected!


You might not know this, but you can only access around 5-10% of the internet with your average browser. The rest is called the “Deep Web” and it is full of dark information and terrifying secrets.

According to Wikipedia, the deep web, invisible web, or hidden web are parts of the World Wide Web whose contents are not indexed by standard search engines for any reason.

The content is hidden behind HTML forms. It is estimated that the deep web makes up 96% of the whole internet. Computer scientist Michael K. Bergman is credited with coining the term deep web in 2001 as a search indexing term.
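To make “hidden behind HTML forms” concrete, here is a minimal, hypothetical Python sketch of the kind of link-following crawler a standard search engine builds its index from: it downloads pages and follows a-href links, but it never fills in or submits a form, so anything that can only be reached through a form (a database query, a search box, a login) is never fetched and never indexed. The start URL and parsing details are illustrative assumptions, not how any particular search engine actually works.

```python
# Sketch of a link-only crawler (illustrative; the start URL is a placeholder).
# It follows <a href="..."> links but never submits <form> elements, so content
# reachable only through a form query stays invisible to it: the "deep web".
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []      # hyperlinks the crawler will follow
        self.forms = 0       # forms it sees but cannot submit

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "form":
            self.forms += 1  # noted, but never filled in or submitted

def crawl(start_url, max_pages=10):
    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        parser = LinkExtractor()
        parser.feed(html)
        print(f"{url}: {len(parser.links)} links followed, {parser.forms} forms skipped")
        queue.extend(urljoin(url, link) for link in parser.links)

if __name__ == "__main__":
    crawl("https://example.com/")  # placeholder starting point
```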

Since its use in media reporting on the Silk Road, many people and media outlets have taken to using “deep web” synonymously with the dark web or darknet, a comparison many reject as inaccurate and which consequently remains an ongoing source of confusion.

Wired reporters Kim Zetter and Andy Greenberg recommend the terms be used in distinct fashions. While the deep web refers to any site that cannot be accessed through a traditional search engine, the dark web is a small portion of the deep web that has been intentionally hidden and is inaccessible through standard browsers and methods.
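One concrete reason the dark web is “inaccessible through standard browsers and methods” is that Tor hidden services use .onion addresses, which do not exist in the public DNS that ordinary browsers and HTTP clients rely on. The short Python sketch below illustrates this; the .onion hostname in it is entirely made up for the example.

```python
# Sketch: a made-up .onion hostname cannot be resolved through ordinary DNS,
# which is one reason dark-web sites are unreachable from a standard browser.
import socket

hidden_service = "examplexyz1234567890abcd.onion"  # hypothetical address, for illustration only

try:
    socket.gethostbyname(hidden_service)
except socket.gaierror as err:
    # Public DNS has no entries for .onion names; reaching them requires the
    # Tor network (e.g., the Tor Browser or a SOCKS proxy from a local Tor client).
    print(f"Cannot resolve {hidden_service} via normal DNS: {err}")
```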

In 2001, Michael K. Bergman said that searching on the Internet can be compared to dragging a net across the surface of the ocean: a great deal may be caught in the net, but there is a wealth of information that is deep and therefore missed.

Most of the web’s information is buried far down on sites, and standard search engines do not find it. Traditional search engines cannot see or retrieve content in the deep web. The portion of the web that is indexed by standard search engines is known as the surface web. As of 2001, the deep web was several orders of magnitude larger than the surface web.


“It is impossible to measure, and harsh to put estimates on, the size of the deep web because the majority of the information is hidden or locked inside databases. Early estimates suggested that the deep web is 400 to 550 times larger than the surface web.

However, since more information and sites are always being added, it can be assumed that the deep web is growing exponentially at a rate that cannot be quantified. Estimates based on extrapolations from a study done at University of California, Berkeley in 2001 speculate that the deep web consists of about 7.5 petabytes.

More accurate estimates are available for the number of resources in the deep web: research of He et al. detected around 300,000 deep web sites in the entire web in 2004, and, according to Shestakov, around 14,000 deep web sites existed in the Russian part of the Web in 2006.”
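The figures quoted above can be cross-checked with simple arithmetic: if the deep web held roughly 7.5 petabytes around 2001 and was 400 to 550 times larger than the surface web, the surface web of that era works out to somewhere around 14 to 19 terabytes. A short Python sketch of that back-of-the-envelope calculation:

```python
# Quick sanity check of the 2001 estimates quoted above.
deep_web_petabytes = 7.5                           # ~7.5 PB estimated for the deep web
deep_web_terabytes = deep_web_petabytes * 1000     # 1 PB = 1,000 TB (decimal units)

for ratio in (400, 550):                           # deep web said to be 400-550x the surface web
    surface_tb = deep_web_terabytes / ratio
    print(f"If the deep web is {ratio}x larger, the surface web is about {surface_tb:.1f} TB")
# Prints roughly 18.8 TB and 13.6 TB: a surface web measured in tens of terabytes.
```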

Joanna Grey
