George Harwood and Evangeline Walker, students at the University of Leicester in the UK, used the English version of Wikipedia as an example of a website containing vast amounts of information.
They randomly selected 10 articles and estimated that they would have to print 15 pages for each one.
The researchers then multiplied this per-article figure by the number of articles on Wikipedia, projected at 4,723,991, yielding about 70,859,865 printed pages, Tech Times reported.
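As a rough sketch of that step in Python (the variable names are illustrative; the 15-page average and the article count are the figures reported above):

```python
# Back-of-the-envelope estimate of printed pages for English Wikipedia.
PAGES_PER_ARTICLE = 15          # average from their sample of 10 random articles
WIKIPEDIA_ARTICLES = 4_723_991  # projected English Wikipedia article count

wikipedia_pages = PAGES_PER_ARTICLE * WIKIPEDIA_ARTICLES
print(f"{wikipedia_pages:,}")   # 70,859,865
```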
They then extrapolated that figure to the total number of webpages on the internet, roughly 4.5 billion, and adjusted their final estimate to account for the variable size of different websites.
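The raw extrapolation can be sketched the same way; note that the report does not detail the size adjustment, so the 136-billion-sheet total quoted later is their published result rather than a value derived here:

```python
TOTAL_WEBPAGES = 4_500_000_000  # rough number of webpages on the internet
PAGES_PER_WEBPAGE = 15          # assumes the Wikipedia average carries over

naive_sheets = TOTAL_WEBPAGES * PAGES_PER_WEBPAGE
print(f"{naive_sheets:,}")      # 67,500,000,000 sheets before their size adjustment
# Their adjusted published figure is about 136,000,000,000 sheets.
```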
To find out how many trees in the Amazon would have to be harvested, Harwood and Walker assumed that every tree in the rainforest could be used to produce paper and that the trees are evenly distributed at approximately 70,909 per square kilometre.
They estimated that each usable tree could yield 17 reams of paper, at 500 individual sheets per ream, for a total of 8,500 sheets of paper per Amazon tree.
By dividing the 70,859,865 Wikipedia paper pages by the 500 sheets of paper in each ream, the researchers ended up with roughly 141,720 reams needed to print the pages of Wikipedia alone.
With 17 reams of paper produced from each tree, about 8,337 trees would have to be felled to print Wikipedia.
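Those two divisions, sketched in the same style (rounding up, since a partial ream or tree still has to be used):

```python
import math

WIKIPEDIA_PAGES = 70_859_865  # printed pages for English Wikipedia, from above
SHEETS_PER_REAM = 500
REAMS_PER_TREE = 17

reams_needed = math.ceil(WIKIPEDIA_PAGES / SHEETS_PER_REAM)
trees_needed = math.ceil(reams_needed / REAMS_PER_TREE)
print(f"{reams_needed:,} reams, {trees_needed:,} trees")  # 141,720 reams, 8,337 trees
```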
Applying the same figures of 500 sheets per ream and 17 reams per tree, the researchers calculated that it would take about 16 million trees to produce the 136 billion sheets of standard 8.5-by-11-inch paper needed to print the entire Internet.
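The final step reduces to a single division by the 8,500-sheets-per-tree yield:

```python
INTERNET_SHEETS = 136_000_000_000  # sheets needed to print the internet, per the report
SHEETS_PER_TREE = 17 * 500         # 8,500 sheets from each usable tree

internet_trees = INTERNET_SHEETS // SHEETS_PER_TREE
print(f"{internet_trees:,}")       # 16,000,000 -> about 16 million trees
```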