Again, all that proves is that they need more detailed information over a longer period of time to show all the details on the "trends" page than to just reflect the trend in auto-complete.
It could be; it depends on how it's implemented behind the scenes, to be honest.
The autocomplete feature on most search engines is usually implemented as a digital trie of some sort, probably a burst trie because people are typing one letter at a time: https://neuraldump.wordpress.com/tag/burst-trie/
Each of the 'leaves' of the tree has a speculative predictive algorithm that is based on the web crawl. These days it's usually a semi-supervised machine learning algo.
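To make the autocomplete side concrete, here's a toy sketch of prefix lookup in a plain trie. This is obviously not Google's actual code; the terms and scores are made up, a real engine would use a burst trie variant, and the ranking would come from a learned model rather than a static popularity score:

```python
# Toy trie-based autocomplete. Each terminal node stores a popularity
# score (a stand-in for whatever the predictive model would output),
# and complete() returns the highest-scoring terms under a prefix.

class TrieNode:
    def __init__(self):
        self.children = {}
        self.score = None  # set only on nodes that end a full term

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, term, score):
        node = self.root
        for ch in term:
            node = node.children.setdefault(ch, TrieNode())
        node.score = score

    def complete(self, prefix, k=3):
        # Walk down to the node for this prefix, if it exists...
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        # ...then collect every term in the subtree below it.
        results = []
        stack = [(node, prefix)]
        while stack:
            cur, text = stack.pop()
            if cur.score is not None:
                results.append((cur.score, text))
            for ch, child in cur.children.items():
                stack.append((child, text + ch))
        return [t for _, t in sorted(results, reverse=True)[:k]]

trie = Trie()
for term, score in [("carnival", 10), ("carnival games", 7),
                    ("crazy times carnival incident", 42), ("cars", 5)]:
    trie.insert(term, score)

print(trie.complete("car"))  # ranked completions for the prefix "car"
```

The point is that every keystroke is just a cheap walk down the tree, which is why this structure fits "people are typing one letter at a time" so well.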
Trends is mostly based on what people actually search for: they need to press return. Different data structure. It's a trend of what people receive a ranked list of results for, not what people are typing. Usually you can tell such things apart by typing letters from completely unrelated Unicode tables.
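The distinction boils down to which event feeds which structure: keystrokes hit the autocomplete trie, but only submitted queries feed the trends counts. A made-up sketch, with hypothetical names, just to illustrate the split:

```python
# Sketch of the autocomplete/trends split. on_submit() is only called
# when the user actually presses return, so the Counter only ever sees
# completed searches, never half-typed prefixes.

from collections import Counter

submitted = Counter()

def on_submit(query):
    # called once per completed search
    submitted[query.strip().lower()] += 1

for q in ["carnival rides", "carnival rides",
          "crazy times carnival incident"]:
    on_submit(q)

# most_common() gives the "trend": a ranked list of completed searches
print(submitted.most_common(2))
```

A real pipeline would aggregate these counts over time windows and regions, which is presumably part of why the trends page lags behind autocomplete.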
Source: this is how basic information retrieval systems are taught in computer science, plus my own interpretation from my experience in search engine optimization over the last few decades. I could be totally off the mark.
The only info needed for auto-complete is the term itself: "crazy times carnival incident". Is it really that hard to imagine they'd need more data over a longer period of time for interest over time, interest by region, related topics, and related queries? And then possibly even more time to properly compile that information into their charts?
I have never searched for anything even remotely related to carnivals or crazy times and I got this
https://imgpile.com/i/7SMZeL
I am not defending Google but that does make sense.