Abstract:
This paper shows how to extract a signal of interest from the public Wikipedia dump. The fundamental idea relies on the fact that each Wikipedia page associated with a specific location contains geographical coordinates. In the first phase of our project, we therefore scan every title line in the Wikipedia dump and check whether the corresponding page carries geographical coordinates. If it does, we analyse the coordinates to identify pages located in Italy and collect them into a time-series database. In the final phase, we categorize and count the extracted attributes by language and visualize them in plots aggregated by day, month, and year.
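The coordinate-based filtering phase described above could be sketched as follows; the `{{coord|lat|lon}}` template pattern and the rectangular Italy bounding box are simplifying assumptions for illustration, not the paper's exact method:

```python
import re

# Rough bounding box around Italy (illustrative values, not precise borders).
ITALY_LAT = (35.0, 47.5)
ITALY_LON = (6.0, 19.0)

# Simplified pattern for a "{{coord|lat|lon|...}}" template as it may
# appear in Wikipedia page source (the real template has many variants).
COORD_RE = re.compile(r"\{\{coord\|(-?\d+(?:\.\d+)?)\|(-?\d+(?:\.\d+)?)")

def extract_coords(page_text):
    """Return (lat, lon) if the page text carries a coord template, else None."""
    m = COORD_RE.search(page_text)
    if m is None:
        return None
    return float(m.group(1)), float(m.group(2))

def is_in_italy(lat, lon):
    """Bounding-box test; a production pipeline would use precise geometry."""
    return ITALY_LAT[0] <= lat <= ITALY_LAT[1] and ITALY_LON[0] <= lon <= ITALY_LON[1]

def filter_italian_pages(pages):
    """Keep titles whose page text has coordinates inside the Italy box."""
    kept = []
    for title, text in pages:
        coords = extract_coords(text)
        if coords is not None and is_in_italy(*coords):
            kept.append(title)
    return kept

# Toy input standing in for (title, page text) pairs from the dump.
pages = [
    ("Rome", "{{coord|41.9|12.5|display=title}}"),
    ("Berlin", "{{coord|52.52|13.40|display=title}}"),
    ("No location", "plain article text"),
]
print(filter_italian_pages(pages))  # -> ['Rome']
```

The per-language counting and day/month/year aggregation would then run over the titles this filter retains.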