Creating a large dataset by crawling some public website
₹1500-12500 INR
Closed
Posted almost 6 years ago
Paid on delivery
I need a large dataset in JSON format, to upload in a MongoDB database. The contents can be anything, but should be meaningful. I need between 500 MB and 5 TB of data to be generated. The data will be used for some training demonstrations.
I want someone to write a program that crawls some website for publicly available data (such as books and reviews from some e-commerce site; news articles from some news sites; hotels and reviews from some travel site; restaurants and reviews from some food aggregator site; articles from Wikipedia, etc.).
I don't need you to send me the data. I need you to write a program I can run at my end to download the data. But the program must store it in a JSON format that can be directly imported into MongoDB. The structure could be flat JSON documents, or documents that contain embedded documents.
Individual documents may be anywhere in the range from 100 bytes to 100 KB. No individual document should be bigger than 100 KB in size.
We'll have to discuss together to decide the site from which the data is to be downloaded. There should be no violation of any data access policies of the site. This is very important for me; I don't want us to break any law. I will need an assurance from you on this, and a link to the data access policies of the site, if available.
Once we agree on the site to download the data from, you will write the program, test it at your end, send me some sample data, and once approved, send me the program for me to run at my end. If I run into any difficulties while running the program I would require you to support me. The program should allow me to choose the approximate data size (such as 500 MB) after which it will stop crawling any further to download the data.
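To make the requirements above concrete, here is a minimal sketch (in Python, though the implementation language is open) of the output-writing side: documents are written as newline-delimited JSON, which `mongoimport` can load directly; any single document over 100 KB is skipped, and writing stops once the chosen total size is reached. The `docs` iterable stands in for whatever crawler feeds it; the fetching/parsing step is omitted here, and the function and file names are only illustrative.

```python
import json

MAX_DOC_BYTES = 100 * 1024  # per-document cap from the requirements (100 KB)

def write_ndjson(docs, path, max_total_bytes):
    """Write documents as newline-delimited JSON (one document per line,
    directly importable with mongoimport). Oversized documents are skipped,
    and writing stops once the approximate total size cap is reached."""
    written = 0
    with open(path, "w", encoding="utf-8") as f:
        for doc in docs:
            line = json.dumps(doc, ensure_ascii=False)
            size = len(line.encode("utf-8")) + 1  # +1 for the newline
            if size > MAX_DOC_BYTES:
                continue  # no individual document may exceed 100 KB
            if written + size > max_total_bytes:
                break  # approximate target size (e.g. 500 MB) reached; stop
            f.write(line + "\n")
            written += size
    return written
```

A file produced this way could then be loaded with something like `mongoimport --db demo --collection pages --file out.ndjson` (exact flags depend on the MongoDB tools version in use). Documents may be flat or contain embedded documents; `json.dumps` handles both.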
I am an expert Node.js/JavaScript developer with good experience. I have written very good data scraping and crawling scripts with Node.js.
I am interested in working on your project and also available for ongoing support and development.
Please contact me via chat to discuss the details.
I have done many crawling projects. One of my most interesting projects is webdb, a 9.1 GB MongoDB collection of URLs from online search engines. I crawled 2 million words on Google while respecting its policies by using proxy servers. I have very hands-on experience in Python, Java, and many other languages, and have used Scrapy, Requests, BeautifulSoup, lxml, and Selenium many times. Please, let's make it together.
I have more than 10 years of experience in data scraping and extraction. Kindly message me so we can decide the website from which the data will be scraped.
Hi,
I am a senior developer from the Czech Republic with 10 years of experience in Python on Windows and Linux, C/C++, and much more.
I love precision and I apply it in my work. I am sure that I can do my best for you, because I want to start a career as a freelancer, and this job would be great for my good name and honor.
For this reason I can offer you the maximum of my time and all my knowledge and experience.
So... let's do it ;)
With regards,
Jan
Hello Sir,
I have read your requirements, and I already have code similar to what you need: it downloads tweets that mention famous celebrities and stores them in a text file in JSON format. I can do the same for you with Twitter, Wikipedia, or another website that suits you.
Please send me a message in the chat so I can describe it in more detail. Hoping to hear from you soon.