What's the proper way to scrape asynchronously and store my results using Django, Celery and Redis?

I have been trying to understand what my problem is when I try to scrape using a function I created in my Django app. The function goes to a website, gathers data and stores it in my database. At first I tried using rq and redis for a while, but I kept getting an error message, so someone suggested I try Celery, and I did. But I now see that neither rq nor Celery is the problem, because I am getting the same error message as before. I tried importing it, but still got the error message, and then I thought maybe having the actual function in my tasks.py file would make a difference, but it didn't. Here's the function I tried to use in my tasks.py:

import requests
from bs4 import BeautifulSoup
from src.blog.models import Post
import random
import re
from django.contrib.auth.models import User
import os

@app.task
def p_panties():
    def swappo():
        user_one = ' "Mozilla/5.0 (Windows NT 6.0; WOW64; rv:24.0) Gecko/20100101 Firefox/24.0" '
        user_two = ' "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5)" '
        user_thr = ' "Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko" '
        user_for = ' "Mozilla/5.0 (Macintosh; Intel Mac OS X x.y; rv:10.0) Gecko/20100101 Firefox/10.0" '

        agent_list = [user_one, user_two, user_thr, user_for]
        a = random.choice(agent_list)
        return a

    headers = {
        "user-agent": swappo(),
        "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "accept-charset": "ISO-8859-1,utf-8;q=0.7,*;q=0.3",
        "accept-encoding": "gzip,deflate,sdch",
        "accept-language": "en-US,en;q=0.8",
    }

    pan_url = 'http://www.example.org'
    shtml = requests.get(pan_url, headers=headers)
    soup = BeautifulSoup(shtml.text, 'html5lib')
    video_row = soup.find_all('div', {'class': 'post-start'})
    name = 'pan videos'

    if os.getenv('_system_name') == 'OSX':
        author = User.objects.get(id=2)
    else:
        author = User.objects.get(id=3)

    def youtube_link(url):
        youtube_page = requests.get(url, headers=headers)
        soupdata = BeautifulSoup(youtube_page.text, 'html5lib')
        video_row = soupdata.find_all('p')[0]
        entries = [{'text': div,
                    } for div in video_row]
        tubby = str(entries[0]['text'])
        urls = re.findall('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', tubby)
        cleaned_url = urls[0].replace('?&autoplay=1', '')
        return cleaned_url

    def yt_id(code):
        the_id = code
        youtube_id = the_id.replace('https://www.youtube.com/embed/', '')
        return youtube_id

    def strip_hd(hd, move):
        str = hd
        new_hd = str.replace(move, '')
        return new_hd

    entries = [{'href': div.a.get('href'),
                'text': strip_hd(strip_hd(div.h2.text, '– Official video HD'), '– Oficial video HD').lstrip(),
                'embed': youtube_link(div.a.get('href')), #embed
                'comments': strip_hd(strip_hd(div.h2.text, '– Official video HD'), '– Oficial video HD').lstrip(),
                'src': 'https://i.ytimg.com/vi/' + yt_id(youtube_link(div.a.get('href'))) + '/maxresdefault.jpg', #image
                'name': name,
                'url': div.a.get('href'),
                'author': author,
                'video': True

                } for div in video_row][:13]

    for entry in entries:
        post = Post()
        post.title = entry['text']
        title = post.title
        if not Post.objects.filter(title=title):
            post.title = entry['text']
            post.name = entry['name']
            post.url = entry['url']
            post.body = entry['comments']
            post.image_url = entry['src']
            post.video_path = entry['embed']
            post.author = entry['author']
            post.video = entry['video']
            post.status = 'draft'
            post.save()
            post.tags.add("video", "Musica")
    return entries

and in the Python shell, if I run

from tasks import *

I get

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/ray/Desktop/myheroku/practice/tasks.py", line 5, in <module>
    from src.blog.models import Post
  File "/Users/ray/Desktop/myheroku/practice/src/blog/models.py", line 3, in <module>
    from taggit.managers import TaggableManager
  File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/taggit/managers.py", line 7, in <module>
    from django.contrib.contenttypes.models import ContentType
  File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/django/contrib/contenttypes/models.py", line 159, in <module>
    class ContentType(models.Model):
  File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/django/contrib/contenttypes/models.py", line 160, in ContentType
    app_label = models.CharField(max_length=100)
  File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/django/db/models/fields/__init__.py", line 1072, in __init__
    super(CharField, self).__init__(*args, **kwargs)
  File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/django/db/models/fields/__init__.py", line 166, in __init__
    self.db_tablespace = db_tablespace or settings.DEFAULT_INDEX_TABLESPACE
  File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/django/conf/__init__.py", line 55, in __getattr__
    self._setup(name)
  File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/django/conf/__init__.py", line 41, in _setup
    % (desc, ENVIRONMENT_VARIABLE))
django.core.exceptions.ImproperlyConfigured: Requested setting DEFAULT_INDEX_TABLESPACE, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.

which is the exact same traceback I got using rq and redis. I found that if I modify the imports like this

import requests
from bs4 import BeautifulSoup
# from src.blog.models import Post
import random
import re
# from django.contrib.auth.models import User
import os

and modify my function like this

@app.task
def p_panties():
    def swappo():
        user_one = ' "Mozilla/5.0 (Windows NT 6.0; WOW64; rv:24.0) Gecko/20100101 Firefox/24.0" '
        user_two = ' "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5)" '
        user_thr = ' "Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko" '
        user_for = ' "Mozilla/5.0 (Macintosh; Intel Mac OS X x.y; rv:10.0) Gecko/20100101 Firefox/10.0" '

        agent_list = [user_one, user_two, user_thr, user_for]
        a = random.choice(agent_list)
        return a

    headers = {
        "user-agent": swappo(),
        "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "accept-charset": "ISO-8859-1,utf-8;q=0.7,*;q=0.3",
        "accept-encoding": "gzip,deflate,sdch",
        "accept-language": "en-US,en;q=0.8",
    }

    pan_url = 'http://www.example.org'
    shtml = requests.get(pan_url, headers=headers)
    soup = BeautifulSoup(shtml.text, 'html5lib')
    video_row = soup.find_all('div', {'class': 'post-start'})
    name = 'pan videos'

    # if os.getenv('_system_name') == 'OSX':
    #     author = User.objects.get(id=2)
    # else:
    #     author = User.objects.get(id=3)

    def youtube_link(url):
        youtube_page = requests.get(url, headers=headers)
        soupdata = BeautifulSoup(youtube_page.text, 'html5lib')
        video_row = soupdata.find_all('p')[0]
        entries = [{'text': div,
                    } for div in video_row]
        tubby = str(entries[0]['text'])
        urls = re.findall('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', tubby)
        cleaned_url = urls[0].replace('?&autoplay=1', '')
        return cleaned_url

    def yt_id(code):
        the_id = code
        youtube_id = the_id.replace('https://www.youtube.com/embed/', '')
        return youtube_id

    def strip_hd(hd, move):
        str = hd
        new_hd = str.replace(move, '')
        return new_hd

    entries = [{'href': div.a.get('href'),
                'text': strip_hd(strip_hd(div.h2.text, '– Official video HD'), '– Oficial video HD').lstrip(),
                'embed': youtube_link(div.a.get('href')), #embed
                'comments': strip_hd(strip_hd(div.h2.text, '– Official video HD'), '– Oficial video HD').lstrip(),
                'src': 'https://i.ytimg.com/vi/' + yt_id(youtube_link(div.a.get('href'))) + '/maxresdefault.jpg', #image
                'name': name,
                'url': div.a.get('href'),
                # 'author': author,
                'video': True

                } for div in video_row][:13]
    #
    # for entry in entries:
    #     post = Post()
    #     post.title = entry['text']
    #     title = post.title
    #     if not Post.objects.filter(title=title):
    #         post.title = entry['text']
    #         post.name = entry['name']
    #         post.url = entry['url']
    #         post.body = entry['comments']
    #         post.image_url = entry['src']
    #         post.video_path = entry['embed']
    #         post.author = entry['author']
    #         post.video = entry['video']
    #         post.status = 'draft'
    #         post.save()
    #         post.tags.add("video", "Musica")
    return entries

It works; this is my output:

[2016-08-13 08:31:17,222: INFO/MainProcess] Received task: tasks.p_panties[e196c6bf-2b87-4bb2-ae11-452e3c41434f]
[2016-08-13 08:31:17,238: INFO/Worker-4] Starting new HTTP connection (1): www.example.org
[2016-08-13 08:31:17,582: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:18,314: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:18,870: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:19,476: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:20,089: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:20,711: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:21,218: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:21,727: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:22,372: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:22,785: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:23,375: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:23,983: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:24,396: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:25,003: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:25,621: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:26,029: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:26,446: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:27,261: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:27,671: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:28,082: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:28,694: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:29,311: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:29,922: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:30,535: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:31,154: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:31,765: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:32,387: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:32,992: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:33,611: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:34,030: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:34,635: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:35,041: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:35,659: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:36,278: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:36,886: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:37,496: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:37,913: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:38,564: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:39,143: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:39,754: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:40,409: INFO/Worker-4] Starting new HTTP connection (1): example.org
[2016-08-13 08:31:40,992: INFO/MainProcess] Task tasks.p_panties[e196c6bf-2b87-4bb2-ae11-452e3c41434f] succeeded in 23.767645187006565s: [{'src': 'https://i.ytimg.com/vi/3bU-AtShW7Y/maxresdefault.jpg', 'name': 'pan videos', 'url':...

It seems some type of authorization is needed to interact with my Post model. I just don't know how. I have been scouring the net for examples of how to scrape and save data into the database, but oddly I have come across none. Any advice, tips or docs I could read would be a great help.

EDIT

My File structure

environ\
  |-src\
     |-blog\
        |-migrations\
        |-static\
        |-templates\
        |-templatetags\
        |-__init__.py
        |-admin.py
        |-forms.py
        |-models.py
        |-tasks.py
        |-urls.py
        |-views.py
1 Answer

You need to set up Django

You seem to be running your task in a plain Python shell; that is most likely the problem, since your code works once you comment out the Django model parts.

So the problem is: when running a plain Python shell, Django needs to be set up before the ORM will work. When you run it through manage.py shell, manage.py takes care of setting it up for you, but doing it via a plain Python script needs manual setup. That is the reason for the missing DJANGO_SETTINGS_MODULE error.

You are also using your own models, so to be able to import them into your Python script, you need to add the path to the root folder of your project to the current Python path.

Finally, you need to tell Django where your settings file is (before setting Django up). In your manage.py file you should have something like this:

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.settings")

Make it a constant named DEFAULT_SETTINGS_MODULE, so you now have:

os.environ.setdefault("DJANGO_SETTINGS_MODULE", DEFAULT_SETTINGS_MODULE)
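
For reference, the top of manage.py would then look roughly like this (a sketch; "myapp.settings" is just a placeholder for your actual settings module):

#!/usr/bin/env python
import os
import sys

# Module-level constant, so other scripts can import it from manage.py.
DEFAULT_SETTINGS_MODULE = "myapp.settings"

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", DEFAULT_SETTINGS_MODULE)
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)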

Now you need to import the constant into your script and tell Django (by setting an environment variable) where it should look for the settings file.

So in all:

import sys, os
sys.path.insert(0, "/path/to/parent/of/src") # /home/projects/my-crawler

from manage import DEFAULT_SETTINGS_MODULE
os.environ.setdefault("DJANGO_SETTINGS_MODULE", DEFAULT_SETTINGS_MODULE)

import django
django.setup() 
# ... the rest of your script ...

This way you're set up just fine. But if you want to run a Celery task, you should be calling it with .delay() or .apply_async(), to be sure the code runs in the background.
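
For example (a minimal sketch, assuming the setup above has already run and a Celery worker is listening on your broker):

from tasks import p_panties

result = p_panties.delay()                      # queue the task and return immediately
# or, with extra options:
result = p_panties.apply_async(countdown=10)    # run roughly 10 seconds from now

print(result.get(timeout=120))                  # optional: block until the worker finishes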

My own advice is to open the Python shell using python manage.py shell; in that case Django takes care of everything for you. You just need to import your task and run it.
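
In that case it's just (a sketch, assuming tasks.py is importable and a worker is running):

$ python manage.py shell
>>> from tasks import p_panties
>>> p_panties.delay()    # queued; runs on the Celery worker
>>> p_panties()          # or call it synchronously in this shell for debugging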

Also, about storing the results of your scraping task: you can do it in the database, in Redis, or wherever you like (a file, another web server, etc.); you can also call another Celery task to take care of the results and pass the entries to it.

Just add this at the end of your task's code:

Redis

import json

from redis import StrictRedis

redis = StrictRedis(host='localhost', port=6379, db=0)

# Note: entries must be JSON-serializable; the User instance stored under 'author'
# would need to be replaced with e.g. its id before dumping.
redis.set("scraping:tasks:results:TASK-ID-HERE", json.dumps(entries))

It's the easiest way to save your results, but you can also use Redis lists or hashes.

Just for reference, this is how you would do it using lists:

with redis.pipeline() as pipe:
    for item in entries:
        pipe.rpush("scraping:tasks:results", json.dumps(item))
    pipe.execute()
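
A hash-based variant would look roughly like this (a sketch: one hash per task run, one field per entry index):

with redis.pipeline() as pipe:
    for i, item in enumerate(entries):
        pipe.hset("scraping:tasks:results:TASK-ID-HERE", i, json.dumps(item))
    pipe.execute()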

---- EDIT

As I've mentioned, you can define another Celery task to take care of the results of the current scraping run. So basically you have the following:

@celery_app.task
def handle_scraping_results(entries):
    # do whatever you want with the entries list here
    ...

And call it at the end of your p_panties task like this:

handle_scraping_results.delay(entries)

What RabbitMQ does here is deliver the message from your p_panties task to the handle_scraping_results task. You need to notice that these are not simple functions sharing the same memory address space; they can be in different processes, even on different servers! That is actually what Celery is for. You can't directly call a function that lives in a different process. RabbitMQ comes along here, takes a message from process A (running task p_panties) and delivers it to process B (running task handle_scraping_results); message passing is a perfect method for RPC.

You can't save anything in RabbitMQ; it's not like Redis. I encourage you to read more on Celery, since you seem to have chosen it on the wrong basis. Using Celery wouldn't have solved your problem; it actually adds to it (since it can be hard to understand in the beginning). If you don't need async processing, just get rid of Celery altogether. Let your code be a single function and you can easily call it from a Python shell or manage.py shell as I've described above.

--------- Edit II

You want to persist to the DB every few hours. So you have to persist the results somewhere whenever your task finishes, otherwise they are lost.

You have two options:

  1. Persist to the DB whenever your task finishes (this will not be every few hours).
  2. Persist to Redis whenever your task finishes, and then have a periodic task that runs every few hours and persists the results into the Django database.

The first way is easy: you just uncomment the code you commented out in your own task. The second way needs a little more work.

Assuming your results are being persisted in Redis as described above, you can have a periodic task like the one below to handle persisting them into the DB for you.

redis_keys = redis.keys("scraping:tasks:results:*")  # GET has no pattern matching; use KEYS (or SCAN) to find the result keys

for key in redis_keys:
    value_of_redis_key = redis.get(key)
    # redis-py returns bytes; decode before parsing the JSON payload
    entries = json.loads(value_of_redis_key.decode('utf-8'))
    for entry in entries:
        post = Post()
        post.title = entry['text']
        title = post.title
        if not Post.objects.filter(title=title):
            post.title = entry['text']
            post.name = entry['name']
            post.url = entry['url']
            post.body = entry['comments']
            post.image_url = entry['src']
            post.video_path = entry['embed']
            post.author = entry['author']
            post.video = entry['video']
            post.status = 'draft'
            post.save()
            post.tags.add("video", "Musica")
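
To actually run that block every few hours, you could wrap it in a task of its own and register it on the Celery beat schedule. A sketch, assuming app is the Celery instance from your tasks.py; the task name and the 3-hour interval are just examples:

from celery.schedules import crontab

@app.task
def persist_scraping_results():
    # the Redis-to-Post persistence loop above goes here
    ...

# Celery 4+ setting; on Celery 3.x the equivalent is the CELERYBEAT_SCHEDULE dict
app.conf.beat_schedule = {
    'persist-scraping-results': {
        'task': 'tasks.persist_scraping_results',
        'schedule': crontab(minute=0, hour='*/3'),  # at minute 0 of every 3rd hour
    },
}

Then run the beat scheduler alongside your worker, e.g. celery -A tasks beat.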
