I'm running a Python script that uses tweepy to stream a random sample of English tweets (via the Twitter streaming API) for a minute, then alternates to searching (via the Twitter search API) for a minute, and then returns to streaming. The issue I've found is that after roughly 40+ seconds the stream crashes and gives the following error:
Full Error:
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
The number of bytes read can vary from 0 to well into the thousands.
The first time this happens the stream cuts out prematurely and the search function starts early; after the search function finishes, the code returns to the stream, and on the second occurrence of the error the script crashes.
The code I'm running is:
import time

import ciso8601
from tweepy import Stream
from tweepy.streaming import StreamListener

# api (an authenticated tweepy.API instance), geoGatheringTag, and the
# DataProcessing/DataStorage modules are defined elsewhere in the project
# and trimmed out here.


# Handles date time calculation
def calculateTweetDateTime(tweet):
    tweetDateTime = str(tweet.created_at)
    tweetDateTime = ciso8601.parse_datetime(tweetDateTime)
    time.mktime(tweetDateTime.timetuple())
    return tweetDateTime


# Checks to see whether the permitted time has passed.
def hasTimeThresholdPast():
    global startTime
    if time.clock() - startTime > 60:
        return True
    else:
        return False


# Override tweepy.StreamListener to add logic to on_status
class StreamListener(StreamListener):

    def on_status(self, tweet):
        if hasTimeThresholdPast():
            return False
        if hasattr(tweet, 'lang'):
            if tweet.lang == 'en':
                # Prefer the extended 280-character text when present
                try:
                    tweetText = tweet.extended_tweet["full_text"]
                except AttributeError:
                    tweetText = tweet.text
                tweetDateTime = calculateTweetDateTime(tweet)
                entityList = DataProcessing.identifyEntities(True, tweetText)
                DataStorage.storeHotTerm(entityList, tweetDateTime)
                DataStorage.storeTweet(tweet)

    def on_error(self, status_code):
        if status_code == 420:
            # returning False in on_data disconnects the stream
            return False


def startTwitterStream():
    searchTerms = []
    myStreamListener = StreamListener()
    twitterStream = Stream(auth=api.auth, listener=myStreamListener)
    global geoGatheringTag
    if geoGatheringTag == False:
        twitterStream.filter(track=['the', 'this', 'is', 'their', 'though', 'a', 'an'],
                             async=True, stall_warnings=True)
    if geoGatheringTag == True:
        twitterStream.filter(track=['the', 'this', 'is', 'their', 'though', 'a', 'an', 'they\'re'],
                             async=False, locations=[-4.5091, 55.7562, -3.9814, 55.9563],
                             stall_warnings=True)
# ----------------------- Twitter API Functions ------------------------
# XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
# --------------------------- Main Function ----------------------------
startTime = 0


def main():
    global startTime
    userInput = ""
    while userInput != "-1":
        userInput = input("Type ACTIVATE to activate the Crawler, or DATABASE to access data analytic options (-1 to exit): \n")
        if userInput.lower() == 'activate':
            while True:
                startTime = time.clock()
                startTwitterStream()
                startTime = time.clock()
                startTwitterSearchAPI()


if __name__ == '__main__':
    main()
I've trimmed out the search function and the database-handling aspects, given that they're separate, to avoid cluttering up the code.
If anyone has any ideas why this is happening and how I might solve it, please let me know; I'd be curious about any insight.
Solutions I have tried:

Try/except block catching http.client.IncompleteRead (sketched after this list):
As per Error-while-fetching-tweets-with-tweepy

Setting stall_warnings=True:
As per Incompleteread-error-when-retrieving-twitter-data-using-python

Removing the English language filter.
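For reference, the try/except attempt looked roughly like this. This is a minimal sketch only: startTwitterStreamGuarded is a hypothetical wrapper around the code above, and it assumes the exception surfaces from the blocking filter() call.

import http.client

import urllib3


def startTwitterStreamGuarded():
    twitterStream = Stream(auth=api.auth, listener=StreamListener())
    try:
        twitterStream.filter(track=['the', 'this', 'is', 'their', 'though', 'a', 'an'],
                             stall_warnings=True)
    except (http.client.IncompleteRead, urllib3.exceptions.ProtocolError):
        # Swallow the truncated read so the script can move on to the
        # search phase instead of crashing outright.
        pass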
Solved.
To those curious or experiencing a similar issue: after some experimentation I discovered that the backlog of incoming tweets was the problem. Every time the system received a tweet, my code ran entity identification and storage, which cost a small slice of time; over the course of gathering several hundred to a thousand tweets, that backlog grew larger and larger until the stream couldn't keep up and threw the error above.
Solution: strip your on_status/on_data/on_success function down to the bare essentials and handle any computations, i.e. storing or entity identification, separately after the streaming session has closed. Alternatively, you could make your computations much more efficient so that the time gap is insubstantial; up to you.
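As a rough illustration of the first approach, here is a minimal sketch reusing the helpers from the question above; LeanStreamListener, tweetBuffer, and processBuffer are hypothetical names, and it assumes the same tweepy 3.x listener interface:

# Buffer tweets during the streaming session; defer the heavy work.
tweetBuffer = []


class LeanStreamListener(StreamListener):

    def on_status(self, tweet):
        if hasTimeThresholdPast():
            return False
        # Bare essentials only: stash the tweet and return immediately,
        # so the client keeps pace with the incoming stream.
        tweetBuffer.append(tweet)


def processBuffer():
    # Runs after the streaming session has closed: entity identification
    # and storage happen here instead of inside on_status.
    for tweet in tweetBuffer:
        if getattr(tweet, 'lang', None) == 'en':
            try:
                tweetText = tweet.extended_tweet["full_text"]
            except AttributeError:
                tweetText = tweet.text
            tweetDateTime = calculateTweetDateTime(tweet)
            entityList = DataProcessing.identifyEntities(True, tweetText)
            DataStorage.storeHotTerm(entityList, tweetDateTime)
            DataStorage.storeTweet(tweet)
    tweetBuffer.clear()

With on_status reduced to a single append, the listener no longer falls behind the stream, and the expensive work runs once per session instead of once per tweet.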