Search the Community

Showing results for tags 'pages'.

Found 3 results

  1. [Screenshot: http://a.pomf.se/pjmwvx.png]

     """
     OLX.ro scraper
     Gets name, phone no., Yahoo! & Skype addresses, where applicable
     http://a.pomf.se/pjmwvx.png
     """

     import re
     import json
     import requests
     from bs4 import BeautifulSoup as b

     pages = 1  # How many pages should be scraped

     # Category URL, a.k.a. where to get the ads from
     catURL = "http://olx.ro/electronice-si-electrocasnice/laptop-calculator/"

     # Links to the Ajax requests
     ajaxNum = "http://olx.ro/ajax/misc/contact/phone/"
     ajaxYah = "http://olx.ro/ajax/misc/contact/communicator/"
     ajaxSky = "http://olx.ro/ajax/misc/contact/skype/"

     def getName(link):
         # Get the seller's name from the ad page
         page = requests.get(link)
         soup = b(page.text, "html.parser")
         match = soup.find(attrs={"class": "block color-5 brkword xx-large"})
         return re.search(">(.+)<", str(match)).group(1)

     def getPhoneNum(aID):
         # Get the phone number via the Ajax endpoint
         resp = requests.get("%s%s/" % (ajaxNum, aID)).text
         try:
             resp = json.loads(resp).get("value")
         except ValueError:
             return  # No phone number
         if resp and "span" in resp:
             # Multiple phone numbers: return the first non-blank one
             nums = b(resp, "html.parser").find_all(text=True)
             for num in nums:
                 if num != " ":
                     return num
         else:
             return resp

     def getYahoo(aID):
         # Get the Yahoo! ID via the Ajax endpoint
         resp = requests.get("%s%s/" % (ajaxYah, aID)).text
         try:
             return json.loads(resp).get("value")
         except ValueError:
             return  # No Yahoo! ID

     def getSkype(aID):
         # Get the Skype ID via the Ajax endpoint
         resp = requests.get("%s%s/" % (ajaxSky, aID)).text
         try:
             return json.loads(resp).get("value")
         except ValueError:
             return  # No Skype ID

     def main():
         for pageNum in range(1, pages + 1):
             print("Page %d." % pageNum)
             page = requests.get(catURL + "?page=" + str(pageNum))
             soup = b(page.text, "html.parser")
             links = soup.find_all(attrs={"class": "marginright5 link linkWithHash detailsLink"})
             for a in links:
                 aID = re.search(r'ID(.+)\.', a['href']).group(1)
                 print("ID: %s" % aID)
                 print("\tName: %s" % getName(a['href']))
                 phone = getPhoneNum(aID)
                 if phone is not None:
                     print("\tPhone: %s" % phone)
                 yahoo = getYahoo(aID)
                 if yahoo is not None:
                     print("\tYahoo: %s" % yahoo)
                 skype = getSkype(aID)
                 if skype is not None:
                     print("\tSkype: %s" % skype)

     if __name__ == "__main__":
         main()

     (A minimal sketch of the ID-extraction step appears after this list.)

     Tocmai scraper: https://rstforums.com/forum/98245-tocmai-ro-scraper-nume-oras-numar-telefon.rst
  2. I am buying Facebook pages from 10k+ up to 500k REAL people. The audience must be from Romania or the Republic of Moldova. I also buy from other countries such as Italy, Spain, America, etc. You can contact me on Skype: king_king_92, or write to me via PM. Awaiting offers.
  3. Google wants to save its users' bandwidth at home. The company has released a "Data Saver extension for Chrome," bringing its data compression feature to desktop users for the first time. While tethering your laptop to a mobile hotspot for an Internet connection, this new Data Saver extension for Chrome helps you reduce bandwidth usage by compressing the pages you visit.

     If you are unaware of it, Google's data compression proxy service is designed to save users' bandwidth, load pages faster, and increase security (by checking for malicious web pages) on smartphones and tablets.

     REDUCE AS MUCH AS 50% OF DATA USAGE

     Until now, the data compression service has benefited only mobile users, but the latest Data Saver Chrome extension aims to help desktop users by reducing their data usage by as much as 50 percent.

     When you visit a website, the web server delivers the requested files to your browser. If enabled by the server, gzip compresses web pages and style sheets before sending them over to the browser. Gzip compression drastically reduces transfer time since the files are much smaller. The Data Saver extension for Chrome checks whether the website you visited has gzip enabled; if not, it compresses the requested web page via the Google Data Compression proxy and makes it significantly smaller. (A rough sketch of gzip's savings appears after this list.)

     AVAILABLE FOR CHROME 41 AND HIGHER

     The Data Saver Chrome extension currently doesn't support secure SSL pages or incognito pages, and Google notes that users may experience issues when the extension is enabled. Data Saver is available on Chrome for both Android and iOS. Users will need Chrome 41 or a higher version to use the extension. As soon as you install it, the extension starts working by default. If you want to disable it, click the Data Saver icon in the menu bar and select "Turn Off Data Saver."

     You can now download Google's new Data Saver extension for Chrome, currently in beta, from the Chrome Web Store. The extension was released on March 23, without any announcement from the search engine giant.

     Source
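For reference, here is a minimal sketch of the ID-extraction step used by the OLX scraper in result 1, applied to a hypothetical ad URL (the href below is made up to match the shape the scraper's regex expects):

import re

# Hypothetical ad URL; OLX hrefs at the time embedded the ad ID as "ID<id>."
href = "http://olx.ro/oferta/laptop-dell-inspiron-ID6xyzQ.html"
aID = re.search(r'ID(.+)\.', href).group(1)
print(aID)  # prints: 6xyzQ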
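As a rough illustration of the gzip savings described in result 3, here is a minimal sketch using Python's standard gzip module (the sample markup is made up; real savings depend on the page):

import gzip

# Repetitive HTML compresses very well; real pages vary widely
html = ("<div class='item'><span>Example listing entry</span></div>\n" * 500).encode("utf-8")
compressed = gzip.compress(html)
saved = 100 * (1 - len(compressed) / len(html))
print("original: %d bytes, gzipped: %d bytes, %.1f%% saved"
      % (len(html), len(compressed), saved))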