Brandon

Members
  • Posts: 23
  • Joined
  • Last visited
  • Days Won: 1

Brandon last won the day on March 7, 2012

Brandon had the most liked content!

Profile Information
  • Expertise: PHP

Recent Profile Visitors
1,842 profile views

Brandon's Achievements
Newbie (1/14)
Reputation: 3

  1. If you are scraping these out with regex: /Home/frmMyReportOpen.aspx?FileName=018a257a-4fa1-42e3-af0d-f818e4dfda3c.csv&FileExtension=CSV&FilePath=192.168.9.95BTReports and you are trying to fetch the CSV file with a new cURL call, you have to turn the &amp; entities back into plain & characters with html_entity_decode (http://php.net/manual/en/function.html-entity-decode.php). You also have to prepend the base URL, since the scraped href is a relative path. There is a short sketch of this after the last post below.
  2. A framework is basically a bunch of useful tools that are ready to use out of the box, so you don't have to do everything from scratch. For example, building a new MVC framework from scratch takes a lot of time, and if you do it alone it will probably be full of bugs, which slows production down. Say you want to build a new CMS: you'll need a lot of things that take a long time to build yourself, like authentication, an access control layer, routing for pretty links, etc. Most MVC frameworks already include all of these, so you don't have to worry about building that stuff. They are often open source as well, and a lot of people have been working on them for years, so there will be fewer bugs.
  3. I have never done one myself for Facebook and I guess it will be tricky. They often add some random JavaScript redirect tricks to make automated logins harder, and cracking that can be a daunting task. Finding good tutorials about this can be a real pain, unfortunately. I can see if I can crack it, but I can't really promise anything. Get Firebug and read up on POSTing with cURL in the meantime (you will have to save cookies in a cookies.txt file as well; there is a sketch of that after the last post below). With Firebug you can use the Net tab to collect and save all the calls going back and forth when loading pages, then try to mimic the exact request headers with cURL. It is actually pretty fun once you get the hang of it.
  4. I'm glad I could be of help! When I try the URL in the browser it redirects to the front page. Is http://www.facebook.com/RockerLips a public Facebook page, or do you have to be logged in to visit it?
  5. Very strange, and you are keeping echo $match[1]; in brandon.php, right? If you have cURL enabled, this works for me:

     <?php
     $url = "http://www.facebook.com/pages/The-Quota/312465117876";

     $ch = curl_init();
     curl_setopt($ch, CURLOPT_URL, $url);
     curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
     $useragent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1";
     curl_setopt($ch, CURLOPT_USERAGENT, $useragent);
     $result = curl_exec($ch);
     curl_close($ch);
     //echo $result;

     $regex = '#<span class="uiNumberGiant fsxxl fwb">(.*?)</span#is';
     preg_match($regex, $result, $match);
     //var_dump($match);
     echo $match[1];
     ?>
  6. That is just weird, because the array(2) { [0]=> string(51) "11,822" [1]=> string(6) "11,822" } is from the var_dump and the trailing 11,822 is from the echo statement. When I try your code, Facebook denies me the page because I don't use a proper user agent. In brandon.php, what do you get if you echo $data; ?
  7. Ah, I see. Edit out or comment out: var_dump($match);
  8. I'll do my best! I'm sorry, but I don't really understand what you are after. What do you mean by "#"? It seems to me that you are getting the right result, if 11,822 is the number of likes. I didn't get your code to work for some reason, but this regex works for me: #<span class="uiNumberGiant fsxxl fwb">(.*?)</span#is
  9. BlackHatClass: Good points. I also believe that a strong presence in social media will have positive effects on traffic building and the site's overall status. Maybe the good old SEO days are coming to an end? Or at least the game is changing forever?
  10. Brandon

    Why Ruby?

    Ok, I didn't know that Twitter was initially built on Ruby, interesting! I know a lot of people rave about Java as well, and I have actually taken some courses in that language, but I never really got the hang of it.
  11. I have fooled around a little bit with both CakePHP and CodeIgniter. I have tried Kohana as well, but Zend looks somewhat complicated to start with. I can't really say which I like the most, but what I have heard is that CodeIgniter doesn't eat up as much memory as other frameworks. Correct me if I'm wrong!
  12. So this is my first little mini tutorial here. Hope someone will like it / find it useful. Basically what we are going to do is scrape some data from a remote website using PHP and cURL. cURL is a "client URL transfer library" for making all sorts of remote requests, and it is very useful for many things like getting data, logging in automatically, auto-filling forms, etc. Let's get cracking!

      First and foremost we have to enable the cURL extension, as it is not enabled by default. On a Windows machine, edit your php.ini file, uncomment ;extension=php_curl.dll and restart your server. If you are using Ubuntu, run sudo apt-get install php5-curl and restart the server. I use a WAMP server at home and it is super easy to install extensions on it: go to the icon down in the right corner of your screen -> left click the WAMPSERVER icon -> PHP -> PHP extensions -> click on php_curl and then restart the server. Voila!

      Alright, now we are going to initiate cURL, make a request to another site and display the HTML with an echo:

      <?php
      $url = "http://www.nytimes.com/";
      $ch = curl_init($url);
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
      $result = curl_exec($ch);
      curl_close($ch);
      echo $result;
      ?>

      Now that we have the HTML inside $result, we can extract the data we are after using regular expressions. In this case I took a regex from http://regexlib.com/ to extract links and modded it just a little bit to make it work. You can comment out the previous echo $result; and paste these two lines in there:

      preg_match_all("/<a[\s]+[^>]*?href[\s]?=[\s\"\']*(.*?)[\"\'].*?>([^<]+|.*?)?<\/a>/is", $result, $match, PREG_SET_ORDER);
      print_r($match);

      That is how this stuff works. Pretty easy, basic stuff. You can read more about PHP's cURL support at http://php.net/manual/en/book.curl.php. I especially recommend the curl_setopt part, where you can do all kinds of cool stuff like setting a user agent, a referrer, a cookie and a bunch of other options to make your request look like it comes from an actual user (there is a short sketch of that after the last post below). Any questions or suggestions, just fire away in the thread! More information on cURL: http://curl.haxx.se/
  13. Yes, you should use permalinks or mod_rewrite to get those nice-looking URLs. It's like helping Google along the way to index your pages under the keywords you would like to rank for.
  14. Brandon

    Why Ruby?

    Does anyone here know Ruby and Ruby on Rails? I have read a few tutorials and just fiddled around with it a little bit. Why should I learn Ruby? Why is it so good? From what I have heard it has really good support for OOP, but that is all I know.
  15. Many good answers, thanks a lot guys. You are probably right that search engines (Google) will keep on being the best source for traffic. But as some of you stated, spreading out could be beneficial. I'll always keep an eye open for new opportunities though.
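
The sketch referenced in post 1 above: a minimal example of decoding the &amp; entities in a scraped href and prepending the base URL before fetching the CSV with a second cURL call. The base URL below is an assumption; the real host has to be filled in.

    <?php
    // The href as it appears in the scraped HTML, with &amp; entities.
    $relative = "/Home/frmMyReportOpen.aspx?FileName=018a257a-4fa1-42e3-af0d-f818e4dfda3c.csv&amp;FileExtension=CSV&amp;FilePath=192.168.9.95BTReports";

    // Assumed base URL of the site the href was scraped from -- replace with the real one.
    $baseUrl = "http://192.168.9.95";

    // Turn &amp; back into & and make the relative path absolute.
    $csvUrl = $baseUrl . html_entity_decode($relative);

    $ch = curl_init($csvUrl);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $csv = curl_exec($ch);
    curl_close($ch);

    echo $csv;
    ?>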
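The sketch referenced in post 3 above: a minimal cURL POST that stores cookies in a cookies.txt file and re-sends them on later requests. The login URL and form field names are placeholders; the real ones have to be copied from the requests Firebug records in its Net tab.

    <?php
    // Placeholder login URL and form fields -- take the real names from Firebug's Net tab.
    $loginUrl = "http://www.example.com/login.php";
    $postFields = http_build_query(array(
        'email'    => 'you@example.com',
        'password' => 'secret',
    ));

    $cookieFile = dirname(__FILE__) . '/cookies.txt';

    $ch = curl_init($loginUrl);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $postFields);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieFile);   // write cookies here after the request
    curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieFile);  // send them back on later requests
    curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows NT 6.1; rv:12.0) Gecko/20100101 Firefox/12.0");

    $result = curl_exec($ch);
    curl_close($ch);
    ?>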
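The sketch referenced in post 12 above: the curl_setopt options mentioned at the end of the tutorial (user agent, referrer and cookie), so the request looks more like it comes from a normal browser. The header values are just examples.

    <?php
    $url = "http://www.nytimes.com/";

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

    // Example values only -- set whatever user agent, referrer and cookie you need.
    curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows NT 6.1; rv:12.0) Gecko/20100101 Firefox/12.0");
    curl_setopt($ch, CURLOPT_REFERER, "http://www.google.com/");
    curl_setopt($ch, CURLOPT_COOKIE, "visited=1");

    $result = curl_exec($ch);
    curl_close($ch);

    echo $result;
    ?>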