I have gone through most of the existing solutions, but none of them worked.
I am trying to parse a Wikipedia page using file_get_contents, but the result differs depending on something in the page that I haven't figured out yet.
For example, with this page, http://en.wikipedia.org/wiki/Word, it works fine, but with this page, http://en.wikipedia.org/wiki/David_A._Kolb, it returns strange characters.
As far as I can tell, both pages are served the same way.
What could be the problem?
UPDATE 1
Here is what I got:
î²$'ˆ‰ÃBÿ—¾XP·o€Ô%4aºäÇ$ÊãÔ¼s¾w>ÈÙfb%¾ “p£+Ïü J£x&6ç>vŸŠ$Bfbðzʈ~ì𥳈è`lƒW{·²±Ÿd³žç"U™ðrÉ¥ý4Ê'ú™,N—î … (it goes on like this).
That looks like a gzip-compressed response to me. To get the plain-text response, you can use gzopen() + gzread():
$fp = gzopen('http://en.wikipedia.org/wiki/David_A._Kolb', 'r');
$contents = '';
while ($chunk = gzread($fp, 256000)) {
    $contents .= $chunk;
}
gzclose($fp);
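Alternatively, if the compressed body is already in a string (for example, from file_get_contents()), gzdecode() (available since PHP 5.4) can inflate it directly. A minimal sketch, using a gzencode() round-trip to stand in for a real gzip-compressed HTTP body:

```php
<?php
// Stand-in for a gzip-compressed response body (no network involved).
$html = '<html><body>David A. Kolb</body></html>';
$compressed = gzencode($html);   // what the server would send on the wire
$plain = gzdecode($compressed);  // inflate it back to the original markup
var_dump($plain === $html);      // bool(true)
```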
...or you can use file_get_contents(), but force the server to return plain text:
$context = stream_context_create(array(
    'http' => array(
        'method' => "GET",
        // q=0 tells the server we do not accept these encodings
        'header' => "Accept-Encoding: gzip;q=0, compress;q=0\r\n",
    )
));
$contents = file_get_contents('http://en.wikipedia.org/wiki/David_A._Kolb', false, $context);
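To check whether the server actually honored the header, you can inspect the Content-Encoding response header; after an http:// call, PHP populates $http_response_header with the raw header lines. A sketch with a hypothetical hard-coded header array standing in for a real response:

```php
<?php
// Returns the Content-Encoding value from raw header lines, or null if absent.
function content_encoding(array $headers)
{
    foreach ($headers as $h) {
        if (stripos($h, 'Content-Encoding:') === 0) {
            return trim(substr($h, strlen('Content-Encoding:')));
        }
    }
    return null;
}

// Hypothetical headers; in real use pass $http_response_header instead.
$headers = array('HTTP/1.1 200 OK', 'Content-Encoding: gzip', 'Content-Type: text/html');
echo content_encoding($headers);  // gzip
```

If this prints gzip, the body still needs decompressing before parsing.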
...but not all servers honor this header, so I suggest using cURL for your task:
function get_url($url)
{
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_URL, $url);
    curl_setopt($curl, CURLOPT_ENCODING, 'gzip'); // cURL decompresses the response for you
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
    $data = curl_exec($curl);
    curl_close($curl);
    return $data;
}
$data = get_url('http://en.wikipedia.org/wiki/Word');
$data = get_url('http://en.wikipedia.org/wiki/David_A._Kolb');
This sounds like it may be an encoding issue. Try converting the encoding and see if this helps.
mb_convert_encoding($wikitext, 'UTF-8',mb_detect_encoding($wikitext, 'UTF-8, ISO-8859-1', true));
The file_get_contents function apparently has some issues with non-UTF-8 encodings; according to its reference page on PHP.net, this approach was recommended there.
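A self-contained sketch of the mb_detect_encoding() + mb_convert_encoding() combination above, using a hard-coded ISO-8859-1 string in place of the fetched page:

```php
<?php
$latin1 = "Caf\xE9";  // "Café" encoded as ISO-8859-1
// Strict detection: the lone byte 0xE9 is invalid UTF-8, so ISO-8859-1 is picked.
$enc  = mb_detect_encoding($latin1, 'UTF-8, ISO-8859-1', true);
$utf8 = mb_convert_encoding($latin1, 'UTF-8', $enc);
var_dump($enc, $utf8 === "Caf\xC3\xA9");  // string(10) "ISO-8859-1", bool(true)
```

Note that mb_detect_encoding is a heuristic; for Wikipedia specifically the pages are already UTF-8, so the garbled output in the question points at compression rather than encoding.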