You can freely upload and link to your own ImageBeaver galleries. You can also browse every new gallery submitted to ImageBeaver for free, but only the most recent ones; to be absolutely specific, you can view today's and yesterday's galleries. If you want to go further back or search, you have to pay for a subscription.
Downloading from ImageBeaver can be a hassle: popups and ads are scattered all over, and getting a single picture takes up to three clicks and full page loads.
So if you want what ImageBeaver has to offer, you either have to pay or make the most of the free part. You can try out the free part by visiting hxxp://www.imagebeaver.com/browse.php?photos=Yesterday. Or you can go for a different approach, which is the following:
- Download the programming language Perl from ActiveState and install it.
- Copy the Perl script below into your text editor (e.g. Notepad) and save it as c:\scripts\imb.pl.
- Click Start->Run and type cmd.exe.
- Type cd \scripts and hit return.
- Type perl imb.pl and hit return.
- The result should be that the script visits the ImageBeaver site and downloads every free image from yesterday's galleries. Normally that amounts to somewhere between 18 and 24 galleries, roughly 60 MB in total. (A quick module check is sketched right below.)
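Before the first run it is worth checking that the modules the script needs are actually available in your ActivePerl install; this one-liner is a minimal sanity check (LWP::UserAgent and POSIX normally ship with ActivePerl):
perl -MLWP::UserAgent -MPOSIX -e "print qq{modules OK\n}"
If it prints "modules OK" you are set; if Perl complains about a missing module, ActiveState's ppm tool can install it.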
So, provided that the above works, you just need to run the script every 24 hours or so to continuously get fresh ImageBeaver porn every day.
Luckily Windows can help you do this, i.e. run the script every 24 hours. Click Start->Accessories->System Tools->Task Scheduler and create a task which runs every 24 hours, typing in the command to execute; the commands are the ones above. You can make it easier for yourself by making a command file. A command file for running the script above looks like this:
c:
cd \scripts
c:\Perl\bin\perl.exe c:\scripts\imb.pl
Write the above in Notepad and save it as c:\scripts\imb.cmd, then put c:\scripts\imb.cmd into the Task Scheduler. And remember to enable the task after you have finished configuring it, otherwise it will not run, naturally.
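If you prefer the command line over the Task Scheduler GUI, the built-in schtasks tool can create the same daily job. A minimal sketch, where the task name and start time are just examples (the exact /st time format differs a bit between Windows versions):
schtasks /create /tn "ImageBeaver" /tr c:\scripts\imb.cmd /sc daily /st 06:00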
Here is the Perl script:
#!C:/Perl/bin/perl -w
# Script coded by pwd
# Date: 7th September 2008
# Revision #1
# Handles: http://www.imagebeaver.com/browse.php?photos=Yesterday
use strict;
use POSIX qw(floor);
use LWP::UserAgent;

print "ImageBeaver Script v2008-09-07 by pwd.\n\n";
# Build a YYYY-MM-DD date stamp from the local time; it names the download
# folder and the saved copy of the browse page.
my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time);
my $theYearMonth = sprintf("%04d-%02d-%02d", $year+1900, $mon+1, $mday);
my $pageurl = "http://www.imagebeaver.com/browse.php?photos=Yesterday";
my $ua = LWP::UserAgent->new;
$ua->timeout(40);
$ua->agent('Mozilla/5.0');
my $thePage = "";
my $commandArgs = 0;
# Optional mode: pass one or more previously saved browse pages on the
# command line and the script parses those instead of fetching the site.
foreach my $arg (@ARGV) {
    $commandArgs = 1;
    $arg =~ m/(.*?)\.html?/;
    $theYearMonth = $1;
    open my $readfile, '<', $arg or die "error trying to load: $!";
    binmode($readfile);
    my $size = -s $arg;
    print "Loading: ", $arg, "\n\n";
    sysread($readfile, $thePage, $size, 0);
    close($readfile);
    getIt($thePage, $theYearMonth);
}
if ($commandArgs == 0) {
    print "Downloading: ", $pageurl, "\n\n";
    $thePage = getHTTPPage($pageurl);
    getIt($thePage, $theYearMonth);
}
exit;
sub getIt {
    my ($page, $yearmonth) = @_;
    $page =~ tr/\r\n\t//d;    # flatten the page into one long line
    $page =~ s/&nbsp;/ /g;    # assumption: the original pattern was garbled to a space-for-space swap; &nbsp; is the likely target
    # Keep a local copy of the browse page, e.g. 2008-09-07.html.
    my $monthfilename = join "", $yearmonth, ".html";
    open my $overwrite, '>', $monthfilename or die "error trying to overwrite: $!";
    print $overwrite $page;
    close $overwrite;
    mkdir($yearmonth, 0777);
    # Each gallery is linked several times in a row on the browse page, so
    # remember the previous link and only descend when a new one turns up.
    $page =~ m/(\/photos\/[^"]+\.html)/;
    my $oldSubPageLink = $1;
    my $clipNo = 1;
    while ($page =~ m/(\/photos\/[^"]+\.html)/g) {
        if (($oldSubPageLink ne $1) || ($clipNo == 1)) {
            my $url = join "", "http://www.imagebeaver.com", $1;
            $oldSubPageLink = $1;
            my $subPage = getHTTPPage($url);
            # Visit every picture page in the gallery and dig out the image URL.
            while ($subPage =~ m/(http:\/\/www.imagebeaver.com\/photos\/[^\d]+\d+_\d+[^]]+\.html)]/g) {
                $url = $1;
                my $picPage = getHTTPPage($url);
                $picPage =~ m/thepic[^\/]+([^"]+)/;
                $url = join "", "http:", $1;
                $url=~m/([^\/]+.jpg)/;
                my $filename = $1;
                my $dirName = sprintf("%s - %02d", $yearmonth, $clipNo);
                if (getHTTPFile($url, $filename, $dirName, $yearmonth) != 100) {
                    # Incomplete download: delete the fragment so a rerun fetches it again.
                    chdir $dirName;
                    unlink $filename;
                    chdir "..";
                    print " (incomplete, deleting...)\n";
                } else {
                    print "\n";
                }
            }
            $clipNo++;
        }
    }
}
# Fetch a page, retrying once before giving up.
sub getHTTPPage {
    my ($myurl) = @_;
    my $req = HTTP::Request->new('GET', $myurl);
    my $response = $ua->request($req);
    if ($response->is_success) {
        return $response->content;
    }
    # First attempt failed; try one more time.
    $response = $ua->request($req);
    if ($response->is_success) {
        return $response->content;
    }
    print STDERR "ERROR: Unable to download $myurl\n";
    die;
}
# Download one image into <yearmonth>\<gallery folder>, streaming chunks to
# disk while printing a progress counter. Returns the completion percentage.
sub getHTTPFile {
    my ($zipurl, $filename, $folder, $yearmonth) = @_;
    my $req = HTTP::Request->new("GET", $zipurl);
    $filename = lc($filename);
    my $storefolder = $folder;
    my $percentage = 100;
    # After the first call the script is already inside $yearmonth and this
    # chdir silently fails, which keeps all later relative paths consistent.
    chdir $yearmonth;
    mkdir($storefolder, 0777);
    my $expected_length;
    my $bytes_received = 0;
    chdir $storefolder;
    # Skip the download if the file already exists with a non-zero size.
    if (-s $filename) {
        print STDERR "\r $storefolder\\$filename already exists, skipping";
        chdir "..";
        return $percentage;
    }
    open my $overwrite, '>', $filename or die "error trying to overwrite: $!";
    binmode $overwrite;
    my $oldpercentage = 0;
    my $res = $ua->request($req, sub {
        my ($chunk, $res) = @_;
        $bytes_received += length($chunk);
        unless (defined $expected_length) {
            $expected_length = $res->content_length || 0;
        }
        # Guard against a missing Content-Length header (a zero would
        # otherwise cause a division-by-zero crash here).
        $percentage = $expected_length ? 100 * $bytes_received / $expected_length : 100;
        if (floor($percentage) != floor($oldpercentage)) {
            print "\r";
            $oldpercentage = $percentage;
            if ($expected_length) {
                my $progress = sprintf(" (%d%%) %s\\%s %d", $percentage, $storefolder, $filename, $bytes_received);
                print "$progress bytes";
            }
        }
        print $overwrite $chunk;
    });
    close $overwrite;
    chdir "..";
    return floor($percentage);
}
3 comments:
If you copy this script and paste it into your favorite editor (UltraEdit), be sure to comment out (#) the lines between the subroutines.
It won't work otherwise! :-)
Depending on the layout of the script, around line 77 there is a regex which can be altered slightly so it downloads gif, png and jpeg files alike.
Before:
$url=~m/([^\/]+.jpg)/;
After:
$url=~m/(?-i)([^\/]+\.(jpe?g|gif|png))/;
Unfortunately, this seems to be failing now; I only get a dir with today's date and the downloaded HTML page.
I'm not a Perl hacker, so if anyone fixes this, please post what to change :)
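A hint for anyone attempting that fix: the script already accepts a previously saved browse page as a command-line argument (the foreach loop over @ARGV at the top), so you can tweak the regexes against a local copy instead of hitting the site on every test run:
perl imb.pl 2008-09-07.html
The date-stamped .html file is the copy that getIt saves next to the script; if the site layout changed, the gallery-link pattern m/(\/photos\/[^"]+\.html)/ is the first thing to inspect.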