Hi guys, I am not sure whether this is something I should check with Oracle or with Perl, but I hope someone can point me in the right direction. I have two servers, each running a process that selects from the same database table; the idea is load balancing. In real time there are constant inserts into that table, and the two processes are responsible for reading those rows and processing them. Once one process has taken some rows, I want them locked so the other process does not take them as well. I tried Oracle's SELECT ... FOR UPDATE SKIP LOCKED.

However, when I run a test where I insert 10,000 rows, start the two processes, and have each one write the rows it reads to a text file and then delete them, instead of getting 10,000 lines in that file I get 14,000-something, sometimes 16,000. When I look at the text file, I see entries like:

1000,1
1000,2

This means that row number 1000 was read by both processes, so the SKIP LOCKED did not work. I am wondering whether this is an issue with Oracle or with my Perl approach. Here is my code:

Code:
$lck = $dbh->prepare("lock table promotions.test IN ROW SHARE MODE"); 
$lck-> execute();  
while(1){ 
$nf = $dbh->selectrow_array("select count(*) from promotions.test"); 
$sth = $dbh->prepare("commit"); $sth->execute(); 
if ($dbh->err =~ /3113|3114/ ){ exit; }; 
if ($nf eq 0){ 	sleep(1);  } else{  
$query = $dbh->prepare("select rowid,testfield from promotions.test where rownum <= ? for update skip locked"); $query->execute(10);   
while ( @row = $query->fetchrow_array() ) { 	
open( F ,">>/opt/vasapp/logs/test_$ARGV[0].log") || die("cannot open file: " . $!); 
print F "$row[1],$ARGV[0]\n"; 	
close F;  	
$upd = $dbh->prepare("delete from promotions.test where rowid = ?"); 	
$upd->execute("$row[0]"); } 
$sth = $dbh->prepare("commit"); $sth->execute();  }  } $dbh->disconnect;
I don't know if there is an issue with the commits or with something else. If someone can point me in the right direction, that would be great. Or let me know if there is a better way to do this.
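
In case it helps, this is the kind of standalone check I was thinking of running to see whether SKIP LOCKED itself behaves as I expect, using two independent connections from one script. Again only a sketch: the DSN and credentials are placeholders for my real connection settings.

Code:
use DBI;

# Placeholder connection settings; substitute the real DSN and credentials.
my ($dsn, $user, $pass) = ("dbi:Oracle:mydb", "user", "pass");
my $dbh1 = DBI->connect($dsn, $user, $pass, { AutoCommit => 0, RaiseError => 1 });
my $dbh2 = DBI->connect($dsn, $user, $pass, { AutoCommit => 0, RaiseError => 1 });

# Session 1 claims and holds locks on up to 10 rows.
my $rows1 = $dbh1->selectall_arrayref(
    "select rowid from promotions.test where rownum <= 10 for update skip locked");

# Session 2 runs the same query; SKIP LOCKED should exclude session 1's rows.
my $rows2 = $dbh2->selectall_arrayref(
    "select rowid from promotions.test where rownum <= 10 for update skip locked");

# Any rowid appearing in both result sets means locked rows were not skipped.
my %claimed = map { $_->[0] => 1 } @$rows1;
my @overlap = grep { $claimed{ $_->[0] } } @$rows2;
print @overlap ? "overlap: " . scalar(@overlap) . " rows\n" : "no overlap\n";

$dbh1->rollback;
$dbh2->rollback;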