PowerShell - Moving Files to Amazon S3


I have the PowerShell script below that moves files to an Amazon S3 bucket for me. It works OK for a few small files, but when copying larger files the loop keeps going and starts the next copy before the previous ones have finished, so it doesn't take long before there are hundreds of files transferring at once.

What I want is to be able to limit the number of simultaneous file transfers to 5 or 10. How can I do that?

foreach ($line in $csv) {
    #-------------------- transfer files put in each loop here ---------------------------
    $sourceFolder = $line.destination
    $sourceFile = $line.name

    if (Test-Path -Path $sourceFolder) {
        Write-S3Object -BucketName $bucketName -Key $sourceFile -File $sourceFolder

        # Check for missing files
        $s3GetRequest = New-Object Amazon.S3.Model.S3Object  #get-s3object -bucketname $bucketname -key $sourcefile
        $s3GetRequest = Get-S3Object -BucketName $bucketName -Key $sourceFile

        if ($s3GetRequest -eq $null) {
            Write-Error "Error: Amazon S3 request failed. Script halted."
            $sourceFile + ",transfer error" | Out-File $log_loc -Append
        }
    } else {
        $sourceFolder + ",missing file error" | Out-File $log_loc -Append
    }
}

From your description, it sounds like the larger files are triggering a multipart upload. From the Write-S3Object documentation:

If you are uploading large files, the Write-S3Object cmdlet will use multipart upload to fulfill the request. If a multipart upload is interrupted, the Write-S3Object cmdlet will attempt to abort the multipart upload.

Unfortunately, Write-S3Object doesn't have a native way to handle your use case. However, the Multipart Upload Overview describes behavior we may be able to leverage:

Multipart uploading is a three-step process: you initiate the upload, you upload the object parts, and, after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
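
For reference, here is a rough sketch of those three steps using the .NET SDK classes that ship with the AWS PowerShell module. This is only to illustrate why the object is invisible until the final step; Write-S3Object does all of this for you. It assumes the .NET Framework build of the SDK (synchronous client methods) and hypothetical $bucketName, $key and $filePath values:

# Sketch only: what a multipart upload looks like at the SDK level.
$client = New-Object Amazon.S3.AmazonS3Client

# Step 1: initiate the upload and get an upload id.
$initRequest = New-Object Amazon.S3.Model.InitiateMultipartUploadRequest
$initRequest.BucketName = $bucketName
$initRequest.Key = $key
$initResponse = $client.InitiateMultipartUpload($initRequest)

# Step 2: upload the parts (every part except the last must be at least 5 MB).
$partSize = 5MB
$fileLength = (Get-Item $filePath).Length
$partResponses = @()
$partNumber = 1
for ([long]$position = 0; $position -lt $fileLength; $position += $partSize) {
    $partRequest = New-Object Amazon.S3.Model.UploadPartRequest
    $partRequest.BucketName = $bucketName
    $partRequest.Key = $key
    $partRequest.UploadId = $initResponse.UploadId
    $partRequest.PartNumber = $partNumber
    $partRequest.FilePath = $filePath
    $partRequest.FilePosition = $position
    $partRequest.PartSize = [Math]::Min($partSize, $fileLength - $position)
    $partResponses += $client.UploadPart($partRequest)
    $partNumber++
}

# Step 3: complete the upload; only now does the object appear in the bucket.
$completeRequest = New-Object Amazon.S3.Model.CompleteMultipartUploadRequest
$completeRequest.BucketName = $bucketName
$completeRequest.Key = $key
$completeRequest.UploadId = $initResponse.UploadId
$completeRequest.AddPartETags([Amazon.S3.Model.UploadPartResponse[]]$partResponses)
$client.CompleteMultipartUpload($completeRequest) | Out-Null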

This leads me to suspect that we can ping our objects with Get-S3Object to see if they exist yet. If they don't, we should hold off on uploading more files until they do.
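
As a minimal sketch of that check (assuming $bucketName and $key hold the bucket and key you just wrote):

# Get-S3Object returns nothing until the upload (including a multipart upload)
# has completed, so an empty result means the file is still in flight.
$found = @(Get-S3Object -BucketName $bucketName -Key $key)
if ($found.Count -eq 0) {
    Write-Host "$key has not finished uploading yet"
} else {
    Write-Host "$key is available in the bucket"
}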

I've created the script below. It iterates through a collection of files and collects their names as it uploads them. Once you exceed 5 in-flight files, the script checks whether they exist in the bucket yet and continues on if they do. Otherwise, it keeps checking until they exist.

$bucketname = "mys3bucket" $s3directory = "c:\users\$env:username\documents\s3test" $concurrentlimit = 5 $inprogressfiles = @()  foreach ($i in get-childitem $s3directory)  {    # write file s3 , add filename collection.   write-s3object -bucketname $bucketname -key $i.name -file $i.fullname    $inprogressfiles += $i.name    # wait continue iterating through files if there many concurrent uploads   while($inprogressfiles.count -gt $concurrentlimit)    {     write-host "before: "$($inprogressfiles.count)      # reassign array excluding files have completed upload s3.     $inprogressfiles = @($inprogressfiles | ? { @(get-s3object -bucketname $bucketname -key $_).count -eq 0 })      write-host "after: "$($inprogressfiles.count)      start-sleep -s 1   }    start-sleep -s 1 } 

You can modify this to your needs by changing the foreach loop to use your CSV content, as sketched below. I added the sleep statements so I could watch it and see how it works; feel free to change or remove them.
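
For example, keeping the column names and variables from your original script ($line.destination for the local path, $line.name for the S3 key, $log_loc for the log file), the adapted loop might look something like this (a sketch, not tested against your data):

# Same throttling pattern, driven by an Import-Csv'd $csv instead of Get-ChildItem.
$concurrentLimit = 5
$inProgressKeys = @()

foreach ($line in $csv) {
    $sourcePath = $line.destination
    $sourceKey  = $line.name

    if (Test-Path -Path $sourcePath) {
        Write-S3Object -BucketName $bucketName -Key $sourceKey -File $sourcePath
        $inProgressKeys += $sourceKey
    } else {
        "$sourcePath,missing file error" | Out-File $log_loc -Append
    }

    # Throttle: wait until fewer than $concurrentLimit uploads are still in flight.
    while ($inProgressKeys.Count -gt $concurrentLimit) {
        $inProgressKeys = @($inProgressKeys | ? { @(Get-S3Object -BucketName $bucketName -Key $_).Count -eq 0 })
        Start-Sleep -s 1
    }
}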

