PyOpenCL vs Parallel Python


Since PyOpenCL and Parallel Python are both Python modules dedicated to parallel processing, can you provide example(s) of why a programmer would use one over the other?

From the package listings:

pyopencl - Python module to access the OpenCL parallel computation API

pp - parallel and distributed programming toolkit for Python


Programmers create "jobs" and "job servers" with pp to distribute work across multi-core, multi-processor, and/or cluster computing environments.
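pp itself targets Python 2, but the job/job-server idea it describes can be sketched with the standard library's multiprocessing module. This is a hypothetical stand-in to illustrate the pattern, not pp's actual API:

```python
# Hypothetical stand-in for pp's job/job-server pattern, using the
# standard library's multiprocessing module (not pp's actual API).
from multiprocessing import Pool

def square(n):
    """A tiny unit of work; each call plays the role of one pp 'job'."""
    return n * n

if __name__ == "__main__":
    # The Pool plays the role of pp's "job server": it hands each input
    # to one of the worker processes and collects the results.
    with Pool(processes=4) as pool:
        results = pool.map(square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The same pattern scales from multi-core to clusters in pp by pointing the job server at remote nodes; multiprocessing stops at a single machine.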

OpenCL is a superset (specifically, an application programming interface (API)) of CUDA, not in syntax but in usage. OpenCL is an interface for programming many different types of devices connected to a computer. These devices include different architectures of graphics cards, among them CUDA-capable NVIDIA chipset cards. There is also PyCUDA, if your programs will only ever run on systems with NVIDIA GPUs.

The use of these modules depends on the hardware being accessed and the problem being solved. They can be used together or separately, as needed.


Here is an example of a Parallel Python program.

#!/usr/bin/python
# File: dynamic_ncpus.py
# Author: Vitalii Vanovschi
# Desc: This program demonstrates parallel computations with the pp module
# and its dynamic CPU allocation feature.
# The program calculates the partial sum 1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 + ...
# (which converges to ln(2)).
# Parallel Python software: http://www.parallelpython.com

import math, sys, time
import pp

def part_sum(start, end):
    """Calculates the partial sum over [start, end)"""
    sum = 0
    for x in xrange(start, end):
        if x % 2 == 0:
            sum -= 1.0 / x
        else:
            sum += 1.0 / x
    return sum

print """Usage: python dynamic_ncpus.py"""
print

start = 1
end = 20000000

# Divide the task into 64 subtasks
parts = 64
step = (end - start) / parts + 1

# Create the job server
job_server = pp.Server()

# Execute the same task with different numbers of active workers
# and measure the time
for ncpus in (1, 2, 4, 8, 16, 1):
    job_server.set_ncpus(ncpus)
    jobs = []
    start_time = time.time()
    print "Starting", job_server.get_ncpus(), "workers"
    for index in xrange(parts):
        starti = start + index * step
        endi = min(start + (index + 1) * step, end)
        # Submit a job to calculate the partial sum
        # part_sum - the function
        # (starti, endi) - tuple of arguments for part_sum
        # () - tuple of functions that part_sum depends on
        # () - tuple of module names that must be imported before part_sum executes
        jobs.append(job_server.submit(part_sum, (starti, endi)))

    # Retrieve the results and calculate their sum
    part_sum1 = sum([job() for job in jobs])
    # Print the partial sum
    print "Partial sum is", part_sum1, "| diff =", math.log(2) - part_sum1

    print "Time elapsed:", time.time() - start_time, "s"
    job_server.print_stats()

# Parallel Python software: http://www.parallelpython.com
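For comparison, here is a rough Python 3 sketch of the same computation using the standard library's concurrent.futures in place of pp. The worker count, number of parts, and range size are arbitrary choices for illustration:

```python
# Python 3 sketch of the pp example above, using the stdlib's
# concurrent.futures instead of pp. Worker count and part count
# are arbitrary illustrative values.
import math
from concurrent.futures import ProcessPoolExecutor

def part_sum(start, end):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ... over [start, end)."""
    total = 0.0
    for x in range(start, end):
        total += -1.0 / x if x % 2 == 0 else 1.0 / x
    return total

if __name__ == "__main__":
    start, end, parts = 1, 2000000, 16
    step = (end - start) // parts + 1
    # Split [start, end) into `parts` contiguous sub-ranges, one per job.
    starts = [start + i * step for i in range(parts)]
    ends = [min(s + step, end) for s in starts]
    with ProcessPoolExecutor(max_workers=4) as pool:
        # map() pairs up starts[i] with ends[i] for each part_sum call.
        total = sum(pool.map(part_sum, starts, ends))
    print("partial sum:", total, "| diff =", math.log(2) - total)
```

The structure mirrors the pp version: split the range into sub-ranges, submit each as a job, then sum the results as they come back.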
