
Multiprocessing viewsheds (easy)

Hi there,
I have been searching the forum for this, but even though it seems the solution should be quite easy to come up with, I couldn't find one.

My system: ArcGIS 10.1 SP1, Python 2.7.2, i7 4770K 3.5 GHz, 16 GB DDR3 RAM, SSD drive


So here is my (simple) problem: I want to take advantage of the multiprocessing module to speed up viewshed calculations.

To get to know this function, I set up a folder containing 156 shapefiles, each containing a single point. Now I want to calculate a viewshed for each of these points using the multiprocessing module. It starts out fine: all cores are in use up to 100% and the correct viewsheds are being calculated. But then it fails, telling me either that "viewshe_demt1" already exists or that a second person or application is using this folder. Below are the error messages and my current code. I also discovered that the code produces folders called "viewshe_demtX" in the "scriptx" folder, which I don't really understand, because I specified "scriptxy" as the output folder for the viewshed analysis.

1) ExecuteError: ERROR 010429: "Error in GRID IO: CellLyrCreateInternal: Grid c:\studium\project\scriptx\viewshe_demt1 already exists."
2) (Couldn't find the error-code): "Another person or application is accessing this directory"

Code:

import arcinfo, os, re, multiprocessing
import arcpy
from arcpy import env
from arcpy.sa import *

arcpy.CheckOutExtension("Spatial")
env.workspace = "C:/studium/project/scriptx"
arcpy.env.overwriteOutput = True

arcpy.CreateFolder_management("C:/studium/project", "scriptxy")

def update_shapefiles(shapefile):
    '''Worker function'''
    outViewshed = Viewshed("C:/studium/00_Master/clean/demtora_okay1", shapefile, 2, "CURVED_EARTH", 0.15)
    mystring = "C:/studium/project/scriptxy/"+shapefile[26:37]
    outViewshed.save(mystring)
       
# End update_shapefiles
def main():
    ''' Create a pool class and run the jobs.'''
    arcpy.env.workspace = "C:/studium/project/scriptx"
    print arcpy.env.workspace
    fcs = arcpy.ListFeatureClasses('turbine*')
    fc_list = [os.path.join("C:/studium/project/scriptx", fc) for fc in fcs]
    print fc_list
    pool = multiprocessing.Pool()
    pool.map(update_shapefiles, fc_list)

    # Synchronize the main process with the job processes to
    # ensure proper cleanup.
    pool.close()
    pool.join()
    # End main
 
if __name__ == '__main__':
    main()

I basically used the first code example from http://blogs.esri.com/esri/arcgis/20...arcgis-part-1/ and modified it (obviously poorly) to produce viewsheds. My normal viewshed analysis takes about 8 hours for 156 viewsheds, and the CPU utilization never exceeds 12%.
I think the problem with the new code has something to do with my file naming (see the sketch below) and that it is otherwise fine.
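
For the naming part, instead of the hard-coded slice shapefile[26:37] I was thinking of deriving the output name from the shapefile's own file name, roughly like this (untested sketch):

Code:

import os

def output_name(shapefile):
    # e.g. "C:/studium/project/scriptx/turbine_01.shp" -> "turbine_01"
    base = os.path.splitext(os.path.basename(shapefile))[0]
    # Esri GRID names can be at most 13 characters, so truncate to be safe
    return base[:13]

# in the worker function this would replace the slice:
# outViewshed.save(os.path.join("C:/studium/project/scriptxy", output_name(shapefile)))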
Please help me get it to work properly. Any advice is welcome.
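
One more idea I had, but have not tried yet: maybe the "viewshe_demtX" grids collide because all worker processes share the same scratch/current workspace ("scriptx"), so every worker tries to create a temporary grid with the same name there. Giving each worker its own scratch folder might avoid that. Here is a rough, untested sketch of the worker function (the "scratch" folder path is made up):

Code:

import os
import arcpy
from arcpy.sa import Viewshed

def update_shapefiles(shapefile):
    '''Worker function: use a per-process scratch folder so the
    temporary grids created by Viewshed cannot collide.'''
    # hypothetical scratch location, one subfolder per worker process
    scratch = os.path.join("C:/studium/project/scratch", "worker_%d" % os.getpid())
    if not os.path.exists(scratch):
        os.makedirs(scratch)
    arcpy.env.scratchWorkspace = scratch
    arcpy.env.workspace = scratch

    arcpy.CheckOutExtension("Spatial")
    outViewshed = Viewshed("C:/studium/00_Master/clean/demtora_okay1",
                           shapefile, 2, "CURVED_EARTH", 0.15)
    # output naming could also use the helper from the sketch above
    outViewshed.save("C:/studium/project/scriptxy/" + shapefile[26:37])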
