enviroment.txt

python==3.8
onnx==1.11
CPU latency: 130-140 ms. GPU still unresolved.
CPU usage: about 600% in total.
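
For reference, a minimal sketch of how the CPU figures above could be reproduced; the model path "model.onnx", the 1x3x640x640 input shape, and the thread cap are assumptions, not taken from testonnxvideo.py. Capping intra_op_num_threads is only shown to illustrate where roughly 600% CPU usage (about six busy worker threads) can come from.

import time
import numpy as np
import onnxruntime as ort

so = ort.SessionOptions()
so.intra_op_num_threads = 6        # assumption: ~600% CPU usage corresponds to ~6 worker threads
sess = ort.InferenceSession("model.onnx", sess_options=so,
                            providers=["CPUExecutionProvider"])

inp = sess.get_inputs()[0]
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)   # assumed NCHW input shape

sess.run(None, {inp.name: dummy})                 # warm-up run
start = time.time()
for _ in range(20):
    sess.run(None, {inp.name: dummy})
print("mean latency: %.1f ms" % ((time.time() - start) / 20 * 1000))
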
Which onnxruntime-gpu version to use is still undecided; every version tried reports errors (a diagnostic sketch follows the logs below):
1. Versions 1.11 and 1.10 can run on the CPU but cannot use CUDA; the error message is:
2024-09-04 15:33:24.369159907 [E:onnxruntime:Default, provider_bridge_ort.cc:1022 Get] Failed to load library libonnxruntime_providers_cuda.so with error: libcublas.so.10: cannot open shared object file: No such file or directory
2024-09-04 15:33:24.369221636 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:552 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
2. Versions 1.12.1 and 1.17 (both tested): the "libcublas.so.10: cannot open shared object file: No such file or directory" error no longer appears, but there is a new error:
  11. Model correct
  12. Using CUDA for inference.
  13. 2024-09-04 16:15:58.913593544 [E:onnxruntime:, sequential_executor.cc:368 Execute] Non-zero status code returned while running Conv node. Name:'/model.1/conv/Conv' Status Message: :0: cudaFuncSetAttribute(kernel_entry, cudaFuncAttributeMaxDynamicSharedMemorySize, integer_cast<int32_t>(launch_configs[0].smemSizeInBytes)): invalid device function
  14. Traceback (most recent call last):
  15. File "testonnxvideo.py", line 263, in <module>
  16. main()
  17. File "testonnxvideo.py", line 242, in main
  18. output, org_img = model.inference(frame)
  19. File "testonnxvideo.py", line 86, in inference
  20. pred = self.onnx_session.run(None, input_feed)[0]
  21. File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 200, in run
  22. return self._sess.run(output_names, input_feed, run_options)
  23. onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Conv node. Name:'/model.1/conv/Conv' Status Message: :0: cudaFuncSetAttribute(kernel_entry, cudaFuncAttributeMaxDynamicSharedMemorySize, integer_cast<int32_t>(launch_configs[0].smemSizeInBytes)): invalid device function